8.5.005.19
Widgets Release Notes
What's New
This release includes only resolved issues.
Resolved Issues
This release contains the following resolved issues:
Corrected issue where POST requests for the CallBack widget had an unterminated boundary string in the request content. Previously, the Web Application Firewall blocked post data requests. (CXW-847)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.005.19.
You can combine query conditions to select specific drawing objects for a new drawing layer in Display Manager.
For example, you can combine a layer condition with a location condition to find utility lines in the West quadrant of a city.
You can select from objects in the current map, in attached drawings, or in a topology.
Object data is an efficient method for storing small amounts of attribute data that you want to associate with drawing objects, but external databases store larger amounts of data more efficiently, and allow for more complex queries.
With AutoCAD Map 3D, you can convert object data into a linked database table that has the same data structure as the object data table. For each object containing object data in the specified table, AutoCAD Map 3D does the following:
When AutoCAD Map 3D converts the data, it creates a new table in an existing data source. It also creates a link template for the new table. In the link template, you can choose to use an existing field as the key field, or you can have AutoCAD Map 3D create a new field and assign a unique value to each record.
Field Names in the New Table
By default, the fields in the new database table have the same names as the fields in the object data table. AutoCAD Map 3D resolves any conflicts in the following ways:
A polygon is an object type with closed boundaries. Polygons store information about their inner and outer boundaries, and about other polygons nested within them or grouped with them.
Polygons can represent areas such as city limits, county boundaries, state borders, buildings, and parcels, as well as more complex objects, such as islands.
Example: A state map could be composed of a single polygon with an outer boundary representing the state, interior boundaries representing lakes, and boundaries within those boundaries representing islands. A country map could be composed of individual polygons representing each state.
The following table defines common terms used to describe the structure of polygons.
The figure below shows two polygon objects, each with three boundaries. The one on the left has two discrete outer boundaries and one inner boundary. The inner boundary is nested within the second discrete outer boundary. The polygon on the right also has two outer boundaries and one inner boundary. However, the second outer boundary is nested within the inner boundary.
Polygon objects maintain a tree structure to keep track of the boundaries and identify nesting levels. The illustration below shows the different tree structures for the two objects shown above. The first polygon tree contains two branches, while the second polygon tree contains a single branch.
In addition to outer and inner boundaries, there is an Annotation boundary type. This boundary has the characteristics of an inner boundary, but only affects the display of the pattern fill and is ignored when calculating the area or interior of the polygon object. Its primary purpose is to allow you to annotate your drawings without the fill pattern of the polygon obscuring the annotations. The annotation will typically consist of text or blocks.
There are a number of processes and procedures for maintaining and administering an Alfresco production environment.
- Starting and stopping Alfresco Use this information to understand how to run the Alfresco server and Share.
- Managing Share features Use the Admin Tools to manages features of Alfresco Share such as look and feel, tagging, categories, and sites.
- Managing users and groups Use this information to administer your users and groups in Alfresco.
- Working with Alfresco licenses Access to Alfresco One is licensed on a per user basis.
- Setting up clustering You can implement multiple Alfresco instances in a clustered environment.
- Setting up multi-tenancy Alfresco supports a single-instance, single-tenant (ST) environment where each tenant (for example, customer, company, or organization) runs a single instance that is installed on one server or across a cluster of servers.
- Creating and managing workflows Alfresco comes with a set of predefined workflow definitions which can be used right out of the box. For more complex requirements, you can also create, deploy, and manage your own Activiti workflows.
- Setting up Enterprise to Cloud Sync Enterprise to Cloud Sync gives Alfresco on-premise users the ability to synchronize their content to Alfresco in the Cloud. This feature supports scenarios where users wish to collaborate on documents with external parties that do not have access to systems behind the firewall. In these circumstances, the on-premise Alfresco instance becomes the system of record, and the cloud instance is the system of engagement for external collaboration.
- Setting up replication Alfresco replication suits an environment where you are running multiple, separate instances of Alfresco servers and databases.
- Monitoring Alfresco There are a number of methods for monitoring Alfresco.
- Backing up and restoring This information describes the process for backing up the Alfresco content repository only. It assumes that components other than the data residing in Alfresco (operating system, database, JRE, application server, Alfresco binaries and configuration, etc.) are being backed up independently.
- Auditing Alfresco.
Prodo’s Babel plugin is used to create components and actions using a more concise syntax. You can find out more about Babel from the official Babel docs.
To install the babel plugin, install the package
@prodo/babel-plugin as a dev-dependency:
npm install --save-dev @prodo/babel-plugin
# or
yarn add --dev @prodo/babel-plugin
Before you use the babel plugin, you will need to ensure that you are using Babel to transpile your code. If you are using webpack, this can be done by passing your code to the
babel-loader. If you are using Parcel, your code will be transpiled by Babel automatically, so you do not need to do anything.
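For reference, a minimal webpack rule that routes source files through babel-loader might look like the sketch below; the file-extension pattern and paths are assumptions to adapt to your project, and the plugin itself is still configured in your Babel config as described next.

// webpack.config.js (minimal sketch, not a complete configuration)
module.exports = {
  module: {
    rules: [
      {
        test: /\.[jt]sx?$/,      // .js, .jsx, .ts, .tsx
        exclude: /node_modules/,
        use: "babel-loader",     // Babel picks up plugins/presets from .babelrc
      },
    ],
  },
};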
Once you have ensured that you are using Babel, you will need to configure Babel to include the babel plugin. This is done by adding
"@prodo" to the list of plugins used by Babel, usually defined in a file called .babelrc.
{"plugins": ["@prodo"]}
You can also use the full name,
"@prodo/babel-plugin".
When running your tests, you will also need to ensure that your test-runner is using babel to transpile your code before running your tests. For example, if you are using Jest you will need to transform your code with
babel-jest by adding a property to your package.json file:
{"jest": {"transform": {"^.+\\.[jt]sx?$": "babel-jest"}}}
You will need to install
babel-jest as a dev-dependency.
If you are using TypeScript, you will need to ensure that Babel transpiles your TypeScript syntax as well, since
ts-jest does not use Babel. This can be done by installing
@babel/preset-typescript as a dev-dependency and adding it to the list of presets in your Babel configuration:
{"presets": ["@babel/preset-typescript"]}
If you are not using Jest, please consult the documentation for your specific test-runner.
When using the Babel plugin, you can omit the calls to
model.action and
model.connect and instead write actions and components that import context from your model directly.
In order for this to work, your model must be defined in a file called model.js (or model.ts) and all of your model's context (such as
state and
dispatch) must be exported from the same file, or a file in the same directory called model.ctx.js (or model.ctx.ts).
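For orientation, a model file of roughly the following shape is assumed in the examples below; treat it as a sketch rather than a definitive setup, and check the exact createModel signature against the @prodo/core documentation.

// model.ts (sketch; assumes @prodo/core exposes createModel and model.ctx)
import { createModel } from "@prodo/core";

interface State {
  count: number;
}

export const model = createModel<State>({});

// Re-export the context pieces so actions and components can import them directly.
export const { state, watch, dispatch } = model.ctx;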
To write an action using the Babel plugin, simply import the various parts of your model’s context and use them as normal. Note that an action’s name must begin with a lowercase letter.
For example:
import { state } from "./model";

export const increment = (amount: number) => {
  state.count += amount;
};
is transpiled to:
import { model } from "./model";

export const increment = model.action(
  ({ state }) => (amount: number) => {
    state.count += amount;
  },
  "increment",
);
You can write a component in the same way, except the component’s name must begin with an uppercase letter:
For example:
import { state, watch } from "./model";

export const Counter = props => (
  <div>
    <span>Hello, {watch(state.count)}!</span>
  </div>
);
is transpiled to:
import { model } from "./model";

export const Counter = model.connect(
  ({ state, watch }) => props => (
    <div>
      <span>Hello, {watch(state.count)}!</span>
    </div>
  ),
  "Counter",
);
Multiple Active Proxy Support
By default, RS binds each database endpoint to one of the proxies on a single node in the cluster. This proxy becomes an active proxy and receives all the operations for the given database. (Note that if the node with the active proxy fails, a new proxy on another node takes over automatically as part of the failover process.)
In most cases, a single proxy can handle a large number of operations without consuming additional resources. However, under high load, network bandwidth or a high rate of packets per second (PPS) on the single active proxy can become a bottleneck to how fast database operation can be performed. In such cases, having multiple active proxies, across multiple nodes, mapped to the same external database endpoint, can significantly improve throughput.
With the multiple active proxies capability, RS enables you to configure a database to have multiple internal proxies in order to improve performance, in some cases. It is important to note that, even though multiple active proxies can help improve the throughput of database operations, configuring multiple active proxies may cause additional latency in operations as the shards and proxies are spread across multiple nodes in the cluster.
Note: When the network on a single active proxy becomes the bottleneck, you might also look into enabling the multiple NIC support in RS. With nodes that have multiple physical NICs (Network Interface Cards), you can configure RS to separate internal and external traffic onto independent physical NICs. For more details, refer to Multi-IP & IPv6.
Having multiple proxies for a database can improve RS's ability for fast failover in case of proxy and/or node failure. With multiple proxies for a database, there is no need for a client to wait for the cluster to spin up another proxy and a DNS change in most cases, the client just uses the next IP in the list to connect to another proxy.
Proxy policies
A database can have one of the following four proxy policies:
Note: Manual intervention is also available via the rladmin bind add and remove commands.
Database configuration
A database can be configured with a proxy policy using rladmin bind.
Warning: Any configuration update which causes existing proxies to be unbounded can cause existing client connections to get disconnected.
You can run rladmin to control and view the existing settings for proxy configuration.
The info command on cluster returns the existing proxy policy for sharded and non-sharded (single shard) databases.
$ rladmin info cluster
cluster configuration:
 repl_diskless: enabled
 default_non_sharded_proxy_policy: single
 default_sharded_proxy_policy: single
 default_shards_placement: dense
 default_shards_overbooking: disabled
 default_fork_evict_ram: enabled
 default_redis_version: 3.2
 redis_migrate_node_threshold: 0KB (0 bytes)
 redis_migrate_node_threshold_percent: 8 (%)
 redis_provision_node_threshold: 0KB (0 bytes)
 redis_provision_node_threshold_percent: 12 (%)
 max_simultaneous_backups: 4
 watchdog profile: local-network
You can configure the proxy policy using the
bind command in
rladmin. The following command is an example that changes the bind
policy for a database called "db1" with an endpoint id "1:1" to "All
Master Shards" proxy policy.
rladmin bind db db1 endpoint 1:1 policy all-master-shards
Note: you can find the endpoint id for the endpoint argument by running status command for rladmin. Look for the endpoint id information under the ENDPOINT section of the output.
Reapply Policies After Topology Changes
If you want to reapply the policy after topology changes, such as node restarts, failovers and migrations, run this command to reset the policy:
rladmin bind db <db_name> endpoint <endpoint id> policy <all-master-shards||all-nodes>
This is not required with single policies.
Other implications
During the regular operation of the cluster different actions might take place, such as automatic migration or automatic failover, which change what proxy needs to be bound to what database. When such actions take place the cluster attempts, as much as possible, to automatically change proxy bindings to adhere to the defined policies. That said, the cluster attempts to prevent any existing client connections from being disconnected, and hence might not entirely enforce the policies. In such cases, you can enforce the policy using the appropriate rladmin commands.
Users
The User Profile page displays a table of all Nessus user accounts. This documentation refers to that table as the users table. Each row of the users table includes the user name, the date of the last login, and the role assigned to the account.
User accounts are assigned roles that dictate the level of access a user has in Nessus. You can change the role of a user account at any time, as well as disable the account. The following table describes the roles that can be assigned to users:
RDB Native360 - User Synchronisation
This article shows you how to sync your RDB user profiles with your idibu account.
IMPORTANT! Un-synced user profiles are the biggest cause of connection errors that users may experience when trying to post a job/manage applicants from RDB. Should you or your team encounter any issues at all, please be sure to carry out the following steps:
1. Open the idibu RDB plugin control panel - here's how to do that if you're not sure
2. Click on 'Account Management'
3. Find the user in the 'Users' section, in the bottom window
4. Tick the box next to their name to sync the profile. Even if the box is already ticked, we'd recommend un-ticking and re-syncing anyway, to ensure the profiles are properly connected regardless.
5. Ask the user to try posting or managing their applicants again.
--
Please note, user sync must always be carried out after adding ANY new user profiles to RDB, otherwise you'll quickly run into connection issues.
If you're still experiencing any problems after re-syncing profiles, or have any questions at all about this article, please contact [email protected].
Safety Configuration (Failsafes)
PX4 has a number of safety features to protect and recover your vehicle if something goes wrong:
- Failsafes allow you to specify areas and conditions under which you can safely fly, and the action that will be performed if a failsafe is triggered (for example, landing, holding position, or returning to a specified point). The most important failsafe settings are configured in the QGroundControl Safety Setup page. Others must be configured via parameters.
- Safety switches on the remote control can be used to immediately stop motors or return the vehicle in the event of a problem.
Failsafe Actions
Each failsafe defines its own set of actions. Some of the more common failsafe actions are:
It is possible to recover from a failsafe action (if the cause is fixed) by switching modes. For example, in the case where RC Loss failsafe causes the vehicle to enter Return mode, if RC is recovered you can change to Position mode and continue flying.
If a failsafe occurs while the vehicle is responding to another failsafe (e.g. Low battery while in Return mode due to RC Loss), the specified failsafe action for the second trigger is ignored. Instead the action is determined by separate system level and vehicle specific code. This might result in the vehicle being changed to a manual mode so the user can directly manage recovery.
QGroundControl Safety Setup
The QGroundControl Safety Setup page is accessed by clicking the QGroundControl Gear icon (Vehicle Setup - top toolbar) and then Safety in the sidebar. This includes the most important failsafe settings (battery, RC loss etc.) and the settings for the return actions Return and Land.
Low Battery Failsafe
The low battery failsafe is triggered when the battery capacity drops below one (or more warning) level values.
The most common configuration is to set the values and action as above (with
Warn > Failsafe > Emergency). With this configuration the failsafe will trigger warning, then return, and finally landing if capacity drops below the respective levels.
It is also possible to set the Failsafe Action to warn, return, or land when the Battery Failsafe Level failsafe level is reached.
The settings and underlying parameters are shown below.
RC Loss Failsafe
The RC Loss failsafe is triggered if the RC transmitter link is lost.
PX4 and the receiver may also need to be configured in order to detect RC loss: Radio Setup > RC Loss Detection.
The settings and underlying parameters are shown below.
Data Link Loss Failsafe
The Data Link Loss failsafe is triggered if a telemetry link (connection to ground station) is lost when flying a mission.
The settings and underlying parameters are shown below.
Geofence Failsafe
The Geofence Failsafe is a "virtual" cylinder centered around the home position. If the vehicle moves outside the radius or above the altitude the specified Failsafe Action will trigger.
PX4 separately supports more complicated GeoFence geometries with multiple arbitrary polygonal and circular inclusion and exclusion areas: Flying > GeoFence.
The settings and underlying geofence parameters are shown below.
Setting GF_ACTION to terminate will kill the vehicle on violation of the fence. Due to the inherent danger of this, this function is disabled using CBRK_FLIGHTTERM, which needs to be reset to 0 to really shut down the system.
The following settings also apply, but are not displayed in the QGC UI.
Return Mode Settings
Return is a common failsafe action that engages Return mode to return the vehicle to the home position. This section shows how to set the land/loiter behaviour after returning.
The settings and underlying parameters are shown below:
The return behavour is defined by RTL_LAND_DELAY. If negative the vehicle will land immediately. Additional information can be found in Return mode.
Land Mode Settings
Land at the current position is a common failsafe action that engages Land Mode. This section shows how to control when and if the vehicle automatically disarms after landing. For Multicopters (only) you can additionally set the descent rate.
The settings and underlying parameters are shown below:
Other Failsafe Settings
This section contains information about failsafe settings that cannot be configured through the QGroundControl Safety Setup page.
Position (GPS) Loss Failsafe
The Position Loss Failsafe is triggered if the quality of the PX4 position estimate falls below acceptable levels (this might be caused by GPS loss) while in a mode that requires an acceptable position estimate.
The failure action is controlled by COM_POSCTL_NAVL, based on whether RC control is assumed to be available (and altitude information):
0: Remote control available. Switch to Altitude mode if a height estimate is available, otherwise Stabilized mode.
1: Remote control not available. Switch to Land mode if a height estimate is available, otherwise enter flight termination.
Fixed Wing vehicles additionally have a parameter (NAV_GPSF_LT) for defining how long they will loiter (circle) after losing position before attempting to land.
The relevant parameters for all vehicles shown below (also see GPS Failure navigation parameters):
Parameters that only affect Fixed Wing vehicles:
Offboard Loss Failsafe
The Offboard Loss Failsafe is triggered if the offboard link is lost while under Offboard control. Different failsafe behaviour can be specified based on whether or not there is also an RC connection available.
The relevant parameters are shown below:
Mission Failsafe
The Mission Failsafe checks prevent a previous mission being started at a new takeoff location or if it is too big (distance between waypoints is too great). The failsafe action is that the mission will not be run.
The relevant parameters are shown below:
Traffic Avoidance Failsafe
The Traffic Avoidance Failsafe allows PX4 to respond to transponder data (e.g. from ADSB transponders) during missions.
The relevant parameters are shown below:
Adaptive QuadChute Failsafe
Failsafe for when a pusher motor fails (or airspeed sensor) and a VTOL vehicle can no longer achieve a desired altitude setpoint in fixed-wing mode. If triggered, the vehicle will transition to multicopter mode and enter failsafe Return mode.
The relevant parameters are shown below:
Failure Detector
The failure detector allows a vehicle to take protective action(s) if it unexpectedly flips - for example, it can launch a parachute or perform some other action.
Failure detection is deactivated by default using a circuit breaker. You can enable it by setting CBRK_FLIGHTTERM=0.
More precisely, the failure detector triggers flight termination (in all modes) if the vehicle attitude exceeds predefined pitch and roll values for more than a specified time.
The relevant parameters are shown below:
Emergency Switches
Remote control switches can be configured (as part of QGroundControl Flight Mode Setup) to allow you to take rapid corrective action in the event of a problem or emergency; for example, to stop all motors, or activate Return mode.
This section lists the available emergency switches.
Kill Switch
A kill switch immediately stops all motor outputs (and if flying, the vehicle will start to fall)! The motors will restart if the switch is reverted within 5 seconds. After 5 seconds the vehicle will automatically disarm; you will need to arm it again in order to start the motors.
Arm/Disarm Switch
The arm/disarm switch is a direct replacement for the default stick-based arming/disarming mechanism (and serves the same purpose: making sure there is an intentional step involved before the motors start/stop). It might be used in preference to the default mechanism because:
- Of a preference of a switch over a stick motion.
- It avoids accidentally triggering arming/disarming in-air with a certain stick motion.
- There is no delay (it reacts immediately).
The arm/disarm switch immediately disarms (stop) motors for those flight modes that support disarming in flight. This includes:
- Manual mode
- Acro mode
- Stabilized mode
- Rattitude
For modes that do not support disarming in flight, the switch is ignored during flight, but may be used after landing is detected. This includes Position mode and autonomous modes (e.g. Mission, Land etc.).
Auto disarm timeouts (e.g. via COM_DISARM_LAND) are independent of the arm/disarm switch - ie even if the switch is armed the timeouts will still work.
Return Switch
A return switch can be used to immediately engage Return mode.
Other Safety Settings
Auto-disarming Timeouts
You can set timeouts to automatically disarm a vehicle if it is too slow to takeoff, and/or after landing (disarming the vehicle removes power to the motors, so the propellers won't spin).
The relevant parameters are shown below:
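The parameter tables themselves are not reproduced here. As a rough illustration only, the auto-disarm timeouts are controlled by parameters such as COM_DISARM_LAND (mentioned earlier); setting it from the system console or the QGroundControl parameter editor might look like the following, with exact names, units and values to be confirmed against the PX4 parameter reference.

# Sketch only: verify parameter names and units in the PX4 parameter reference
param set COM_DISARM_LAND 2.0   # auto-disarm a few seconds after landing is detected
param save                      # persist parameters on boards where this is not automatic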
FAQs and additional resources
This topic provides information that supplements the TrueSight Intelligence documentation. It contains the following sections:
Frequently asked questions
This section provides answers to frequently asked questions (FAQs) about TrueSight Intelligence.
See the Release notes to view new feature (enhancement) releases and related information.
TrueSight Intelligence is built on a self-service model that is hosted in the cloud. You just need to configure it for collecting data from various sources. For more information, see Collecting data.
Yes, multiple TrueSight meters can be configured for collecting data. For more information, see Collecting data using meters and plugins.
The duration for which the data collected is stored is based on your choice of subscription.
Yes. However, you must ensure that the data is sequentially ingested into the system. For example, you must ingest data for 01/01/2015 05:00:00 before ingesting data for 01/01/2015 05:00:01.
You can search for an application, devices, entities, and metrics from the search bar. For more information, see Searching for data.
Data collected for all metrics from a source can be tagged with an application ID. For more information, see Setting metrics as KPIs for analysis.
Yes, you can configure multiple sources to send data to a single application by using the same app_id. You can then set KPIs that you want to monitor. For more information, see Setting metrics as KPIs for analysis.
Yes. However, you will need to update the mapping at the source level. All metrics collected by the source will be visible under the newly selected/created application thereafter.
No. Any change in the source type association will only impact measures that are ingested AFTER it is changed.
No. TrueSight Intelligence will NOT change the association for data that was ingested in the past.
Depending on the number of topics included in the export, it might take several minutes to create the PDF. After the export process is complete, you can download the PDF.
Additional resources from BMC
The following BMC sites provide information outside of the TrueSight Intelligence documentation that you might find helpful:
- BMC Communities, TrueSight Intelligence community
- Information about TrueSight Intelligence
Create Notification Channel
Description #
Creates a new channel for receiving notifications when changes occur.
Notification channels can be created with additional filters which can be thought of as a subset of those available when reading events.
URL format #
api.cronofy.com/v1/channels
Example Request #
POST /v1/channels HTTP/1.1
Host: api.cronofy.com
Authorization: Bearer {ACCESS_TOKEN}
Content-Type: application/json; charset=utf-8

{
  "callback_url": {CALLBACK_URL},
  "filters": {
    "calendar_ids": [
      "cal_n23kjnwrw2_sakdnawerd3"
    ],
    "only_managed": false
  }
}
Example Response #
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "channel": {
    "channel_id": "chn_54cf7c7cb4ad4c1027000001",
    "callback_url": {CALLBACK_URL},
    "filters": {
      "calendar_ids": [
        "cal_n23kjnwrw2_sakdnawerd3"
      ],
      "only_managed": false
    }
  }
}
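The same request expressed as a curl command might look like the following; the access token and callback URL are placeholders to substitute with your own values.

curl -X POST https://api.cronofy.com/v1/channels \
  -H "Authorization: Bearer {ACCESS_TOKEN}" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '{
        "callback_url": "{CALLBACK_URL}",
        "filters": {
          "calendar_ids": ["cal_n23kjnwrw2_sakdnawerd3"],
          "only_managed": false
        }
      }'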
Request parameters #
callback_url required #
The HTTP or HTTPS URL you wish to receive push notifications.
Must not be longer than 128 characters and should be HTTPS.
filters.calendar_ids optional #
Restricts the notifications to changes to events within the specified calendars.
The possible
calendar_ids can be discovered through the list calendars endpoint.
If omitted, notifications are sent for changes to events across all calendars. When specified, at least one
calendar_id must be provided.
filters.only_managed optional #
A
Boolean specifying whether only events that you are managing for the account should trigger notifications.
Response parameters #
channel.channel_id #
The ID of the channel.
Note that ID may be for an existing channel if you make a request to create a channel that is identical to an existing one.
channel.callback_url #
The URL that will receive push notification requests.
channel.filters #
Any non-default filters that were specified for the channel.
Error responses #
401 Unauthorized #
The request was refused as the provided authentication credentials were not recognized.
When an OAuth
refresh_token is available then it should be used to request a replacement
auth_token before the request is retried.
422 Unprocessable #
The request was unable to be processed due to it containing invalid parameters.
The response will contain a JSON object containing one or more errors relating to the invalid parameters.
For example, if you omitted the required
callback_url parameter, you would receive a response like:
{
  "errors": {
    "callback_url": [
      {
        "key": "errors.required",
        "description": "required"
      }
    ]
  }
}
The
key field is intended for programmatic use and the
description field is a human-readable equivalent.
VulnDB, a subscription service offered by Risk Based Security, offers a comprehensive and continuously updated source of vulnerability intelligence.
Organizations that consume VulnDB content benefit from data which has been enhanced, corrected, and made available sooner than the National Vulnerability Database. As a result, organizations are able to respond quicker and with more confidence to reduce risk.
Credit is provided to VulnDB with visual and textual cues on where the data originated. Links back to the original advisory are also provided.
Dependency-Track supports VulnDB in two ways:
- A VulnDB Analyzer may be enabled which integrates with VulnDB REST APIs to identify vulnerabilities in components with a CPE
- Ingests VulnDB mirrored content and incorporates the entire vulnerability database into Dependency-Track
Using the VulnDB Analyzer
The VulnDB Analyzer is capable of analyzing all components with CPEs against the VulnDB service. The analyzer is a consumer of the VulnDB REST APIs and requires an OAuth 1.0a Consumer Key and Consumer Secret be configured in Dependency-Track. Although not exclusive, any component with a CPE defined will be analyzed with VulnDB.
Using the Internal Analyzer
The native Dependency-Track internal analyzer is capable of analyzing components that have valid CPEs or Package URLs against a dictionary of vulnerable software which Dependency-Track maintains. When the NVD or VulnDB are mirrored, the vulnerability information for the affected products are added to the internal vulnerable software dictionary.
If VulnDB is mirrored using a tool such as VulnDB Data Mirror and the contents have been ingested by Dependency-Track, the internal analyzer will automatically benefit from the additional data in the dictionary that VulnDB provided.
Choosing an Approach
Both ways of integration have their advantages. Using the VulnDB analyzer is quick, can be used on an as-needed basis, and doesn’t have the overhead that a mirroring approach may have.
Using the mirror will provide faster responses, the ability to browse all VulnDB content within Dependency-Track, but comes at the expense of performing the initial mirror, which is time consuming and requires a lot of requests to VulnDB.
VulnDB subscription plans may have a limit on the number of requests that can be made to the service per month. Dependency-Track does not monitor this, nor throttle its requests when limits are nearing or have been reached. It is the responsibility of VulnDB customers to manage their subscription and ensure they’re using the service within the defined license terms.
VulnDB Mirror Setup
- Download the standalone VulnDB Data Mirror tool
- Execute the tool and specify the Dependency-Track vulndb directory as the target
- Dependency-Track will automatically sync the contents of the vulndb directory every 24 hours (and on startup)
Example
vulndb-data-mirror.sh \
    --consumer-key mykey \
    --consumer-secret mysecret \
    --dir "~/.dependency-track/vulndb"
When running, the console output will resemble:
VulnDB API Status:
--------------------------------------------------------------------------------
Organization Name.............: Example Inc.
Name of User Requesting.......: Jane Doe
Email of User Requesting......: [email protected]
Subscription Expiration Date..: 2018-12-31
API Calls Allowed per Month...: 25000
API Calls Made This Month.....: 1523
--------------------------------------------------------------------------------
Mirroring Vendors feed...
  Processing 18344 of 18344 results
Mirroring Products feed...
  Processing 136853 of 136853 results
Mirroring Vulnerabilities feed...
  Processing 142500 of 166721 results
Stripe Credit Card field
Based on Stripe Elements this field will make collecting payment details more secure and help prevent malicious actors from stealing any sensitive information.
It eliminates entire classes of attacks by totally isolating the card secure information from your website with the help of a secure iframe. The sensitive data does not reach your server.
The field automatically offers real-time validation to ensure errors are caught early.
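For context, the field wraps the standard client-side Stripe Elements flow, which in plain JavaScript looks roughly like the sketch below; the publishable key and element IDs are placeholders, and the DNN Sharp field performs this wiring for you.

// Sketch of the Stripe Elements flow the field is built on (not DNN Sharp-specific code)
var stripe = Stripe("pk_test_your_publishable_key"); // placeholder publishable key
var elements = stripe.elements();

// The card input renders inside a Stripe-hosted iframe, so raw card data
// never touches your page's DOM or your server.
var card = elements.create("card");
card.mount("#card-element"); // placeholder container id

// Real-time validation errors surface through the "change" event.
card.on("change", function (event) {
  document.getElementById("card-errors").textContent =
    event.error ? event.error.message : "";
});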
Innovation in energy
Energy UK continues its leadership in the low carbon transport arena by driving forwards the development of standards for smart charging in a position paper and consultation document on Acceptable Standards for Smart Charging. The consultation pre-empts work to be done by OLEV in developing mandatory standards for charge points, which will require all charge points to incorporate a level of smart capability to enable avoidance of network constraints.
The Flexibility Working Group and NESH Committee led the development of a paper examining near-term decisions to clarify uncertainty surrounding Roles and Responsibilities in the Provision of Flexibility. The paper examined the need for decisions on responsibility for imbalance payments and ownership or operation of storage assets or aggregation activities by network operators, as well as recommending the expansion of existing consumer protections across new business models and improvements to monitoring and communications capabilities across distribution networks.
Energy UK this year established an EV Charging Forum, a closed group for UK charge point operators to discuss and input into the EV Energy Taskforce and create common messaging. The group is free to join and is successfully bringing together the majority of the market to remove barriers to faster uptake of EVs.
The group met for the first time in September and agreed a Terms of Reference and areas of focus. Papers explaining the group and any agreed policy positions will be made publicly available to enable industry to move forwards swiftly.
Configuring Pexip Infinity as a Google Hangouts Meet gateway
The Pexip Distributed Gateway provides any-to-any video interoperability with Google Hangouts Meet.
Third-party systems can connect to Hangouts Meet conferences via the Pexip Distributed Gateway either by dialing the conference directly or via a Virtual Reception (IVR).
This topic describes the configuration and administrative steps that are required to use Pexip Infinity as a Google Hangouts Meet gateway. It assumes that you have already performed a basic installation and configuration of Pexip Infinity. This topic covers:
- Ensuring a Google Hangouts Meet (ghm) license is installed
- Configuring your access tokens
- Configuring Virtual Receptions and Call Routing Rules
- Interoperability and deployment features
See Integrating Google Hangouts Meet with Pexip Infinity for more information about the user experience within Hangouts Meet conferences.
Ensuring a Google Hangouts Meet (ghm) license is installed
You must have a ghm license enabled on your platform. This allows you to configure access tokens and route calls to Hangouts Meet.
If necessary, contact your Pexip authorized support representative to purchase the required license. Note that the ghm license is required in addition to the standard Pexip Infinity call licenses.
Configuring your access tokens
All communication between Pexip Infinity and Hangouts Meet is authenticated by access tokens that identify your G Suite account (see Generating your gateway access tokens).
To configure your trusted and untrusted access tokens in Pexip Infinity:
- Go to … and select ….
Add the details of your trusted token:
- Repeat the above steps to add the details of your untrusted token.
These tokens will now be available to associate with any Virtual Receptions and Call Routing Rules that you configure (as described below) to handle Hangouts Meet conferences.
Note that:
- The access tokens apply to the entire G Suite tenant, but you can enable interoperability on a per-OU (organizational unit) basis within G Suite.
- Service providers may need to apply multiple pairs of access tokens for each tenant they are managing.
Configuring Virtual Receptions and Call Routing Rules
There are two ways in which you can configure Pexip Infinity to route calls into Hangouts Meet conferences:
- Routing directly via the Pexip Distributed Gateway: here you use a Call Routing Rule to route incoming calls for specific alias patterns — that will typically include the meeting code — directly into the relevant Hangouts Meet conference. This means that the endpoint can dial an alias, such as [email protected] and be taken directly into the conference.
- Routing indirectly via a Virtual Reception: here you configure Pexip Infinity to act as an IVR gateway or "lobby" by configuring a Virtual Reception to prompt the caller to enter the meeting code of the required conference, and then use a Call Routing Rule (typically the same rule as used for direct routing) to route the call into the Hangouts Meet conference. This means that the endpoint can dial a general alias, such as [email protected] and then enter the specific meeting code, such as 123456, and then be transferred into the conference.
You can use either or both of these two methods, depending upon your requirements. The configuration required for direct and indirect routing is explained below.
Depending on your dial plan requirements, you may want to use multiple Call Routing Rules, where some rules use a trusted token and other rules use an untrusted token. For example, if you want to associate calls received via a particular location as trusted, and all calls received in other locations as untrusted then you will need to configure two rules — one rule for calls received in the trusted location that is associated with the trusted token, and then another lower priority rule that is associated with the untrusted token.
Routing directly via the Pexip Distributed Gateway
To route calls to Hangouts Meet conferences directly via the Pexip Distributed Gateway you need:
- To decide on an alias pattern that participants will dial to access the Hangouts Meet conferences.
- The alias pattern will typically include the meeting code, for example the pattern could be just <meeting code> or <meeting code>@<domain> i.e. the meeting code and then, optionally, the domain of your Pexip Infinity platform, for example [email protected] to access a Hangouts Meet conference with a meeting code of 123456.
- You can also configure within G Suite a PIN prefix (8, for example) to force all meeting codes to start with that prefix. This is typically required if you have a conflicting dial plan on your video conferencing side that could clash with your Hangouts Meet meeting codes. See Configuring G Suite for Google Hangouts Meet integration for more information.
One or more Call Routing Rules that match that alias pattern and, if necessary, transform it such that it contains just the Hangouts Meet meeting code which it can then use to connect to the conference. You can use multiple rules to differentiate between devices that are to be treated as trusted or not from Pexip Infinity's perspective, and hence which type of Access token is selected and whether Treat as trusted is selected or not:
- If devices register to Pexip Infinity we recommend using two gateway rules: one higher-priority rule to specifically handle registered devices, and one lower-priority rule to handle any device (registered and non-registered).
- If you are using third-party call control systems you also may want to use different rules to distinguish between calls arriving at Conferencing Nodes in different locations.
To configure each rule:
- Go to … and select ….
Configure the following fields (leave all other fields with default values or as required for your specific deployment):
- If you are creating multiple rules, for example when handling whether a device is registered to Pexip Infinity or not, return to step 1 and create the next rule.
Using the direct gateway service
After the Call Routing Rule has been configured, third-party systems and devices can now dial an alias that matches your specified pattern (e.g. 812345 or [email protected]) to be routed directly into the appropriate Hangouts Meet conference (in this example the conference with a meeting code of 812345).
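As an illustration, a rule for the pattern above might use a destination alias match and replace along these lines; the prefix, domain and capture group are assumptions to adapt to your own dial plan.

Destination alias regex match:   ^(8\d+)(@vc\.example\.com)?$
Destination alias regex replace: \1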
Routing indirectly via a Virtual Reception (IVR gateway)
To route calls to Hangouts Meet conferences via a Virtual Reception (IVR gateway) you need:
- A Virtual Reception configured specifically to handle Hangouts Meet conferences.
- A Call Routing Rule to route the calls handled by the Virtual Reception into the relevant Hangouts Meet conference. Typically you would configure the Virtual Reception and Call Routing Rule patterns so that the same rule can also support direct routing as described above.
The Virtual Reception requests the caller to enter the Hangouts Meet meeting code which it then sends to Hangouts Meet for verification. You can then optionally transform the meeting code to meet any dial plan requirements, before the Pexip Distributed Gateway then matches the (optionally transformed) meeting code and routes the caller to the appropriate Hangouts Meet conference.
To configure the Virtual Reception:
- Go to … and select ….
Configure the following fields (leave all other fields with default values or as required for your specific deployment):
To configure the associated Call Routing Rule:
- Configure the Call Routing Rule as described above for direct routing.
- If you want to use a different rule for routing via a Virtual Reception than the rule you are using for direct routing (e.g. because you want to limit the supported incoming call protocols, or use a different outgoing location for calls placed via the Virtual Reception), then follow the same principles as the direct routing rule, but use a different alias pattern in your Virtual Reception's Post-lookup regex replace string and your rule's Destination alias regex match string.
Using the Hangouts Meet IVR gateway service
After the Virtual Reception and Call Routing Rule have been configured, third-party systems and devices can now dial the alias of the Virtual Reception (e.g. [email protected]) and then, when prompted by the IVR service, enter the meeting code of the Hangouts Meet conference they want to join.
The Pexip Distributed Gateway will then route the call into the appropriate Hangouts Meet conference.
Interoperability and deployment features
DNS and ports requirements
You need to ensure that the endpoints and systems you want to gateway into Hangouts Meet can call into Pexip Infinity Conferencing Nodes, and that Conferencing Nodes can call out to Hangouts Meet.
- See DNS record examples for information about enabling endpoints to route their calls to Conferencing Nodes.
- See Pexip Infinity port usage and firewall guidance for information about the ports used when Conferencing Nodes connect to Hangouts Meet and other devices.
Call and participant status
When using the Pexip Infinity Administrator interface to monitor calls that are placed into Hangouts Meet conferences, you should note that:
Each participant who is gatewayed into a Hangouts Meet conference is listed as a separate gateway call. However, if multiple participants are connected to the same Hangouts Meet conference, the Live View will show them as connected to the same external conference.
When viewing the participant status for a gateway call, the meeting code, such as 871189, is shown as a participant alias. This participant represents the gateway call leg into Hangouts Meet. If you look at the media streams associated with that participant you see that:
- Pexip Infinity sends (subject to bandwidth) three VP8 video streams (each at different resolutions) and one 1 audio stream to Hangouts Meet for that participant.
- Pexip Infinity receives one video and one audio stream for each external participant in the conference, up to a maximum of 8 video streams (to support Pexip's standard 1+7 layout). If there are more than 8 other participants then only an audio stream is received for those extra participants.
Other participant aliases that are displayed for that call include the device that placed the call (such as [email protected]) and one or more aliases in the format spaces/<id>/devices/<id> which represent the other participants in the Hangouts Meet conference.
- You cannot control (e.g. disconnect, mute or transfer) any of the other participants connected to the Hangouts Meet conference.
Additional information
- Each participant who is gatewayed via Pexip Infinity into a Hangouts Meet conference consumes two call licenses (one for the inbound leg of the call and one for the outbound leg, as is standard for Pexip Distributed Gateway calls). Any external participants who are connected directly to the Hangouts Meet conference do not consume a license. See Pexip Infinity license installation and usage for more information.
- You cannot limit the Maximum outbound call bandwidth (the call leg towards Hangouts Meet) — it is fixed at 2 Mbps.
- If the Hangouts Meet conference is recorded, "streaming enabled" indicators are included in the video stream sent to gatewayed participants.
- Chat messages are supported in both directions between Hangouts Meet and other chat-enabled clients. However, the name of the sender from the Hangouts Meet side is not identified on messages received by Skype for Business clients.
If GitHub PR task lists are an important part of your process, there is a simple way to check their completion when writing PullApprove conditions.
For example, to ensure that the tasks are completed before reviews are requested:
version: 3

pullapprove_conditions:
- condition: "body and '- [ ]' not in body"
  unmet_status: pending
  explanation: "Please finish all the tasks first"
Slack Notifications
Send notifications to a channel in your Slack workspace when a response is submitted to your survey, quiz or form.
Note: You must be signed into your Slack workspace in order for this integration to work.
- Open a project
- Click 'Settings' in the left sidebar
- Toggle the button beneath 'Slack Notification' to 'On'
- Authorize the integration with your Slack workspace
You'll now receive notifications every time a participant submits a response to the survey you enabled this feature for.
public abstract class AbstractValueAdaptingCache extends Object implements Cache
Common base class for Cache implementations that need to adapt null values (and potentially other such special values) before passing them on to the underlying store.
Transparently replaces given null user values with an internal NullValue.INSTANCE, if configured to support null values (as indicated by isAllowNullValues()).
Nested classes/interfaces inherited from interface Cache: Cache.ValueRetrievalException, Cache.ValueWrapper
Methods inherited from class Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface Cache: clear, evict, get, getName, getNativeCache, put, putIfAbsent
protected AbstractValueAdaptingCache(boolean allowNullValues)
Create an AbstractValueAdaptingCache with the given setting.
Parameters: allowNullValues - whether to allow for null values
public final boolean isAllowNullValues()
Return whether null values are allowed in this cache.
@Nullable protected abstract Object lookup(Object key)
Perform an actual lookup in the underlying store.
Parameters: key - the key whose associated value is to be returned
Returns: the raw store value for the key, or null if none
@Nullable protected Object fromStoreValue(@Nullable Object storeValue)
Convert the given value from the internal store to a user value returned from the get method (adapting null).
Parameters: storeValue - the store value
Returns: the value to return to the user
protected Object toStoreValue(@Nullable Object userValue)
Convert the given user value, as passed into the put method, to a value in the internal store (adapting null).
Parameters: userValue - the given user value
Returns: the value to store
@Nullable protected Cache.ValueWrapper toValueWrapper(@Nullable Object storeValue)
Wrap the given store value with a SimpleValueWrapper, also going through fromStoreValue(java.lang.Object) conversion. Useful for get(Object) and Cache.putIfAbsent(Object, Object) implementations.
Parameters: storeValue - the original value
Returns: the wrapped value
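To make the adaptation behaviour concrete, a minimal map-backed subclass might look like the following sketch; it is illustrative only (Spring's own ConcurrentMapCache is the production-ready equivalent) and assumes Spring Framework 5.x imports.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.springframework.cache.Cache;
import org.springframework.cache.support.AbstractValueAdaptingCache;

// Minimal sketch: a map-backed cache that relies on AbstractValueAdaptingCache
// to translate between user-facing null and the internal NullValue holder.
public class MapCache extends AbstractValueAdaptingCache {

    private final String name;
    private final ConcurrentMap<Object, Object> store = new ConcurrentHashMap<>();

    public MapCache(String name) {
        super(true); // allow null user values (stored internally as NullValue.INSTANCE)
        this.name = name;
    }

    @Override
    protected Object lookup(Object key) {
        return this.store.get(key); // raw store value, or null if absent
    }

    @Override
    public String getName() {
        return this.name;
    }

    @Override
    public Object getNativeCache() {
        return this.store;
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T> T get(Object key, Callable<T> valueLoader) {
        return (T) fromStoreValue(this.store.computeIfAbsent(key, k -> {
            try {
                return toStoreValue(valueLoader.call());
            }
            catch (Exception ex) {
                throw new Cache.ValueRetrievalException(key, valueLoader, ex);
            }
        }));
    }

    @Override
    public void put(Object key, Object value) {
        this.store.put(key, toStoreValue(value)); // adapt null before storing
    }

    @Override
    public Cache.ValueWrapper putIfAbsent(Object key, Object value) {
        Object existing = this.store.putIfAbsent(key, toStoreValue(value));
        return toValueWrapper(existing); // null if there was no previous value
    }

    @Override
    public void evict(Object key) {
        this.store.remove(key);
    }

    @Override
    public void clear() {
        this.store.clear();
    }
}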
Installation¶
Remember – deepTools are available for command line usage as well as for integration into Galaxy servers!
Requirements¶
- Python 2.7 or Python 3.x
- numpy >= 1.8.0
- scipy >= 0.17.0
- py2bit >= 0.1.0
- pyBigWig >= 0.2.1
- pysam >= 0.8
- matplotlib >= 1.4.0
The fastest way to obtain Python 2.7 or Python 3.x together with numpy and scipy is via the Anaconda Scientific Python Distribution. Just download the version that’s suitable for your operating system and follow the directions for its installation. All of the requirements for deepTools can be installed in Anaconda with:
$ conda install -c bioconda deeptools
Command line installation using
pip¶
Install deepTools using the following command:
$ pip install deeptools
All python requirements should be automatically installed.
If you need to specify a specific path for the installation of the tools, make use of pip install’s numerous options:
$ pip install --install-option="--prefix=/MyPath/Tools/deepTools2.0" git+
Command line installation without
pip¶
You are highly recommended to use pip rather than these more complicated steps.
- Install the requirements listed above in the “requirements” section. This is done automatically by pip.
2. Download source code
$ git clone
or if you want a particular release, choose one from:
$ wget
$ tar -xzvf
3. install the source code (if you don’t have root permission, you can set
a specific folder using the
--prefix option)
$ python setup.py install --prefix /User/Tools/deepTools2.0
Galaxy installation¶
deepTools can be easily integrated into a local Galaxy. All wrappers and dependencies are available in the Galaxy Tool Shed.
Installation via Galaxy API (recommended)¶
First generate an API Key for your admin user and run the the installation script:
$ python ./scripts/api/install_tool_shed_repositories.py \
    --api YOUR_API_KEY -l \
    --url \
    -o bgruening -r <revision> --name suite_deeptools \
    --tool-deps --repository-deps --panel-section-name deepTools
The
-r argument specifies the version of deepTools. You can get the
latest revision number from the test tool shed or with the following
command:
$ hg identify
You can watch the installation status under: Top Panel –> Admin –> Manage installed tool shed repositories
Installation via web browser¶
- go to the admin page
- select Search and browse tool sheds
- Galaxy tool shed –> Sequence Analysis –> deeptools
- install deeptools
Installation with Docker¶
The deepTools Galaxy instance is also available as a docker container, for those wishing to use the Galaxy framework but who also prefer a virtualized solution. This container is quite simple to install:
$ sudo docker pull quay.io/bgruening/galaxy-deeptools
To start and otherwise modify this container, please see the instructions on the docker-galaxy-stable github repository. Note that you must use bgruening/galaxy-deeptools in place of bgruening/galaxy-stable in the examples, as the deepTools Galaxy container is built on top of the galaxy-stable container.
Tip
For support, questions, or feature requests contact: [email protected]
This tutorial demonstrates how to add parameters to a report, provide them with default values, create multi-value and cascading parameters (i.e., filter parameter values based on the current value of another parameter). The created parameters are used to filter data at the data source level. The last document section describes other uses of report parameters not related to filtering data.
In the previous tutorials, you bound a report to a data source and constructed the report layout with data fields from this data source. This tutorial demonstrates how to filter data at the level of this data source.
You need to add another data source containing the same queries to provide values for report parameters.
A report parameter stores one or more values that can be modified in Print Preview and passed to a report before its creation.
The following steps illustrate how to make a report display data corresponding to a specific order selected in Print Preview:
Select the Parameters node in the Field List panel and click the plus button to create a new report parameter.
Click the Edit button for the created parameter to expand the property list. Specify the parameter's Name (by which it can be referred to in the filter expression) and Description (to display in Print Preview).
Set the Type property corresponding to the type of a data field against which this parameter should be compared in the filter expression.
Set the Look-Up Settings Type property to Dynamic List to supply parameter values from a dedicated data source. This enables the following look-up settings:
Filter String (optional)
Enables you to filter the list of parameter values (e.g., to create cascading parameters that are described further down in this tutorial).
Data Source
Specifies the data source to which the parameter is bound.
Data Member
Specifies the name of a data column storing the parameter values.
Display Member (optional)
Specifies the name of a data field providing parameter value descriptions displayed in Print Preview.
Value Member
Specifies the name of a data field providing the parameter values.
Access a data source providing data for a report and click the Edit query button for the query that you want to filter (Orders query for this step).
In the invoked Data Source Wizard page, click the Run Query Builder button.
In the Query Builder, switch to the Query Properties section and click the ellipsis button for the Filter property.
In the invoked Filter Editor, construct an expression in which the OrderID data field is compared to a query parameter value. Expand the drop-down menu for a value placeholder and select Parameter.
This converts the value placeholder into a parameter placeholder. Click this placeholder and select Create new parameter.
In the dedicated editor, specify the query parameter name.
Click Save to save the filter condition and close the editor.
Click OK in the Query Builder, and then, click Next on the wizard page to proceed.
On the following wizard page, map the created query parameter to the report parameter. Expand the drop-down list for the parameter's Type property and select Expression. Then, click the ellipsis button for the Value property and specify the report parameter in the invoked Expression Editor.
Switch to Print Preview by clicking the Preview button in the main toolbar. Select a required order in the parameter's lookup editor. Click the Submit button to pass the corresponding value to the filter expression and generate the document.
Do the following to enable a report parameter to accept multiple values at once and filter the report against these values:
Go to the Field List and enable the parameter's MultiValue option.
Once again, run the Query Builder for the Orders query and invoke the Filter Editor. Customize the filter expression so that the OrderID data field is compared to all of the parameter values.
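The resulting filter criteria typically looks similar to the following, where orderIdParam is simply whatever name you gave the query parameter (the name is illustrative, not one from this tutorial):

    [OrderID] In (?orderIdParam)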
Switch to Print Preview and select one or more values in the parameter's lookup editor. Click Submit to pass the corresponding values to the report and generate the document.
The following steps describe how to create a new parameter and filter its values depending on the values selected for another parameter:
Specify the parameter's name, description and other options as you did before. Ensure that you correctly provide look-up settings (for this tutorial, set the Data Member property to the detail query and assign the Value Member and Display Member properties to the same ProductName data field).
Click the ellipsis button for the parameter's Filter String property to filter the list of available products according to the selected order. In the invoked editor, construct an expression in which the OrderID data field is compared with another parameter value.
Click the Edit query button for the detail query to filter data in the detail report.
Run the Query Builder and invoke the Filter Editor. Create a new query parameter as described above and specify a filter expression to compare the Product Name field with this parameter.
Close the editor and complete the Query Builder.
On the next wizard page, map the query parameter to the corresponding report parameter.
Switch to Print Preview and select the required orders and products. Click Submit to generate the document.
The Report and Dashboard Server provides a particular set of functions that enable you to access information about a current user. Using these functions, you can make a data model return specific subsets of data to different users. This restricts access to sensitive information in your database, and users cannot access and modify these functions unless they have the required permissions.
However, non-privileged users are still able to use these functions at the document level (to further filter the data available to them).
To learn more on this, see the "User-Specific Functions" section in the Manage Data Models and Connect to Data document.
Besides filtering report data, you can use report parameters to accomplish the following tasks:
Data Binding
You can bind a report control to a parameter and display its value in the report by dropping the parameter from the Field List onto the required band. This creates a Label control bound to the parameter as with an ordinary data field.
Calculated Fields
Parameters can participate in constructing expressions for calculated fields as well as standard data fields. The only difference is that a parameter is inserted into the expression text using the Parameters. prefix before its name.
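For instance, a calculated field expression combining data fields with a report parameter might look like this (the field and parameter names are only examples):

    [UnitPrice] * [Quantity] * (1 - [Parameters.discount])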
See the Schedule a Document and Select Subscribers document to learn how to pass report parameters to a scheduled report. | https://docs.devexpress.com/ReportServer/17739/create-reports/web-report-designer/add-parameters-and-filter-data | 2019-03-18T17:34:46 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.devexpress.com |
DirectoryEntry Class
Definition
The DirectoryEntry class encapsulates a node or object in the Active Directory Domain Services hierarchy.
public ref class DirectoryEntry : System::ComponentModel::Component
[System.ComponentModel.TypeConverter(typeof(System.DirectoryServices.Design.DirectoryEntryConverter))] [System.DirectoryServices.DSDescription("DirectoryEntryDesc")] [System.ComponentModel.TypeConverter(typeof(System.DirectoryServices.DirectoryEntryConverter))] public class DirectoryEntry : System.ComponentModel.Component
type DirectoryEntry = class inherit Component
Public Class DirectoryEntry Inherits Component
- Inheritance: Object → MarshalByRefObject → Component → DirectoryEntry
- Attributes: TypeConverterAttribute, DSDescriptionAttribute
Remarks
Note
It is assumed that you have a general understanding of Active Directory Domain Services before using this class. For more information, see the System.DirectoryServices namespace overview.
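A minimal usage sketch is shown below; the LDAP path and property name are placeholders, not values taken from this page.

    using System;
    using System.DirectoryServices;

    class Example
    {
        static void Main()
        {
            // Bind to a node in the directory; the LDAP path is a placeholder.
            using (DirectoryEntry entry = new DirectoryEntry("LDAP://CN=Users,DC=fabrikam,DC=com"))
            {
                // Read a property of the bound directory object.
                Console.WriteLine(entry.Properties["description"].Value);

                // Enumerate the direct children of this node.
                foreach (DirectoryEntry child in entry.Children)
                {
                    Console.WriteLine(child.Name);
                }
            }
        }
    }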
Constructors
Properties
Methods
Events
Security
DirectoryServicesPermission
LinkDemand | https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.directoryentry?redirectedfrom=MSDN&view=netframework-4.7.2 | 2019-03-18T17:54:33 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.microsoft.com |
in the NFL last season.
You play as college dropout Devin Wade, who walked away from the game after the death of his father (played by Mahershala Ali of House of Cards fame), as he tries to resurrect his career weeks before the NFL draft. The league’s letter outlined a few areas for potential research that included pain management for both acute and chronic conditions.
Those who read the news know Reggie Rogers’problem has been alcohol including at least six drunk driving arrests resulting in 2016 basketball jersey design two prison terms, one the consequence of an auto accident that left three teens dead. If the Star Tribune played the national anthem every time I tried to go to work which they don as is the case with most jobs that is how I would be expected to behave..
Some, such as Seattle Seahawks defensive end Michael Bennett have football pictures remained seated on the bench during the anthem whereas others have stood with one fist raised in the air.. The Browns who got some bad news with top pick Myles Garrett’s injury will be an improved team, and it wouldn’t be shocking if they covered this big number at home.
The Bills say he’ll be able to return to the field after missing all but three games in 2015 because of a neck injury. They tend to make among the highest salaries while still having longer careers. The Bills will be expected to start their first three draft picks in cornerback Tre’Davious White, wide receiver Zay Jones and offensive tackle Dion Dawkins.
Jonathan Vaughters: You always in contact with a lot of people in the cycling world. Much of this is reaction to cornerback Richard Sherman’s memorable postgame boasting last week, making him The Face, or at least The Mouth, of the Seahawks before the media in New York this week.
His life should mean something, and waiting until the UFC card on Sept. Five time Grammy winning recording artist CeeLo Green is set to sing the National Anthem, and Flo Rida is scheduled to be the halftime entertainment.. Dareus was subsequently charged with felony possession of a controlled substance and possession of drug paraphernalia.
Contract Extensions and Endorsement DealsThe six figure player bonuses seem minuscule in comparison with the millions of dollars generated by the Super Bowl through ticket sales, television rights and advertising. (AP Photo/Bruce Kluckhohn). That dip was when the NFL Draft Advisory Board changed its feedback system to no longer include late round projections.
Two other members of that team, Shawn Lee, 44, and Chris Mims, 38, also have died due to heart problems. In making this determination, we have accepted the findings contained in the comprehensive report independently prepared by Mr. All five services have ESPN, NBC and Fox at least in theory.
The event, with tickets at $100 $150, will be a benefit for his son non profit Stillpont Family Resources Counseling center. Gone is Pittsburgh Steeler Jason Worilds, who wants to “pursue other interests.” Gone are Jake Locker and Cortland Finnegan.
But for over 240 football games, he stood at the 50 yard line, hat and hand over his chest and listened to the chords that Francis Scott Key wrote hundreds of years ago. (DraftKings told Advertising Age that in lieu of pricey in game spots, it would concentrate on buying spots in relevant shows such as ESPN’s “Fantasy Football Now” and NFL Network’s “NFL Fantasy Live,” as well as in various pre game shows.
Historic Hurricane Ophelia bears down amid warnings it. Getting a Heisman finalist on the depth chart behind the 2003 winner isn’t a bad idea. If Dupre happens to run the wrong route on a play or his pattern needs to be slightly altered, Rodgers will pull him aside, explain what the young player should do and the two will run the play again..
The Cowboys were plucky upstarts, a year removed from missing the playoffs and supposedly in rebuilding mode. Students getting busy at school is a lot easier to explain than teachers. Because of the personal appearances that are required, cheerleaders must be able to speak clearly and represent the team appropriately while maintaining a cheery outlook and positive personality..
They understand the power, the platform, the position they have in the lives of young people, and they’re going to use that to change the arc of every young person’s life. With Mariota out Sunday they were challenged to set the tone for the day and dominant up front against the Dolphins.
Yeah the Lions were very impressive this year, it was a longtime coming.). | https://docs.subiz.com/think-of-better-more-productive-way-to-support-black-lives-matter-like-donating-your-time-and-money-to-youth-groups-ov/ | 2019-03-18T18:25:56 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.subiz.com |
3.2 inches tall and a 2000 mAh battery make the Stick Prince Baby small and powerful. The TFV12 Baby Prince Tank has a 4.5ml capacity, making for less time spent refilling your tank. The Baby Prince Tank comes with their newest, innovative mesh coil. It makes for a larger heating area and a more powerful vaping experience. Available in store or call 410-327-3676 to order.
Get your SMOK Stick Prince Baby and other SMOK products at Doc's Smokeshop in Baltimore, Md
Docs Smokeshop. Best Smokeshop in Baltimore. Best Vaporizers in Baltimore. Great selection of pipes, vaporizers, and accessories. | https://docssmokeshop.com/smok-stick-prince-baby-tfv12-baby-prince-tank-smok-mesh-coil-where-to-buy-smok-stick-prince-baby-in-stock-baltimore-maryland-docs-smokeshop | 2019-03-18T18:33:40 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docssmokeshop.com |
Checklist
We’re keen to see you get value for money from your Hourfleet subscription. So before you sign up, there some important things you’ll need to understand, and a few things you’ll need to do.
Business plan, marketing, customer care and insurance. It should come as no surprise that having a great platform like Hourfleet is only a very small part of your business success. If you’re establishing a car sharing business then you’re going to need to spend plenty of time on these things. You can work on them while you prepare your Hourfleet tenancy as guided below.
Understanding an Hourfleet Tenancy. You will need to host, create and maintain your own landing page, which is unique to your business. You can bespoke build this, or use tools like Squarespace, Wix or others.
Configuring your Hourfleet Tenancy. These are a large number of configuration options which help you establish the branding, business rules and other details which underpin your tenancy.
Integrations. The most important integration is with Stripe.com. Specific instructions are here
Let’s go! If you’re ready then get started now by choosing a subscription plan and ordering your initial batch of car-kits | http://docs.hourfleet.com/stepbystep.html | 2019-03-18T17:52:49 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.hourfleet.com |
All content with label cache+data_grid+docs+infinispan_user_guide+user_guide.
Related Labels:
podcast, cloud, remoting, mvcc, datagrid, notification, tutorial, server, presentation, xml, read_committed, replication, distribution, transactionmanager, query, deadlock, resteasy, hibernate_search, intro,
contributor_project, pojo_cache, lock_striping, async, xaresource, guide, schema, listener, searchable, memcached, demo, grid, jcache, api, client, xsd, non-blocking, jpa, tx, article, gui_demo, documentation, eventing, student_project, youtube, client_server, infinispan, userguide, 缓存, hotrod, streaming, repeatable_read, consistent_hash, interface, whitepaper, clustering, jta, faq, large_object, spring, jsr-107, lucene, concurrency, fine_grained, jboss_cache, locking, index, events, hash_function, configuration, rest, pojo
Allocate resources from resource workbench
Review the availability of all resources in a group and then allocate the named resource to a project using resource workbench.
Before you begin: Ensure that resource workbench is open. Role required: resource_manager
Procedure:
1. Click the Allocate Resources menu in the workbench header to switch to Allocation flow view.
2. Click the filter icon in the Resource Plans list to filter the list of resource plans displayed.
3. Select a resource plan to view its allocation details. A brief overview and Suggested Allocation Breakdown table for the selected plan is displayed. The resource allocation breakdown differs based on the Member Preference selected for the resource plan. Heat maps for the available hours and forecasted utilization for the resources are displayed.
4. Click to open and modify the display settings for the members displayed in heat maps. All resources: shows the heat map for all resources. Matching resources: shows the heat map for resources matching the resource request.
5. Review the resources that are allocated by the system in the Suggested allocation breakdown table and view their availability and/or utilization in the respective heat maps.
6. Click the edit icon to open and modify the suggested resource allocations, if required.
7. Click Allocate to allocate the resources to the resource plan. The resource plan moves into the Allocated state. Soft allocations are converted to hard allocations when the resource plan moves to the Allocated state.
8. Click one of the following under Allocate:
   - Edit Resource Plan: Opens the selected plan to modify.
   - Edit Allocations: Opens the suggested resource allocations for the plan. You can make changes to the suggested allocations.
   - Cancel Resource Plan: Cancels the selected plan.
API Reference
The reference section contains documentation of Passlib’s public API. These chapters are focused on providing detailed reference of the individual functions and classes; they will generally be cross-linked to any related walkthrough documentation (which tries to provide a higher-level synthetic view).
Note
Primary modules:
The primary modules that will be of interest are:
Caution
Internal modules:
The following modules are mainly used internally, may change structure between releases, and are documented mainly for completeness:
Alphabetical module list:
passlib.apache- Apache Password Files
passlib.apps- Helpers for various applications
passlib.context- CryptContext Hash Manager
passlib.crypto- Cryptographic Helper Functions
passlib.exc- Exceptions and warnings
passlib.ext.django- Django Password Hashing Plugin
passlib.hash- Password Hashing Schemes
passlib.hosts- OS Password Handling
passlib.ifc– Password Hash Interface
passlib.pwd– Password generation helpers
passlib.registry- Password Handler Registry
passlib.totp– TOTP / Two Factor Authentication
passlib.utils- Helper Functions | https://passlib.readthedocs.io/en/stable/lib/index.html | 2019-03-18T17:23:53 | CC-MAIN-2019-13 | 1552912201521.60 | [] | passlib.readthedocs.io |
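As a quick orientation, most applications start with the passlib.hash module listed above; a minimal, self-contained example:

    from passlib.hash import pbkdf2_sha256

    # Hash a password; a random salt is generated automatically.
    digest = pbkdf2_sha256.hash("correct horse battery staple")

    # Later, verify a candidate password against the stored hash.
    assert pbkdf2_sha256.verify("correct horse battery staple", digest)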
Grafana Documentation
Installing Grafana
Installing on Linux
Installing on Mac OS X
Installing on Windows
Grafana Cloud
Nightly Builds
For other platforms, read the build from source instructions for more information.
Guides
What is Grafana?
Grafana feature highlights.
Configure Grafana
Article on all the Grafana configuration and setup options.
Getting Started
A guide that walks you through the basics of using Grafana
Provisioning
A guide to help you automate your Grafana setup & configuration.
What's new in v6.0
Article on all the new cool features and enhancements in v6.0
Screencasts
Video tutorials & guides | http://docs.grafana.org/ | 2019-03-18T18:49:47 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.grafana.org |
List research
Populating new entries
For list requests, the research team should seek to identify a potential source of organization identifiers.
The research process may involve:
- Online desk research;
- Consultation with the wider standards community;
- Consultation with local experts;
Once you have identified a good candidate list to meet a particular need, assign it a prefix as detailed below, and then fill in the detailed metadata.
Validating an existing entry
For stub entries or proposals with incomplete information, researchers should:
- Check the list title - and make sure it follows the rules for multilingual titles
- Write a clear description of the identifier list describing the way in which organisations end up on the list. This should be 1 - 2 paragraphs maximum. You may use content quoted from the registers own website, or wikipedia pages.
- Fill in the list metadata
Research sources
When carrying out research the following resources may be useful. All researchers are encouraged to familiarise themselves with these resources.
Useful websites
- Investigative dashboard - list of company registers
- Wikipedia. Primary and secondary identifier lists are usually notable enough to have a Wikipedia page, at least in the language of the country concerned. For official registers - look for information about the legislation that created the register in order to understand the organisation types it covers. See in particular List_of_company_registers and Types_of_business_entity
- The World Bank Doing Business country profile give detailed information on company registration processes - and can provide useful hints as to the official company register and identifiers in a country.
- OpenCorporates Open Company Index
- European Commission list of Business registers in Member States
- Wiki Procedure
- Open Corporates policy paper on handling company number problems
Existing use
For codes taken from the IATI Codelist, you can search for records that currently use this code via OIPA.
For example, the following query returns a list of activities (in JSON format) with reporting or participating organisation identifiers that start with ‘ET-MFA’
Checking each of the linked activities can give an indication of the kinds of identifier in use.
Current usage in IATI is no guarantee of a correct identifier, but, it can give clues as to the kinds of identifiers you are looking for, and can help validate organisation list information. | http://docs.org-id.guide/en/latest/research/ | 2019-03-18T17:57:38 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.org-id.guide |
push.initial-mode (collection.cfg setting)
Description
Sets the initial mode in which push should operate in. This is only read the first time Push starts, ie the first time a editing call is made to the Push API. If the push collection has already been started at least once, you can change the push run mode by running:
POST /push-api/v1/collections/test-push2/mode/?mode=<VALUE>
where value is either 'DEFAULT' or 'SLAVE'.
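For example, switching an already-started push collection to query-processor mode could be done with a call along these lines (the server address, port and credentials are placeholders for your own Funnelback server):

$ curl -u admin:PASSWORD -X POST "https://funnelback.example.com:8443/push-api/v1/collections/test-push2/mode/?mode=SLAVE"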
Default value
By default this is set to DEFAULT, which means the push collection can be used to add and delete documents.
push.initial-mode=DEFAULT
Example value
If you wish to set up a query processors for a Push collection you can set:
push.initial-mode=SLAVE
before the push collection is started. | https://docs.funnelback.com/collections/collection-types/push/push-collection-options/push_initial_mode_collection_cfg.html | 2019-03-18T18:24:54 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.funnelback.com |
SAM Authorization Limitations
SAM’s roles and access control policies are maintained in SAM.
Creation of users and assignment to roles must be done using the SAM UI. There is no support to import users from KDC/AD.
Role assignment is at a user level. Assigning roles to a group is not supported.
New roles cannot be created, and the out-of-the-box roles cannot be edited. However, the collaboration sharing features allow you to share each of the five resources across users, meeting most use-case requirements.
Bringing Your Animation on Model
If your primary and secondary animation were done as a rough drawing, it is now time to put your drawing on model. This means that you have to review your animation and ensure that every single detail is on model and there is no volume distortion. You can do this directly on the original sketch layer or on a new layer.
If your primary and secondary animation was done quite on model, you can proceed directly to the animation clean-up. If not, proceed the same way as you did for the secondary animation task to bring your animation on model.
Sound Scrubbing
Harmony uses a process known as Sound Scrubbing to let you hear sound in real-time while you move the playback pointer forward or backward. This is very useful for finely-tuned lip-synching. You can scrub sounds from the Timeline view.
This section gives you a quick understanding of how to connect your device to WSO2 IoT Server as an enterprise IoT solution:
Before you begin
- Install Oracle Java SE Development Kit (JDK) version 1.8.* and set the JAVA_HOME environment variable. For more information on setting up JAVA_HOME on your OS, see Installing the Product.
- Check out system requirements section and ensure that you have all the appropriate prerequisite software installed on your system.
Download WSO2 IoT Server
Download WSO2 IoT Server.
Tip: If you want to try out the advanced device search capabilities, download the WSO2 IoT Server 3.10-Update1 pack and try it out.
Copy the downloaded file to a preferred location and unzip it. The unzipped file is called <IOTS_HOME> throughout this documentation.
The downloaded WSO2 IoT Server file is large. Therefore, when unzipping, it might extract halfway through and stop. To avoid this, we recommend that you unzip the file via the terminal.
Example:
unzip wso2iot-3.1.0.zip
- The maximum character count supported for a file path in the Windows OS is 260. If this count is exceeded when extracting the pack into a directory, you will get the Error 0x80010135: Path too long error. To overcome this issue, use the commands given below:
Create a substring and map the current file path to it.
In the example given below, the WSO2 IoT Server .zip file is located in the C:\Users\Administrator\Downloads\pre-beta directory.
C:\Users\Administrator\Downloads\pre-beta>subst Y: C:\Users\Administrator\Downloads\pre-beta
Copy the IoT Server zip folder to the new path you created and unzip the file there.
Example: Unzip the file in the Y: directory.
Start the WSO2 IoT Server profiles in the following order:
Start the broker profile, which corresponds to the WSO2 Message Broker profile.
cd <IOTS_HOME>/bin ./broker.sh
cd <IOTS_HOME>\bin broker.bat
The default port assigned for the broker is 9446.
Start the core profile, which corresponds to the WSO2 Connected Device Management Framework (WSO2 CDMF) profile.
Create an account
Fill out the registration form.
- First Name: Provide your first name.
- Last Name: Provide your last name.
- Username: Provide a username. It should be at least 3 characters long with no white spaces.
- Password: Provide a password. It should be at least 8 characters long.
- Confirm Password: Provide the password again to confirm it.
- Click Register.
Click LOGIN.
Start the virtual fire alarm
Follow the steps given below to start the virtual fire alarm device:
If you are a new user, click Enroll New Device.
If you have enrolled devices before, click Add under Devices.
- Click Try to try out the Virtual Firealarm, which is listed under Virtual Device Types.
- Download the device:
- Click Download Agent to download the device agent.
Enter a preferred name for your device and click DOWNLOAD NOW.
Unzip the downloaded agent file and navigate to its location via the terminal.
Start the virtual fire alarm.
./start-device.sh --> For Linux/Mac/Solaris start-device.bat --> For Windows
Once you start your virtual fire alarm, the fire alarm emulator will pop up. You can then monitor real-time data via the Device Details pages.
What's next?
Follow the options given below to see what you can do next:
- Do you have an Android device? Try out the Android Sense device type supported by default on WSO2 IoT Server. For more information, see Android Sense.
- Want to try out more devices? Connect the devices listed below to WSO2 IoT Server and try them out.
- Need to create a new device type and connect it to WSO2 IoT Server? For more information, see the Device Manufacturer Guide. | https://docs.wso2.com/display/IoTS310/Enterprise+IoT+solution | 2019-03-18T17:34:55 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.wso2.com |
push.replication.ignore.index-redirects (collection.cfg setting)
Description
Specifies if push collections running as query processors (aka slaves) should download the redirect data placed on indexes. This redirect data is used to resolve redirects on the index outside of query processing. When set 'true' push query processors will not be able to resolve redirects outside of the query processor such as on tuning.
Default value
Defaults is to include the redirect data.
push.replication.ignore.index-redirects=false
Example value
To save bandwidth you may choose to ignore the redirect files.
push.replication.ignore.index-redirects=true | https://docs.funnelback.com/collections/collection-types/push/push-collection-options/push_replication_ignore_index_redirects_collection_cfg.html | 2019-03-18T17:34:39 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.funnelback.com |
Configure cart layout for specific items
Service catalog enables you to set fields in the Catalog Item form to configure the cart layout for specific items. This overrides any general cart layout settings. For example, you can hide an item's price for specific items. Service catalog also enables you to use additional methods to configure cart behavior or layouts, which override cart layout record settings.
Related Tasks: Configure cart layout, Configure order guide widgets, Configure catalog item widgets, Configure desktop order status screen, Configure mobile order status screen, Configure mobile shopping cart screen, Configure one-step shopping cart screen, Configure two-step shopping cart screen
Related Concepts: Cart layout considerations, Migrate cart layouts, Legacy flexible checkout and delivery forms
Verify contract administrator assignment for notification
An event runs automatically each night to send reminders to contract administrators about contract expiration dates so they can renew or renegotiate the contract. You can verify that the right contract administrator is assigned to the contract.
Before you begin: Role required: contract_manager or admin
Related Concepts: Condition check definitions. Related Topics: Email and SMS notifications
Hints
Here are the available hint points:
Contour Hint
The Contour Hint point is used on the colour fill zone and brush lines; in other words, on Contour vectors. It allows the control of line thickness and contour position. Also, if a contour is not animated the way that it should be you can use hints to correct the animation. For example, if a flag is not waving properly.
When adding a Contour Hint point, make sure that you place it far enough away from the contour so that you can see it snap to the contour.
The Contour Hint points are yellow.
Zone Hint
By default, the system automatically morphs the droplet to the nearest colour zone. This is not always correct. A Zone Hint will force a colour zone to morph with another one.
Zone Hint points are cyan in colour so you can easily see them.
Pencil Hint
Pencil Hint points are displayed in their own colour so you can easily see them.
Vanishing Point Hint
A Vanishing Point Hint is used to control the trajectory of a vanishing shape. A shape will vanish from the source drawing when there is no corresponding shape in the destination drawing. If you do not place a Vanishing Hint to control the point of disappearance, the shape will vanish into its centre.
Vanishing Point Hint points are green in colour so you can easily see them.
Appearing Point Hint
Appearing Point Hint points are also displayed in their own colour so you can easily see them.
Package lexruntimeserviceiface
Overview
Package lexruntimeserviceiface provides an interface to enable mocking the Amazon Lex Runtime Service service client for testing your code.
It is important to note that this interface will have breaking changes when the service model is updated and adds new API operations, paginators, and waiters.
LexRuntimeServiceAPI provides an interface to enable mocking the lexruntimeservice.LexRuntimeService (Amazon Lex Runtime Service) client for testing your code. For example:

    func myFunc(svc lexruntimeserviceiface.LexRuntimeServiceAPI) bool {
        // Make svc.PostContent request
    }

    func main() {
        sess := session.New()
        svc := lexruntimeservice.New(sess)
        myFunc(svc)
    }
In your _test.go file:
    // Define a mock struct to be used in your unit tests of myFunc.
    type mockLexRuntimeServiceClient struct {
        lexruntimeserviceiface.LexRuntimeServiceAPI
    }

    func (m *mockLexRuntimeServiceClient) PostContent(input *lexruntimeservice.PostContentInput) (*lexruntimeservice.PostContentOutput, error) {
        // mock response/functionality
    }

    func TestMyFunc(t *testing.T) {
        // Setup Test
        mockSvc := &mockLexRuntimeServiceClient{}

        myFunc(mockSvc)

        // Verify myFunc's functionality
    }
    type LexRuntimeServiceAPI interface {
        PostContent(*lexruntimeservice.PostContentInput) (*lexruntimeservice.PostContentOutput, error)
        PostContentWithContext(aws.Context, *lexruntimeservice.PostContentInput, ...request.Option) (*lexruntimeservice.PostContentOutput, error)
        PostContentRequest(*lexruntimeservice.PostContentInput) (*request.Request, *lexruntimeservice.PostContentOutput)

        PostText(*lexruntimeservice.PostTextInput) (*lexruntimeservice.PostTextOutput, error)
        PostTextWithContext(aws.Context, *lexruntimeservice.PostTextInput, ...request.Option) (*lexruntimeservice.PostTextOutput, error)
        PostTextRequest(*lexruntimeservice.PostTextInput) (*request.Request, *lexruntimeservice.PostTextOutput)
    }
Meribook has native integrations with autoresponders and other marketing tools.
5 articles in this collection
Using this simple integration, you can deliver your lead magnet from Meribook and Add Subscriber to your Activecampaign List for email follow-ups and nurturing.
Using convertbox, you can not only send the subscriber to Meribook and give them access to your content you can connect to many other providers as well.
Add users from your Meribook campaign to your Gist account via the API integration. You can add/remove tags to your contacts so that you can segment your list for email nurturing.
You can connect your Mailchimp account and update the list when a subscriber signs up to your Meribook campaign.
From your Mailchimp account, you can submit subscribers to Meribook and automatically provide the subscriber access to any book.
Rate limiting in Swift is implemented as a pluggable middleware. Rate limiting is performed on requests that result in database writes to the account and container sqlite dbs. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks.
All configuration is optional. If no account or container limits are provided there will be no rate limiting. Configuration available:
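These options are set on the ratelimit filter in the proxy server configuration. A minimal sketch of where they live is shown below; the values are only illustrative, not recommendations, and the ratelimit filter must also appear in the proxy server's pipeline ahead of the proxy-server app.

    [filter:ratelimit]
    use = egg:swift#ratelimit
    clock_accuracy = 1000
    max_sleep_time_seconds = 60
    account_ratelimit = 0
    container_ratelimit_100 = 100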
The container rate limits are linearly interpolated from the values given. A sample container rate limiting could be:
container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20
This would result in the following limits, linearly interpolated between the configured points: containers with up to 99 objects are not limited; a container with 100 objects is limited to 100 requests/sec; 150 objects to 75 requests/sec; and 500 or more objects to 20 requests/sec.
The above ratelimiting is to prevent the “many writes to a single container” bottleneck from causing a problem. There could also be a problem where a single account is just using too much of the cluster’s resources. In this case, the container ratelimits may not help because the customer could be doing thousands of reqs/sec to distributed containers each getting a small fraction of the total so those limits would never trigger. If a system administrator notices this, he/she can set the X-Account-Sysmeta-Global-Write-Ratelimit on an account and that will limit the total number of write requests (PUT, POST, DELETE, COPY) that account can do for the whole account. This limit will be in addition to the applicable account/container limits from above. This header will be hidden from the user, because of the gatekeeper middleware, and can only be set using a direct client to the account nodes. It accepts a float value and will only limit requests if the value is > 0.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/swift/queens/ratelimit.html | 2019-03-18T18:20:42 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.openstack.org |
Add an IoC to an attack mode/method
In addition to importing indicators as STIX data, you can add IoCs to an attack mode/method manually.
Before you begin: Role required: sn_ti.admin
Procedure:
1. Navigate to Threat Intelligence > IoC Repository > Attack Mode/Method.
2. Click the attack mode to which you want to add an IoC.
3. Click the Related Indicators related list.
4. Click Edit.
5. As needed, use the filters to locate the IoC you want to add.
6. Using the slushbucket, add the IoC to the Related Indicators list.
7. Click Save.
Related Tasks: Define an attack mode/method, Add a related attack mode method, Add associated task to an attack mode/method
While using the Trigger feature, there are some things you need to know:
- Trigger is a premium feature which is only supported in Premium or Trial accounts
- In the Standard plan, you can create at most 3 triggers. In the Advanced plan, you can use an unlimited number of triggers.
- If you select “Fire only once per visitor”: the trigger will be executed only once during a visit
- If you change any parameter in a trigger, the changes will be applied in the next visit for all visitors who are browsing the website.
- Click (+) to add more conditions or actions
- In one visit, a trigger executes the auto-invitation action only once
- In case a visitor’s information meets the conditions of more than one trigger, Subiz will execute the trigger that was created earlier based on the trigger list.
For example:
You set 2 triggers :
- Trigger 1: Auto invite visitors who browsed at least 1 page
- Trigger 2: Auto invite visitors who come to a certain page URL (for example: Payment page)
So when a visitor’s information meets both 2 conditions (Visitor page count >= 1 and page URL equals to abc.com/payment.html), Trigger 1 will be executed. | https://docs.subiz.com/trigger-note/ | 2019-03-18T18:04:23 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.subiz.com |
If you want to integrate with Marketo via the REST API, you'll need to create an API only user. Here's how.
Prerequisites
Admin Permissions Required
1. Under Admin, click Users & Roles.
2. Click Invite New User.
3. Enter an Email, First Name, and Last Name for the API only user. Click Next.
Tip
Add an optional reason or an access expiration date. Access expiration dates are handy for short-term employees.
4. Select the API Only role and check the API Only checkbox. Click Next.
5. Click Send.
Note
The pop-up says, "An invitation is not required for API only," but that doesn't mean you've done something wrong. It just means we'll create the role without an invite email having to be sent.
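This page stops at creating the user. To actually call the REST API you would typically associate the user with a custom LaunchPoint service, which yields a Client ID and Client Secret, and then request an access token. A typical token request looks roughly like this (the Munchkin ID, client ID and secret are placeholders, not values from this page):

GET https://<munchkin-id>.mktorest.com/identity/oauth/token?grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>

The JSON response contains an access_token that is passed on subsequent REST calls.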
Form Hidden Component
The Core Component Form Hidden component allows for the display of a hidden field.
Usage
The Core Component Form Hidden Component allows for the creation of hidden fields to pass information about the current page back to AEM and is intended to be used along with the form container component.
The field properties can be defined by the content editor in the configure dialog.
Version and Compatibility
The current version of the Form Hidden.
Sample Component Output
HTML
<div class="cmp cmp-form aem-GridColumn aem-GridColumn--default--12"> <form method="POST" action="/content/we-retail/us/en/experience.html" id="new_form" name="new_form" enctype="multipart/form-data" class="aem-Grid aem-Grid--12 aem-Grid--default--12 "> <input type="hidden" name=":formstart" value="/content/we-retail/us/en/experience/jcr:content/root/responsivegrid/container"> <div class="visible aem-GridColumn aem-GridColumn--default--12"> <input type="hidden" id="ghostToast" name="Invisible Toast" value="ghostToast"> </div> </form> </div>
JSON
"container": { "columnClassNames": "aem-GridColumn aem-GridColumn--default--12", "columnCount": 12, "gridClassNames": "aem-Grid aem-Grid--12 aem-Grid--default--12", ":items": { "hidden": { "columnClassNames": "aem-GridColumn aem-GridColumn--default--12", ":type": "weretail/components/form/hidden", "name": "Invisible Toast", "id": "ghostToast", "value": "ghostToast" } }, ":itemsOrder": [ "hidden" ], ":type": "weretail/components/form/container" }
Technical Details
The latest technical documentation about the Form Hidden Component can be found on GitHub.
Further details about developing Core Components can be found in the Core Components developer documentation.
Configure Dialog
The configure dialog allows the content author to define the parameters of the hidden field.
- Name The name of the field, which is submitted with the form data
- Value The value of the field, which is submitted with the form data
- Identifier The identifier should be unique on the page and can be used to bind scripts to this form field
Because the Form Hidden component normally has no visible attributes, the component's placeholder in the editor displays the Name and Value field values if they are assigned in order to help the author identify the appropriate Form Hidden component.
Design Dialog
There is no design dialog for the Form Hidden component.
This page has been moved to the wiki here.
Please use the wiki, as this information here is out of date.
The rpt.conf configuration file holds configuration information for app_rpt, the Asterisk repeater application. It is a complex configuration file, with a large number of options. It will be helpful to have a copy of rpt.conf in front of you while reading this section.
rpt.conf relies heavily on the use of stanzas to glue related bits of information together. Stanzas are also referenced by other stanzas by using key=value pairs to reference the other stanza within a given stanza. The node stanza makes use of a lot of references to other stanzas within rpt.conf.
Within rpt.conf there are several stanza types. They are summarized below.
There may be several stanzas of the same type in rpt.conf. For example, a system with two nodes defined will have two node stanzas. | https://docs.allstarlink.org/node/17 | 2019-05-19T13:17:49 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.allstarlink.org |
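As a rough illustration of the stanza-and-reference pattern described above, a heavily trimmed rpt.conf might look like the following. The node number, channel name and values are placeholders only, and (as noted above) current options should be taken from the wiki:

    [1999]                          ; node stanza for node 1999
    rxchannel = SimpleUSB/usb1999   ; radio interface channel driver
    duplex = 2                      ; full-duplex repeater operation
    functions = functions1999       ; references the [functions1999] stanza below

    [functions1999]                 ; DTMF function stanza referenced by the node stanza
    1 = ilink,1                     ; disconnect the specified link
    2 = ilink,2                     ; connect to a node in monitor-only mode
    3 = ilink,3                     ; connect to a node in transceive mode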
OpenID Connect 1.0. For more information about OpenID Connect, see the specification, OpenID Connect Core 1.0.
Here is a sample OpenID Connect request to Azure AD:
In addition to the support for the id_token response type, we have added support for the following parameters in the request.
Note
The response_mode parameter is required, because the current default response encoding is in a query parameter. This behavior is incompatible with the specification and the default value is likely to change. To prevent your client from failing in the future, include the response_mode parameter in the request with a value of fragment or form_post. | https://docs.microsoft.com/en-us/previous-versions/azure/dn645541%28v%3Dazure.100%29 | 2019-05-19T13:07:04 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.microsoft.com |
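(The sample request referred to above did not survive extraction. A representative sign-in request has roughly the following shape; the tenant, client_id, redirect_uri and nonce values are placeholders, and the endpoint host may differ for your environment.)

    GET https://login.microsoftonline.com/<tenant>/oauth2/authorize?
        client_id=<application-id>
        &response_type=id_token
        &redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp%2F
        &response_mode=form_post
        &scope=openid
        &nonce=678910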
Vortex can be run from the command line, to run a script without a Web server or for maintenance scripts. The last argument is usually either a script to run or a SQL command to execute.
The Vortex command line syntax is:
texis [options] [var=value] [var=value] [srcfile-or-SQL-command]
(The brackets (
[]) indicate an optional part of the syntax.)
srcfile-or-SQL-command is either the full path of the Vortex script to run, or a SQL command to execute on the database. Either is required, unless a standalone action like -W is invoked.
If a script is named, it has the same syntax as the URL would have, without the CGI program (e.g. /path[/+state][/function.mime][/+/userpath]). The path is interpreted as an ordinary file, not relative to ScriptRoot or document root. Note that running a script from the command line may result in inaccurate URLs being generated by the $url and $urlroot variables, as the texis CGI script path cannot be determined without the server.
Variables may be assigned on the command line for a script with the var=value syntax. Both the var name and its value should be URL-encoded (for portable command-line escapement). Variable assignments may be intermixed with other options, but like all options, must occur before the script name which is last on the command line. Assigning the same variable multiple times gives it multiple values, in command-line order.
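For example, the following command (with made-up paths) runs a script against a specific database and passes two URL-encoded values for the variable query:

    texis -d /usr/local/mydb query=red%20apples query=green%20pears /usr/local/scripts/search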
If a SQL command is given, the database defaults to the current directory, or the one named with -d is used. The result rows are printed out, in either columns (the default) or one field per row (if -c given). No Vortex variables may appear in the command; any parameters must be given as literal (single-quoted) values.
In addition to the following options, see also the library options (here) and schedule options (here). | https://docs.thunderstone.com/site/vortexman/vortex_command_line_options.html | 2019-05-19T13:23:38 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.thunderstone.com |
Build and tests are run automatically whenever a new change is pushed to the GitHub repository or when a pull request is created.
The major advantage is that the tests are executed before a pull request is merged, so test failures can be found very early. (Jenkins usually runs the tests after a pull request is merged to master, which sometimes required a follow-up fix for a failed test.)
Another advantage is that, for example, code coverage using Coveralls can be easily added to the Travis build.
Disadvantages
Normally the build runs on Ubuntu 12.04 (or 14.04) LTS workers, but fortunately using Docker images allows using basically any Linux distribution which can be started inside a container.
The Travis workers and the CI service as a whole are out of our control, we cannot change anything there. If the service is down or overloaded we cannot do anything about that.
The workers cannot reach the internal network, e.g. we cannot use the packages from the internal build service.
Using Docker at Travis
As mentioned above, normally Travis runs the builds inside Ubuntu virtual machines providing some quite old package versions. That causes trouble, as YaST uses a newer distribution and expects a newer GCC compiler, Ruby interpreter, libraries... And in some cases the system differences between Ubuntu and (open)SUSE make some tests fail or require specific workarounds in the code.
Fortunately Travis allows using Docker at the nodes. This greatly helps as we can run the build inside a Docker container which is running an openSUSE distribution and avoid all those Ubuntu workarounds and hacks.
Moreover the Docker images allow easily debugging and reproducing of the build issues locally, see below.
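In practice, a Travis failure can usually be reproduced on a local machine with the same image; roughly (the image tag and script name follow the conventions used later on this page, adjust them to your module):

    # from a checked-out YaST module that contains a Dockerfile
    docker build -t yast-test-image .
    docker run -it --rm -e TRAVIS=1 yast-test-image yast-travis-ruby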
Restarting a Build
It may happen that a Travis build fails because e.g. OBS is down and the required packages cannot be downloaded or GitHub times out, etc...
In that case it is possible to manually re-trigger the failed build. Browse to the failed build at Travis and you'll find a Restart Build button in the top right corner for restarting the build.
Make sure you are logged in using your GitHub account; the button is not displayed if you do not have permissions for the respective GitHub repository.
Implementation
When using Docker, Travis still runs the Ubuntu VM, but instead of running the tests directly we download a Docker image with an openSUSE based distribution and run the tests inside.
The Docker overhead should be very small as it is a container based technology (like chroot on steroids) rather than a full virtualization systems like KVM, VirtualBox or others.
For each branch on the git repository a different tag of the docker image is used. E.g. the master branch always uses the latest tag and the SLE12 SP2 maintenance branch uses the sle12-sp tag. To see all available tags check docker image on registry.opensuse.org or dockerhub (more below).
Open Build Service
The YaST:Head OBS project builds the latest YaST packages from the Git master branch. These packages are then used in the Docker images which are then used by the Travis builds. The corresponding subprojects under YaST are used for the maintenance branches.
registry.opensuse.org
Since April 2019 Docker images built in the OBS (ruby and cpp) are used with Travis for the master branch. The configuration is kept in GitHub (ruby and cpp).
Image Rebuild
The Docker images in the OBS are rebuilt automatically.
The Docker Hub
The Docker Hub provides a central place for publishing the Docker images. Some Docker images used at Travis are hosted there.
The YaST images are stored at the yastdevel Docker Hub organization.
Image Rebuild
The Docker images on Docker Hub are periodically rebuilt, the rebuild is triggered by the Jenkins jobs (e.g. docker-trigger-yastdevel-ruby-latest). Images for the master branch are built more often than the ones corresponding to maintenance branches.
An upstream dependency on the base openSUSE repository is also defined, so the images should be rebuilt whenever the upstream is updated.
It is possible to trigger a rebuild manually - log into the Docker Hub, select the image and in the Build Settings section press the Trigger button for the required build tag. (See e.g. the ruby image.).
Parallel Build Jobs
Travis allows using a build matrix which can define multiple independent build environments for each commit. What is good that Travis runs these builds in parallel.
env: - FOO=foo - FOO=bar
This will start two jobs, one with
FOO=foo environment and
FOO=bar in the
other.
This way you can split the Travis work into more smaller parts and run them in parallel:
env: - CMD=quality_check - CMD=security_scan - CMD=compile script: - $CMD
Note: The values needs to be quoted when a space is included.
YaST Example
The YaST Travis script has been adapted to allow running only a subset of the tasks and this can be easily used in Travis:
env: # only the unit tests - CMD='yast-travis-ruby -o tests' # only rubocop - CMD='yast-travis-ruby -o rubocop' # the rest (skip unit tests and rubocop), # -y uses more strict "rake check:doc" instead of plain "yardoc" - CMD='yast-travis-ruby -y -x tests -x rubocop' script: - docker run -it -e TRAVIS=1 -e TRAVIS_JOB_ID="$TRAVIS_JOB_ID" yast-test-image $CMD
This defines three jobs: unit tests, rubocop and the rest (yardoc, package build, ...). You can split the work into less or more jobs if needed.
Limitations
Obviously running the jobs in parallel has also some disadvantages.
- Starting a VM, downloading and building the Docker image takes some time. That means separating a small task which takes just few seconds to run in parallel is usually pointless.
- If Travis is under heavy load then the jobs might not start at once or there even might be delays between the jobs. That means in the edge case running too many small parallel jobs might be slower than running one big sequential job.
You need to find the right balance between parallel and sequential approach. | https://yastgithubio.readthedocs.io/en/latest/travis-integration/ | 2019-05-19T13:38:38 | CC-MAIN-2019-22 | 1558232254882.18 | [] | yastgithubio.readthedocs.io |
Don't have an Instabot account yet?
Overview
This page will guide you through inserting web Instabot into any page of your Squarespace site, or how to configure Instabot to launch when a user clicks a link or button.!"
You've generated your Instabot JavaScript successfully . Now let's insert the JavaScript into our Squarespace site!
Open your Squarespace site, click 'Settings' in the left-menu
- Click 'Advanced'
- Click 'Code Injection'
- Paste in your Instabot JavaScript, then click 'Save'
- That's it! Now, open your conversation in the Instabot portal, finalize the triggers, click deploy, and Instabot will begin appearing in your website!
This section will show you how to configure Instabot to launch when a user clicks on a specific link or button on your Squarespace page.
- To start, let's create and assign an Instabot event-trigger to your conversation. Open your Instabot conversation, click
Edit, then open the
Triggerstab.. In the below form, enter the required parameters and your Instabot JavaScript will be generated for you:
- Now that we've prepared the Instabot JavaScript, we will now insert it into our page. Let's open our page in edit mode:
- Click
Pagesin the left-hand menu
- Find and select the specific page you want to insert Instabot into
- Hover over the page, find, and click
Edit
- Now we will add a code block into our page so the JavaScript has a place to live:
- Hover your mouse anywhere on the page until you see a bubble with a line appear
- Click the bubble, the Content Blocks menu will appear
- In the Content Blocks search field, type
Code, then select the
Codeblock
- Next, we will insert our Instabot JavaScript into the code block:
- Copy the Instabot JavaScript we prepared in the last section
- Paste it into the body of the code block - don't touch the
HTMLor
Display Sourcesettings
- Click
Applyin the code block
- Don't click
Savein the top-left menu yet because we want to add a button to the page before finishing
- Now, we will add a button for the user to click and launch the Instabot:
- Hover your mouse anywhere on the page until you see a bubble with a line appear
- Click the bubble, the Content Blocks menu will appear
- In the Content Blocks search field, type
Button, then select the
Buttonblock
- Give your button some text. We'll call it
Want to learn more about Instabot?
- Finally, we will link the button to our Instabot JavaScript:
- Click the the
Clickthrough URLfield, select
External
- Type in
javascript:buttonClick(), then press Enter
- Click
Apply
- Finally, click
Savein the top-left corner | https://docs.instabot.io/docs/web-squarespace?utm_source=quick_start_guide | 2019-05-19T13:26:51 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.instabot.io |
overview
1
fundamentals
2
get tools
3
start coding
What is desktop development? (overview)
Desktop.
Watch this video about Microsoft desktop development offerings, and then prepare your environment by installing the tools you'll use to build your first desktop application.
LEARN THE FUNDAMENTALS OF DESKTOP DEVELOPMENT
Video | 10 minutes | Desktop Development | June 2010
Client development for Windows involves three main models: Native C++ for programming directly to the Windows APIs, .NET managed code with Win Forms or Windows Presentation Foundation (WPF), and .NET managed code with Silverlight for rapid application development. You can write to each of these programming environments and others with Visual Studio – Microsoft's integrated development environment (IDE). The video will explain when to choose one programming environment over another.
Objective: Get a solid foundation on desktop development.
Select one of the following programs to install:
For help picking the right version of Visual Studio, review the Visual Studio 2010 comparison chart.
For more information about team development, read about Application Lifecycle Management.
Download the sample code, then follow along with these videos to get started coding right away.
FULL CONTROL
Video | 16 minutes | Win32 | June 2010
Win32 is an application programming interface (API) used to create all types of Windows applications. Win32 provides services (like access to files) and user interface elements (like drawing and getting input from dialog boxes) to your applications. Applications that are written on Win32 get access to the broadest set of Windows features.
Next: Learn more about C++ development
CONTROL WITH FASTER DEVELOPMENT
Video | 16 minutes | MFC | June 2010
Microsoft Foundation Class Library (MFC) wraps the Win32 APIs so that they can be more seamlessly used with C++ applications. MFC and C++ together provide a great balance of rapid application development and deep control over the platform for experienced developers.
Next: Learn more about MFC classes
TWEETING FOR RAPID DESKTOP DEVELOPMENT
Video | 30 minutes | WPF | June 2010
WPF is a programming interface used to create graphical applications on Windows. WPF, a component of the Microsoft .NET Framework 4, provides facilities to build user interfaces that employ media, documents, hardware acceleration, vector graphics, scalability to different form factors, integration with Windows, interactive data visualization, and superior content readability.
Next: Learn more about WPF
TWEETING USING SILVERLIGHT TO RUN AN RIA APP ON THE DESKTOP
Video | 34 minutes | Silverlight | June 2010
Silverlight is a programming interface that is used to create graphical applications that run on the web or on Windows. Silverlight, a component of the Microsoft .NET Framework 4, provides facilities to build interactive user experiences for web, desktop, and mobile applications that employ webcam, microphone, and printing when online or offline.
Next: Learn more about Silverlight
Developer Topics
C++: Get started developing with Visual C++
Learn more about Visual C++ and how to develop Windows-based and .NET-based applications.
Windows 7: Get started developing applications
Learn about how to develop and integrate your applications with Windows 7 shell features.
WPF and Windows Forms for the Desktop
Read about the differences between WPF and Windows Forms, and find links to further training.
Silverlight Out of Browser for the Desktop
Learn how to write Silverlight applications that run on the desktop.
LightSwitch: Get started building business applications
Learn how you can quickly create professional-quality business applications, regardless of your development skills.
C# and .NET for Java Developers
Read this article to get an introduction to C# and Visual Studio for Java developers.
Books | https://docs.microsoft.com/es-es/previous-versions/ff380143(v=msdn.10) | 2019-05-19T13:02:41 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.microsoft.com |
Comparing standalone and operational Ambari server set up
Setting up a standalone Ambari Server instance is very similar to setting up an operational Ambari Server instance. Many of the steps are the same, with one key exception: you do not install a cluster using a standalone server instance. A standalone Ambari Server instance does not manage a cluster and does not deploy or communicate with Ambari Agents; instead, a standalone Ambari Server runs as web server instance, serving views for users.
The following table compares the high-level tasks required to set up an operational Ambari Server and a standalone Ambari server: | https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/administering-ambari-views/content/amb_comparing_standalone_and_operational_ambari_server_set_up.html | 2019-05-19T13:30:46 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.hortonworks.com |
VMware Identity Manager.
You can also join or leave the CEIP for this product at any time after installation. | https://docs.vmware.com/en/VMware-Identity-Manager/3.3/vidm_windows_install/GUID-0C7048E8-346E-4813-9A0E-46F1F0692BD0.html | 2019-05-19T13:24:13 | CC-MAIN-2019-22 | 1558232254882.18 | [] | docs.vmware.com |
Our.
Exceptional Care
in Central Massachusetts and Beyond
Vitreo-Retinal Associates has provided outstanding retinal care to patients throughout Central Massachusetts for more than 30 years. As a group of highly specialized eye surgeons,. McCabe, Baker, and Hong also are active participants in multiple clinical trials, important in determining if a new medication or test is effective and safe and looking for new ways to prevent, detect, or treat retinal disease.
What is a Retina Specialist?
Retina specialists train).
Retinal Diseases & Disorders
We treat a wide range of diseases and disorders of the retina. The most common of these include macular degeneration, retinal vein and artery occlusions, diabetic retinopathy, & detached and torn retinas.
Clinical Trials
We are long-time participants in a series of clincial trials that test various procedures and medications as part of our treatment of vitreo-retinal disease. | https://www.retina-docs.com/ | 2019-05-19T13:32:36 | CC-MAIN-2019-22 | 1558232254882.18 | [array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c0981e14d7a9c6e0ef94446/1544126958842/3docs_web.jpg',
'3docs_web.jpg'], dtype=object)
array(['https://static1.squarespace.com/static/5be3093b2971145c1371cf3e/t/5c0a89c4575d1f9e8d285f28/1544194504332/clinical+trials+icon.jpg',
'clinical trials icon.jpg'], dtype=object) ] | www.retina-docs.com |
Contents
The StreamBase Exegy adapter relies on version 3.5.0 or later of the JAR file that implements the Exegy Client API,
XCAPI.jar. This file is supplied as part of your Exegy installation and is not shipped with StreamBase. If you get an error message
whose text refers to
NoClassDefFoundError: com/exegy/xcapi/XCException, make sure this JAR file is locatable by the adapter on the path specified in the CLASSPATH environment variable.
The StreamBase Exegy adapter also relies on version 3.5.0 or later of the native library that implements the Exegy Client
API,
libjnixcapi64.so on Linux and
jnixcapi64.DLL on Windows. This file is supplied as part of your Exegy installation and is not shipped with StreamBase. If you get an error
message whose text refers to
Failed to load Java XCAPI library (jnixcapi64), make sure this file is locatable by the adapter on the path specified in the LD_LIBRARY_PATH environment variable on Linux
or in the PATH environment variable on Windows.
The Exegy API implementation described in this section is a product of a third party, and its specifications and file names are subject to change by Exegy. See your Exegy documentation for the latest information.
In StreamBase Studio, import this sample with the following steps:
From the top-level menu, click> .
Enter
exeto narrow the list of options.
Select Exegy input
ExegySample.sbappfile and select the adapter icon to open the Properties view for the adapter.
Select the Connection Properties tab and enter values for Exegy Server Host, Login Username, and Login Password.
Click the
Run button. This opens the SB Test/Debug perspective and starts the module.
In the Test/Debug Perspective, open the Output Streams view. Observe a tuple on the
Dictionarystream containing a list of tuples, one for each field in the Exegy dictionary.
In the Manual Input view, select the
Admininput stream, enter
connectin the
commandfield, and click . An additional tuple appears in the Output Streams view from the
Statusstream indicating the adapter has connected to the Exegy server.
In the Manual Input view, select the
Subscribeinput stream, enter
subscribe,
US:N:IBM,
level_one, and
equityin the
command,
symbol,
subscriptionType, and
containerTypefields, respectively, then optionally add values for PassThru and maxQuoteRate and click . A tuple appears in the Output Streams view from the
Statusstream indicating the subscription request has been processed, followed by a series of refresh (XC_MESSAGE_TYPE=0), quote (2), and trade (3) tuples on the
Equitiesstream.
In the Manual Input view, again in the
Subscribeinput stream, enter
unsubscribein the
commandfield, leave the remaining fields unchanged, and click . A tuple appears in the Output Streams view from the
Statusstream indicating the unsubscribe request has been processed, and the flow of tuples from the
Equitiesstream stops._exegy
See Default Installation Directories for the default location of
studio-workspace on your system. | http://docs.streambase.com/latest/topic/com.streambase.tsds.ide.help/data/html/samplesinfo/Exegy.html | 2019-08-17T18:47:38 | CC-MAIN-2019-35 | 1566027313436.2 | [] | docs.streambase.com |
Introduction ¶
About this Document ¶ ¶ ¶
If you find a bug in this manual, please be so kind as to check the online version on . From there you can hit the “Edit me on GitHub” button in the top right corner and submit a pull request via GitHub. Alternatively you can just file an issue using the bug tracker: .
Maintaining high quality documentation requires time and effort and the TYPO3 Documentation Team always appreciates support. If you want to support us, please contact us as described in the next section.
Contact the Documentation Team ¶
For general questions about the documentation get in touch with the Documentation Team . | https://docs.typo3.org/m/typo3/guide-installation/master/en-us/Introduction/Index.html | 2019-08-17T18:16:07 | CC-MAIN-2019-35 | 1566027313436.2 | [] | docs.typo3.org |
in the same directory, or in a
testor
testssubdirectory. If the current file is already called
test_foo.py, it will try and find a
foo.pynearby.throughwill-c RET (elpy-shell-send-current-statement)¶
Send current statement to Python shell.
This command sends statements to shell without indentation. If you send nested statements, shell will trow
IndentationError. To send nested statements, it is recommended to select region and run
elpy-shell-send-region-or-buffer
C-M-x (python-shell-send-defun)¶
Similar to
C-c C-c, this will send the code of the current top level class or function to the interactive Python process.
C-c C-k (elpy-shell-kill)¶
Kill the current python shell. If
elpy-dedicated-shellsis non-nil, kill the current buffer dedicated shell..program, which you have to install separately. The
elpy-configcommand will prompt you to do this if Elpy can’t find the program.
It is possible to create a single virtual env for the sole purpose of installing
flake8.
Note on Django runners: by default, elpy runs Django tests with
django-admin.py. You must set the environnement variable
DJANGO_SETTINGS_MODULEaccordingly. Alternatively, you can set elpy-test-django-with-manage to t in order to use your project’s
manage.py. You then don’t need to set the environnement variable, but change virtual envs (see virtualenvwrapper.el). variousto.
C-c C-r i (elpy-importmagic-fixup)¶
Query for new imports of unresolved symbols, and remove unreferenced imports. Also sort the imports in the import statement blocks.. | http://elpy.readthedocs.io/en/latest/ide.html | 2017-01-16T12:51:34 | CC-MAIN-2017-04 | 1484560279176.20 | [] | elpy.readthedocs.io |
EMI Music Marketing, a division of Capitol Records LLC (?EMI?), is committed to protecting your privacy. We have prepared this privacy policy (the ?Privacy Policy?) to describe to you our practices regarding the Personal Data (as defined below) we collect from users of our website, located at (?the EMI website,? ?EMI?s website? or ?our website?), and through our related products and services.
1. User Consent.
By submitting Personal Data through our website, products, or related services, you agree to the terms of this Privacy Policy and you expressly consent to the processing of your Personal Data in accordance with this Privacy Policy.
2. A Note about Children.
Protecting the privacy of children who use our site is one of our utmost concerns. Our website is not intended for children under the age of 13, and children under the age of 13 are not permitted to use the EMI website. If you are a child under the age of 13, please do not use our website. For users between the ages of 13 and 17, we require that you have a parent or legal guardian agree to this privacy policy and our Terms of Use (located here) prior to registering with EMI, creating an account profile with EMI or sharing any Personal Data with EMI..
EMI collects Personal Data and Anonymous Data from you when you visit our website, when you send us information or communications, when you purchase a product or service through our website, and/or when you download and/or use our products and services. ?Personal Data? means data that allows someone to identify or contact you, including, for example, your first and last name, address, telephone number, and email address, as well as any other non-public information about you that is associated with or linked to any Personal Data. ?Anonymous Data? means data that is not associated with or linked to your Personal Data; Anonymous Data does not permit the identification of individual persons. We collect Personal Data and Anonymous Data, as described below.
Personal Data That We Collect
We collect Personal Data from you, such as your first and last name, gender, email address, phone number and mailing address when you create an account on EMI?s website, when you access certain services and when you download and/or install certain products. When you order products or services on our website, we will collect all information necessary to complete the transaction, including your name, credit card information or other payment account information, billing information and shipping information. We may also collect Personal Data from you from time to time on an opt-in basis, which collection shall at all times be subject to the Terms of Use and this Privacy Policy.
When you use EMI?s website, you may have the ability to set up a personal profile or account profile, post reviews and comments, send messages, enter contests and otherwise post and transmit information through the EMI website. We collect and store that information as a means of enabling these features. While we take substantial measures to ensure your privacy and security, we cannot guarantee that unauthorized users will not circumvent our security measures and gain access to your personal information and content. Therefore, you acknowledge that EMI is not responsible for any unauthorized access to or use of your personal information. Unauthorized access to a user?s personal information or content is a violation of our Terms of Use. Please report any such violations to [email protected].
If you provide us feedback or contact us via email, we will collect your name and email address, as well as any other content included in or attached to the email, in order to send you a reply. If you participate in one of our surveys, we may collect additional information. When you post messages on the message boards, review boards or comment boards of our website, the information contained in your posting will be stored on our servers, and other users will be able to see it. We also collect other types of Personal Data that you provide to us voluntarily in your interaction with the EMI website.
To make our website, products, and services more useful to you, our servers (which may be hosted by a third party service provider) collect Personal Data from you, including your browser type, operating system, Internet Protocol (IP) address (a number that is automatically assigned to your computer when you use the Internet, which may vary from session to session), domain name, and/or a date/time stamp for your visit. Further, when you visit EMI?s website, we will automatically generate and assign to you one or more ?globally unique identifiers? (?GUID(s)?) associated with your computer and/or handset. At the time of registration, you will also be assigned a unique ID number. We may use the ID number, the GUIDs and similar technologies to collect information about your use of EMI?s services. Such information may be associated with other Personal Data. We also use Cookies .
Personal Data That We Collect from You About Others
If you invite a third party to view or otherwise use our website, we will collect yours and the third party?s name and email address in order to send an email invitation to that third party. If you or the third party wish to have this information removed from our database, you can email us at [email protected].
5. Use of Your Data
General Use
In general, Personal Data you submit to us is used:
(1) to enable the creation, editing, and display of information on your profile or account;
(2) to allow you to interact with us and , when applicable, other users;
(3) to allow us to respond to requests that you make;
(4) to provide you with service- or security-related notifications;
(5) to create your user account;
(6) to identify you as a user on our website;
(7) to send you promotional communications or special offers;
(8) to send you notifications regarding new services in which you may be interested; and
(9) to make telephone calls to you, solely as a means of secondary fraud protection.
We may create Anonymous Data records from Personal Data by excluding information (such as your name) that makes the data personally identifiable to you. We use this Anonymous Data to analyze request and usage patterns so that we may enhance the content of our products and services, to personalize advertisements and promotions to you and to improve site navigation. Company reserves the right to use and disclose Anonymous Data to Third Party Companies in its discretion.
We may share some or all of your Personal Data with any now- or later-existing parent company, subsidiaries, joint ventures, or other companies under a common control (collectively, .
Other Disclosures
Regardless of any choices you make regarding your Personal Data (as described below), EMI may disclose Personal Data if it believes in good faith that such disclosure is necessary to (a) comply with relevant laws or to respond to subpoenas or warrants served on Company; (b) to protect or defend the rights or property of Company or users of the products or related services; or (c) to prevent fraud or illegal activity utilizing the EMI website or network.
7. Third Parties.
Personal and/or Anonymous Data Collected by Third Parties
We may receive Personal and/or Anonymous Data about you from other sources like telephone or fax, or from companies that provide our products or services by way of a co-branded and services we provide.
Third parties may have access to your personal information for a limited time and for limited purposes in order to facilitate or administer the services that we offer. Third parties may be utilized for such services as sending out email updates, facilitating the download of our products or services, processing payment information for products or services, providing search results or links, performing maintenance of the website, or providing technical support to users. Where we use third parties for these services, we implement both technical and contractual measures to help ensure that these third parties adhere to our Privacy Policy and do not use your personal information for any unauthorized purposes.
EMI?s website may contain links to other websites or locations, such as payment processing websites. These links are for your convenience and do not signify our endorsement of such other websites or locations or their contents. When you click on such a link, you will leave our site and go to another site, which may collect Personal Data or Anonymous Data from you. We have no control over, do not review, and cannot be responsible for these outside websites or their content. Please be aware that the terms of this Privacy Policy apply only to information collected by EMI, and do not apply to these outside websites or content, or to any collection of data after you click on links to such outside websites. We suggest that you read all privacy policies of any website that collects personally identifiable information from you.
Disclosure to Third Party Service Providers
Except as otherwise stated in this Privacy Policy, we do not generally sell, trade, share, or rent the Personal Data collected from our services to other entities. However, we may share your Personal Data with third party service providers to provide you with the products and related services that we offer you through our website; to conduct quality assurance testing; to facilitate creation of accounts; to provide technical support; or to provide specific services, in accordance with your instructions. These third party service providers are required not to use your Personal Data other than to provide the services requested by EMI. You expressly consent to the sharing of your Personal Data with our contractors and other service providers for the sole purpose of providing services to you.
Disclosure to Third Party Companies
We may enter into agreements with Third Party Companies to sell products or provide services to our users. A Third Party Company may need to access Personal Data that we collect from you in order to facilitate the offering of these products or services and to execute transactions;.
Choices
EMI offers you choices regarding how we collect, share and use your Personal Data. We will occasionally send you emails that directly promote the use of our site or the purchase of our products or services. These emails may contain advertisements for Third Party Companies. When you receive promotional emails from us, you may choose to stop receiving further communications from us by following the unsubscribe instructions provided in the email you receive or by contacting us directly at [email protected] or the address listed below. If you choose to opt-out of receiving future emails, we may need to share your email address with third parties to ensure that you do not receive further communications from them. Regardless of your email preferences, we may still send you notices of any updates to our Terms of Use or Privacy Policy.
Changes to Personal Data
You may change any of your Personal Data in your account by logging in and editing your Account Profile. You may request deletion of your Personal Data by us by emailing us at [email protected]; however, please note that we may be required by law change your privacy settings which will enable you to adjust what Personal Data is publicly available.
9. Security of Your Personal Data.
EMI places the utmost value on protecting the security of your Personal Data. We use a variety of industry-standard security technologies and procedures to help protect your Personal Data from unauthorized access, use, or disclosure. Security measures taken inlcude SSL when we collect personal data, and all web traffic is secured behind a firewall. EMI is PCI and COPPA compliant and we will never sell your Personal Data to a third party. We also require you to enter a password to access or change your account information. Please do not disclose your account password to unauthorized people. Despite these measures, you should know that we cannot completely eliminate security risks associated with Personal Data. Please be aware that once you share Personal Data with other users, that information may be shared with other people by those other users regardless of your privacy settings. Please review your privacy settings to adjust how your Personal Data may be shared. WE ARE NOT RESPONSIBLE FOR THE USE OF ANY PERSONAL DATA YOU VOLUNTARILY DISCLOSE THROUGH THE WEBSITE.
10. Contacting Us.
EMI welcomes your comments or questions regarding this Privacy Policy. Please email us at [email protected] or contact us at the following address or phone number:
EMI Music Marketing
Attn: Brand Partnerships
1750 North Vine Street
Hollywood, CA 90028
323-871-5451
11. Terms of Use, Changes to Privacy Policy.
Your use of EMI?s website and related services is subject to this Privacy Policy and our Terms of Use, including all provisions pertaining to dispute resolution. If you have any concerns regarding this Privacy Policy or would like to report a violation of this Privacy Policy, please contact us at [email protected]. We will make every effort to address your concerns.
If we make any significant changes to this Privacy Policy that affect how we use your Personal Data, we will notify you by email or by posting a notice on our homepage. Any material changes to this Privacy Policy will take effect thirty days after we send you an email notice or post a notice on our homepage. Any changes will take effect immediately for new users of our website, products or related services. Please make sure to update your Personal Data to provide us with your most current email address. Our sending of email notices to your most current email address shall constitute valid notice of the changes listed in the email notice, regardless of whether you actually receive such notices. If you do not wish to permit changes in our use of your Personal Data, you must notify us prior to the effective date of the changes that you wish to terminate your account with us. Continued use of our website, products, or related services, following notice of such changes shall indicate your agreement to be bound by the terms and conditions of such changes.
This Privacy Policy was last revised on March 10, 2010. | http://docs.neuroticmedia.net/emi/emiprivacypolicy.html | 2017-01-16T12:48:29 | CC-MAIN-2017-04 | 1484560279176.20 | [] | docs.neuroticmedia.net |
When you make changes to your Drupal docroot that you want to save and deploy to your site, commit the changes from your local workspace directory. Remember that you can commit only to a branch, not to an existing tag. You can find these and other useful VCS commands in the Acquia Cloud Application info panel.
In the following Git
commit, note that the
-a option sends all
of the changes that you made to the workspace. To commit only a specific
file or directory, replace
-a with the name of the folder or
directory.
git commit -a -m "Added Foo module."
After you use the
commit command to send your changes to your local clone
of the repository, you must use the
git push command to push the changes to
the appropriate branch of your code on Acquia Cloud. For example, if you are
deploying from a branch named
master, use the following command:
git push origin master | https://docs.acquia.com/acquia-cloud/develop/repository/update/ | 2020-02-17T09:30:27 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.acquia.com |
AR System server components This section contains information about: armonitor.exe or armonitorarplugin.exe or arpluginarserver.exe or arserverdarsystemJava plug-in server Was this page helpful? Yes No Submitting... What is wrong with this page? Confusing Missing screenshots, graphics Missing technical details Needs a video Not correct Not the information I expected Your feedback: Send Skip Thank you Last modified by Gregg Kitagawa on Dec 05, 2011 components Comments AR System server components and external utilities armonitor.exe or armonitor | https://docs.bmc.com/docs/ars9000/ar-system-server-components-509973122.html | 2020-02-17T11:30:33 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.bmc.com |
How to Create Ruby Web Applications
x
Overview
In cPanel & WHM version 66, we deprecated the legacy Ruby codebase and plan to remove it in a future release.
Create a Ruby application_passengerNote:
If you wish to add environment variables to your application, install the following additional RPMs:
The
ea-apache24-mod_envmodule.
The
ea-ruby24-ruby-develmodule.. | https://docs.cpanel.net/knowledge-base/web-services/how-to-create-ruby-web-applications/ | 2020-02-17T09:31:17 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.cpanel.net |
msvsmon fails during VS install
I just spent a few days with a customer who had issues uninstalling Beta 2 and then installing the RC candidate.
Here's a snippet from his MSI log:
MSI (s) (44:28) [20:59:55:916]: Executing op: ServiceInstall(Name=msvsmon80,DisplayName=Visual Studio 2005 Remote Debugger,ImagePath="C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\Remote Debugger\x86\msvsmon.exe" /service msvsmon80,ServiceType=16,StartType=4,ErrorControl=0,,,,StartName=LocalSystem,Password=**********,Description=Allows members of the Administrators group to remotely debug server applications using Visual Studio 2005. Use the Visual Studio 2005 Remote Debugging Configuration Wizard to enable this service.)
Info 1923. Service 'Visual Studio 2005 Remote Debugger' (msvsmon80) could not be installed. Verify that you have sufficient privileges to install system services.
If you see this issue, the workaround is:
Find the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\msvsmon80
Delete the registry key and then try to reinstall. | https://docs.microsoft.com/en-us/archive/blogs/quanto/msvsmon-fails-during-vs-install | 2020-02-17T11:18:58 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
Supported Project Types
One of the main goals of Projectile is to operate on a wide range of project types without the need for any configuration. To achieve this it contains a lot of project detection logic and project type specific") :compile "npm install" :test "npm test" :run "npm start" :test-suffix ".spec")
What this does is:
- add your own type of project, in this case
npmpackage.
- add a file in a root of the project that helps to identify the type, in this case it.
The available options are:
For simple projects,
:test-prefix and
:test-suffix option with string will
be enough to specify test prefix/suffix applicable regardless of file extensions
on any directory path.
projectile-other-file-alist variable can be also set to
find other files based on the extension.
For the full control of finding related files,
:related-files-fn option with a
custom function or a list of custom functions can be used. The custom function
accepts the relative file name from the project root and it should return the
related file information as plist with the following optional key/value pairs:
For each value, following type can be used:
Notes:
1. For a big project consisting of many source files, returning strings instead
of a function can be fast as it does not iterate over each source file.
2. There is a difference in behaviour between no key and
nil value for the
key. Only when the key does not exist, other project options such as
:test_prefix or
projectile-other-file-alist mechanism is tried.)
Customizing project root files
You can set the values of
projectile-project-root-files,
projectile-project-root-files-top-down-recurring,
projectile-project-root-files-bottom-up and
projectile-project-root-files-functions to customize how project roots are
identified.
To customize project root files settings:
M-x customize-group RET projectile RET
Ignoring files
Warning
The contents of
.projectile are ignored when using the
alien project indexing method..
You can also quickly visit or create the
dir-locals-file with
s-p E (M-x projectile-edit-dir-locals RET)."))))
Configure a Project's Compilation, Test and Run commands
There are a few variables that are intended to be customized via
.dir-locals.el.
- for compilation -
projectile-project-compilation-cmd
- for testing -
projectile-project-test) | https://docs.projectile.mx/en/latest/projects/ | 2020-02-17T09:57:50 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.projectile.mx |
- 17.1.1.1. Setting the Replication Master Configuration
- 17.1.1.2. Setting the Replication Slave Configuration
- 17.1.1.3. Creating a User for Replication
- 17.1.1.4. Obtaining the Replication Master Binary Log Coordinates
- 17.1.1.5. Creating a Data Snapshot Using mysqldump
- 17.1.1.6. Creating a Data Snapshot Using Raw Data Files
- 17.1.1.7. Setting Up Replication with New Master and Slaves
- 17.1.1.8. Setting Up Replication with Existing Data
- 17.1.1.9. Introducing Additional Slaves to an Existing Replication Environment
- 17 17.1.1.1, “Setting the Replication Master Configuration”.
On each slave that you want to connect to the master, you must configure a unique server ID. This might require a server restart. See Section 17.1.1.2, “Setting the Replication Slave Configuration”.
You may want to create a separate user that will be used by your slaves to authenticate with the master to read the binary log for replication. The step is optional. See Section 17 17.1.1.4, “Obtaining the Replication Master Binary Log Coordinates”.
If you already have data on your master and you want to use it to synchronize your slave, you will need to create a data snapshot. You can create a snapshot using mysqldump (see Section 17.1.1.5, “Creating a Data Snapshot Using mysqldump”) or by copying the data files directly (see Section 17.1.1.6, “Creating a Data Snapshot Using Raw Data Files”).
You will need to configure the slave with settings for connecting to the master, such as the host name, login credentials, and binary log file name and position. See Section 17 17 17.1.1.8, “Setting Up Replication with Existing Data”.
If you are adding slaves to an existing replication environment, you can set up the slaves without affecting the master. See Section 17 17.1.3, “Replication and Binary Logging Options and Variables”. | http://doc.docs.sk/mysql-refman-5.5/replication-howto.html | 2020-02-17T09:00:23 | CC-MAIN-2020-10 | 1581875141806.26 | [] | doc.docs.sk |
Time Period Definition¶
A time period is a list of times during various days that are considered to be “valid” times for notifications and service checks. It consists of time ranges for each day of the week that “rotate” once the week has come to an end. Different types of exceptions to the normal weekly time are supported, including: specific weekdays, days of generic months, days of specific months, and calendar dates.
Syntax¶
Bold variables are required, while others are optional. Emphasized variables are Alignak extensions with reference to the Nagios legacy definition.
Example¶
define timeperiod{ timeperiod_name nonworkhours alias Non-Work Hours sunday 00:00-24:00 ; Every Sunday of every week monday 00:00-09:00,17:00-24:00 ; Every Monday of every week tuesday 00:00-09:00,17:00-24:00 ; Every Tuesday of every week wednesday 00:00-09:00,17:00-24:00 ; Every Wednesday of every week thursday 00:00-09:00,17:00-24:00 ; Every Thursday of every week friday 00:00-09:00,17:00-24:00 ; Every Friday of every week saturday 00:00-24:00 ; Every Saturday of every week } define timeperiod{ timeperiod_name misc-single-days alias Misc Single Days 1999-01-28 00:00-24:00 ; January 28th, 1999 monday 3 00:00-24:00 ; 3rd Monday of every month day 2 00:00-24:00 ; 2nd day of every month february 10 00:00-24:00 ; February 10th of every year february -1 00:00-24:00 ; Last day in February of every year friday -2 00:00-24:00 ; 2nd to last Friday of every month thursday -1 november 00:00-24:00 ; Last Thursday in November of every year } define timeperiod{ timeperiod_name misc-date-ranges alias Misc Date Ranges 2007-01-01 - 2008-02-01 00:00-24:00 ; January 1st, 2007 to February 1st, 2008 monday 3 - thursday 4 00:00-24:00 ; 3rd Monday to 4th Thursday of every month day 1 - 15 00:00-24:00 ; 1st to 15th day of every month day 20 - -1 00:00-24:00 ; 20th to the last day of every month july 10 - 15 00:00-24:00 ; July 10th to July 15th of every year april 10 - may 15 00:00-24:00 ; April 10th to May 15th of every year tuesday 1 april - friday 2 may 00:00-24:00 ; 1st Tuesday in April to 2nd Friday in May of every year } define timeperiod{ timeperiod_name misc-skip-ranges alias Misc Skip Ranges 2007-01-01 - 2008-02-01 / 3 00:00-24:00 ; Every 3 days from January 1st, 2007 to February 1st, 2008 2008-04-01 / 7 00:00-24:00 ; Every 7 days from April 1st, 2008 (continuing forever) monday 3 - thursday 4 / 2 00:00-24:00 ; Every other day from 3rd Monday to 4th Thursday of every month day 1 - 15 / 5 00:00-24:00 ; Every 5 days from the 1st to the 15th day of every month july 10 - 15 / 2 00:00-24:00 ; Every other day from July 10th to July 15th of every year tuesday 1 april - friday 2 may / 6 00:00-24:00 ; Every 6 days from the 1st Tuesday in April to the 2nd Friday in May of every year }
Variables¶
- timeperiod_name
- This directives is the short name used to identify the time period.
- alias
- This directive is a longer name or description used to identify the time period.
- [weekday]
- The weekday directives (“sunday” through “saturday”)are comma-delimited lists of time ranges that are “valid” times for a particular day of the week. Notice that there are seven different days for which you can define time ranges (Sunday through Saturday). Each time range is in the form of HH:MM-HH:MM, where hours are Specified on a 24 hour clock. For example, 00:15-24:00 means 12:15am in the morning for this day until 12:00am midnight (a 23 hour, 45 minute total time range). If you wish to exclude an entire day from the timeperiod, simply do not include it in the timeperiod definition.
- The daterange format are multiples :
- Calendar Daterange : look like a standard date, so like 2005-04-04 - 2008-09-19.
- Month Week Day: Then there are the month week day daterange same than before, but without the year and with day names That give something like : tuesday 2 january - thursday 4 august / 5
- Now Month Date Daterange: It looks like : february 1 - march 15 / 3
- Now Month Day Daterange. It looks like day 13 - 14
- Now Standard Daterange: Ok this time it’s quite easy: monday
- [exception]
- You can specify several different types of exceptions to the standard rotating weekday schedule.”. Rather than list all the possible formats for exception strings, I’ll let you look at the example timeperiod definitions above to see what’s possible. :-) Weekdays and different types of exceptions all have different levels of precedence, so its important to understand how they can affect each other. More information on this can be found in the documentation on timeperiods.
- exclude
- This directive is used to specify the short names of other timeperiod definitions whose time ranges should be excluded from this timeperiod. Multiple timeperiod names should be separated with a comma.
Note
The day skip functionality is not managed from now, so it’s like all is / 1 | http://docs.alignak.net/en/latest/20_annexes/timeperiod.html | 2020-02-17T11:01:04 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.alignak.net |
Adding a Domain Controller for JumpCloud
You can add a JumpCloud domain so that users can log on to the CommCell environment with their JumpCloud credentials.
Before You Begin
- To enable JumpCloud domains, on the CommServe computer, add the bEnableJumpCloud additional setting as shown in the following table.
For instructions on adding additional settings from the CommCell Console, see Add or Modify an Additional Setting.
- You must have the Add, delete and modify a domain permission at the CommCell level.
- In your JumpCloud account, do the following:
- Configure LDAP.
- To allow users to browse the JumpCloud directory, select User can bind to and search the JumpCloud LDAP service.
- To make a tag behave like a user group, select Create LDAP groups for this tag.
- For LDAP communication, open ports 389 and 636.
- Obtain the Organization ID for the domain.
Procedure
- From the CommCell Browser, go to Security.
- Right-click Domains and click Add new domain > JumpCloud.
The Add New Domain Controller dialog box is displayed.
- Enter the details for the JumpCloud domain controller.
For information on each option, see the online help for Add New Domain Controller/Edit Domain Controller (JumpCloud).
- Click OK.
Result
When JumpCloud users log on, they can use an email address and password or a user ID and password. The user ID must be in the following format: Organization_name\user_name, for example, MyCompany\jsmith.
Last modified: 10/13/2017 8:06:36 AM | http://docs.commvault.com/commvault/v11/article?p=8139.htm | 2020-02-17T09:43:21 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.commvault.com |
Storage in a Windows 8 App
Apps generate data. Some of them generate a ton of data and others just a few little bits and pieces. Allow me to enumerate your options for storing stuff when you’re working on an app for Windows 8. There are subtle differences between the way storage is done in an HTML/JS app versus a .NET or a C++, but for most of the techniques you’re just accessing the WinRT library so the steps are practically identical.
Before we enumerate the types of storage, let’s talk about the types of data that typically get generated in an app. I’ll break them up into application state, user settings, application data, and user data.
Application State
Application state (or session state) is the data that pertains to what the user is doing in their use of an app. If the user has browsed to the registration page and starting typing their name and address in, then that data is session state. If the user has to drop their tablet (figuratively of course), switch apps, or something else and doesn’t get back to it for some time, then the kind thing (only humane option?) to do is remember their state and restore it when they return.
User Settings
A user has preferences. Your user might want the music in their game turned off. They might want to choose a theme color. They might want to turn on automatic social sharing. These are preferences. They usually take up hardly any space, but it’s they’re really important to your user. The best thing to do is store them in the cloud instead of just on a device so the user feels like they are remembered wherever they go.
Application Data
Application data is the data generated by your app that may have everything in the world to do with your user, but your user is going to assume that the app is holding that data for him and doesn’t want to manage it himself outside of the app. If you installed a task list app, you’d expect it to hold tasks for you, right? That’s app data. The line can be blurry between app data and user data, so read on.
User Data
User data is data generated by the app, but belongs more to the user than the app. The user expects to be able to send the data to a friend, open it in a different program, or back it up to the cloud. User data is everything you find in the libraries – the documents library, the music library, etc.
Implementation
So, let’s talk about how to implement these.
Application state can be stored in WinJS.Application.sessionState. That’s an object that WinJS and Windows handle for you and plays well with the lifecycle of your app. Saving to the sessionState object couldn’t be easier. Just issue a command like…
WinJS.Application.sessionState = { currentPage: "widgets", formFields: { firstName: "Tom", lastName: "Jones" } };
You could do this anytime during the use of your app or you could wait until the app.oncheckpoint event (look on your default.js page for that) and just do it when your app is on it’s way out of the spotlight.
Keep in mind that this is for session data only. Windows assumes that if your user explicitly ends the app, they are ending their session and sessionState is not stored. You also can’t count on it after an application crash, so make sure it’s only transient data that wouldn’t ruin the users day to lose.
User settings are again very important. You have many options for storing them, but only two that I recommend. The first is localSettings and the second is roamingSettings. Only use localSettings if you have good reason not to roam the setting to the cloud. If you use roamingSettings and the user doesn’t have a Microsoft account, it will still store locally. Both of these are accessed from Windows.Storage.ApplicationData.current. You can store a new setting value like this…
localSettings.values["gameMusic"] = "off";
Application data can work much like the user settings technically, but it serves a different purpose. Imagine the task list app I mentioned before. The tasks themselves must obviously be stored and you - the developer - have quite a variety of options. You have to ask yourself a few questions like:
- Does the user need to share the app data with select others?
- Does the user need access to the data on multiple devices?
- Does the data feed any other apps either on the same or another platform?
It’s very possible that just storing data local to the device is plenty. In that case, the localFolder from Windows.Storage.ApplicationData.current. This spot is dedicated to storing data for your app and your app only. No other apps have access to it actually.
If you have a very small amount of application data (less than 100k) then you can use the roamingFolder from the same Windows.Storage.ApplicationData.current. This data will, just like the roamingSettings, be synced to the user’s online profile and back down to any other devices they might log in to.
You have a variety of other options for storing data such as a local database, online database, online user drive, and more, but I’ll save those for another day and another post.
Finally, we’ll talk about user data. Unlike application data, users will expect to have ownership of their user data. When a user creates a spreadsheet, this is not data that just exists inside of Excel. The user expects to have that spreadsheet in their documents and be able to work with it (share it, rename it, organize it, …) outside of Excel.
If your app is one that will work with user data, then you need to pick a file format and create the association. This is done in the package.appxmanifest where you’ll also need to add a capability to access the users documents. It’s quite a easy thing to use the open and save file dialogs from your app and the user will love having full access not only to his documents, but also all apps he has installed that implement the FilePicker contract.
That’s quite enough on storage for now. Perhaps some of the following locations from within the Windows Dev Center will be helpful to you…
Storage and state reference
Windows.Storage namespace
WinJS.Application.sessionState object
Manage app lifecycle and state
Optimizing your app's lifecycle
Have fun. | https://docs.microsoft.com/en-us/archive/blogs/codefoster/storage-in-a-windows-8-app | 2020-02-17T11:26:25 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
DIRECTORY Binding
What is the @DIRECTORY Binding?
The DIRECTORY binding reads the contents of a directory. This can really useful when you tie it into a List control widget, e.g. if you want to do something like give the user a list of logo images to choose for a page, or choose which mp3 file plays on a particular page. REMEMBER: it returns ALL contents of a directory, including all files and all directories - with the sole exception of directories prefixed with a period.
Usage
When you create a Template Variable, place the following text into the Input Option Values box:
@DIRECTORY /path/to/some_directory
Frequently, this is coupled with an Input Type of "DropDown List Menu" to allow the user to select a file from the list.
In MODX Revolution, the path used for the @DIRECTORY binding is relative to the site's root. It is not an absolute file path. If you want to list files above your site's root, you must use the ".." syntax, e.g. @DIRECTORY /../dir_above_root This binding will work with or without a trailing slash in the directory name.
If you are using the @DIRECTORY binding for your template variable
[[*myTV]], you can easily imagine that your template code could have some stuff in it like:
<img src="[[*myTV]]" alt="" />
Additional Info
Can you filter which files are selected? E.g. using *.jpg? The following DOES NOT WORK:
@DIRECTORY /list/*.jpg # doesn't work!
There are PHP code snippets out there that emulate this functionality. See the following forum thread:
Security
Depending on how the file is used on the page, it may pose a security risk. Be careful if you were using this binding to select JavaScript files to be executed. What if a user had the ability to upload (and thus execute) a JavaScript file? Also, always be wary of letting users see your directory structure. | https://docs.modx.com/3.x/en/building-sites/elements/template-variables/bindings/directory-binding | 2020-02-17T10:55:07 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.modx.com |
For account owners and admins, New Relic provides a UI for understanding your subscription-related usage of New Relic products. This document explains how to view and understand the usage UI for New Relic Synthetics.
For an introduction to New Relic usage data and how to use it, see Intro to usage data.
View Synthetics usage
Only the account Owner and Admins can view the usage UI. However, anyone in your account can query usage data using the
NrDailyUsage Insights event.
To view your Synthetics subscription usage data:
- From synthetics.newrelic.com, select the account dropdown, and then select Account settings.
- Select Usage.
- Select Synthetics usage.
Calculation details
A New Relic Synthetics subscription level is based on the number of non-ping monitor checks used during a calendar month.
The Synthetics usage chart displays the daily count of monitor checks. The table value Avg daily paid checks displays the total number of monitor checks for the selected time period, divided by the number of days.
If your monitor checks are fairly steady over time, you can estimate the current month's eventual usage:
- On the Synthetics usage page, set the time picker to Last 30 days.
- Multiply the Avg daily paid checks by the number of days in the current month.
For a description of how to use general UI features, see UI features.
Table definitions
For definitions of the table headers, see the Synthetics usage attributes.
Query and retrieve data
For information on how to query and retrieve usage data, see Intro to usage data.
For a description of Synthetics usage data you can query in Insights, and example NRQL queries, see Synthetics usage attributes and queries. | https://docs.newrelic.com/docs/accounts/new-relic-account-usage/synthetics-usage/synthetics-usage-ui | 2020-02-17T11:35:38 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.newrelic.com |
Knot Resolver daemon¶
The server is in the daemon directory; it works out of the box without any configuration.
$ kresd -h       # Get help
$ kresd -a ::1
moduledir([dir])¶
If called with a parameter, it will change kresd’s directory for looking up the dynamic modules. If called without a parameter, it will return kresd’s modules directory.
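For example, a minimal sketch (the directory path below is only a placeholder, not a required location):
-- print the currently configured modules directory
print(moduledir())
-- switch to a custom directory before loading extra modules
moduledir('/usr/local/lib/kresd/modules')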
For when listening on localhost just doesn't cut it.
Systemd socket configuration
If you’re using our packages with systemd with sockets support (not supported
on CentOS 7), network interfaces are configured using systemd drop-in files for
kresd.socket and
kresd-tls.socket.
To configure kresd to listen on public interface, create a drop-in file:
$ systemctl edit kresd.socket
# /etc/systemd/system/kresd.socket.d/override.conf
[Socket]
ListenDatagram=192.0.2.115:53
ListenStream=192.0.2.115:53
The default port can also be overriden by using an empty
ListenDatagram= or
ListenStream= directive. This can be useful if you want to use the Knot DNS with the dnsproxy module to have both resolver and authoritative server running on the same machine.
A similar drop-in file can be created for kresd-tls.socket to listen for TLS connections:
$ systemctl edit kresd-tls.socket
# /etc/systemd/system/kresd-tls.socket.d/override.conf
[Socket]
ListenStream=192.0.2.115:853
Daemon network configuration
If you don’t use systemd with sockets to run kresd, network interfaces are configured in the config file.
Tip
Use declarative interface for network.
net = { '127.0.0.1', net.eth0, net.eth1.addr[1] }
net.ipv4 = false
Warning
On machines with multiple IP addresses avoid binding to wildcard
0.0.0.0 or
:: (see example below). Knot Resolver could answer from different IP in case the ranges overlap and client will probably refuse such a response.
net = { '0.0.0.0' }
net.listen(addresses, [port = 53, flags = {tls = (port == 853)}])¶
Listen on addresses; port and flags are optional. The addresses can be specified as a string or device, or a list of addresses (recursively). The command can be given multiple times, but note that it silently skips any addresses that have already been bound.
Examples:
net.listen('::1')
net.listen(net.lo, 5353)
net.listen({net.eth0, '127.0.0.1'}, 53853, {tls = true})
net.bufsize([udp_bufsize])¶
Get/set maximum EDNS payload size available. Default is 4096. You cannot set less than 512 (512 is DNS packet size without EDNS, 1220 is minimum size for DNSSEC) or more than 65535 octets.
Example output:
> net.bufsize
4096
>
Note
Installations using systemd should be configured using systemd-specific procedures
described in manual page
kresd.systemd(7).
DNS-over-TLS server (RFC 7858) can be enabled using
{tls = true} parameter
in
net.listen() function call. For example:
> net.listen("::", 53) -- plain UDP+TCP on port 53 (standard DNS) > net.listen("::", 853, {tls = true}) -- DNS-over-TLS on port 853 (standard DoT) > net.listen("::", 443, {tls = true}) -- DNS-over-TLS on port 443 (non-standard)
By default a self-signed certificate will be generated. For serious deployments
it is strongly recommended to provide TLS certificates signed by a trusted CA,
using the net.tls() function.
trust_anchors.config(keyfile, readonly)¶
Alias for add_file. It is also equivalent to CLI parameter
-k <keyfile> and
trust_anchors.file = keyfile.
trust_anchors.add_file(keyfile, readonly)¶
trust_anchors.keyfile_default = KEYFILE_DEFAULT¶
Set by
KEYFILE_DEFAULT during compilation (by default
nil). This can be explicitly set to
nil to override the value set during compilation in order to disable DNSSEC.
trust_anchors.hold_down_time = 30 * day¶
Modify RFC5011 hold-down timer to given value. Example:
30 * sec
trust_anchors.refresh_time = nil¶
Modify RFC5011 refresh timer to given value (not set by default); this will force trust anchors to be updated every N seconds periodically instead of relying on RFC5011 logic and TTLs.
trust_anchors.set_insecure(nta_set)¶
Add negative trust anchors (NTA); DNSSEC validation will be turned off at/below these names. Each function call replaces the previous NTA set. You can find the current active set in the
trust_anchors.insecure variable.
Tip
Use the trust_anchors.negative = {} alias for easier configuration.
Example output:
> trust_anchors.negative = { ..
As of now, the built-in backend with URI
lmdb:// allows
Warning
Cache statistics are being reworked. Do not rely on current behavior.
cache.stats()¶
Return table of statistics; note that this tracks all operations over the cache, not just whether queries were answered from the cache or not.
Example:
print('Insertions:', cache.stats().insert)
print(worker.stats().concurrent)
Enabling DNSSEC¶
$ kresd -k root-new.keys  # File for root keys
[ ta ] keyfile 'root-new.keys': doesn't exist, bootstrapping
[ ta ] Root trust anchors bootstrapped over https with pinned certificate.
       You SHOULD verify them manually against original source:
[ ta ] Current root trust anchors are:
. 0 IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
. 0 IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
[ ta ] next refresh for . in 24 hours
Alternatively, you can set it in configuration file with
trust_anchors.file = 'root.keys'. If the file doesn’t exist, it will be automatically populated with root keys validated using root anchors retrieved over HTTPS.
This is equivalent to using unbound-anchor:
$ unbound-anchor -a "root.keys"
Configuration is described in Trust anchors and DNSSEC.
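Putting the pieces above together, a minimal configuration file might look like the following sketch (the listen addresses, file names, and the /etc/knot-resolver/kresd.conf path are illustrative, not prescriptive):
-- minimal illustrative kresd configuration
net.listen('127.0.0.1')                        -- plain DNS on loopback, port 53
net.listen('192.0.2.115', 853, {tls = true})   -- DNS-over-TLS on a public address
trust_anchors.file = 'root.keys'               -- enable DNSSEC; bootstrapped over HTTPS if missing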
Manually providing root anchors¶
The root anchors bootstrap may fail for various reasons, in this case you need to provide IANA or alternative root anchors. The format of the keyfile is the same as for Unbound or BIND and contains DS/DNSKEY records.
- Check the current TA published on IANA website
- Fetch current keys (DNSKEY), verify digests
- Deploy them
Note
Bootstrapping and automatic update need write access to keyfile directory. If you want to manage root anchors manually you should use
trust_anchors.add_file('root.keys', true).
-DNOVERBOSELOG,.config("root.keys")' nic.cz 'assert(req:resolved().flags.DNSSEC_WANT)' $ echo $? 0 | https://knot-resolver.readthedocs.io/en/v3.2.1/daemon.html | 2020-02-17T10:12:11 | CC-MAIN-2020-10 | 1581875141806.26 | [] | knot-resolver.readthedocs.io |
Organization Programmatic API Key Whitelists¶
Base URL:
Use the
/orgs/{ORG-ID}/apiKeys/{API-KEY-ID}/whitelist resource to
view, create, or delete whitelist entries for a user or
Programmatic API Key within the specified
Cloud Manager organization.
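For illustration only, a GET against this resource could look like the request below; the {BASE-URL} placeholder stands in for the base URL noted above, and the key pair is sent with HTTP digest authentication (an assumption to verify against your deployment):
curl --user "{PUBLIC-KEY}:{PRIVATE-KEY}" --digest \
     --header "Accept: application/json" \
     --request GET "{BASE-URL}/orgs/{ORG-ID}/apiKeys/{API-KEY-ID}/whitelist"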
The Organization API Key, or users with the
Organization Owner role in the
organization to which the
API Key belongs, can access these
endpoints. | https://docs.cloudmanager.mongodb.com/reference/api/org-api-key-whitelists/ | 2020-02-17T10:39:30 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.cloudmanager.mongodb.com |
Parameter
Example
ineum('page', 'shopping-cart');
In cases in which you are handling anonymous users and thus don’t have access to user IDs you could alternatively use session IDs. Session IDs are not as helpful as user IDs when filtering data but they are a good indicator to calculate affected/unique user metrics. We recommend setting a user name such as
Anonymous to have a clear differentiation between authenticated and unauthenticated users. Session IDs can be sensitive data (depending on the framework/platform used). Please consider hashing session IDs to avoid transmitting data to Instana that can grant access.
ineum('user', 'hjmna897k1', 'John Doe', '[email protected]');
// or only some data points
ineum('user', 'hjmna897k1');
ineum('user', null, null, '[email protected]');
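As a sketch of that hashing advice (not from the original page; the helper name and the use of the browser SubtleCrypto API are assumptions to adapt to your stack), the session ID can be hashed before it is reported:
// Hash a session ID before reporting it as the user ID.
async function reportHashedSession(sessionId) {
  const data = new TextEncoder().encode(sessionId);
  const digest = await crypto.subtle.digest('SHA-256', data);
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
  ineum('user', hex, 'Anonymous');
}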
Parameter
Example
ineum('ignoreUrls', [ /\/comet.+/i, /\/ws.+/i, /.*(&|\?)secret=.*/i ]);
Example
ineum('reportError', new Error('Something failed'), { componentStack: '…', });
Beacons
This feature is in technical beta. Are you interested in trying this out? Get in contact with us!
To track non-standard events happening on your website, you can report custom beacons to Instana.
ineum('reportEvent', eventName, { duration: duration, backendTraceId: backendTraceId, error: error, componentStack: componentStack, meta: meta, });
Parameters
Example
ineum('reportEvent', 'login');
ineum('reportEvent', 'full example', { … });
ineum('whitelistedOrigins', urls);
Parameter
Example
ineum('whitelistedOrigins', [/.*api\.example\.com.*/]);
Please check that your application works correctly after these changes. Instructing the JavaScript agent to add backend correlation headers (i.e. whitelisting origins) without configuring CORS, has a high probability of breaking your website! | https://docs.instana.io/products/website_monitoring/api/ | 2020-02-17T09:22:06 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.instana.io |
Scripting Visual Studio with Windows PowerShell
One of the really cool new things that shipped recently that I've been trying to learn more about is Windows Powershell. What's really great about it is that if your object supports COM or .NET, you can automate it just by firing up the command line.
To get an instance of Visual Studio running, I can do the following :
$dte = New-Object -comobject "VisualStudio.DTE"
(Note that if you have multiple versions installed you can specify VisualStudio.DTE.7.0 for VS 2002, VisualStudio.DTE.7.1 for VS 2003 or VisualStudio.DTE.8.0 for VS 2005.)
Since the DTE object was created via code and not by a user action, VS runs in silent mode (no user interface shown). I can show the main form with:
$dte.MainWindow.Visible = $true
For a useless example, if I want to close all the windows in the shell, I can do the following:
foreach ($w in $dte.Windows) { $w.Close() }
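Going one step further (purely as an illustration, with a made-up solution path), the same $dte object exposes the rest of the EnvDTE automation model, so you can, for example, open a solution and list its projects:
# Hypothetical example: open a solution and print its project names
$dte.Solution.Open("C:\source\MySolution.sln")
foreach ($p in $dte.Solution.Projects) { $p.Name }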
The cool thing about this really isn't that I can script Visual Studio (I could already do that from IronPython, VBScript or any other environment that can talk to COM objects), but that I can do it from the same command line that I use for other tasks in Windows. | https://docs.microsoft.com/en-us/archive/blogs/aaronmar/scripting-visual-studio-with-windows-powershell | 2020-02-17T11:10:22 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
BizTalk Correlation of Untyped Messages ...
public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
string trackCode = Convert.ToString(System.Guid.NewGuid());
inmsg.Context.Promote("TrackingID", "", trackCode);
return inmsg;
} ...
InboundMessage(Microsoft.Demo.Customer.TrackingID);
My customer can then take this value, and set it to the MSMQ label so that it can be extracted later. ...
public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
string headerVal = inmsg.Context.Read("WSHeader", "").ToString();
System.Xml.XmlDocument doc = new System.Xml.XmlDocument();
doc.LoadXml(headerVal);
//get value of tracking code from soap header
string trackingID = doc.SelectSingleNode("//TrackingID").InnerText;
//promote value back into context
inmsg.Context.Promote("TrackingID", "TrackingID", "", trackingID);
return inmsg;
}
Note that the namespace for the SOAP header was NOT the value in the header schema, but the standard SOAPHeader namespace. ...
//CustService is the name of my web service reference
CustService.Microsoft_Demo_Customer_Orchestration1_Port3 svc = new CustService.Microsoft_Demo_Customer_Orchestration1_Port3();
CustService.WSHeader header = new CustService.WSHeader();
//set header value to queue label retrieved
header.TrackingID = queueLabelBox.Text;
svc.WSHeaderValue = header;
svc.Operation_1(xmlInputDoc);
I still love the fact that you get all these nicely typed objects for web services. There's an object named after my SOAP
header, and it has a property I need. Great stuff.? | https://docs.microsoft.com/en-us/archive/blogs/richardbpi/biztalk-correlation-of-untyped-messages | 2020-02-17T11:25:40 | CC-MAIN-2020-10 | 1581875141806.26 | [array(['http://www.seroter.com/BlogPics/05.01.2006correlation1.jpg', None],
dtype=object)
array(['http://www.seroter.com/BlogPics/05.01.2006correlation2.jpg', None],
dtype=object) ] | docs.microsoft.com |
How to use the keyboard exclusively
Keyboard shortcuts can make it easier to navigate the Visual Studio IDE and to write code. This article explores a few ways you can use keyboard shortcuts more effectively.
For a full listing of command shortcut keys in Visual Studio, see Default keyboard shortcuts.
Tip
To learn more about accessibility updates, see the Accessibility improvements in Visual Studio 2017 blog post.
Note
Depending on your settings or the edition of Visual Studio you use, the dialog boxes and menu commands you see might differ from those described in Help. To change your settings, choose Import and Export Settings on the Tools menu. For more information, see Reset settings.
Toolbox controls
To add a control on the Toolbox to a form or designer without using the mouse:
On the menu bar, choose View > Toolbox.
Use the Ctrl+Up arrow or Ctrl+Down arrow keys to move among the sections in the Toolbox tab.
Use the Up arrow key or Down arrow key to move among the controls in a section.
After you select the control, use the Enter key to add the control to the form or designer.
Dialog box options
To move among the options in a dialog box and change option settings by using only the keyboard:
Use Tab or Shift+Tab to move up and down through the controls in the dialog box.
To change option settings:
For radio buttons, use the Up arrow and Down arrow keys to change the selection.
For check boxes, press Spacebar to select or unselect.
For drop-down lists, use Alt+Down arrow to display items, and then use the Up arrow and Down arrow keys to change the selected item.
For buttons, select Enter to invoke.
For grids, use the arrow keys to navigate. For drop-down lists in grids, use Shift+Alt+Down arrow to display items, and then use the Up arrow and Down arrow keys to change the selected item.
Navigate between windows and files
To move among files in an editor or designer, choose the Ctrl+Tab keyboard shortcut to display the IDE Navigator with Active Files selected. Choose the Enter key to navigate to the highlighted file.
To move among docked tool windows, choose the Alt+F7 keyboard shortcut to display the IDE Navigator with Active Tool Windows selected. Choose the Enter key to navigate to the highlighted window.
Move and dock tool windows
Navigate to the tool window you intend to move and give it focus.
On the Window menu, select the Dockable option.
Press Alt+Spacebar, and then choose Move.
The docking guide diamond appears.
Use the arrow keys to move the window to a new location.
The mouse pointer moves with the window as you use the arrow keys.
When you've reached the new location, use the arrow keys to move the mouse pointer over the correct portion of the guide diamond.
An outline of the tool window appears in the new docking location.
Press Enter.
The tool window snaps into place at the new docking location.
See also
Feedback | https://docs.microsoft.com/en-us/visualstudio/ide/reference/how-to-use-the-keyboard-exclusively?view=vs-2017 | 2020-02-17T11:11:56 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
Provisioning Settings¶
- Require Environment Selection: Forces users to select an Environment during provisioning.
- Show Pricing: Displays or hides Pricing in the Provisioning wizard and on Instance and Host detail pages.
- Hide Datastore Stats On Selection: Hides Datastore utilization and size stats in provisioning and app wizards.
- Cross-Tenant Naming Policies: Enable for the sequence value in naming policies to apply across tenants.
- Reuse Sequence Numbers: Enable for sequence numbers to always increment and never be reused. When disabled, sequence numbers will be reused.
PXE Boot Settings¶
- Default Root Password
- Enter the default password to be set for Root during PXE Boots.
Environments
Administration -> Provisioning -> Environments
Overview¶
The Environments section is where you create and manage Environment Tags, which are available in the Environment dropdown during Provisioning to attach to Instances. An instances Environment Tag can be changed by editing the instance.
Creating Environments¶
Select + Create Environment
Populate the following for the New Environment:
- Name
Name of the Environment
- Code
Shortcode used for API and CLI
- Description
Environment description displayed in Environments list page.
- Visibility
- Private: Available only in the Tenant the Environment is created in.
- Public: Available for all Tenants. Public is only applicable for Environments created in the the Master Tenant.
Note
Existing Environments can be edited or removed using the Actions dropdown in the Environments list.
Licenses
Administration -> Provisioning -> Licenses
Overview¶
The License section is for automating the application of Licensee to Instances while provisioning. Licenses can be added to Morpheus and then attached to images. Morpheus will then apply the license to Instances provisioned using the images with license attached. Licenses can be configured for single or multiple Tenants.
Creating Licenses¶
Select + Create License
In the New License modal, enter the following:
- License Type
Windows
- Name
Name of the License in Morpheus
- License Key
Enter the License Key
- Org Name
The Organization Name (if applicable) related to the license key
- Full Name
The Full Name (if applicable) related to the license key
- Version
License Version
- Copies
The Number of copies available on the License
- Description
License description displayed in the Licenses list in Morpheus . Helpful for identifying License after creation
- Virtual Images
- Search for existing Virtual Images by name and select to attach the image to the license.
Note
Virtual Images are synced from Clouds or added in the Provisioning -> Virtual Images section.
- Tenant Permissions
Search for and select the Tenant(s) the License will be available for. Multiple Tenants can be added.
Save Changes
Provisioning with Licenses¶
When a Virtual Image is added to a license, Morpheus will automatically apply the License to Instances configured with the Virtual Image during provisioning, including Instance Types with a Node Type that is configured with the Virtual Image, or if the image is selected when using generic Cloud Instances types (VMware, AWS, Nutanix, Openstack etc). Virtual Images can be removed from a License by editing the License.
Managing Licenses¶
Created Licenses details are displayed in the License page, including the number of copies applied per License, the Tenants added to the License, and the Virtual Images attached to the License.
The Name, Version, Copies, Description, Virtual Images and Tenant Permissions are editable but selecting the Actions dropdown on a License.
Note
License Types, Keys, Org Names and Full Names are not editable after a license has been created.
License can also be removed using the Actions dropdown on a License. | https://docs.morpheusdata.com/en/4.1.0/administration/provisioning/provisioning.html | 2020-02-17T09:51:51 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.morpheusdata.com |
Payment systems
Splynx software can be connected to different Payment Gateways. Subscribers then are able to pay their invoices using Credit cards or their Payment system accounts.
Below is a list of supported and integrated Payment Gateways. By clicking on Payment Gateway link, you will be redirected to documentation page describing how to install and use Gateway with Splynx. | https://docs.splynx.com/payment_systems/payment_systems.md | 2020-02-17T09:52:42 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.splynx.com |
XMPP Manager¶
XMPP Manager is an optional menu item. In order to have the XMPP Manager option, there are a few steps to take to enable XMPP.
XMPP Profile
- FusionPBX menu.
- Accounts -> XMPP manager.
- Click the plus on the right to create a profile.
Note
Google has since deprecated its XMPP service
In this example we will set up Google Talk by creating a profile called gtalk.
Profile Name: gtalk
Username: [email protected] (use your account)
Password: use the correct password
Auto-Login: yes
XMPP Server: talk.google.com
Two approaches can be used for the next part.
Option 1.
Let's say my Gmail number is 13051231234. This approach sends inbound calls to the inbound routes with a destination number equal to the default extension number that is set.
Default extension: 13051231234
Advanced -> Context: public
Option 2.
On a single tenant system. This will send the call to extension 1001 in the default context.
Default extension: 1001
Advanced -> Context: default
Option 3.
On a single tenant system. This will send the call to extension 1001 in the multi-tenant domain name.
Default extension: 1001
Advanced -> Context: your.domain.com
Save the settings and restart the module. Restart the ‘XMPP’ module from Advanced -> Modules page. Go back to Accounts -> XMPP if the status says ‘AUTHORIZED’ then you are ready to go.
Note: If you are not getting AUTHORIZED, you might need to go to the Google account settings and choose "Allow less secure apps: ON" under the Sign-in & security section.
Outbound Routes
For this example we will use 11 digit dialing.
Gateway: XMPP
Dialplan Expression: 11 digits
Description: Google Talk
Press Save
If your XMPP profile is named something other than gtalk edit the outbound route you just created. Bridge statement should look like: dingaling/gtalk/[email protected] replace gtalk with the profile name you chose and then save it.
Enable XMPP¶
XMPP manager is used to configure client side XMPP profiles. It can be used as a client to register to make and receive call with Google Talk or other XMPP servers.
GIT Manually add XMPP
After version 3.8 XMPP is optional. To add XMPP do the following
Goto command line
cd /tmp
git clone
cd fusionpbx-apps/
mv xmpp/ /var/www/fusionpbx/app/
cd /var/www/fusionpbx/app
chown www-data:www-data -R xmpp/
Go to the FusionPBX GUI
Go to the GUI and click Advanced > Menu Manager > edit icon > click "Restore Defaults" at top right
Then go to Advanced > Upgrade, check Schema, Data Types, and Permission Defaults, then click Execute
Click status > sip status > Flush Memcache
Log out then back in
You should now have XMPP Manager under Accounts. | https://docs.techlacom.com/en/latest/applications/xmpp.html | 2020-02-17T09:07:26 | CC-MAIN-2020-10 | 1581875141806.26 | [array(['../_images/fusionpbx_xmpp1.jpg', '../_images/fusionpbx_xmpp1.jpg'],
dtype=object)
array(['../_images/fusionpbx_xmpp2.jpg', '../_images/fusionpbx_xmpp2.jpg'],
dtype=object)
array(['../_images/fusionpbx_xmpp5.jpg', '../_images/fusionpbx_xmpp5.jpg'],
dtype=object) ] | docs.techlacom.com |
Use the following steps when you troubleshoot issues with Acquia Search:
Attempt to connect from your local install using the subscription's keys. You must disable and enable all the search modules to regenerate the salt variables. This step helps determine whether the issue is with the website or the search subscription.
Examine the version of the Apache Solr Search module.
If indexing appears to run fine but the results are sparse, examine
/admin/settings/apachesolr/query-fields to ensure no major fields are
configured to Omit. Also, scroll to the bottom of
/admin/settings/apachesolr/content-bias and check which content types
are excluded from indexing.
Find the last node indexed by completing the following steps:
Run the following command:
drush -vd search-index
At some point, you’ll start seeing notices about what node IDs failed to index. Run the following command and note the limit:
drush vget apachesolr_cron_limit
Divide the number returned by the previous command by
2, and then
use the resulting number with the following command:
drush vset apachesolr_cron_limit [value]
For example, if you have 100 results, run the following command:
drush vset apachesolr_cron_limit 50
Repeat the process, halving the limit each time until you reach
1
as the returned number.
When you reach
1, you will know which node is causing problems.
Another technique to find the last indexed node is to use the following command:
drush vget apachesolr_index_last
Use the following command for Drupal 7:
drush php-eval 'module_load_include("inc", "apachesolr", "apachesolr.index"); $rows = apachesolr_index_get_entities_to_index(apachesolr_default_environment(), "node", 1); foreach ($rows as $row) { print_r($row); }'
The reported node is probably where the indexing problem resides.
In Drupal 8, views exposed filters, such as search results and blocks with
facets, must not have caching enabled alongside AJAX, as the views AJAX
request does not pass along the
?f[0]= parameters used by exposed
filters or facets to filter results as expected. The open issue on Drupal.org,
AJAX facet block seems to lose views context, will be updated
as the community identifies fixes and workarounds.
The Apache Solr Search module includes a group of Drush commands you can use to work with your search environments. The following list includes several useful commands:
For more information about the preceding commands, run
drush help from
the command line of your application where Acquia Search is installed. | https://docs.acquia.com/acquia-search/debugging/ | 2020-02-17T10:34:32 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.acquia.com |
Soap
Exception. Is Version Mismatch Fault Code(XmlQualifiedName) Method
Definition
Returns a value that indicates whether the SOAP fault code is equivalent to the
VersionMismatch SOAP fault code regardless of the version of the SOAP protocol used.
public: static bool IsVersionMismatchFaultCode(System::Xml::XmlQualifiedName ^ code);
public static bool IsVersionMismatchFaultCode (System.Xml.XmlQualifiedName code);
static member IsVersionMismatchFaultCode : System.Xml.XmlQualifiedName -> bool
Public Shared Function IsVersionMismatchFaultCode (code As XmlQualifiedName) As Boolean
Parameters
- code
- XmlQualifiedName
An XmlQualifiedName that contains a SOAP fault code.
Returns
Remarks
Recipients of a SoapException can use this method to determine whether the Code property is functionally equivalent to the
VersionMismatch SOAP fault code defined in SOAP 1.1 regardless of the version of the SOAP protocol used. Versions of the SOAP protocol later than 1.1 might use different names or namespaces for the
VersionMismatch SOAP fault code defined in SOAP version 1.1, which is represented by the SoapException.VersionMismatchFaultCode field. SOAP 1.2 names the fault code the same; however, it is scoped by a different XML namespace and is represented by the Soap12FaultCodes.VersionMismatchFaultCode field. | https://docs.microsoft.com/en-us/dotnet/api/system.web.services.protocols.soapexception.isversionmismatchfaultcode?view=netframework-4.8 | 2020-02-17T09:26:53 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
News
The News feature helps you provide your customers with updated information about services or other business aspects of your company.
To create News go to Support → News and click on Add News on the top right of the table.
"Create news" window will show up, where you will be able to fill in Title and Description field, choose the correct date, select Partners and location if necessary, and write a text of the news.
The editing option of the text allows you to edit or format the text, insert URL links and images.
For example, you can insert a URL link to redirect customers to your company's website to get more information about a particular topic. To do so, simply highlight the word or phrase that should be linked to a webpage and click on the URL icon.
After news items are created, it is possible to edit or delete them with the edit/delete icons.
It is also possible to sort the news by Partner or Location.
With the help of the columns icon you can choose which columns to show or hide.
Customers will be able to see the news on the Customer Portal on their Dashboard. By clicking on the title of a news item they can read it, and by clicking on an interactive link they will be taken to the linked webpage if needed.
It is important to enable option Show portal news in Config → Portal → Dashboard in Splynx, so customers will be able to see the news.
| https://docs.splynx.com/support_messages/news/news.md | 2020-02-17T09:04:40 | CC-MAIN-2020-10 | 1581875141806.26 | [array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fadd_news.png',
'Add news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fcreate_news.png',
'Create news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fedit_text.png',
'Edit text'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Furl_icon.png',
'URL icon'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fsave_url.png',
'Save url'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fedit_delete_icon.png',
'Edit delete icon'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fedit_news.png',
'Edit news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fsort_news.png',
'Sort news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fsave_icon.png',
'Save icon'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fcolumns_icon.png',
'Columns icon'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fshow_hide_columns.png',
'Show hide columns'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fdashboard_news.png',
'Dashboard news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fread_news.png',
'Read news'], dtype=object)
array(['http://docs.splynx.com/images/get?path=en%2Fsupport_messages%2Fnews%2Fturn_on_news.png',
'Turn on news'], dtype=object) ] | docs.splynx.com |
Enter the display name of the virtual machine or a pattern using wildcards. For example, type Test* to identify virtual machines for which the virtual machine name begins with "Test".
Cloud Service (Azure Classic)
Enter a name pattern or browse to select a cloud service for Azure Classic.
Resource Group (ARM)
Enter a name pattern or browse to select a resource group for Azure Resource Manager.
Power State
Select the power-on status of the virtual machines to be included:
- Running: Identify VMs that are powered on.
- Stopped: Identify VMs that are powered off.
Storage Account (ARM)
Browse to select a storage account for Azure Resource Manager.
Region
Browse to select a region. You can select from the following:
Last modified: 1/17/2019 8:17:51 PM | http://docs.commvault.com/commvault/v11/article?p=62097.htm | 2020-02-17T10:36:38 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.commvault.com |
Add an Access IP
This interface adds new IP addresses to a list of IP addresses that can access a Manage2 account.
This interface adds new IP addresses to a list of IP addresses that can access a Manage2 account.
This document lists third-party software and modifications that you can install to help secure your server.
The API Pickup Passphrases interface allows you to create and manage the pickup phrases to use when you authenticate with your Manage2 account.
This document describes some basic security concepts that you can use to protect your system from XSRF attacks.
This feature allows you to change the password for your Manage2 account.
The FastCGI Process Manager (PHP-FPM) implementation of FastCGI provides process management, emergency restarts, and IP address restriction.
This document describes how to manage the cPHulk service from the command line.
On Friday, November 24 2017, Exim announced two vulnerabilities in versions 4.88 and later.
We were made aware of a CVE in Dovecot Versions 2.0.14 - 2.3.5 that involves using Solr on Thursday, March 28th 2019.
On 27 January 2015, a vulnerability in all versions of the GNU C library (glibc) was announced by Qualys.
On July 25 2016, Perl announced a vulnerability in all versions of the Perl 5 software.
On Wednesday, March 2, 2016, Exim announced a vulnerability in all versions of the Exim software.
On Tuesday, May 3 2016, ImageMagick announced a vulnerability in all versions of the ImageMagick software. ImageMagick is a software package commonly used by web services to process images.
This document describes the Apache-disclosed vulnerability that affects application code which runs in CGI, or CGI-like environments.
On Sunday, December 25, 2016, Exim announced a vulnerability in versions 4.69 to 4.87 of the Exim software.
Red Hat has been made aware of multiple microarchitectural (hardware) implementation issues affecting many modern microprocessors, requiring updates to the Linux kernel, virtualization-related components, and/or in combination with a microcode update.
Exim maintainers announced that they received a report of a potential remote exploit in Exim from version 4.87 to version 4.91.
We strongly recommend that hosting providers and system administrators use this document to determine the status of their systems.
cPanel & WHM versions 11.48 and later include functionality to validate that you download all cPanel & WHM-delivered files in an uncorrupted state.
This document lists the interfaces in cPanel & WHM in which you can adjust OpenSSL's protocols and cipher stacks for those services. | https://docs.cpanel.net/tags/security/ | 2020-02-17T10:22:39 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.cpanel.net |
Applications
This document describes the cPanel interface Applications.
This document describes the cPanel interface Applications.
This interface allows you to configure the upcp, backup, and cpbackup scripts' cron jobs on your server.
The cPanel DNSOnly™ software allows you to run a dedicated physical nameserver.
The interface allows you to open a ticket with cPanel's Technical Support.
This feature allows you to remove a package from your server.
This interface allows you to configure a DNS cluster and add servers to an existing DNS cluster.
This interface allows you to configure a reseller's ability to access certain privileges via Access Control Lists (ACLs).
This interface enables you to email every cPanel user simultaneously.
The interface uses the server authentication details in your ticket to automatically provide Support with SSH access to your server.
Use this interface to allow or deny (block) access to services for specific IP addresses.
The Initial Quota Setup interface scans your server to confirm that it uses disk space quotas on the directories in which your cPanel users store their files.
RPM® (RPM Package Manager) refers to a software format, the software that the RPM package contains, and the package management system.
This interface allows you to send all of the visitors of a domain or particular page to a different URL.
This interface lets you easily configure a mail client to access a cPanel email address.
This interface allows you to add, manage, upgrade, and remove cPanel Addons (cPAddons).
This document describes the two reboot interfaces.
This interface displays information about cPanel & WHM’s task queue in real time.
This interface allows you to manage your Subaccounts. | https://docs.cpanel.net/tags/uidoc/ | 2020-02-17T10:01:51 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.cpanel.net |
Binding
List Collection View. Can Add New Property
Definition
Gets a value that indicates whether a new item can be added to the collection.
public: property bool CanAddNew { bool get(); };
public bool CanAddNew { get; }
member this.CanAddNew : bool
Public ReadOnly Property CanAddNew As Boolean
Property Value
Implements
Remarks
The BindingListCollectionView can create a new item for the collection if there is not an edit transaction occurring, if the collection is not a fixed size, and if the collection is not read-only. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.data.bindinglistcollectionview.canaddnew?view=netframework-4.8 | 2020-02-17T10:56:47 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
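As a small illustrative fragment (the view variable is a placeholder for a BindingListCollectionView obtained elsewhere, for example from CollectionViewSource.GetDefaultView):
if (view.CanAddNew)
{
    var newItem = view.AddNew();   // begins the add transaction
    // ... initialize newItem here ...
    view.CommitNew();              // or view.CancelNew() to discard it
}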
Pause service updates through Lifecycle Services (LCS)
Important
Dynamics 365 for Finance and Operations is now being licensed as Dynamics 365 Finance and Dynamics 365 Supply Chain Management. For more information about these licensing changes, see Dynamics 365 Licensing Update.
This topic explains how to pause updates to your sandbox and production environments by using Microsoft Dynamics Lifecycle Services (LCS)..
Microsoft updates your configured sandbox and production environments to the latest service update that Microsoft has released. Microsoft notifies you about upcoming updates to your environments via email and through notifications in LCS. At that point, if you can't proceed with the update for some reason, you can pause it through LCS.
For more information about how to change the configured sandbox environment and set the production update cadence, see Configure service updates through Lifecycle Services (LCS).
Who can pause service updates?
Only users (customers or partners) who are assigned to the project owner role in LCS can pause updates. Additionally, updates can be paused only for implementation projects.
Staying current with service updates helps guarantee that customers always run on the latest set of fixes that Microsoft has released, so that they have the best service experience. Therefore, Microsoft doesn't allow updates to be paused indefinitely.
You can't use LCS to pause updates if you're three or more updates behind the latest update that Microsoft has released. For example, if the latest update that Microsoft has released is version 10.0.0, customers who are on version 8.1.3, version 8.1.2, and version 8.1.1 can pause updates. However, customers who are on version 8.1.0 can't pause updates, because they are more than three updates behind. Customers who are on version 7.3 can get only platform updates. For example, if the last platform update that Microsoft has released is Platform update 25, customers who are on Platform update 24, Platform update 23, and Platform update 22 can pause updates. However, customers who are on Platform update 21 can't pause updates.
What can I pause?
If you decide to pause updates, you have two options:
- Pause updates only to your production environment.
- Pause updates to both your sandbox environment and your production environment.
You can pause a maximum of three continuous updates at a time. For example, if you're using version 8.1.3, you can pause update versions 10.0.0, 10.0.1, and 10.0.2. However, you can't pause update version 10.0.3. In addition, if in the month of June you pause the next three updates, you will not be able to pause updates scheduled for October, November, December, and later. Similarly, for customers on version 7.3 receiving platform-only updates, if you're using Platform update 23 then you can pause update 24, update 25, and update 26, but you cannot pause update 27. We will be releasing 8 updates in a year, and we require you to take at least two updates in a year.
Important
There is no way to pause more than three updates, regardless of your industry or business schedule. If you are more than three updates behind and you find a critical issue during validations in your sandbox environment after the update, you can contact Microsoft Support to pause the update to your production environment. This is only required if you are more than three updates behind and you are unable to use the pause updates functionality available in LCS to pause the update to production.
If you pause updates to your sandbox environment, updates are automatically also paused for your production environment, because Microsoft always updates configured sandbox environments before production environments.
How do I pause updates?
To pause updates, follow these steps.
In LCS, in your implementation project, open the Project settings page.
This page has a new tab that is named Update settings.
On the Update settings tab, set the Pause updates option to ON.
Select Edit settings.
In the dialog box that appears, select whether you want to pause updates to your production environment only, or to both your sandbox environment and your production environment.
Select Next.
Select your reason for pausing updates. If you select Issue found during validations, you must enter a valid support ticket number. You can add any additional details that will help Microsoft understand why you want to pause updates.
When you've finished, select Confirm.
You can also edit an existing pause. You can either extend the duration of the pause, so that updates are paused for a longer time, or cancel it, so that updates are resumed. To edit a pause, select Edit settings. The limitations about the number of updates that you can pause still apply.
To cancel a pause and resume updates to your environments, set the Pause updates option to OFF.
Any time that you pause updates or edit an existing pause, a notification appears at the top of the Update settings tab. This notification shows what has been paused. An email is also sent to all stakeholders (the project owner and environment manager), to notify them that service updates for the selected environments have been paused. If someone cancels an existing pause and resumes updates, the notification disappears, and an email is sent to inform the stakeholders that updates have resumed.
Important
You can pause updates through LCS until four hours before the start of the downtime window.
You can cancel a pause and choose to resume updates only 7 days prior to the start of the downtime date. If you are past that date then you will not be able to cancel a pause.
What happens after the pause duration expires?
Cumulative service updates help guarantee that customers always run on the latest set of fixes that Microsoft has released, so that they have the best service experience. Therefore, Microsoft doesn't allow updates to be paused indefinitely.
There are two ways to cancel pauses, so that updates are resumed:
- Someone manually cancels an ongoing pause, as explained in the previous section.
- The duration that was set for the pause expires, and updates to the configured environments are automatically resumed.
In both cases, an email is sent to inform the stakeholders.
For more information about service updates, see One Version service updates FAQ.
Feedback | https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/lifecycle-services/pause-service-updates | 2020-02-17T10:50:59 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.microsoft.com |
OMERO search¶
OMERO.server uses Lucene to index all string and
timestamp information in the database, as well as all
OriginalFile which
can be parsed to simple text (see File parsers for
more information). The index is stored under
/OMERO/FullText or the
FullText subdirectory of your
omero.data.dir, and can be
searched with Google-like queries.
Once an entity is indexed, it is possible to start writing queries
against the server via
IQuery.findAllByFullText(). Use
new Parameters(new Filter().owner()) and
.group() to restrict
your search. Or alternatively use the
ome.api.Search interface
(below).
See also
- Search and indexing configuration
Section of the sysadmin documentation describing the configuration of the search and indexing for the server.
Field names¶
Each row in the database becomes a single Lucene
Document parsed
into the several
Fields. A field is referenced by prefixing a search
term with the field name followed by a colon. For example,
name:myImage searches for myImage anywhere in the name field.
Queries¶
Search queries are very similar to Google searches. When search terms are entered without a prefix (“name:”), then the default field will be used which combines all available fields. Otherwise, a prefix can be added to restrict the search.
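A few illustrative query strings (standard Lucene syntax; only the name field is taken from this page, the other constructs are generic):
myImage (matched against the combined default field)
name:myImage (restricted to the name field)
name:"my image" (a quoted phrase)
name:myIma* (trailing wildcard; leading wildcards are restricted, see below)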
Indexing¶
ome.api.IQuery¶
The Search API offers a number of different queries along with various filters and settings which are all maintained on the server.
The matrix below shows which combinations of parameters and queries are supported (S), which will throw an exception (X), and which will simply be silently ignored (I).
Footnotes
Leading wildcard searches¶
Leading wildcard searches are disallowed by default. “?omething” or “*hatever”, for example, would both throw exceptions. They can be run by using:
Search search = serviceFactory.createSearchService();
search.setAllowLeadingWildcards(true);
If there are too many terms in the expansion, then an exception will be thrown. This requires the user to enter a more refined search, but not because there are too many results, only because there is not enough room in memory to search on all terms at once.
Extension points¶
Two extension points are currently available for searching. The first are the File parsers mentioned above. By configuring the map of Formats (roughly mime-types) of files to parser instances, extracting information from attached binary files can be made quick and straightforward.
Similarly, Search bridges provide a mechanism for parsing all metadata entering the system. One built in bridge (the FullTextBridge) parses out the fields mentioned above, but by creating your own bridge it is possible to extract more information specific to your site.
See also
Working with annotations, Search bridges, File parsers, Query Parser Syntax, | https://docs.openmicroscopy.org/omero/5.6.0/developers/Modules/Search.html | 2020-02-17T11:18:16 | CC-MAIN-2020-10 | 1581875141806.26 | [] | docs.openmicroscopy.org |
openHAB Hue Emulation Service
Hue Emulation exposes openHAB items as Hue devices to other Hue HTTP API compatible applications like an Amazon Echo.
Features:
- UPNP automatic discovery
- Support ON/OFF and Percent/Decimal item types
- Can expose any type of item, not just lights
- Pairing (security) can be enabled/disabled in real time using the configuration service (under services in the PaperUI for example)
Configuration:
Pairing can be turned on and off:
org.openhab.hueemulation:pairingEnabled=false
(Optional) For systems with multiple IP addresses the IP to use for UPNP may be specified, otherwise the first non loopback address will be used.
org.openhab.hueemulation:discoveryIp=192.168.1.100
Device Tagging
To expose an item on the service, apply one of the supported tags ("Lighting", "Switchable", or "TargetTemperature") to it. The item label will be used as the Hue device name.
Switch TestSwitch1 "Kitchen Switch" [ "Switchable" ] Switch TestSwitch2 "Bathroom" [ "Lighting" ] Dimmer TestDimmer3 "Hallway" [ "Lighting" ] Number TestNumber4 "Temperature Set Point" [ "TargetTemperature" ] | http://docs.openhab.org/addons/io/hueemulation/readme.html | 2017-05-22T19:09:35 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.openhab.org |
RabbitMQ® for Pivotal Cloud Foundry
Operation Tips
What should I check before deploying a new version of the tile?
Ensure that all nodes in the cluster are healthy via the RabbitMQ Management UI, or health metrics exposed via the firehose. You cannot rely solely
on the
bosh instances output as that reflects the state of the Erlang VM used by RabbitMQ and not the RabbitMQ application.
What is the correct way to stop and start RabbitMQ in PCF?
Only BOSH commands should be used by the operator to interact with the RabbitMQ application. For example:
bosh stop rabbitmq-server
bosh start rabbitmq-server
There are BOSH job lifecycle hooks which are only fired when rabbitmq-server is stopped through BOSH. You can also stop individual instances by running:
bosh stop JOB [index]
Note: Do not use
monit stop rabbitmq-server as this does not call the drain scripts.
What happens when I run “bosh stop rabbitmq-server”?
BOSH starts the shutdown sequence from the bootstrap instance.
We start by telling the RabbitMQ application to shutdown and then shutdown the Erlang VM within which it is running. If this succeeds, we run the following checks to ensure that the RabbitMQ application and Erlang VM have stopped:
- If
/var/vcap/sys/run/rabbitmq-server/pidexists, check that the PID inside this file does not point to a running Erlang VM process. Notice that we are tracking the Erlang PID and not the RabbitMQ PID.
- Check that
rabbitmqctldoes not return an Erlang VM PID
Once this completes on the bootstrap instance, BOSH will continue the same sequence on the next instance. All remaining rabbitmq-server instances will be stopped one by one.
What happens when “bosh stop rabbitmq-server” fails?
If the
bosh stop fails, you will likely get an error saying that the drain
script failed with:
result: 1 of 1 drain scripts failed. Failed Jobs: rabbitmq-server.
What do I do when “bosh stop rabbitmq-server” fails?
The drain script logs to
/var/vcap/sys/log/rabbitmq-server/drain.log. If you
have a remote syslog configured, this will appear as the
rmq_server_drain
program.
First,
bosh ssh into the failing rabbitmq-server instance and start the
rabbimtq-server job by running
monit start rabbitmq-server). You will not be
able to start the job via
bosh start as this always runs the drain script first
and will fail since the drain script is failing.
Once the
rabbitmq-server job is running, which you can confirm via
monit status, run
DEBUG=1
/var/vcap/jobs/rabbitmq-server/bin/drain. This tells you exactly why it is
failing. | http://docs.pivotal.io/rabbitmq-cf/1-7/operations.html | 2017-05-22T19:27:27 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.pivotal.io |
🔗Packaging Plugins
To package, present, and deploy your plugin, see these instructions:
- Plugin Packaging: packaging in a JAR
- Plugin Presentation: controlling how your plugin appears in the CDAP Studio
If you are installing a third-party JAR (such as a JDBC driver) to make it accessible to other plugins or applications, see these instructions.
🔗Plugin Packaging
A Plugin is packaged as a JAR file, which contains inside the plugin classes and their dependencies.
CDAP uses the "Export-Package" attribute in the JAR file manifest to determine
which classes are visible. A visible class is one that can be used by another class
that is not from the plugin JAR itself. This means the Java package which the plugin class
is in must be listed in "Export-Package", otherwise the plugin class will not be visible,
and hence no one will be able to use it. This can be done in Maven by editing your pom.xml.
For example, if your plugins are in the
com.example.runnable and
com.example.callable
packages, you would edit the bundler plugin in your pom.xml:
<plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <version>2.3.7</version> <extensions>true</extensions> <configuration> <instructions> <Embed-Dependency>*;inline=false;scope=compile</Embed-Dependency> <Embed-Transitive>true</Embed-Transitive> <Embed-Directory>lib</Embed-Directory> <Export-Package>com.example.runnable;com.example.callable</Export-Package> </instructions> </configuration> ... </plugin>
By using one of the available Maven archetypes, your project will be set up to
generate the required JAR manifest. If you move the plugin class to a different Java
package after the project is created, you will need to modify the configuration of the
maven-bundle-plugin in the
pom.xml file to reflect the package name changes.
If you are developing plugins for the
cdap-data-pipeline artifact, be aware that for
classes inside the plugin JAR that you have added to the Hadoop Job configuration directly
(for example, your custom
InputFormat class), you will need to add the Java packages
of those classes to the "Export-Package" as well. This is to ensure those classes are
visible to the Hadoop MapReduce framework during the plugin execution. Otherwise, the
execution will typically fail with a
ClassNotFoundException. | http://docs.cask.co/cdap/4.1.1/en/developers-manual/pipelines/developing-plugins/packaging-plugins.html | 2017-05-22T19:32:46 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.cask.co |
Required - Property Fields (Advanced Custom Fields Pro)
Realty 2.1+ requires Advanced Custom Fields Pro plugin to be installed and activated. Without ACF Pro, the theme won't work properly: Learn how to install and activate all required plugins →
Advanced Custom Fields (short: ACF) is the most popular custom fields plugin for WordPress. It allows you to add all kinds of custom fields to any post type. It is integrated into the Realty theme so you can add your very own property fields, free of charge.
How to import default property fields
Before you can see and use any fields you have to import default ACF property fields, following the steps below. Skip this step, if you have already imported the demo content.
- After you have installed and activated the Realty theme and ACF Pro plugin, go to Custom Fields > Tools
- Under “Import Field Group” click “Choose File” and upload “realty-acf-all-fields.json”. This file is part of your ThemeForest download realty-wordpress-theme.zip. Make sure to download and unzip "All files and documentation" option.
- Click “Import” to run the importer. Once finished go to “Custom fields” on left panel and you should see new field groups added named “Property Fields”, “Additional Fields” and “Slideshow Settings”.
IMPORTANT: Do not make any changes to any “Field Name”. You can change “Field Label” and other values/settings, but DO NOT change “Field Name” for any of the imported default fields.
Where can I find the license key for Advanced Custom Fields Pro?
Realty includes the ACF Pro plugin, free of charge. But as with any other bundled premium plugin, we are not allowed to distribute the license key. You can read more about it here:
ACF Pro update method
Whenever we release a theme update we include the latest version of ACF Pro and make adjustments in the theme according to each update of the plugin.
You can’t update the premium plugin yourself. Please ignore the updates until we release a theme update. If you want to update ACF to the latest version by yourself, you can purchase a license of ACF Pro from:
With each update of the theme, if you are asked to update ACF PRO plugin, you can update the plugin and if that process fails you can try the following method.
Please deactivate, uninstall, and delete the Advanced Custom Fields Pro plugin. When you do that, you should see a notification to install the "Advanced Custom Fields Pro" plugin. If you do not see that, please go to Appearance > Install Plugins.
You will see "Advanced Custom Fields Pro" plugin in the list, please install and activate the plugin and you will get the latest plugin with respect to our theme update.
Note: Please make sure when you try this method, you have the Realty parent theme active, if the child theme is active the process will throw an error. You can switch back to child theme after updating the plugin.
ACF Pro additional property fields
On top of the default fields you can add as many field groups/fields as you want.
If you want to create new fields or you want to create another field group for properties, you can do that easily. These custom property fields are visible on the single property page, property submit page and can also be used for your own custom property search.
Currently supported custom field types:
- Text
- Textarea
- Number
- Date Picker
- Checkbox
- Radio
- taxonomy
- Page Link
- Url
- Oembed (only for tab view)
How to create additional property field group
Once you have installed and activated the Advanced Custom Fields Pro plugin, go to Custom Fields on the left-hand side of your WordPress menu. First we need to create a so called “Field Group”. Therefore click “Add New” next to the Field Group title. On the next screen under Location > Rules > Show this field group if, set the post type to equal “property”, as the screenshot below illustrates:
How to add additional custom property fields
Simply click the blue “+Add Field” button and enter the field settings. The “Label” appears on the single property page under “Additional details”. “Name” is created automatically, but you need to make a little change for the field name.
Please make sure to add the prefix “additional_” to the field name which you have created.
Do not apply this rule on “Label”, it is only required for “Field Name”. The purpose of this prefix is to differentiate between default fields and additional fields.
For example: for an additional field labeled "Lot Size", you would give it a field name like "additional_lot_size".
If you don’t use the “additional_” prefix for a field name, the field will not show up under the “Additional Details” section of any single property page.
When using the date picker make sure to set “Save Format” and “Display Format” to “yymmdd”. Otherwise it won’t be searchable in the property search form.
Once you are finished creating all custom property fields, click “Publish”.
When editing a property you will find the field group you just created at the very bottom of the page, right underneath the default “Property Settings”.
IMPORTANT: All custom property fields are also available in your property search within the search field dropdown under Appearance > Theme Options > Property Search (Learn more about property search).
How to output custom fields anywhere in your theme:
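As a minimal sketch of the idea (the template file, markup, and echo logic are illustrative; the field name matches the "additional_lot_size" example above), ACF's get_field() can print a custom property field in your theme:
<?php
// e.g. inside a single property template, within The Loop
$lot_size = get_field( 'additional_lot_size' );
if ( $lot_size ) {
    echo '<li>Lot Size: ' . esc_html( $lot_size ) . '</li>';
}
?>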
Advanced Custom Fields Resources: | http://docs.themetrail.com/article/33-advanced-custom-fields-pro | 2017-05-22T19:08:19 | CC-MAIN-2017-22 | 1495463607046.17 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/57724d3dc6979166bd81803f/images/5774dfcc903360258a10d394/file-bXHkJuaBnB.jpg',
None], dtype=object) ] | docs.themetrail.com |
Single-Page Javascript App
This topic describes the OAuth 2.0 implicit grant type supported by Pivotal Single Sign-On (SSO). The implicit grant type is for applications whose client secret cannot be guaranteed to remain confidential, such as single-page JavaScript apps.
OAuth 2.0 Roles
- Resource Owner: A person or system capable of granting access to a protected resource.
- Application: A client that makes protected requests using the authorization of the resource owner.
- Authorization Server: The Single Sign-On server that issues access tokens to client applications after successfully authenticating the resource owner.
- Resource Server: The server that hosts protected resources and accepts and responds to protected resource requests using access tokens. Applications access the server through APIs.
Implicit Flow
- Access Application: The user accesses the application and triggers authentication and authorization.
- Authentication and Request Authorization: The application prompts the user for their username and password. The first time the user goes through this flow for the application, the user sees an approval page. On this page, the user can choose permissions to authorize the application to access resources on their behalf.
- Authentication and Grant Authorization: The authorization server receives the authentication and authorization grant.
- Issue Access Token: The authorization server validates the authorization grant and returns an access token to the application in the redirect URL.
- Request Resource w/ Access Token in URL: The application attempts to access the resource from the resource server by presenting the access token in the URL.
- Return Resource: If the access token is valid, the resource server returns the resources that the user authorized the application to receive.
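To make steps 2 through 4 concrete, the sketch below shows the shape of the authorization request a single-page app sends and where the access token comes back. It is a minimal illustration only: the /oauth/authorize path, the client ID, the redirect URI, and the scope are assumed example values, not values defined by this topic.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Assumed values for illustration only -- substitute your own SSO service plan
# URL, client ID, and registered redirect URI.
AUTH_SERVER = "https://login.example.com"          # assumed authorization server
CLIENT_ID = "my-spa-client"                        # assumed client ID
REDIRECT_URI = "https://app.example.com/callback"  # assumed registered redirect URI

# Steps 2-3: the app sends the user's browser to the authorization server.
# response_type=token is what makes this the implicit grant.
authorize_url = AUTH_SERVER + "/oauth/authorize?" + urlencode({
    "response_type": "token",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid",
})
print("Send the user's browser to:", authorize_url)

# Step 4: the authorization server redirects back with the access token in the
# URL fragment, for example:
redirect = REDIRECT_URI + "#access_token=abc123&token_type=bearer&expires_in=43199"
fragment = urlparse(redirect).fragment
access_token = parse_qs(fragment)["access_token"][0]
print("Access token:", access_token)
```

The application then presents this token to the resource server (steps 5 and 6); because the token travels in the URL fragment, it is handled by the browser and client-side code rather than by an intermediate web server.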
The resource server runs in PCF under a given space and organization. Developers set the permissions for the resource server API endpoints. To do this, they create resources that correspond to API endpoints secured by the Single Sign-On service. Applications can then access these resources on behalf of users. | http://docs.pivotal.io/p-identity/1-3/configure-apps/single-page-js-app.html | 2017-05-22T19:27:55 | CC-MAIN-2017-22 | 1495463607046.17 | [array(['../images/oauth_implicit.png', 'Oauth implicit'], dtype=object)] | docs.pivotal.io |
httpcfg - Mono Certificate Management for HttpListener
httpcfg [options] certificate.
httpcfg -add -port 8081 -pvk myfile.pvk -cert MyCert
For more details on creating the certificate file and the private key, see the following web page:
The certificates are stored in the ~/.mono/httplistener directory
httpcfg was written by Gonzalo Paniagua.
Visit for details.
Visit for details
makecert(1), signcode(1), cert2spc(1)
The private key format: | http://docs.go-mono.com/monodoc.ashx?link=man%3Ahttpcfg(1) | 2017-05-22T19:26:36 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.go-mono.com |
Security and Hacking - Cyber Forensics - A Field Manual For Collecting, Examining, And Preserving Evidence Of Computer Crimes.pdf
To synchronize application settings between user computers, Microsoft User Experience Virtualization (UE-V) 2.0, 2.1, and 2.1 SP1 use settings location templates. Some settings location templates are included in User Experience Virtualization. You can also create, edit, or validate custom settings location templates by using the UE-V Generator.
The UE-V Generator monitors Windows desktop applications to discover and capture the locations where the application stores its settings. The application that is monitored must be a desktop application. The UE-V Generator cannot create a settings location template for the following application types:
Virtualized applications
Applications that are offered through Terminal Services
Java applications
Windows apps
This topic describes how to edit, validate, and share custom settings location templates.
Standard and Nonstandard settings locations: The UE-V Generator helps you identify where applications search for settings files and registry settings that applications use to store settings information. The generator only discovers settings in locations that are accessible to a standard user. Settings that are stored in other locations are excluded. Discovered settings are grouped into two categories: Standard and Non-standard. Standard settings are recommended for synchronization, and UE-V can readily capture and apply them. Non-standard settings can potentially synchronize settings but, because of the rules that UE-V uses, these settings might not consistently or dependably synchronize settings. These settings might depend on temporary files, result in unreliable synchronization, or might not be useful. These settings locations are presented in the UE-V Generator. You can choose to include or exclude them on a case-by-case basis.
The UE-V Generator opens the application as part of the discovery process. The generator can capture settings in the following locations:
Registry Settings – Registry locations under HKEY_CURRENT_USER
Application Settings Files – Files that are stored under \ Users \ [User name] \ AppData \ Roaming
The UE-V Generator excludes locations, which commonly store application software files, but do not synchronize well between user computers or environments. The UE-V Generator excludes these locations. Excluded locations are as follows:
HKEY_CURRENT_USER registry keys and files to which the logged-on user cannot write values
HKEY_CURRENT_USER registry keys and files that are associated with the core functionality of the Windows operating system
All registry keys that are located in the HKEY_LOCAL_MACHINE hive, which require administrator rights and might require User Account Control (UAC) approval
Files that are located in Program Files directories, which require administrator rights and might require UAC approval
Files that are located under Users \ [User name] \ AppData \ LocalLow
Windows operating system files that are located in %Systemroot%, which require administrator rights and might require UAC approval
If registry keys and files that are stored in these locations are required to synchronize application settings, you can manually add the excluded locations to the settings location template during the template creation process (except for registry entries in the HKEY_LOCAL_MACHINE hive).
Edit Settings Location Templates with the UE-V Generator
Use the UE-V Generator to edit settings location templates. When the revised settings are added to the templates by using the UE-V Generator, the version information within the template is automatically updated to ensure that any existing templates that are deployed in the enterprise are updated correctly.
Note
If you edit a UE-V 1.0 template by using the UE-V 2 Generator, the template is automatically converted to a UE-V 2 template. UE-V 1.0 Agents can no longer use the edited template.
To edit a UE-V settings location template with the UE-V Generator
Click Start, click All Programs, click Microsoft User Experience Virtualization, and then click Microsoft User Experience Virtualization Generator.
Click Edit a settings location template.
In the list of recently used templates, select the template to be edited. Alternatively, click Browse to search for the settings template file. Click Next to continue.
Review the Properties, Registry locations, and Files locations for the settings template. Edit as required.
On the Properties tab, you can view and edit the following properties:
Application name: The application name that is written in the description of the program file properties.
Program name: The name of the program that is taken from the program file properties. This name usually has the .exe file name extension.
Product version: The product version number of the .exe file of the application. This property, together with the File version, helps determine which applications are targeted by the settings location template. This property accepts a major version number. If this property is empty, then the settings location template applies to all versions of the product.
File version: The file version number of the .exe file of the application. This property, along with the Product version, helps determine which applications are targeted by the settings location template. This property accepts a major version number. If this property is empty, the settings location template applies to all versions of the program.
Template author name (optional): The name of the settings template author.
Template author email (optional): The email address of the settings location template author.
The Registry tab lists the Key and Scope of the registry locations that are included in the settings location template. You can edit the registry locations by using the Tasks drop-down menu. In the Tasks menu, you can add new keys, edit the name or scope of existing keys, delete keys, and browse the registry in which the keys are located. When you define the scope for the registry, you can use the All Settings scope to include all the registry settings under the specified key. Use All Settings and Subkeys to include all the registry settings under the specified key, subkeys, and subkey settings.
The Files tab lists the file path and file mask of the file locations that are included in the settings location template. You can edit the file locations by using the Tasks drop-down menu. In the Tasks menu for file locations, you can add new files or folder locations, edit the scope of existing files or folders, delete files or folders, and open the selected location in Windows Explorer. To include all files in the specified folder, leave the file mask empty.
Click Save to save the changes to the settings location template.
Click Close to close the Settings Template Wizard. Exit the UE-V Generator application.
After you edit the settings location template for an application, you should test the template. Deploy the revised settings location template in a lab environment before you put it into production in the enterprise.
How to manually edit a settings location template
Create a local copy of the settings location template .xml file. UE-V settings location templates are .xml files that identify the locations where application store settings values.
Note
A settings location template is unique because of the template ID. If you copy the template and rename the .xml file, template registration fails because UE-V reads the template ID tag in the .xml file to determine the name, not the file name of the .xml file. UE-V also reads the Version number to know if anything has changed. If the version number is higher, UE-V updates the template.
Open the settings location template file with an XML editor.
Edit the settings location template file. All changes must conform to the UE-V schema file that is defined in SettingsLocationTempate.xsd. By default, a copy of the .xsd file is located in \ProgramData\Microsoft\UEV\Templates.
Increment the Version number for the settings location template.
Save the settings location template file, and then close the XML editor.
Validate the modified settings location template file by using the UE-V Generator.
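If you edit templates regularly, the version bump can be scripted rather than done by hand. The sketch below is an illustration only: it assumes the template stores its version in a single Version element (as defined by the SettingsLocationTemplate schema) and matches the element by its local name so that the schema namespace does not have to be hard-coded. Validate the result with the UE-V Generator afterwards, as described above.

```python
import xml.etree.ElementTree as ET

def bump_template_version(path):
    """Increment the Version element of a UE-V settings location template.

    Assumes a single Version element whose text is an integer; matches by
    local element name so the schema namespace does not need to be declared.
    ElementTree may rewrite namespace prefixes on save, so validate the
    edited file with the UE-V Generator before registering it.
    """
    tree = ET.parse(path)
    for elem in tree.getroot().iter():
        if elem.tag.rsplit("}", 1)[-1] == "Version":   # strip any {namespace}
            elem.text = str(int(elem.text) + 1)
            break
    else:
        raise ValueError("No Version element found in " + path)
    tree.write(path, encoding="utf-8", xml_declaration=True)

bump_template_version(r"C:\Templates\MyApp.xml")   # example path, not a real template
```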
You must register the edited UE-V settings location template before it can synchronize settings between client computers. To register a template, open Windows PowerShell, and then run the following cmdlet:
Update-UevTemplate [templatefilename]. You can then copy the file to the settings template catalog. The UE-V Agent on users’ computers then updates the template as scheduled in its scheduled task.
Validate Settings Location Templates with the UE-V Generator
It is possible to create or edit settings location templates in an XML editor without using the UE-V Generator. If you do, you can use the UE-V Generator to validate that the new or revised XML matches the schema that has been defined for the template.
To validate a UE-V settings location template with the UE-V Generator
Click Start, point to All Programs, click Microsoft User Experience Virtualization, and then click Microsoft User Experience Virtualization Generator.
Click Validate a settings location template.
In the list of recently used templates, select the template to be edited. Alternatively, you can Browse to the settings template file. Click Next to continue.
Click Validate to continue.
Click Close to close the Settings Template Wizard. Exit the UE-V Generator application.
After you validate the settings location template for an application, you should test the template. Deploy the template in a lab environment before you put it into a production environment in enterprise.
Share Settings Location Templates with the Template Gallery
The Microsoft User Experience Virtualization (UE-V) 2.0 template gallery enables administrators to share their UE-V settings location templates. In the gallery, you can upload your settings location templates for other users to use, and you can download templates that other users have created. The UE-V template gallery is located on Microsoft TechNet here.
Before you share a settings location template on the UE-V template gallery, ensure it does not contain any personal or company information. You can use any XML viewer to open and view the contents of a settings location template file. In particular, review values such as the template author name and email before you share a template, and test any template you download from the gallery in a test environment to make sure it synchronizes settings correctly before you deploy it.
Got a suggestion for UE-V?
Add or vote on suggestions here. For UE-V issues, use the UE-V TechNet Forum.
Related topics
Deploy UE-V 2.x for Custom Applications | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/uev-v2/working-with-custom-ue-v-2x-templates-and-the-ue-v-2x-generator-new-uevv2 | 2017-05-22T19:53:25 | CC-MAIN-2017-22 | 1495463607046.17 | [] | docs.microsoft.com |
Omnitruck API¶
The Omnitruck API can be used to download platform-appropriate versions of various Chef Software Inc products.
Syntax¶
The URL from which these downloads can be obtained has the following syntax:
<CHANNEL>/<PRODUCT>/download?p=$PLATFORM&pv=$PLATFORM_VERSION&m=$MACHINE_ARCH&v=latest&prerelease=false&nightlies=false
or:
<CHANNEL>/<PRODUCT>/metadata?p=$PLATFORM&pv=$PLATFORM_VERSION&m=$MACHINE_ARCH&v=latest&prerelease=false&nightlies=false
where the difference between these URLs is the metadata and download options. Use the metadata option to verify the build before downloading it. Use the download option to download the package in a single step.
Downloads¶
The /metadata and/or /download endpoints can be used to download packages for all products:
<CHANNEL>/<PRODUCT>/download?p=$PLATFORM&pv=$PLATFORM_VERSION&m=$MACHINE_ARCH&v=latest
or:
<CHANNEL>/<PRODUCT>/metadata?p=$PLATFORM&pv=$PLATFORM_VERSION&m=$MACHINE_ARCH&v=latest
where:
- <CHANNEL> is the release channel to install from. See Chef Software Inc Packages for full details on the available channels.
- <PRODUCT> is the Chef Software Inc product to install. A list of valid product keys can be found at
- p is the platform. Possible values: debian, el (for CentOS), freebsd, mac_os_x, solaris2, sles, suse, ubuntu or windows.
- pv is the platform version. Possible values depend on the platform. For example, Ubuntu: 10.10, 11.04, 11.10, 12.04, or 12.10 or for macOS: 10.6 or 10.7.
- m is the machine architecture for the machine on which the product will be installed. Possible values depend on the platform. For example, for Ubuntu or Debian: i386 or x86_64 or for macOS: x86_64.
- v is the version of the product to be installed. A version always takes the form x.y.z, where x, y, and z are decimal numbers that are used to represent major (x), minor (y), and patch (z) versions. One-part (x) and two-part (x.y) versions are allowed. For more information about application versioning, see. Default value: latest.
Examples¶
Get the Latest Build
To get the latest supported build for Ubuntu 12.04, enter the following:
to return something like:
sha1    99f26627718a3ea4464ab48f534fb24e3e3e4719
sha256  255c065a9d23f3dd0df3090206fe4d48451c7d0af0035c237bd21a7d28133f2f
url     <package download URL>
version 12.9.38
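The same call can be scripted. The sketch below is an illustration that assumes the public Omnitruck endpoint at https://omnitruck.chef.io, the Python requests library, and that the metadata endpoint returns JSON when asked for it (the plain-text form is shown above). It fetches the metadata for a package, downloads it, and verifies the published SHA-256 checksum before using it.

```python
import hashlib
import requests

OMNITRUCK = "https://omnitruck.chef.io"   # assumed public Omnitruck endpoint

def fetch_package(channel, product, platform, platform_version, arch, version="latest"):
    params = {"p": platform, "pv": platform_version, "m": arch, "v": version}

    # Ask for metadata first so the build can be verified after download.
    meta = requests.get(f"{OMNITRUCK}/{channel}/{product}/metadata",
                        params=params, headers={"Accept": "application/json"})
    meta.raise_for_status()
    info = meta.json()          # expected keys: sha1, sha256, url, version

    # Download the package itself from the URL given in the metadata.
    pkg = requests.get(info["url"])
    pkg.raise_for_status()

    # Verify the download against the published SHA-256 checksum.
    if hashlib.sha256(pkg.content).hexdigest() != info["sha256"]:
        raise RuntimeError("Checksum mismatch -- discard the download")
    return info["version"], pkg.content

# Example: latest stable Chef client for Ubuntu 12.04, 64-bit.
version, data = fetch_package("stable", "chef", "ubuntu", "12.04", "x86_64")
print("Downloaded chef", version, "-", len(data), "bytes")
```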
Download Directly
To use cURL to download a package directly, enter the following:
$ curl -LOJ '<CHANNEL>/<PRODUCT>/download?p=debian&pv=6&m=x86_64'
To use GNU Wget to download a package directly, enter the following:
$ wget --content-disposition '<CHANNEL>/<PRODUCT>/download?p=debian&pv=6&m=x86_64'
Removing an API instance on API Manager
The procedure removes the capability to manage an API Version from API Manager. If the API resides in Exchange or on the file system as a ZIP, it isn’t deleted from Exchange or the file system.
In Anypoint Platform, click API Manager.
In API Administration, click the API version link or an instance link.
Select Delete from the Actions dropdown.
Respond to the prompt to confirm the deletion. Click Delete. | https://docs.mulesoft.com/api-manager/2.x/delete-api-task | 2020-05-25T08:34:06 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.mulesoft.com |
RHQ is a platform project that requires an API compatibility plan in order to support multiple compatible higher level projects deployed on top of it. As a platform it must maintain not only backward compatibility, but also have some amount of degraded forward compatibility where possible. As there may be two, three or more separate apps running on top of the platform with different release cycles, focusing on compatibility with the core model will simplify the compatiblity matrix over what would otherwise be pointwise integrations between projects. Where specific point-wise integrations are also beneficial, further compatibility and testing of a specific nature will be necessary.
The base RHQ rev cycle will take into account the typical compatibility needs of the projects that extend it in order to support co-deployments. Longer major revision platform cycles are the goal, as they simplify compatibility testing and the overlap of supported versions between many extension projects. Effort will also be made to maintain compatibility even between major versions; compatibility will only be broken if no compatible option exists or the deprecation has been published for an extended period of time. While not documented here, interproject dependencies should also be tracked, and compatibility efforts made where tight integration happens between projects. Finally, patch releases will likely also be released past the upgrade to a new major revision for the previous revision, in order to support overlap until new releases of dependent projects are available.
Conceptual Model
This release model shows the relative release cycles of the base RHQ platform versus two concept projects called foo and bar.
This example shows a possible release iteration schedule. NOTE: "alpha" and "beta" represent two different software products that run with the core RHQ - they do not represent versions of a single product. Some examples of compatibility requirements shown here are:
You're running foo 3.0 and bar 3.1 on RHQ 3.0
In order to upgrade to foo 3.1 you'll need RHQ 3.1, therefore bar 3.1 must run on RHQ 3.1
You're running foo 3.1.1 and bar 4.0 and are considering an upgrade to foo 4.0
You would have to wait for a release of bar compatible with RHQ 4.x. (this argues for good planning on major platform releases)
It also argues for fewer major releases or to have compatibility in major releases also a goal if not a rule
Access to the server side APIs of RHQ should be backward compatible for minor revisions. New services may be added and new methods may be added, but incompatible signature changes are not allowed. Deprecation rules should be followed. The same goes for web service or JSON interfaces. The client library and cli scripts must also remain compatible and follow deprecation rules.
Primarily plugin interfaces, these interfaces are designed as specific extension points and often involve bi-directional calling and implementation of interfaces by the extender. Plugins should remain compatible across minor revisions of the platform, and all attempts should be made to keep them compatible across major revisions. As an example, a new method cannot be added to a plugin facet in a minor revision, because it would cause existing implementations of that interface to be incompatible with the change. This rule goes for both agent plugins and server-side plugins and extensions.
To meet the above goals of plugin compatibility in minor revisions a given version of the plugin schema should be supported for the entire minor release cycle. New plugin schemas can be added that are compatible supersets of the earlier schemas, but no removals or name alterations are allowable.
Perspective hooks should remain backward compatible for all minor revisions.
Deprecations in interfaces and services should last until the next major revision or longer.
For product releases on a minor platform change, the platform should at a minimum be tested against the most recent version of each project.
Release type examples
Major revision: 1.x -> 2.0
Minor revision 1.4 -> 1.5
Patch revision 1.4.0 -> 1.4.1 | https://docs.jboss.org/author/display/RHQ/Design%20-%20API%20Compatibility.html | 2020-05-25T08:59:55 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.jboss.org |
Office 365: Outlook and mobile device connectivity troubleshooting resources
Introduction
This article contains links to technical resources and support information for troubleshooting Microsoft Outlook connectivity and mobile device connectivity in Office 365.
Cannot connect to Exchange Online by using Outlook
Microsoft Knowledge Base articles
- 2459968 Outlook 2011 for Mac doesn't automatically set up your email server settings for Exchange Online in Office 365
Help articles
Poor performance
Knowledge Base articles
- 2413813 How to troubleshoot issues in which Outlook 2007 or Outlook 2010 crashes or stops responding (hangs) when it's used with Office 365
- 2441551 Outlook performance is slow in the Office 365 environment
- 2646504 How to remove automappping for a shared mailbox in Office 365
Auto Account Setup fails in Office 365
Knowledge Base articles
Repeated password prompts in Outlook
Knowledge Base articles
- 2466333 Federated users can't connect to an Exchange Online mailbox
- 2637629 How to troubleshoot non-browser apps that can’t sign in to Office 365, Azure, or Intune
Cannot view free/busy information in the Outlook calendar
Knowledge Base articles
Knowledge Base articles
Tools and Diagnostics wiki articles in the Office 365 Community
The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.
Still need help? Go to Microsoft Community. | https://docs.microsoft.com/en-us/exchange/troubleshoot/outlook-connectivity/office-365-troubleshooting-resources | 2020-05-25T09:37:00 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |