This product can be discovered by the Enterprise version of BMC Discovery, but you can still download the free Community Edition to discover other products.
IBM DB2 High Performance Unload performs high-speed, bulk data unloads for DB2 databases. DB2 High Performance Unload is a standalone utility program. It can run while the DB2 database manager is running and accesses the same physical files as the DB2 database manager. It can also run in a standalone mode and as a DB2 stored procedure. You can run DB2 High Performance Unload exclusively from the command line, or you can specify a control file for more complex unloads with the -f command line option.
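As a rough illustration of the command-line style mentioned above, the sketch below runs an unload driven by a control file passed with the -f option. The executable name (db2hpu), the database, table, output path, and the control-file keywords are assumptions for illustration only; consult the product documentation for the exact control-file syntax of your version.

db2hpu -f /tmp/unload_orders.ctl

-- /tmp/unload_orders.ctl (illustrative control-file sketch)
GLOBAL CONNECT TO SAMPLE;
UNLOAD TABLESPACE
    SELECT * FROM MYSCHEMA.ORDERS;
    OUTPUT("/tmp/orders.del")
    FORMAT DEL;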
Forkrate plugin
This plugin tracks the fork rate on your server by polling /proc/stat. On a busy production box you can expect a rate ranging from 1 to 10 forks per second; if the rate approaches 100/sec, your server is experiencing issues.
Prerequisites
Meter version 4.2 or later must be installed.
The Forkrate plugin supports the following operating systems:
Plugin Setup
- Verify that you are able to get output.
$ cat /proc/stat
- This plugin will not work if there is no output.
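The fork rate itself comes from the cumulative processes counter in /proc/stat, which counts forks since boot. A minimal shell sketch of how a per-second rate can be derived from that counter (for manual verification only, not the plugin's actual implementation) is:

# read the cumulative fork count, wait one second, read it again
p1=$(awk '/^processes/ {print $2}' /proc/stat)
sleep 1
p2=$(awk '/^processes/ {print $2}' /proc/stat)
echo "forks/sec: $((p2 - p1))"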
Event ID 140 — General Task Registration
Applies To: Windows Server 2008 R2
This is a normal condition. No further action is required.
Related Management Information
General Task Registration
Management Infrastructure
DebugJoinOnlyOnThisError
DebugJoinOnlyOnThisError specifies a particular error code that causes
DebugJoin to trigger if encountered during Windows Setup.
This setting modifies the behavior of DebugJoin. If specified, this setting causes the DebugJoin behavior only when the domain join operation fails with the error specified by this setting.
DebugJoinOnlyOnThisError only functions if DebugJoin is set to true.
Important
This is an advanced setting designed to be used by product development and Microsoft Product Support Services. Leave this setting unmodified when you configure the unattended answer file.
Values
This string type does not support empty elements. Do not create an empty value for this setting.
Valid Passes
Parent Hierarchy
Microsoft-Windows-UnattendedJoin | Identification | DebugJoinOnlyOnThisError
Applies To
For the list of the supported Windows editions and architectures that this component supports, see Microsoft-Windows-UnattendedJoin.
XML Example
The following XML output shows how to set debug joins.
<DebugJoin>true</DebugJoin>
<DebugJoinOnlyOnThisError>1355</DebugJoinOnlyOnThisError>
Managing LDAP
LDAP is commonly used to access user or group information in a corporate directory. Using your corporate LDAP infrastructure to authenticate users can reduce the number of administrative tasks that you need to perform in BMC Discovery. LDAP groups can be mapped to BMC Discovery groups and hence assigned permissions on the system. The way in which BMC Discovery integrates with your LDAP infrastructure depends on the schema that is implemented in your organization.
If you are using LDAP authentication, there is no need to set up local user accounts for LDAP users on BMC Discovery.
LDAP Terms
The following terms are used in the sections describing BMC Discovery LDAP configuration:
- Directory Information Tree (DIT)—The overall tree structure of the data directory queried using the LDAP protocol. The structure is defined by the schema. Each entry in a directory is an object; one of the following types:
- Containers—A container is like a folder: it contains other containers or leaves.
- Leaves—A leaf is an object at the end of a tree. Leaves cannot contain other objects.
- Domain Component (dc)—Each element of the Internet domain name of the company is given individually.
- Organizational Unit (ou)—Organizations in the company.
- Common Name (cn)—The name of a person.
- Distinguished Name (dn)—The complete name for a person, including the domain components, organizational unit, and common name.
An example Directory Information Tree is shown below.
dc=tideway,dc=com
  ou=engineering
    cn=Timothy Taylor
      telephoneNumber=1234
      [email protected]
  ou=test
    cn=Sam Smith
      telephoneNumber=2345
      [email protected]
  ou=product management
    cn=John Smith
      telephoneNumber=3456
      [email protected]
The login procedure
When a user attempts to log in through the user interface, BMC Discovery first checks to see whether the username represents a local account. If no local account exists, and LDAP has been configured correctly, BMC Discovery attempts to authenticate against the directory and then performs an account lookup to return the group memberships of that account. If the group mappings have been enabled, and configured correctly, then authentication takes place and the user is logged in with the local BMC Discovery rights as defined in the group mapping.
Configuring LDAP
To configure the LDAP settings:
From the main menu, click the Administration Settings icon.
The Administration page opens.
In the Security section, click LDAP.
The LDAP page is displayed showing the LDAP tab.
The options on this page are described below:
- To save the LDAP settings, click Apply.
Configuring LDAP for use with BMC Atrium SSO
Depending on how your LDAP servers are configured, user authentication via Atrium SSO may work, but then user authorization in BMC Discovery fails. This occurs because Atrium SSO sends BMC Discovery the first part of the user's DN as their userid.
For example, for a DN of the following format:
dn: CN=ADDM QA. TEST,CN=Users,DC=addmsqa,DC=bmc,DC=com
The part that must be matched by the search that BMC Discovery runs is:
ADDM QA. TEST
To do this, for the example above, set the Search Base to:
cn=users,dc=addmsqa,dc=bmc,dc=com
and the Search Template to:
(cn=%(username)s)
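With those settings, a login by the example user above would effectively make BMC Discovery run an LDAP search such as the following (illustrative only):

search base:   cn=users,dc=addmsqa,dc=bmc,dc=com
search filter: (cn=ADDM QA. TEST)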
Changing from LDAPS to LDAP
When you reconfigure BMC Discovery to use LDAP when it was previously configured to use LDAPS, you must remove the CA Certificate, and change the URI in a single step otherwise you will encounter a Cannot use LDAPS without a CA Certificate warning. To do this:
- Edit the URI to point to the LDAP server's ldap:// URI. Do not click the Apply button yet.
- Select Remove CA Certificate.
- Click Apply.
Changing from LDAP to LDAPS
When you reconfigure BMC Discovery to use LDAPS when it was previously configured to use LDAP, you must add a CA certificate before you attempt to enter an ldaps:// URI.
LDAP group mapping
The LDAP group mapping enables you to assign membership of BMC Discovery groups to LDAP groups. If you do not use group mapping, users will only be assigned to groups in BMC Discovery which are exactly the same as the LDAP groups that they are members of, that is, in LDAP form dc=tideway,dc=com,ou=engineering...
To enable or disable LDAP group mapping
- From the LDAP page, select the Group Mapping tab.
The LDAP Group Mapping page lists the LDAP groups that are assigned to BMC Discovery security groups. For each LDAP group, the appliance security groups to which it is assigned are listed. Links for each action that you can perform are provided for each group.
- Select Enabled or Disabled from the drop-down list.
To add or edit LDAP Group Mapping starting from a username
- From the LDAP page, select the Group Mapping tab.
- Click Lookup User.
- In the LDAP User Lookup dialog, enter the Username and click the OK button.
The system looks up the username in LDAP and displays the results.
LDAP Groups—For each LDAP group of which the user is a member, displays existing group mappings and provides an add link or an edit link.
Mapped Groups—Displays the final list of mapped groups for this user.
Details—Displays whether the information was obtained from the local cache and the total number of groups to which this user belongs.
- Click Add to create a new group mapping or Edit to modify an existing group mapping.
- Select the appliance security groups to which you want to assign the LDAP group.
- To save the mapping, click Apply.
To add an LDAP Group Mapping starting from an LDAP group name
- From the LDAP page, select the Group Mapping tab.
- Click Add.
- On the Add LDAP Group Mapping page, enter a search term for the common name into the LDAP Group field and click the Search button.
A list of matches is displayed. If more than ten entries match, the first ten are shown and a label is displayed at the bottom of the list showing how many additional matches there are.
- Select the matching LDAP group from the list.
The LDAP groups field is not case sensitive. All LDAP groups returned from the LDAP server are displayed in lower case.
- Select the appliance security groups to which you want to assign the LDAP group.
- To save the mapping, click Apply.
To edit an LDAP Group Mapping starting from an LDAP group name
- From the LDAP page, select the Group Mapping tab.
For each LDAP group listed, an edit link and a delete link are provided.
- Click Edit.
- Select the appliance security groups to which you want to assign the LDAP group.
- To save the mapping, click Apply.
To delete an LDAP Group Mapping
- From the LDAP page, select the Group Mapping tab.
For each LDAP group listed, an edit link and a delete link are provided.
- To remove an LDAP group mapping, click Delete.
Troubleshooting
If you receive a "Can't Contact LDAP Server" error in the Connection Status field, this might be caused by certificate problems rather than simple connectivity (wrong URI, port, and so forth). Check that the certificate you are using is the one you received from your LDAP administrator.
If the login fails when attempting LDAP authentication, set the security log (/usr/tideway/log/tw_svc_security.log) level to debug.
Where the account used to bind to the directory fails to authenticate, look for messages similar to the following:
-1285350512: 2010-08-13 10:00:46,843: security.authenticator.ldap: DEBUG: Attempt to auth bind as username "administrator"
-1285350512: 2010-08-13 10:00:47,117: security.authenticator.ldap: DEBUG: LDAP passwd for "CN=Administrator,CN=Users,DC=generic,DC=com" not valid
If you are using group mapping and are experiencing login failures, check that group mappings have been correctly defined for one or more LDAP groups to which the user belongs. See To add or edit LDAP Group Mapping starting from a username.
SelectRow Method
Selects the row that contains the insertion point, or selects all rows that contain the selection. If the selection isn't in a table, an error occurs.
expression.SelectRow
expression Required. An expression that returns a Selection object.
Example
This example collapses the selection to the starting point and then selects the row that contains the insertion point.
Selection.Collapse Direction:=wdCollapseStart
If Selection.Information(wdWithInTable) = True Then
    Selection.SelectRow
End If
Applies to | Selection Object
See Also | Select Method | SelectCell Method | SelectColumn Method
YITH WooCommerce Subscription is a plugin designed to enable recurring payments for the services offered in your shop. Sell products on a subscription basis and charge them every month or week or whatever billing cycle you prefer.
The integration with YITH Points and Rewards will let you choose whether sign-up fees and renewal orders have to generate points or not.
To set up this, go to YITH > Points and Rewards > Points Settings > Subscription Settings. You will find two options.
Earn points on subscription fee: enable this option if you want to let your users earn points on the sign-up fee. Keeping it disabled will not make the sign-up fee amount generate points.
Earn points on renewal orders: enable this option if you want to let your users earn points on automatic renewal orders for their active subscriptions. Keeping it disabled will not let them earn points.
When this option is enabled, you will be able to see a box, Renew order label, where you can change the text that will refer to renewal orders. This is the text that will appear in the Points history description.
Fast Path Recovery Utility 7.0
The Fast Path Recovery Utility product is a valuable tool for operations and technical support personnel responsible for the operation and recovery functions of IMS systems. The Fast Path Recovery Utility product performs the following functions:
- Automatically selects and acquires, without any user intervention, all information and resources needed for successful recovery
- Performs recovery functions
- Produces a set of comprehensive audit trail reports
- Provides override capability, allowing you to specify recovery control information such as the Checkpoint-ID and the log data sets to be used for recovery.
Database Collect Slices Overview
Overall Capabilities of Database collect slices
- Introspection of a schema to discover tables to extract
- Application of heuristics to determine the appropriate mode for extraction. For example, small tables are automatically set up in snapshot mode, and tables with an indexed timestamp attribute in incremental mode.
- Schema change detection & propagation of schema changes. Dropped tables and columns are not propagated, by design.
- Whitelist and blacklist of tables. Helpful to filter out temp or sensitive tables from replicating to the warehouse. By default the slice replicates all the tables in a schema.
- Column list for a table to replicate. By default, the slice extracts all the columns in a table. Useful when sensitive columns should not be replicated.
- Pagination for large tables using primary key or a date column.
- Column-level SQL based transformation at the source for Snapshot & Incremental modes. Ex. conversion of timestamp columns to standard format
- Filtering rows at the source
- Ordering rows by columns
Extract modes
The Database collect slice provides three modes to extract data from Postgres. When the slice is bootstrapped, it performs an auto-detection of snapshot vs. incremental mode based on the presence of a primary key and an indexed timestamp column.
- Snapshot: to extract all the data, every time. Useful with small tables (<100K rows) OR when hard deletes occur and Change Data Capture (CDC) cannot be used.
- Incremental: to extract only data changed since the last pull. During slice addition, backfill can be initiated to perform a one-time fetch of all data (see the sketch after this list).
- Change Data Capture (CDC): to extract and propagate every change that occurs in the source. Useful to propagate hard deletes and/or to build a full audit log of all changes, valuable in security, compliance, or data debugging.
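As a generic illustration of the incremental mode (this is not Datacoral configuration syntax, just the kind of query such a slice effectively runs; the table and column names are assumptions), an extraction based on an indexed timestamp column can be expressed as a watermark query:

-- fetch only rows changed since the last successful pull
SELECT *
FROM myschema.orders                  -- assumed source table
WHERE updated_at > :last_watermark    -- indexed timestamp column tracked between runs
ORDER BY updated_at;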
Load modes
Three load constructs into your warehouse
- Replace: to replace the data in the warehouse table with the latest fetched version.
- Append: to append the newly fetched data to an existing warehouse table.
- Merge: to update or delete existing records and add new records into an existing warehouse table.
Types of tables
Warehouse tables can be set up as Regular or Partitioned:
- Regular: Normal tables that are ideal for small to medium sizes and do not need periodic pruning of old rows via retention rules
- Partitioned: Large tables that require efficient periodic pruning of old rows via retention rules
Change Data Capture Support
The following database collect slices support Change Data Capture mode
- MySQL
- Postgres
The Police Stations layer shows the point locations of law enforcement and sheriff offices in Massachusetts, covering local, county and state jurisdictions.
The Massachusetts Emergency Management Agency (MEMA) GIS Program in cooperation with the Regional Planning Agencies and participating communities created the original data as part of the development of Homeland Security Data Layers. MassGIS has since incorporated updates into the data.
The features represented include municipal police stations and Massachusetts State Police barracks. Although sheriffs are not technically charged with the same law enforcement tasks as local and state police, county sheriff headquarters are also included in this layer. The duties of the sheriffs include the management and operation of regional correctional systems and transportation of prisoners, service of judicial process and delivery of legal documents needed to support the operation of the courts, community policing, running various outreach services, and the enforcement of laws enacted for the public safety, health and welfare of the people. Not included in this layer are Environmental Police, campus police and various state and federal level law enforcement locations.
Stored in the ArcSDE, the statewide layer is named POLICESTATIONS_PT_MEMA.
Production
MEMA GIS staff used a contacts database updated with MASS.GOV data to geocode the station addresses using ArcGIS. The layer points were updated based on regional planning agency data (see attribute L_SRC for more detail). MEMA GIS and MassGIS staff reviewed for completeness and accuracy using oblique ("bird's eye") imagery, ortho imagery, parcel databases, and department web sites, and by calling towns. Horizontal accuracy of the points is 1:5,000 or better.
Attributes
The layer's point attribute table contains the following fields:
Maintenance
MassGIS corrected the location and updated the addresses of several points in December 2014. In the spring of 2014 the Executive Office of Energy and Environmental Affairs (EEA) updated 74 points. MassGIS reviewed EEA's work and made additional edits in the fall of 2015. As part of EEA's and MassGIS' update process, many points were moved atop buildings as seen in the most current ortho imagery. Level 3 Parcel data and local police websites were often used as references in placing points.
Last Updated 12/3/2015
To import from RoboForm, you must export your records from there first.
Macintosh
Before proceeding, make sure you do not have your Passcards organized in folders in RoboForm.
2. Click File > Print List > Logins or Identities or SafeNotes
3. Right-Click HTML page and choose Save As...
4. Select the format as Webpage, HTML Only.
5. Navigate to where you want to save the file and click Save.
Windows App
1. Click on the RoboForm pull-down menu in the upper left column.
2. Choose Options, then Account & Data.
3. Choose Export.
4. Under Format, select CSV file.
5. Browse the location to save and choose Export.
Windows Extension
1. Select the RoboForm icon on your taskbar or toolbar.
2. Click on the Overflow Menu.
3. Choose Options then Account & Data.
4. Choose Export.
5. Under Format, select CSV file.
6. Browse the location to save and choose Export.
Log into Keeper's Web Vault.
Click on your account email in the upper right-hand corner.
Click on Settings > Import.
Choose RoboForm from the list.
Drag the exported file into the target window "Drop a File Here"
Use the drop-down menu in each column to map to a Keeper field.
Click on Import.
VMs and containers. To get started, some base requirements are recommended:
Base Requirements
Operating System: Ubuntu 14.04/16.04/18.04 or CentOS/RHEL greater than 7.0.
Important
Ubuntu 18.04 install requires NTP (sudo apt-get install ntp) and the libpng12-dev package, which is no longer in the default configured Ubuntu repos on 18.04. Run sudo apt-add-repository "deb xenial main universe" or install libpng12-dev manually if reconfigure fails on installing libpng12-dev.
Note
Ubuntu 16.10, 18.04. In a configuration with locally installed Elasticsearch, VM, Container, Host and Appliance logs and stats are stored in Elasticsearch. Please ensure adequate space in /var, specifically /var/opt/morpheus/elasticsearch, in relation to the number of Instances reporting logs, log frequency, and log retention count. Morpheus also utilizes SSH (Port 22) and Windows Remote Management (Port 5985) to initialize a server. This includes sending remote command instructions to install the agent. It is actually possible for Morpheus to operate without agent connectivity (though stats and logs will not function) and utilize SSH/WinRM to perform operations. Once the agent is installed and connections are established, SSH/WinRM communication will stop. This is why an outbound requirement exists for the appliance server to be able to utilize ports 22 and 5985.
Note
In newer versions of Morpheus this outbound connectivity is not mandatory. The agent can be installed by hand or via Guest Process APIs on cloud integrations like VMware.
Components
The Appliance Server automatically installs several components for the operation of Morpheus. This includes:
- RabbitMQ (Messaging)
- MySQL (Logistical Data store)
- Elasticsearch (Logs / Metrics store)
- Redis (Cache)
# Upload Binaries
This section is useful if you want to directly flash your ESP from your desktop. Once flashed, you can change WiFi and broker settings. Nevertheless you will not be able to change advanced parameters; if you want to do so, refer to the Upload from PlatformIO section.
Download the binary corresponding to your board and gateway from GitHub and uncompress it.
# ESP32
- Download the bootloader here
- Download the boot_app0 from here
- Download the flash tool utility from espressif:
- Uncompress the package
- Execute flash_download_tools
- Choose ESP32 DownloadTool
- Set the files and the address as below:
By setting the parameters used by the Arduino IDE, we are able to upload a binary file containing OpenMQTTGateway to the ESP32.
- Set the config as above
- Connect your ESP32 board and select the COM port
- Click on start.
Note that to reset the WiFi and MQTT settings you can check "yes, wipes all data".
Once loaded onto your board, you have to set your network parameters with the WiFi manager portal. From your smartphone, search for the OpenMQTTGateway WiFi network and connect to it; a web page will appear.
- Select your wifi
- Set your wifi password
- Set your MQTT Server IP
- Set your MQTT Server username (not compulsory)
- Set your MQTT Server password (not compulsory)
The ESP restarts and connects to your network. Note that your credentials are saved into the ESP memory; if you want to redo the configuration you have to erase the ESP memory with the flash download tool.
The default password for wifi manager is "your_password"
Once done, the gateway should connect to your network and your broker; you should see it on the broker in the form of the following messages:
home/OpenMQTTGateway/LWT Online
home/OpenMQTTGateway/version
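One quick way to verify this from a desktop machine (assuming the Mosquitto command-line clients are installed and substituting your broker address and any credentials) is to subscribe to the gateway topics:

mosquitto_sub -h <broker-ip> -t "home/OpenMQTTGateway/#" -v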
Adopt a zero trust network model for security
Big picture
Adopting a zero trust network model is best practice for securing workloads and hosts in your cloud-native strategy.
Value
Zero Trust Networks are resilient even when attackers manage to breach applications or infrastructure. They make it hard for attackers to move laterally, and reconnaissance activities easier to spot.
Organizations that embrace the change control model in this How-To will be able to tightly secure their network without imposing a drag on innovation in their applications. Security teams can be enablers of business value, not roadblocks.
Features
This how-to guide uses the following Calico features:
- NetworkPolicy and GlobalNetworkPolicy with:
- Namespaces
- RBAC
- Service accounts
- HostEndpoints
- Security group integration with AWS
- Calico with application layer policy for Istio
Concepts
The network is always hostile
Zero Trust Networking is an approach to network security that is unified by the principle that the network is always assumed to be hostile. This is in direct contrast to perimeter and “segmentation” approaches that focus on separating the world into trusted and untrusted network segments.
Why assume the network is hostile? In many attack scenarios, it is.
- Attackers may compromise “trusted” parts of your network infrastructure: routers, switches, links, etc.
- Deliberate or accidental misconfiguration can route sensitive traffic over untrusted networks, like the public Internet.
- Other endpoints on a “trusted” network may be compromised: your application may share a network with thousands of other servers, tens of thousands of other containers, thousands of personal laptops, phones, etc.
Major breaches typically start as a minor compromise of as little as a single component, but attackers then use the network to move laterally toward high value targets: your company’s or customers’ data. In a zone or perimeter model, attackers can move freely inside the perimeter or zone after they have compromised a single endpoint. A Zero Trust Network is resilient to this threat because it enforces strong, cryptographic authentication and access control on each and every network connection.
Requirements of a Zero Trust Network
Zero Trust Networks rely on network access controls with specific requirements:
Requirement 1: All network connections are subject to enforcement (not just those that cross zone boundaries).
Requirement 2: Establishing the identity of a remote endpoint is always based on multiple criteria including strong cryptographic proofs of identity. In particular, network-level identifiers like IP address and port are not sufficient on their own as they can be spoofed by a hostile network.
Requirement 3: All expected and allowed network flows are explicitly whitelisted. Any connection not explicitly whitelisted is denied.
Requirement 4: Compromised workloads must not be able to circumvent policy enforcement.
Requirement 5: Many Zero Trust Networks also rely on encryption of network traffic to prevent disclosure of sensitive data to hostile entities snooping network traffic. This is not an absolute requirement if private data are not exchanged over the network, but to fit the criteria of a Zero Trust Network, encryption must be used on every network connection if it is required at all. A Zero Trust Network does not distinguish between trusted and untrusted network links or paths. Also note that even when not using encryption for data privacy, cryptographic proofs of authenticity are still used to establish identity.
How Calico and Istio implement Zero Trust Network requirements
Calico works in concert with the Istio service mesh to implement all you need to build a Zero Trust Network in your Kubernetes cluster.
Multiple enforcement points
When operating with Istio, incoming requests to your workloads traverse two distinct enforcement points:
- The host Linux kernel. Calico policy is enforced in the Linux kernel using iptables at L3-L4.
- The Envoy proxy. Calico policy is enforced in the Envoy proxy at L3-7, with requests being cryptographically authenticated. A lightweight policy decision sidecar called Dikastes assists Envoy in this enforcement.
These multiple enforcement points establish the identity of the remote endpoint based on multiple criteria (Requirement 2). The host Linux kernel enforcement protects your workloads even if the workload pod is compromised and the Envoy proxy bypassed (Requirement 4).
Calico policy store
The policies in the Calico data store encode the whitelist of allowed flows (Requirement 3).
Calico network policy is designed to be flexible to fit many different security paradigms, so it can express, for example, both Zero Trust Network-style whitelists as well as legacy paradigms like zones. You can even layer both of these approaches on top of one another without creating a maintenance mess by composing multiple policy documents.
The How To section of this document explains how to write policy specifically in the style of Zero Trust Networks. Conceptually, you will begin by denying all network flows by default, then add rules that allow the specific expected flows that make up your application. When you finish, only legitimate application flows are allowed and all others are denied.
Calico control plane
The Calico control plane handles distributing all the policy information from the Calico data store to each enforcement point, ensuring that all network connections are subject to enforcement (Requirement 4). It translates the high-level declarative policy into the detailed enforcement attributes that change as applications scale up and down to meet demand, and evolve as developers modify them.
Istio Citadel Identity System
In Calico and Istio, workload identities are based on Kubernetes Service Accounts. An Istio component called Citadel handles minting cryptographic keys for each Service Account to prove its identity on the network (Requirement 2) and encrypt traffic (Requirement 5). This allows the Zero Trust Network to be resilient even if attackers compromise network infrastructure like routers or links.
How to
This section explains how to establish a Zero Trust Network using Calico and Istio. It is written from the perspective of platform and security engineers, but should also be useful for individual developers looking to understand the process.
Building and maintaining a Zero Trust Network is the job of an entire application delivery organization, that is, everyone involved in delivering a networked application to its end users. This includes:
- Developers, DevOps, and Operators
- Platform Engineers
- Network Engineers
- Security Engineers and Security Operatives
In particular, the view that developers build applications which they hand off to others to figure out how to secure is incompatible with a Zero Trust Network strategy. In order to function correctly, a Zero Trust Network needs to be configured with detailed information about expected flows—information that developers are in a unique position to know.
At a high level, you will undertake the following steps to establish a Zero Trust Network:
- Install Calico.
- Install Istio and enable Calico integration.
- Establish workload identity by using Service Accounts.
- Write initial whitelist policies for each service.
After your Zero Trust Network is established, you will need to maintain it.
Install Calico
Follow the install instructions to get Calico software running in your cluster.
Install Istio and enable Calico integration
Follow the instructions to Enable Application Layer Policy.
The instructions include a “demo” install of Istio for quickly testing out functionality. For a production installation to support a Zero Trust Network, you should instead follow the official Istio install instructions. Be sure to enable mutually authenticated TLS (mTLS) in your install options by setting global.mtls.enabled to true.
Establish workload identity by using Service Accounts
Our eventual goal is to write access control policy that authorizes individual expected network flows. We want these flows to be scoped as tightly as practical. In a Calico Zero Trust Network, the cryptographic identities are Kubernetes Service Accounts. Istio handles crypto-key management for you so that each workload can assert its Service Account identity in a secure manner.
You have some flexibility in how you assign identities for the purpose of your Zero Trust Network policy. The right balance for most people is to use a unique identity for each Kubernetes Service in your application (or Deployment if you have workloads that don’t accept any incoming connections). Assigning identity to entire applications or namespaces is probably too granular, since applications usually consist of multiple services (or dozens of microservices) with different actual access needs.
You should assign unique identities to microservices even if you happen to know that they access the same things. Your policy will be more readable if the identities correspond to logical components of the application. You can grant them the same permissions easily, and if in the future they need different permissions it will be easier to handle.
After you decide on the set of identities you require, create the Kubernetes Service Accounts, then modify your application configuration so that each Deployment, ReplicaSet, StatefulSet, etc. uses the correct Service Account.
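For illustration, the snippet below shows what that looks like in standard Kubernetes YAML; the names and namespace are placeholders, not part of any example in this guide, so adjust them to your own services.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: api                     # one identity per service
  namespace: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: my-app
spec:
  selector:
    matchLabels:
      svc: api
  template:
    metadata:
      labels:
        svc: api
    spec:
      serviceAccountName: api   # workloads now assert this identity on the network
      containers:
        - name: api
          image: example/api:latest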
Write initial whitelist policies for each service
The final step to establishing your Zero Trust Network is to write the policies for each service in your network. The Application Layer Policy Tutorial gives an overview of setting up policies that allow traffic based on Service Account identity.
For each service you will:
- Determine the full set of other identities that should access it.
- Add rules to allow each of those flows.
After a pod is selected by at least one policy, any traffic not explicitly allowed is denied. This implements the Zero Trust Network paradigm of an explicit whitelist of expected flows.
Determine the full set of identities that should access each service
There are several approaches to determining the set of identities that should access a service. Work with the developers of the application to generate this list and ensure it is correct. One approach is to create a flow diagram of your entire application. A flow diagram is a kind of graph where each identity is a node, and each expected flow is an edge.
Let’s look at an example application.
In this example, requests from end-users all flow through a service called api, where they can trigger calls to other services in the backend. These in turn can call other services. Each arrow in this diagram represents an expected flow, and if two services do not have a connecting arrow, they are not expected to have any network communication. For example, the only services that call the post service are api and search.
For simple applications, especially if they are maintained by a single team, the developers will probably be able to just write down this flow graph from memory or with a quick look at the application code.
If this is difficult to do from memory, you have several options.
- Run the application in a test environment with policy enabled.
a. Look at service logs to see what connectivity has broken.
b. Add rules that allow those flows and iterate until the application functions normally.
c. Move on to the next service and repeat.
- Collect flow logs from a running instance of your application. Tigera Secure Enterprise Edition can be used for this purpose, or the Kiali dashboard that comes with Istio.
a. Process the flow logs to determine the set of flows.
b. Review the logged flows and add rules for each expected flow.
- Use Tigera Secure Enterprise Edition for policy, and put it into logging-only mode.
a. In this mode “denied” connections are logged instead of dropped.
b. Review the “denied” logs and add rules for each expected flow.
When determining flows from a running application instance, be sure to review each rule you add with application developers to determine if it is legitimate and expected. The last thing you want is for a breach-in-progress to be enshrined as expected flows in policy!
Write policies with allow rules for each flow
After you have the set of expected flows for each service, you are ready to write Calico network policy to whitelist those flows and deny all others.
Returning to the example flow graph in the previous section, let’s write the policy for the post service. For the purpose of this example, assume all the services in the application run in a Kubernetes Namespace called microblog. We see from the flow graph that the post service is accessed by the api and search services.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: post-whitelist
  namespace: microblog
spec:
  selector: svc == 'post'
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        serviceAccounts:
          names: ["api", "search"]
        namespaceSelector: app == 'microblog'
      protocol: TCP
      destination:
        ports:
          - 8080
Things to notice in this example:
Namespace
Create a Calico NetworkPolicy in the same namespace as the service for the whitelist (microblog).
metadata:
  name: post-whitelist
  namespace: microblog
spec:
Selectors
The selector controls which pods the policy applies to. It should be the same selector used to define the Kubernetes Service.
spec:
  selector: svc == 'post'
Service account by name
In the source selector, allow api and search by name. An alternative to selecting service accounts by name is by namespaceSelector (next example).
source:
  serviceAccounts:
    names: ["api", "search"]
Service account by namespaceSelector
Service Accounts are uniquely identified by name and namespace. Use a namespaceSelector to fully-qualify the Service Accounts you are allowing, so if names are repeated in other namespaces they will not be granted access to the service.
source:
  serviceAccounts:
    names: ["api", "search"]
  namespaceSelector: app == 'microblog'
Rules
Scope your rules as tightly as possible. In this case we are allowing connection only on TCP port 8080.
destination:
  ports:
    - 8080
The above example lists the identities that need access to the post service by name. This style of whitelist works best when the developers responsible for a service have explicit knowledge of who needs access to their service.
However, some development teams don’t explicitly know who needs access to their service, and don’t need to know. The service might be very generic and used by lots of different applications across the organization—for example: a logging service. Instead of listing the Service Accounts that get access to the service explicitly one-by-one, you can use a label selector that selects on Service Accounts.
In the following example, we have changed the serviceAccount clause. Instead of a name, we use a label selector. The selector: svc-post = access label grants access to the post service.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: post-whitelist
  namespace: microblog
spec:
  selector: svc == 'post'
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        serviceAccounts:
          selector: svc-post == 'access'
        namespaceSelector: app == 'microblog'
      protocol: TCP
      destination:
        ports:
          - 8080
Define labels that indicate permission to access services in the cluster. Then, modify the ServiceAccounts for each identity that needs access. In this example, we would add the label svc-post = access to the api and search Service Accounts.
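One way to apply such a label (a sketch; adjust names and namespace to your environment) is with kubectl:

kubectl label serviceaccount api svc-post=access --namespace microblog
kubectl label serviceaccount search svc-post=access --namespace microblog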
Whether you choose to explicitly name the Service Accounts or use a label selector is up to you, and you can make a different choice for different services. Using explicit names works best for services that have a small number of clients, or when you want the service owner to be involved in the decision to allow something new to access the service. If some other team wants to get access to the service, they call up the owner of the service and ask them to grant access. In contrast, using labels is good when you want more decentralized control. The service owner defines the labels that grant access to the service and trusts the other development teams to label their Service Accounts when they need access.
Maintain your zero trust network
The whitelist policies are tightly scoped to the exact expected flows in the applications running in the Zero Trust Network. If these applications are under active development the expected flows will change, and policy, therefore, also needs to change. Maintaining a Zero Trust Network means instituting a change control policy that ensures:
- Policies are up to date with application changes
- Policies are tightly scoped to expected flows
- Changes keep up with the pace of application development
It is difficult to overstate how important the last point is. If your change control process cannot handle the volume of changes, or introduces too much latency in deploying new features, your transition to a Zero Trust Network is very likely to fail. Either your senior leadership will choose business expediency and overrule your security concerns, or competitors that can roll out new versions faster will stifle your market share. On the other hand, if your change control process does keep pace with application development, it will bring security value without sacrificing the pace of innovation.
The size of the security team is often relatively small compared with application development and operations teams in most organizations. Fortunately, most application changes will not require changes in security policy, but even a small proportion of changes can lead to a large absolute number when dealing with large application teams. For this reason, it is often not feasible for a member of the security team to make every policy change. A classic complaint in large enterprises is that it takes weeks to change a firewall rule—this is often not because the actual workflow is time consuming but because the security team is swamped with a large backlog.
Therefore, we recommend that the authors of the policy changes be developers/devops (i.e. authorship should “shift left”). This allows your change control process to scale naturally as your applications do. When application authors make changes that require policy changes (say, adding a new microservice), they also make the required policy changes to authorize the network activity associated with it.
Here is a simplified application delivery pipeline flow.
Developers, DevOps, and/or Operators make changes to applications primarily by making changes to the artifacts at the top of the diagram: the source code and associated deployment configuration. These artifacts are put in source control (e.g. git) and control over changes to the running applications are managed as commits to this source repository. In a Kubernetes environment, the deployment configuration is typically the objects that appear on the Kubernetes API, such as Services and Deployment manifests.
What you should do is include the NetworkPolicy as part of those deployment config artifacts. In some organizations, these artifacts are in the same repo as the source code, and in others they reside in a separate repo, but the principle is the same: you manage policy change control as commits to the deployment configuration. This config then works its way through the delivery pipeline and is finally applied to the running Kubernetes cluster.
Your developers will likely require training and support from the security team in order to get policy correct at first. Many trained developers are not used to thinking about network security. The logical controls expressed in network policy are simple compared with the flexibility they have in source code, so the primary support they will need from you is around the proper security mindset and principles of Zero Trust Networks. You can apply a default deny policy in your cluster to ensure that developers can’t simply forget to apply their own whitelisted policy.
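A minimal sketch of such a default deny is a Calico GlobalNetworkPolicy that selects everything and contains no allow rules; note that in a real cluster you would normally adjust the selector or add a namespaceSelector to exclude system namespaces such as kube-system so that cluster infrastructure keeps working.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  # selects all workload endpoints; anything they send or receive must be
  # explicitly whitelisted by another policy
  selector: all()
  types:
    - Ingress
    - Egress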
You may wish to review every security policy change request (aka pull request in git workflows) at first. If you do, then be sure you have time allotted, and consider rolling out Zero Trust Network policies incrementally, one application or service at a time. As development teams gain confidence you can pull back and have them do their own reviews. Security professionals can do spot checks on change requests or entire policies to ensure quality remains high in the long term.
Web services security, or to be more precise, SOAP message security, identifies and provides solutions for general computer security threats as well as threats unique to Web services. WSO2 supports the WS-Security, WS-Policy and WS-Security Policy specifications. These specifications define a behavioral model for Web services. Since a requirement for one Web service may not be valid for another, the Data Services Server also helps define service-specific security.
It provides 16 predefined, commonly-used security scenarios. All you have to do is apply the required security scenario to your service through the service's dashboard. You can also define a custom security policy. Understanding the exact security requirements is the first step in planning to secure Web services. Consider what security aspects are important to your service, whether it is integrity, confidentiality, or both.
Security features are disabled in services by default. The following steps explain how to enable and configure them.
Enable the options you require from the list of 16 default security scenarios that appears. You can read more details of the scenarios by clicking the browse icon in front of them.
You can read more information about each security scenario by clicking the icon next to each. We have also given a graphical view of each scenario in the next section.
In addition to the default security scenarios, you can also refer to a custom security policy that is stored in the Configuration Registry or Governance Registry.
Click Next to open the Activate Security page, using which you can configure the security features selected previously.
If you selected a default security scenario, this page shows you the user groups, key stores, etc. according to the selected security scenario.
If you refer to a custom security policy from the Registry, this page shows all options on user groups and key stores from which you can select the ones relevant to your policy. Even if you select irrelevant options, they will not be used at runtime.
The topics below explain the 16 default security scenarios provided by WSO2.
If you apply security scenario 16 (Kerberos Token-based Security), you must associate your service with a service principal. Security scenario 16 is only applicable if you have a Key Distribution Center (KDC) and an Authentication Server in your environment. Ideally you can find a KDC and an Authentication Server in an LDAP Directory server.
Two configuration files are used to specify Kerberos-related parameters as follows.
The above files are located in the <PRODUCT_HOME>/repository/conf/security folder.
After selecting scenario 16, fill in information about the service principal to associate the Web service with. You must specify the service principal name and password. The service principal must already be defined in the LDAP Directory server.
Coveo Machine Learning Features
The Coveo™ Machine Learning (Coveo ML) service currently offers the features described in the following sections (see Coveo Machine Learning).
By default, models are built for each combination of language, hubs, and tabs since these attributes normally define different types of users and use cases. Thus, out of the box, Coveo ML models do not deliver recommendations or suggestions based on user behavior in another search hub, search interface, or language.
Query suggestions that were recommended based on your internal search interface logged events are not recommended in your external search interface.
When you want the relevance on one search page to influence other search interfaces for a unified experience, you can set the filterFields custom model parameter value accordingly. If the parameter value only contains the desired search page, the model will provide recommendations or suggestions based on user behavior on that specific search page even if the model is active on a search interface in another hub. Before modifying the value, it is strongly recommended to consult your Coveo Customer Success Manager (CSM) or Coveo Support for appropriate guidance. Moreover, you should test changes thoroughly in a sandbox environment before deploying in production (see About Non-Production Coveo Cloud Organizations).
For further information on Coveo ML, see the Coveo Machine Learning FAQ section.
Automatic Relevance Tuning (ART) Feature
In short, the ART feature learns what search users seek and delivers it.
In more detail, ART works with typical short queries as well as with paragraph-sized queries expressing long descriptions. Members of the Administrators and Relevance Managers built-in groups can configure and activate ART in just a few clicks (see Adding and Managing Coveo Machine Learning Models).
By default, ART model recommendations are based on the language of the user’s query as well as the search hub and search tab (interface) in which the query was performed. One model is made per search hub/search tab/language combination.
Items are boosted only if they were clicked in the same language, hub, and interface as the current query.
You can change this default behavior by modifying the filterFields custom model parameter value with the guidance of Coveo Support.
You can use the JavaScript Search Framework Debug Panel to temporarily highlight and therefore identify search results promoted by ART (see Using the JavaScript Search Debug Panel).
Query Suggestions (QS) Feature
The Coveo ML Query Suggestions feature recommends significantly more relevant queries to users as they type in the search box.
Suggestions are based on queries that were performed and followed by clicks on search results enough times, in order to prevent infrequent queries from polluting the suggestions (see Reviewing Coveo Machine Learning Query Suggestion Candidates).
Members of the Administrators and Relevance Managers built-in groups can configure and activate Coveo ML Query Suggestions in a few clicks (see Adding and Managing Coveo Machine Learning Models). Developers can leverage the feature in the desired search interface (see Providing Coveo Machine Learning Query Suggestions).
Event Recommendations (ER) Feature
The Coveo ML Event Recommendations feature learns from your website user page and search navigation history to return the most likely relevant content for each user in his current session. The Recommendation service results can be included in a search page or in any web page such as in a side panel window (see Coveo Machine Learning Event Recommendations Deployment Overview).
The recommendations can be interpreted as “People who viewed this page also viewed the following pages”.
Your company offers product technical documentation and Q&A content on several public websites for customer end users, administrators, and developers. These websites are configured to send all views to the Coveo Usage Analytics service. Your website pages include Recommended Articles side panel windows.
The recommendation algorithm is based on the co-occurrence of events such as view events within a user session. When two events abnormally frequently co-occur within sessions, the algorithm learns that they are linked. When one event is seen, the model recommends the other.
Event Recommendations model suggestions are provided based only on the user language, since view events are not logged from a search hub.
Dynamic Navigation Experience (DNE) Feature
The Coveo ML DNE feature learns from usage analytics events to pertinently order facets and facet values according to user queries. More precisely, DNE models analyze queries and target specific user behaviors such as result clicks and facet selections to make the most relevant facets appear at the top for a given query.
Furthermore, Coveo ML DNE models also reorder facet values within a given facet to make the most popular values appear at the top. To do so, the models use the search events performed by previous users, who have selected certain facet values for a specific query.
Coveo ML DNE feature also uses its facet value ranking to boost search results. The model uses the most popular facet values for a certain query and applies query ranking expressions (QREs) to boost the search results whose field values match the values of those facets.
See Deploying Dynamic Navigation Experience.
You are selling smartphones on your e-commerce website. Before enabling Coveo ML DNE, your search page, powered by the Coveo JavaScript Search Framework, displays facets in the following order when customers search for cellphone:
- Screen size
- Storage capacity
- Price
- Brand
After enabling Coveo ML DNE, the model learns that customers mostly sort search results using the Brand and the Price facets. Your search page now displays facets in the following order:
- Brand
- Price
- Screen size
- Storage capacity
Before you enabled Coveo ML DNE, the Brand facet displayed its facet values in the following order when customers searched for cellphone:
- LG
- Samsung
- Apple
With Coveo ML DNE enabled, the model learns that customers mostly search for Apple and Samsung smartphones rather than for LG devices. The JavaScript Search Framework thus displays the facet values within the Brand facet in the following order:
- Apple
- Samsung
- LG
Since the Coveo ML DNE model determined that customers are more likely to shop for Apple smartphones, the model modifies the user query to boost Apple smartphone result list items.
Wire Tap component
A Wire Tap lets you take a sample of an ongoing message in your flow, in order to take some action with it while it goes through the flow. The tapped message is copied and any resulting routing runs in its own separate thread and thus will not affect the main flow if the main flow is set to asynchronous. If the flow is synchronous, it won't run in its own separate thread.
Usage
A Wire Tap has two forwarding endpoints. The endpoint on the right side of the component routes the original message, while the one on the bottom starts a new thread and routes the copied message.
Following is an example of a Wiretap use case. In this example messages are received through an inbound http endpoint, processed with XSLT and sent to HTTP endpoint. A wiretap component is used in this example to store the original incoming messages on an FTP server. This happens in parallel with the main flow of converting and forwarding the message.
Notes
The Wire Tap component is particularly well suited for debugging and testing flows. Since extensive use of it may cause heavy load, it should be used to accomplish simple and lightweight fire-and-forget operations, as per its design.
Beware that the Wire Tap component is set to allow waiting for pending messages only to a certain extent: if messages are being enqueued faster than they are being processed by the bottom route of the Wire Tap component, they will be discarded after a threshold is hit.
Distributed Storage
The distributed storage cache location contains accelerator, tables, job results, downloads, and upload data.
The location is specified with the paths.dist property in the dremio.conf file.
In addition, this property must be the same across all nodes in the cluster.
This means that if local storage or NAS is used, the configured path must exist on, or be accessible from, all nodes.
If the value of this property is changed, then it must be updated in the dremio.conf file on all nodes in the Dremio cluster.
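As a minimal sketch, a dremio.conf entry pointing every node at the same shared location might look like the following; the mount path and the file:// scheme are assumptions for illustration, not prescribed values:

  # dremio.conf -- must be identical on every node (path below is hypothetical)
  paths: {
    dist: "file:///mnt/shared-nas/dremio/pdfs"
  }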
By default Dremio uses the disk space on local Dremio nodes.
The following table shows which stores are supported:
This chapter presents some guidelines for troubleshooting and resolving possible error conditions and other issues that may arise when creating, maintaining, and accessing Blackfish SQL databases.
You can use the DataDirectory macro in the specification of database filenames to provide support for relative pathnames. For more information on the DataDirectory macro, see Establishing Connections. If you do not use the DataDirectory macro, relative pathnames are relative to the current directory of the process in which Blackfish SQL is executing. On Java platforms, the user.dir property dictates how database filenames are resolved when a fully qualified path name is not specified. The Java Virtual Machine (JVM) defaults this property to the current working directory of the process. You can set this property with a JVM command line option. For example:
-Duser.dir=/myapplication
You can also set this property from within a Java application by using the java.lang.System.setProperty method.
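For example, a Java application could set the property at startup, before any relative database paths are resolved (the directory below is just an illustration):

  // Resolve relative Blackfish SQL file names against this directory.
  System.setProperty("user.dir", "/myapplication");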
System logging is performed for all connections and all databases accessed in the same process.
You can enable Blackfish SQL system logging in the following ways.
For the local Blackfish SQL client:
Set the blackfishsql.logFile system property.
For the remote Blackfish SQL client:
In the Blackfish SQL configuration file, set the blackfishsql.logFile property.
All Blackfish SQL system properties are case sensitive and begin with the blackfishsql. prefix. The SystemProperties class has constant strings for all system properties. For Windows system properties:
If your application uses the Blackfish SQL server, set system properties in the BSQLServer.exe.config file. If your application does not use the Blackfish SQL server, set system properties by calling the System.AppDomain.CurrentDomain.SetData method.
For Java system properties:
If your application uses the Blackfish SQL server, set system properties in the BSQLServer.config file by prefixing the property setting with vmparam -D. If your application does not use the Blackfish SQL server, set system properties by calling the System.setProperty method.
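For illustration, a Java application that does not use the Blackfish SQL server could enable system logging like this (the log file path is an assumption):

  // Must run before the first Blackfish SQL connection is opened.
  System.setProperty("blackfishsql.logFile", "/var/log/blackfishsql.log");

The equivalent server-side setting would be a BSQLServer.config line of the form vmparam -Dblackfishsql.logFile=/var/log/blackfishsql.log.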
For Blackfish SQL for Java, these are additional logging options:
Database logging output is performed on a per-database basis and is sent to the status log files for that database. The lifetime of status log files is managed in the same fashion as the transactional log files for the database. When a transactional log file is dropped, the corresponding status log file is dropped also. When you create a database, status logging is disabled by default. You can enable database status logging by calling the DB_ADMIN.ALTER_DATABASE built-in stored procedure. You can set the log filtering options for all connections to a database by calling the DB_ADMIN.SET_DATABASE_STATUS_LOG_FILTER built-in stored procedure. You can set the log filtering options for a single connection by setting the logFilter connection property or by calling the DB_ADMIN.SET_STATUS_LOG_FILTER built-in stored procedure.
Locks can fail due to lock timeouts or deadlocks. Lock timeouts occur when a connection waits to acquire a lock held by another transaction and that wait exceeds the milliseconds set in the lockWaitTime connection property. In such cases, an exception is thrown that identifies which connection encountered the timeout and which connection is currently holding the required lock. The transaction that encounters the lock timeout is not rolled back.
Blackfish SQL has automatic, high speed deadlock detection that should detect all deadlocks. An appropriate exception is thrown that identifies which connection encountered the deadlock, and the connection with which it is deadlocked. Unlike lock timeout exceptions, deadlock exceptions encountered by a java.sql.Connection cause that connection to automatically roll back its transaction. This behavior allows other connections to continue their work.
Use the following guidelines to detect timeouts and deadlocks:
A connection usually requires a lock when it either reads from or writes to a table stream or row. It can be blocked by another connection that is reading or writing. You can prevent blocks in two ways:
Connections should use short-duration transactions in high concurrency environments. However, in low- or no-concurrency environments, a long-duration transaction can provide better throughput, since fewer commit requests are made. There is a significant overhead to the commit operation because it must guarantee the durability of a transaction.
Read-only transactions are not blocked by writers or other readers, and since they do not acquire locks, they never block other transactions.
Setting the readOnlyTx connection property to true causes a connection to use read only connections. Note that there is also a readOnly connection property, which is very different from the readOnlyTx connection property. The readOnly connection property causes the database file to be open in read only mode, preventing any other connections from writing to the database.
For Blackfish SQL for Java JDBC connections you can also enable read-only transactions by setting the readOnly property of the java.sql.Connection object or the com.borland.dx.sql.dataset.Database.getJdbcConnection methods to true. When using Blackfish SQL for Java DataStoreConnection objects, set the readOnlyTx property to true before opening the connection.
Read-only transactions work by simulating a snapshot of the Blackfish SQL database. The snapshot sees only data from transactions that are committed at the point the read-only transaction starts; otherwise, the connection would have to check if there were pending changes and roll them back whenever it accessed the data. A snapshot begins when the connection opens. The snapshot is refreshed each time the commit method is called.
If you suspect that cache contents were not properly saved on a non-transactional Blackfish SQL database, you can verify the integrity of the file by calling the DB_ADMIN.VERIFY built-in stored procedure.
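For example, assuming the no-argument form of the procedure, the check can be issued from any SQL console connected to the database:

  CALL DB_ADMIN.VERIFY();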
Note that transactional Blackfish SQL databases have automatic crash recovery when they are opened. Under normal circumstances, Blackfish SQL databases do not require verification.
This section provides more Java-specific troubleshooting guidelines.
The approach to debugging triggers and stored procedures depends on whether your application uses the local or remote JDBC driver.
If your application uses the local JDBC driver, there is nothing special to set up, since the database engine is executing in the same process as your application.
If your application uses the remote JDBC driver, you can use either of the following procedures.
Using the DataStoreServer JavaBean for debugging:
In your application, instantiate a com.borland.datastore.jdbc.DataStoreServer JavaBean component and execute its start method.
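A minimal sketch of that setup, using only the class and method named above (the surrounding class and error handling are additions for illustration):

  import com.borland.datastore.jdbc.DataStoreServer;

  public class DebugServer {
      public static void main(String[] args) throws Exception {
          // Runs the JDBC server inside your own process, so breakpoints in
          // triggers and stored procedures are hit by your debugger.
          DataStoreServer server = new DataStoreServer();
          server.start();
      }
  }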
Using the JdsServer for debugging:
Complete the following steps:
Creating an SQL table forces unquoted identifiers to be uppercase. You must quote the identifiers to enable case sensitivity. See “Identifiers” in the SQL Reference.
When you use DataExpress components to create a table, the table and column names are case sensitive. If you specify these identifiers in lowercase or mixed case, SQL is not able to access them unless the identifiers are quoted.
When you use DataExpress to access a table, the StorageDataSet storeName property is case sensitive. However, the column identifiers can be referenced in a case-insensitive fashion. Consequently, for DataExpress, you can access an address column by using ADDRESS or address.
The simplest way to avoid problems with identifiers for both SQL and DataExpress components is to always use uppercase identifiers when your application creates or accesses tables.
Set the saveMode property to 2 when you are debugging an application that uses a non-transactional Blackfish SQL database. The debugger stops all threads when you are single-stepping through code or when breakpoints are hit. If you do not set the saveMode property to 2, the Blackfish SQL daemon thread cannot save modified cache data. For more information, see “Non-transactional Database Disk Cache Write Options” in Optimizing Blackfish SQL Applications.
Sun Microsystems makes changes to the java.text.CollationKey classes from time to time as it corrects problems. The secondary indices for tables stored inside a Blackfish SQL database use these CollationKey classes to generate sortable keys for non-US locales. When Sun changes the format of these CollationKeys classes, the secondary indexes created by an older Sun JDK may not work properly with a new Sun JDK. The problems resulting from such a situation manifest themselves in the following ways:
Currently, the only way to correct this is to drop the secondary indices and rebuild them with the current JDK. The StorageDataSet.restructure() method also drops all the secondary indexes.
Blackfish SQL supports stored procedures to encapsulate business logic in the schema of a database. In addition, Blackfish SQL supports User Defined Functions (UDFs) to extend the built-in SQL support. Where many other database vendors have invented their own SQL-like language for stored procedures, Blackfish SQL can access stored procedures and UDFs created in any .NET language such as Delphi, C#, VB and Java.
Stored procedures can also increase the performance of an application, since they are executed in the same Virtual Machine as the Blackfish SQL database engine itself. This results in execution with minimal overhead. While a stored procedure is executing SQL statements, no network traffic is generated. The stored procedure uses an in-process ADO.NET connection. This provides the same performance advantage as using the in-process Blackfish SQL ADO.NET driver rather than the remote driver.
Stored procedures and UDFs provide these additional benefits:
This chapter covers:
Stored procedures are procedures that are stored on the database server and executed on request from an SQL client. Often the stored procedure executes several SQL queries against the tables of the database to yield the desired result. In Blackfish SQL, these SQL queries are written in the language of choice, that is available on the .NET or Java platforms. The desired effect may be to update a set of tables, or to calculate an accumulated value from one or more tables, or to add specialized integrity constraints. A stored procedure may have several parameters, which can be either input only, output only, or both.
Example
Consider an ADD_ORDER procedure that takes a customerId, an itemId, and a quantity value as input, and adds a record to the ORDERS table. However, suppose that you also want to verify that this customer has paid for previous orders. To achieve this, you can cause the procedure to throw an exception if this is not the case.
The stored procedure is executed with an IDbCommand object by setting the properties CommandType and CommandText, and then adding the appropriate parameters.
Notice the difference in the interpretation of the parameters, depending on the combination of CommandType and the style of the parameter markers that are used. If the CommandType is StoredProcedure, the parameter names are taken from the implementation of the stored procedure, in which case it is possible to omit optional parameters.
A User Defined Function is a code snippet that is written to extend the built-in SQL support. Like stored procedures, they are executed on the database server and called from an SQL client. UDFs must return a value, and are usually written to be used in the WHERE clause of SELECT queries. However, a UDF may also be called by itself, similar to a stored procedure.
Example
Consider a MAX_VALUE function that takes two values, <value1> and <value2>, and returns the greater of the two. The UDF can be executed in an SQL statement:
'SELECT * FROM PEOPLE WHERE MAX_VALUE(HEIGHT,5*WIDTH) < ?'
Or, in an SQL CALL statement:
'?=CALL MAX_VALUE(?,?)'
This section provides detailed information on how to create Blackfish SQL stored procedures and UDFs for the .NET platform.
There are three steps involved in creating a Blackfish SQL stored procedure:
Example
This example uses the sample ADD_ORDER from the previous example in About Stored Procedures, with this schema:
CUSTOMER TABLE
ORDERS TABLE
ITEMS TABLE
  P1 := Command.Parameters.Add('P1', DbType.Decimal);
  P2 := Command.Parameters.Add('P2', DbType.Int32);
  P1.Direction := ParameterDirection.Output;
  P2.Value := CustId;
  Command.ExecuteNonQuery;
  if P1.Value = DBNull.Value then
    Owed := 0
  else
    Owed := Decimal(P1.Value);
  Owed := Owed + Amount;

  Command.Parameters.Clear;
  Command.CommandText := 'SELECT CREDIT INTO ? FROM CUSTOMER WHERE CUST_ID=?';
  P1 := Command.Parameters.Add('P1', DbType.Decimal);
  P2 := Command.Parameters.Add('P2', DbType.Int32);
  P1.Direction := ParameterDirection.Output;
  P2.Value := CustId;
  Command.ExecuteNonQuery;
  Credit := Decimal(P1.Value);
  if Owed > Credit then
    raise Exception.Create('Customer doesn''t have that much credit');

  Command.Parameters.Clear;
  Command.CommandText := 'UPDATE ITEMS SET STOCK=STOCK-? WHERE ITEM_ID=?';
  P1 := Command.Parameters.Add('P1', DbType.Int32);
  P2 := Command.Parameters.Add('P2', DbType.Int32);
  P1.Value := Quantity;
  P2.Value := ItemId;
  Command.ExecuteNonQuery;

  Command.Parameters.Clear;
  Command.CommandText := 'INSERT INTO ORDERS (CUST_ID, ITEM_ID, QUANTITY, SALE_AMOUNT) '+
    'VALUES (?, ?, ?, ?)';
  P1 := Command.Parameters.Add('P1', DbType.Int32);
  P2 := Command.Parameters.Add('P2', DbType.Int32);
  P3 := Command.Parameters.Add('P3', DbType.Int32);
  P4 := Command.Parameters.Add('P4', DbType.Decimal);
  P1.Value := CustId;
  P2.Value := ItemId;
  P3.Value := Quantity;
  P4.Value := Amount;
  Command.ExecuteNonQuery;
  Command.Free;
end;
end.
After completing the code for the stored procedure:
Now that the code is ready to be executed, the Blackfish SQL database must be made aware of the class member that can be called from SQL. To do this, start DataExplorer and issue a CREATE METHOD statement:
CREATE METHOD ADD_ORDER AS 'MyProcs::SampleStoredProcedures.TMyClass.AddOrder';
MyProcs is the name of the package, and the method name is fully-qualified with unit name and class name.
Execute the stored procedure ADD_ORDER from a Delphi Console application:
unit MyCompany;

interface

implementation

uses
  System.Data;

type
  TSomething = class
  public
    procedure AddOrder(Connection: DbConnection; CustId: Integer;
      ItemId: Integer; Quantity: Integer);
  end;

{ Assume:
    Connection: is a valid Blackfish SQL connection.
    CustId:     is a customer in the CUSTOMER table.
    ItemId:     is an item from the ITEMS table.
    Quantity:   is the quantity of this item ordered. }
procedure TSomething.AddOrder(Connection: DbConnection; CustId: Integer;
  ItemId: Integer; Quantity: Integer);
var
  Command: DbCommand;
  P1, P2, P3: DbParameter;
begin
  Command := Connection.CreateCommand;
  Command.CommandText := 'ADD_ORDER';
  Command.CommandType := CommandType.StoredProcedure;
  P1 := Command.Parameters.Add('custId', DbType.Int32);
  P2 := Command.Parameters.Add('itemId', DbType.Int32);
  P3 := Command.Parameters.Add('quantity', DbType.Int32);
  P1.Value := CustId;
  P2.Value := ItemId;
  P3.Value := Quantity;
  Command.ExecuteNonQuery;
  Command.Free;
end;

end.
When TSomething.AddOrder is called in the client application, this in turn calls the stored procedure ADD_ORDER, which causes TMyClass.AddOrder to be executed in the Blackfish SQL server process. By making TMyClass.AddOrder into a stored procedure, only one statement has to be executed over a remote connection. The five statements executed by TMyClass.AddOrder are executed in-process of the Blackfish SQL server, using a local connection.
Note that the application is not passing a connection instance to the call of the ADD_ORDER stored procedure. Only the actual logical parameters are passed.
Blackfish SQL generates an implicit connection object, when it finds a stored procedure or UDF where the first argument is expected to be a System.Data.IDbConnection instance.
The Delphi language supports output parameters and reference parameters. The Blackfish SQL database recognizes these types of parameters and treats them accordingly.
Database NULL values require special handling. A System.String parameter can accept a NULL value. However, for all other types, the formal parameter type must be changed to TObject, since NULL is not a valid value for a .NET ValueType. If the formal parameter is a TObject type, then the value of System.DBNull is used for a database NULL value. Blackfish SQL will also accept nullable types in stored procedures written in C# (for example, int?).
Examples:
Example of a stored procedure with an INOUT parameter; NULL values are ignored:
class procedure TMyClass.AddFive(ref Param: Integer);
begin
  Param := Param + 5;
end;
Example of a stored procedure with an INOUT parameter; NULL values are kept as NULL values:
class procedure TMyClass.AddFour(ref Param: TObject);
begin
  if Param <> nil then
    Param := TObject(Integer(Param) + 4);
end;
Use:
procedure TryAdding(Connection: DbConnection);
var
  Command: DbCommand;
  P1: DbParameter;
begin
  Command := Connection.CreateCommand;
  Command.CommandText := 'ADD_FIVE';
  Command.CommandType := CommandType.StoredProcedure;
  P1 := Command.Parameters.Add('param', DbType.Int32);
  P1.Direction := ParameterDirection.InputOutput;
  P1.Value := 17;
  Command.ExecuteNonQuery;
  if 22 <> Integer(P1.Value) then
    raise Exception.Create('Wrong result');

  Command.Parameters.Clear;
  Command.CommandText := 'ADD_FOUR';
  Command.CommandType := CommandType.StoredProcedure;
  P1 := Command.Parameters.Add('param', DbType.Int32);
  P1.Direction := ParameterDirection.InputOutput;
  P1.Value := 17;
  Command.ExecuteNonQuery;
  if 21 <> Integer(P1.Value) then
    raise Exception.Create('Wrong result');

  P1.Value := DBNull.Value;
  Command.ExecuteNonQuery;
  if DBNull.Value <> P1.Value then
    raise Exception.Create('Wrong result');
  Command.Free;
end;
The above implementation of AddFour uses a TObject wrapper class for integers. This allows the developer of addFour to recognize NULL values passed by Blackfish SQL, and to set an output parameter to NULL to be recognized by Blackfish SQL.
In contrast, in the implementation for AddFive, it is impossible to know if a parameter was NULL, and it is impossible to set the result of the output parameter to NULL.
If for some reason an operator (for example: a bitwise AND operator) is needed for a where clause, and Blackfish SQL does not offer that operator, you can create one in Delphi, Visual Basic, C#, or C++ and call it as a UDF. However, use this capability with caution, since Blackfish SQL will not recognize the purpose of such a function, and will not be able to use any indices to speed up this part of the query.
Consider the UDF example given earlier, involving the MAX_VALUE UDF:
'SELECT * FROM PEOPLE WHERE MAX_VALUE(HEIGHT,5*WIDTH) < ?'
That query is equivalent to this query:
'SELECT * FROM PEOPLE WHERE HEIGHT < ? AND 5*WIDTH < ?'
where the same value is given for both parameter markers. This SQL statement yields the same result, because the implementation of MAX_VALUE is known. However, Blackfish SQL will be able to use only indices available for the HEIGHT and WIDTH column for the second query. If there were no such indices, the performance of the two queries would be about the same. The advantage of writing a UDF occurs when functionality does not already exist in Blackfish SQL (for example: a bit wise AND operator).
To debug .NET stored procedures:
To debug stored procedures when the protocol is in-process or not set:
To debug stored procedures when the protocol is TCP:
If your IDE supports remote debugging:
Delphi will be able to attach to the Blackfish SQL Server process.
If your IDE does not support remote debugging:
A stored procedure can produce an ADO.NET DbDataReader simply by returning a DbDataReader.
class function GetRiskyCustomers(Connection: DbConnection;
  Credit: Decimal): DbDataReader;
var
  Command: DbCommand;
  P1: DbParameter;
begin
  Command := Connection.CreateCommand;
  Command.CommandText := 'SELECT NAME FROM CUSTOMER WHERE CREDIT > ? ';
  P1 := Command.Parameters.Add('param', DbType.Decimal);
  P1.Value := Credit;
  Result := Command.ExecuteReader;
end;
Note that the command object is not freed at the end of the method. If the command was freed, it would implicitly close the DbDataReader, which results in no data being returned from the stored procedure. Instead, Blackfish SQL closes the command implicitly after the stored procedure has been called.
The GetRiskyCustomers stored procedure can be used as follows, in ADO:
function GetRiskyCustomers(Connection: DbConnection): ArrayList;
var
  Command: DbCommand;
  Reader: DbDataReader;
  P1: DbParameter;
  List: ArrayList;
begin
  List := ArrayList.Create;
  Command := Connection.CreateCommand;
  Command.CommandText := 'GETRISKYCUST';
  Command.CommandType := CommandType.StoredProcedure;
  P1 := Command.Parameters.Add('Credit', DbType.Decimal);
  P1.Value := 2000;
  Reader := Command.ExecuteReader;
  while Reader.Read do
    List.Add(Reader.GetString(0));
  Command.Free;
  Result := List;
end;
This section provides detailed information on how to create Blackfish SQL stored procedures and UDFs for the Java platform.
Stored procedures and UDFs for Blackfish SQL for Java must be written in Java. The compiled Java classes for stored procedures and UDFs must be added to the CLASSPATH of the Blackfish SQL server process in order to be available for use. This should give the database administrator a chance to control which code is added. Only public static methods in public classes can be made available for use.
You can update the classpath for the Blackfish SQL tools by adding the classes to the <jds_home>/lib/storedproc directory.
After a stored procedure or a UDF has been written and added to the CLASSPATH of the Blackfish SQL server process, use this SQL syntax to associate a method name with it:
CREATE JAVA_METHOD <method-name> AS <method-definition-string>
<method-name> is a SQL identifier such as INCREASE_SALARY and <method-definition-string> is a string with a fully qualified method name. For example:
com.mycompany.util.MyClass.increaseSalary
Stored procedures and UDFs can be dropped from the database by executing:
DROP JAVA_METHOD <method-name>
After a method is created, it is ready for use. The next example shows how to define and call a UDF.
This example defines a method that locates the first space character after a certain index in a string. The first SQL snippet defines the UDF and the second shows an example of how to use it.
Assume that TABLE1 has two VARCHAR columns: FIRST_NAME and LAST_NAME. The CHAR_LENGTH function is a built-in SQL function.
package com.mycompany.util;

public class MyClass {
    public static int findNextSpace(String str, int start) {
        return str.indexOf(' ', start);
    }
}

CREATE JAVA_METHOD FIND_NEXT_SPACE AS 'com.mycompany.util.MyClass.findNextSpace';

SELECT * FROM TABLE1
  WHERE FIND_NEXT_SPACE(FIRST_NAME, CHAR_LENGTH(LAST_NAME)) < 0;
A final type-checking of parameters is performed when the Java method is called. Numeric types are cast to a higher type if necessary in order to match the parameter types of a Java method. The numeric type order for Java types is:
The other recognized Java types are:
Note that if you pass NULL values to the Java method, you cannot use the primitive types such as short and double. Use the equivalent encapsulation classes instead (Short, Double). A SQL NULL value is passed as a Java null value.
If a Java method has a parameter or an array of a type that is not listed in the tables above, it is handled as SQL OBJECT type.
If a Java method parameter is an array of one of the recognized input types (other than byte[]), the parameter is expected to be an output parameter. Blackfish SQL passes an array of length 1 (one) into the method call, and the method is expected to populate the first element in the array with the output value. The recognized Java types for output parameters are:
Output parameters can be bound only to variable markers in SQL. All output parameters are essentially INOUT parameters, since any value set before the statement is executed is passed to the Java method. If no value is set, the initial value is arbitrary. If any of the parameters can output a SQL NULL (or have a valid NULL input), use the encapsulated classes instead of the primitive types.
package com.mycompany.util;

public class MyClass {
    public static void max(int i1, int i2, int i3, int result[]) {
        result[0] = Math.max(i1, Math.max(i2, i3));
    }
}

CREATE JAVA_METHOD MAX AS 'com.mycompany.util.MyClass.max';

CALL MAX(1,2,3,?);
The CALL statement must be prepared with a CallableStatement in order to get the output value. See the JDBC documentation for how to use java.sql.CallableStatement. Note the assignment of result[0] in the Java method. The array passed into the method has exactly one element.
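A minimal JDBC sketch of that pattern, assuming connection is an open java.sql.Connection to the database:

  java.sql.CallableStatement call = connection.prepareCall("CALL MAX(1,2,3,?)");
  call.registerOutParameter(1, java.sql.Types.INTEGER);  // bind the output marker
  call.execute();
  int largest = call.getInt(1);                          // 3
  call.close();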
If the first parameter of a Java method is of type java.sql.Connection, Blackfish SQL passes a connection object that shares the transactional connection context used to call the stored procedure. This connection object can be used to execute SQL statements using the JDBC API.
Do not pass anything for this parameter. Let Blackfish SQL do it.
package com.mycompany.util;

public class MyClass {
    public static void increaseSalary(java.sql.Connection con,
                                      java.math.BigDecimal amount)
            throws java.sql.SQLException {
        java.sql.PreparedStatement stmt =
            con.prepareStatement("UPDATE EMPLOYEE SET SALARY=SALARY+?");
        stmt.setBigDecimal(1, amount);
        stmt.executeUpdate();
        stmt.close();
    }
}

CREATE JAVA_METHOD INCREASE_SALARY AS 'com.mycompany.util.MyClass.increaseSalary';

CALL INCREASE_SALARY(20000.00);
Note:
A Java stored procedure can produce a ResultSet on the client by returning either a ResultSet or a DataExpress DataSet from the Java implementation of the stored procedure. The DataSet is automatically converted to a ResultSet for the user of the stored procedure.
Example
This example returns a ResultSet:
package com.mycompany.util;

public class MyClass {
    public static java.sql.ResultSet getMarriedEmployees(java.sql.Connection con)
            throws java.sql.SQLException {
        java.sql.Statement stmt = con.createStatement();
        java.sql.ResultSet rset = stmt.executeQuery(
            "SELECT ID, NAME FROM EMPLOYEE WHERE SPOUSE IS NOT NULL");
        return rset;
    }
}
Note: Do not close the stmt statement. This statement is closed implicitly.
Example
This example returns a DataSet, which is automatically converted to a ResultSet:
package com.mycompany.util;

public class MyClass {
    public static com.borland.dx.dataset.DataSet getMarriedEmployees() {
        com.borland.dx.dataset.DataSet dataSet = getDataSetFromSomeWhere();
        return dataSet;
    }
}
Example
Register and call the previous examples like this:
java.sql.Statement stmt = connection.createStatement();
stmt.executeUpdate("CREATE JAVA_METHOD GET_MARRIED_EMPLOYEES AS " +
    "'com.mycompany.util.MyClass.getMarriedEmployees'");
java.sql.ResultSet rset = stmt.executeQuery("CALL GET_MARRIED_EMPLOYEES()");
rset.next();                      // move to the first row before reading columns
int id = rset.getInt(1);
String name = rset.getString(2);
Java methods can be overloaded to avoid numeric loss of precision.
package com.mycompany.util;

public class MyClass {
    public static int abs(int p) {
        return Math.abs(p);
    }
    public static long abs(long p) {
        return Math.abs(p);
    }
    public static java.math.BigDecimal abs(java.math.BigDecimal p) {
        return p.abs();
    }
    public static double abs(double p) {
        return Math.abs(p);
    }
}

CREATE JAVA_METHOD ABS_NUMBER AS 'com.mycompany.util.MyClass.abs';

SELECT * FROM TABLE1 WHERE ABS_NUMBER(NUMBER1) = 2.1434;
The overloaded method abs is registered only once in the SQL engine. Now imagine that the abs method taking a BigDecimal is not implemented! If NUMBER1 is a NUMERIC with decimals, then the abs method taking a double would be called, which can potentially lose precision when the number is converted from a BigDecimal to a double.
The return value of the method is mapped into an equivalent SQL type. Here is the type mapping table:
This chapter provides a brief overview of basic Blackfish SQL security features and the SQL commands you can use to implement them. For a complete description of the syntax, use, and examples for a specific command, see the SQL Reference or the Stored Procedures Reference.
Blackfish SQL provides the following built-in security features:
User authentication restricts access to a Blackfish SQL database to authorized users only. Users must log into the database using an authorized user account and password. Permissions can be granted to or revoked from an account to fine tune access. In general, full access is reserved for the Administrator account(s), and a more restricted account or accounts are provided for general users.
By default, Blackfish SQL has one built-in Administrator account, sysdba/masterkey. You can secure a database by changing the password for the sysdba account and restricting use of that account to database administrators only. You can then create a user account with limited access rights to be used for general access. You can also create additional Administrator accounts, which may or may not be granted database startup privileges.
The following section describes how to create and modify user accounts.
You can use the following SQL statements to add, delete, and modify user accounts:
CREATE USER <userid> PASSWORD <password>
Where:
<userid> is the account to be added
<password> is the password for this account
DROP USER <userid> [ CASCADE|RESTRICT ]
Where:
<userid> is the account to be removed.
CASCADE deletes the user and all objects that the user owns.
RESTRICT causes the statement to fail if the user owns any objects, such as tables, views, or methods.
(no option) causes the statement to fail if the user owns any objects, such as tables, views, or methods.
ALTER USER <userid> SET PASSWORD "<password>";
Where:
<userid> is the account for which the password should be changed.
<password> is the new password.
There are several database access privileges which you can grant to or revoke from an account. The following section describes the set of access privileges, and how to grant and revoke privileges for an account.
You can use the GRANT and REVOKE statements to change the access privileges for one or more user accounts. You can grant or revoke access to specific database resources or specific objects in the database. In addition, you can grant specific privileges to named roles, and you can then grant or revoke these roles for specific users.
To grant or revoke a privilege for an account, use the following SQL commands:
GRANT <role>|<privilege> TO <userid>
Grants the specified privilege or role to the specified user account.
REVOKE <role>|<privilege> FROM <userid>
Revokes the specified privilege or role from the specified user account.
Where:
<userid> is the account to be modified.
<role> is the user role to be granted or revoked, such as ADMIN. This can be a single role, or a comma-separated list of roles.
<privilege> is the privilege to be granted or revoked. This can be a single privilege or a comma-separated list of privileges, and can be one or more of the following:
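For example, using the ADMIN role and the STARTUP privilege discussed in this chapter (the user names are hypothetical):

  GRANT ADMIN TO chief_dba;
  GRANT STARTUP TO night_operator;
  REVOKE STARTUP FROM night_operator;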
Only a user with Administrator privileges can encrypt a database. When a database is encrypted, the STARTUP privilege is automatically revoked for all users (including Administrators) other than the Administrator issuing the encryption command. Consequently, after encrypting you must use the same Administrator account to restart the database. You can reassign STARTUP privileges to other users after the database has been encrypted and restarted.
You can use the built-in stored procedure DB_ADMIN.ENCRYPT() to encrypt a new or empty Blackfish SQL database. For instructions on encrypting a non-empty database, see Encrypting a Database with Existing Content
To encrypt a new database, log in from an Administrator account, and issue the following SQL command:
CALL DB_ADMIN.ENCRYPT(<AdminPassword>,<EncryptionSeed>)
Where:
DB_ADMIN.ENCRYPT() is the built-in stored procedure for encrypting a database.
<AdminPassword> is the password for the user issuing the encryption command.
<EncryptionSeed> is a 16 character seed value.
For Blackfish SQL for Java:
To encrypt a Blackfish SQL for Java database that has existing tables, use the JBuilder utility, JdsExplorer. For instructions, see the JBuilder online help for JdsExplorer.
For Blackfish SQL for Windows:
To encrypt a database with existing content, do the following:
For additional information, see the Stored Procedures Reference.
In this discussion, an opponent is someone who is trying to break the Blackfish SQL security system.
The authentication and authorization support is secure for server-side applications where opponents do not have access to the physical Blackfish SQL database files. The SYS.USERS table stores passwords, user IDs, and rights in encrypted form. The table also stores the user ID and rights in an unencrypted column, but this is for display purposes only. The encrypted values for user ID and rights are used for security enforcement.
The stored passwords are encrypted using a strong TwoFish block cipher. A pseudo-random number generator is used to salt the encryption of the password. This makes traditional password dictionary attacks much more difficult. In a dictionary attack, the opponent makes guesses until the password is guessed. This process is easier if the the opponent has personal information about the user, and the user has chosen an obvious password. There is no substitution for a well chosen (obscure) password as a defense against password dictionary attacks. When an incorrect password is entered, the current thread sleeps for 500 milliseconds.
If a Blackfish SQL database is unencrypted, it is important to restrict physical access to the file, for the following reasons:
For environments where a dangerous opponent may gain access to physical copies of a Blackfish SQL database, the database and log files should be encrypted, in addition to being password protected. WARNING: The cryptographic techniques that Blackfish SQL uses to encrypt data blocks are state-of-the-art. The TwoFish block cipher used by Blackfish SQL has never been defeated. This means that if you forget your password for an encrypted Blackfish SQL database, you will not be able to access the database. The best chance of recovering the data would be to have someone guess the password.
There are measures that can be used to guard against forgetting a password for an encrypted database. It is important to note that there is a master password used internally to encrypt data blocks. Any user that has STARTUP rights has the master password encrypted using their password in the SYS.USERS table. This allows one or more users to open a database that has been shut down, because their password can be used to decrypt a copy of the master password. This feature can be used to create a new database that has one secret user who has Administrator privileges (which includes STARTUP rights). If you use this virgin database whenever a new empty database is needed, you will always have one secret user who can unlock the encryption.
Encrypting a database has some effect on performance. Data blocks are encrypted when they are written from the Blackfish SQL cache to the Blackfish SQL database and are decrypted when they are read from the Blackfish SQL database into the Blackfish SQL cache. So the cost of encryption is only incurred when file I/O is performed.
Blackfish SQL encrypts all but the first 16 bytes of .jds file data blocks. There is no user data in the first 16 bytes of a data block. Some blocks are not encrypted. This includes allocation bitmap blocks, the header block, log anchor blocks and the SYS.USERS table blocks. Note that the sensitive fields in the SYS.USERS table are encrypted using the user's password. Log file blocks are completely encrypted. Log anchor and status log files are not encrypted. The temporary database used by the query engine is encrypted. Sort files used by large merge sorts are not encrypted, but they are deleted after the sort completes.
NOTE: The remote client for Blackfish SQL currently uses sockets to communicate with a Blackfish SQL Server. This communication is not secure. Since the local client for Blackfish SQL is in-process, it is secure.
This Preface describes the manual, lists technical resources, and provides CodeGear contact information.
This document is for:
Blackfish SQL for Windows users should have a working knowledge of:
Blackfish SQL for Java users should have a working knowledge of:
This document uses the following typographic conventions:
General information is available on the CodeGear Blackfish SQL website.
Technical information and community contributions are available on the CodeGear Blackfish SQL community pages.
To discuss issues with other Blackfish SQL users, visit the newsgroups: support.codegear.com/newsgroups/directory.
CodeGear offers a variety of support options for Blackfish SQL. For pre-sales support, installation support, and a variety of technical support options, visit:.
When you are ready to deploy Blackfish SQL, you may need additional deployment licenses. To purchase licenses and upgrades, visit the CodeGear Online Shop at:.
Useful technical resources include:
JDBC
SQL
DataExpress JavaBeans
4.2.1
The following issues have been resolved in this version of Splunk:
Security issue resolved
A reflected XSS exploit was resolved in Splunk Web. For more details about this issue, refer to this issue's page on the Security portal. (SPL-38585)
Resolved issues
- Epoch timestamps not parsed correctly after March 12, 2011. (SPL-37992)
- In rare cases, concurrent hash table and string length collisions for metadata field values can cause index-level metadata files to grow to very large sizes, up to several gigabytes. (SPL-38464)
- Splunk Web fails to start if the SPLUNK_HOME path in splunk-launch.conf ends with a directory delimiter ("/" for Linux or "\" for Windows). (SPL-38054)
- Splunk Web can become unresponsive due to excessive session/lock files in var/run/splunk. Removing the lock files and restarting Splunk will resolve the issue. (SPL-37409)
- The error "'SearchOperator:loadjob': Cannot find artifacts within the search..." is written to splunkd.log on the first run of an alert that includes the 'rises by' or 'drops by' conditions, although the search executes correctly. This is because there can be no change in the value on the first run of the search. (SPL-33432)
- If when saving a search, a user gets the error message: 'Cannot find viewstate with vsid=' it means that the user doesn't have sufficient permissions to save viewstates to the app. (SPL-37874)
- If you are using distributed search and your Splunk installation is not on the same partition as your indexes, you may see issues where you run out of disk on the indexer if you run searches that return a very large number of events (such as for *). (SPL-37799)
- Using "show source" from a 4.2 search head against a 4.1.x index doesn't remove subseconds properly and causes the surrounding search to fail. (SPL-37776)
- An error "Failed to fetch data : In handler 'win-perfmon-find-collection': bad allocation" is displayed when trying to add Performance Monitoring counters as inputs installed on non-English Windows server. (SPL-37560)
- If you create and delete keys which have Chinese names in the Windows Registry, in Splunk, the events don't show the Chinese names. (SPL-22148)
- When viewing Splunk Web in English, a caching issue can cause Chinese text to be displayed. (SPL-37917)
- The Windows 4.2 lightweight and universal forwarder parses WinEventLog datastreams on the forwarder, preventing all parsing control on the indexer. The symptoms of this are: no filtering nor routing to the nullqueue based on props and transforms. (SPL-38443)
- A migration from 4.1.x to 4.2 on Windows replaces %SPLUNK_HOME%\etc\apps\windows\default\*.conf files with *.conf.in filenames. Work around this issue by first backing up the configuration files for your existing Windows app's local directory, then download and install the latest Splunk for Windows app from Splunkbase. (SPL-38402)
- Events from Windows Event logs line break at random positions. Work around this issue by editing the value of LINE_BREAKER in $SPLUNK_HOME/etc/system/default/props.conf and specifying ([\r\n](?=\d{2}/\d{2}/\d{2,4} \d{2}:\d{2}:\d{2} [aApPmM]{2})) as the value. (SPL-38325)
- Splunk Web shows Event Log Collections that were enabled in 4.1.x as going to index=None, although it is actually going to the default index. (SPL-37529)
- Setting restartSplunkd=true on a Windows deployment client causes an error: "Exception: <type 'exceptions.WindowsError'>, Value: [Error 6] The handle is invalid" to be written to the Windows Application event log. (SPL-37439)
- The splunk list index command returns a segmentation fault. (SPL-37796)
- Distributing a search to a Free version of Splunk gives a "version mismatch" warning. (SPL-37167)
- Deployment manager shows extra (not real) forwarders because of empty fields in metrics.log. (SPL-37264)
- Occasional universal forwarder crash in TcpOutEloop. (SPL-37491)
- Forwarder crash with 'TcpOutputClient::decrementRefCount(): Assertion `_refCount > 0' failed'. (SPL-38776)
- If upgrade from 4.1.x to 4.2 fails and "An error occurred: Failed to run splunkd rest" is displayed during the migration process, it is possible that the *nix app failed to migrate. (SPL-38651)
- A warning message ("Skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block") is displayed in Splunk Web due to congestion in queues (most often tcpout-queue) (SPL-37407)
- An error ("ERROR IndexProcessor - 'homePath' tag required in config for index sample") stops migration process when upgrading from 4.0 to 4.2. (SPL-38061)
- The splunkd process crashes on startup if a bucket's metadata is corrupt. (SPL-36595)
- Migration from 4.2 should check for metadata corruption. (SPL-38730, SPL-38738)
- New 4.2 installations use serverName values that do not agree with 4.1.x versions. (SPL-38563)
- New 4.2 installations on Windows use $COMPUTERNAME rather than hostname for value of host. (SPL-38561)
- Universal forwarder changes capitalization of the hostname and the UI now displays two hosts. (SPL-38141)
- Search Head Pooling gets error of "end-of-stream" in the app view if the app is located not only in the shared mount point, but also in etc/apps.(SPL-38485)
- Upgrading from 4.1.x to 4.2 overwrites existing Windows and *Nix app config files with files ending in .in. (SPL-38402, SPL-38340)
- Crash in HTTPRequestHandlerThread in splunkd when enabling the *Nix app (SPL-38260)
- Splunk Web en-US/paths URL is returning "IndexError: list assignment index out of range". (SPL-38100)
- Can't use the Services.msc interface to restart Splunk Web on Windows after changing caCertPath, changes don't get picked up properly. (SPL-38027, SPL-35732)
- Mako runtime error when upgrading from 4.1.x to 4.2 on PPC Mac. (SPL-38026)
- Getting an error "ERROR IndexProcessor - calling getPolicyByDomain, but not a read-only IndexProcessor." in splunkd.log since upgrading to 4.2. (SPL-37994)
- Universal forwarders accept and spawn search processes that crash with a lot of PROCESS_SEARCH WARNs in splunkd.log. (SPL-37978)
- Splunk generating a lot of dmp files from splunk-admon.exe crashing. (SPL-37898)
- Search head peers drop off the list of known search head peers in Manager if authentication against that peer fails. (SPL-37754)
- The splunkd.log fills with 2 ERRORs every 5 seconds once minimum free disk space reached. (SPL-37616)
- Table command adds a bunch of empty fields at the very end of running the search. (SPL-37500)
- Queue full with raw TCP input causes a hang and unclean shutdown when doing index-and-forward. (SPL-37465)
- Banner message "skipped indexing of internal audit event will keep dropping events until indexer congestion is remedied. Check disk space and other issues that may cause indexer to block" when there is no congestion. (SPL-37407)
- splunk-optimize doesn't identify bad tsidx when it finds one. (SPL-37107)
- No error messaging displayed to users if SSO login fails. (SPL-30884)
- Upgrade of SplunkLightForwarder and SplunkForwarder tries to launch Splunk in browser at the end. (SPL-25676)
(Command-line option to suppress warning: -w-bei)
You're trying to initialize an enum variable to a different type.
For example, the following initialization will result in this warning, because 2 is of type int, not type enum count:
enum count { zero, one, two } x = 2;
It is better programming practice to use an enum identifier instead of a literal integer when assigning to or initializing enum types.
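For instance, initializing with one of the enumerators removes the warning:

enum count { zero, one, two } x = two;   /* initializer now has type enum count */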
This is an error, but is reduced to a warning to give existing programs a chance to work.
Wednesday, April 02, 2008 4:55 PM
Regina Dinneen, Consumer User Operations
Hello faithful Google Docs Blog readers -- I'm Regina, and I work in our Mountain View, California office. I'm a San Francisco native, and when I'm not at work, you can find me playing soccer (or "football") 2-3 nights per week. Another passion is travel and learning about new cultures: I've had the opportunity to live in Japan, South Africa and Ireland. And of course I'm always pushing my friends to give Google Docs a test drive. Recently I got a few of them to try out the new forms feature (big win!).
I also post under the ID "Google Docs Guide 2" on the Google Docs Help Group, mostly to let you know when specific issues have been fixed or when new features have been added. Some days you may not see any posts for me, but I'm always watching the action. My main goal is to let you help one another. I'll only jump in when an issue can't be fixed with normal troubleshooting steps. I've found that sometimes you can answer the "How do I" questions better than I can.
Here are two examples of how your feedback has helped improve Google Docs:
1) Issue: Autosum in Danish and Swedish locales
Quite a few people reported a bug with Autosum in spreadsheets. We told our engineering team, and they fixed it. Without the multiple reports from the group, we might have missed this one. (Thanks, MacLeod, ahab and Gill!)
2) Issue: Email notifications about changes to your spreadsheet
This feature was listed on our suggestions form and in the group where you can suggest new features and vote on what you think we should work on next. Many people voted for email notifications, and it recently became a reality.
Please also be aware that I often update our known issues page and suggestions page with topics I've seen from the group. So if you aren't already a member of the Help Group, join today and let your voice be heard!
Deleting Learning Items
To delete a learning item, browse to the Learning Items page. Select a learning item from the table and click the Delete/Archive Learning Item link. Note: Attempting to delete a learning item that is associated with a DNA component or assigned to users will cause it to be archived, otherwise it will be deleted.
8.5.000.72
Workspace Plugin for Skype for Business Release Notes
What's New
This release includes only resolved issues.
Resolved Issues
This release contains the following resolved issues:
Previously, the error message “Cannot log in to the voice, instant messaging channel. Please check or refine your channel information.” was displayed if the Skype for Business client did not log in within 300 ms after an agent signed in to Workspace Desktop Edition. Now, this message is suppressed. All other warning and error messages are not affected. (WPLYNC-878)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.000.72.
Dependencies
This version of Workspace Plugin requires Workspace Desktop Edition version 8.5.116.10 or later.
, {
+   devtool: 'inline-source-map',
+   devServer: {
+     contentBase: './dist'
+   }
+ });
webpack.prod.js
+ const merge = require('webpack-merge');
+ const UglifyJSPlugin = require('uglifyjs-webpack-plugin');
+ const common = require('./webpack.common.js');
+
+ module.exports = merge(common, {
+   plugins: [
+     new UglifyJSPlugin()
+   ]
+ });
In webpack.common.js, we now have our entry and output setup configured and we've included any plugins that are required for both environments. In webpack.dev.js, we've added the recommended devtool for that environment (strong source mapping), as well as our simple devServer configuration. Finally, in webpack.prod.js, we included the UglifyJSPlugin.
Note that while the UglifyJSPlugin is a great place to start for minification, there are other options out there. Here are a few more popular ones:
BabelMinifyWebpackPlugin
ClosureCompilerPlugin
If you decide to try another, just make sure your new choice also drops dead code as described in the tree shaking guide.

  const merge = require('webpack-merge');
  const UglifyJSPlugin = require('uglifyjs-webpack-plugin');
  const common = require('./webpack.common.js');

  module.exports = merge(common, {
+   devtool: 'source-map',
    plugins: [
-     new UglifyJSPlugin()
+     new UglifyJSPlugin({
+       sourceMap: true
+     })
    ]
  })
Avoid inline-*** and eval-*** use in production as they can increase bundle size and reduce the overall performance. Many libraries key off the process.env.NODE_ENV variable to determine what should be included in them. We can use webpack's built in DefinePlugin to define this variable for all our dependencies:
webpack.prod.js
+ const webpack = require('webpack');
  const merge = require('webpack-merge');
  const UglifyJSPlugin = require('uglifyjs-webpack-plugin');
  const common = require('./webpack.common.js');

  module.exports = merge(common, {
    devtool: 'source-map',
    plugins: [
      new UglifyJSPlugin({
        sourceMap: true
      }),
+     new webpack.DefinePlugin({
+       'process.env.NODE_ENV': JSON.stringify('production')
+     })
    ]
  });
As mentioned in Asset Management at the end of the Loading CSS section, it is typically best practice to split your CSS out to a separate file using the ExtractTextPlugin. There are some good examples of how to do this in the plugin's documentation. The disable option can be used in combination with the --env flag to allow inline loading in development, which is recommended for Hot Module Replacement and build speed.
Some of what has been described above is also achievable via the command line. For example, the --optimize-minimize flag will include the UglifyJSPlugin behind the scenes.
© JS Foundation and other contributors
Licensed under the Creative Commons Attribution License 4.0.
NAME
Test2::Util - Tools used by Test2 and friends.
- ($ok, $err) = do_rename($old_name, $new_name)
Rename a file, this wraps rename() in a way that makes it more reliable cross-platform when trying to rename files you recently altered.
- ($ok, $err) = do_unlink($filename)
Unlink a file, this wraps unlink() in a way that makes it more reliable cross-platform when trying to unlink files you recently altered.
- ($ok, $err) = try_sig_mask { ... }
Complete an action with several signals masked, they will be unmasked at the end allowing any signals that were intercepted to get handled.
This is primarily used when you need to make several actions atomic (against some signals anyway).
Signals that are intercepted:
- SIGINT
- SIGALRM
- SIGHUP
- SIGTERM
- SIGUSR1
- SIGUSR2
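A small usage sketch combining these helpers (the file names are made up):

  use Test2::Util qw/try_sig_mask do_rename/;

  my ($ok, $err) = try_sig_mask {
      # The rename happens with the signals listed above masked.
      my ($ren_ok, $ren_err) = do_rename('report.tmp', 'report.txt');
      die $ren_err unless $ren_ok;
  };
  die $err unless $ok;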
NOTES && CAVEATS
- 5.10.0
Devel::Cover does not support threads. CAN_THREAD will return false if Devel::Cover is loaded before the check is first run.
SOURCE
The source code repository for Test2 can be found at.
MAINTAINERS
- Chad Granum <[email protected]>
AUTHORS
- Chad Granum <[email protected]>
- Kent Fredric <[email protected]>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Kubernetes YAML
Configure Dotmesh in Kubernetes
This guide will explain the ins and outs of the Dotmesh Kubernetes YAML file. It’s not a guide to just using it to install and use Dotmesh with Kubernetes - you can find that in the Installation guide. This guide is for people who want to modify the YAML and do non-standard installations.
We assume you have read the Concrete Architecture guide, and understand what the different components of a Dotmesh cluster are.
Using customised YAML.
The Dotmesh YAML is split between two files.
The first file is a ConfigMap that, as you might expect, provides configuration used by the Dotmesh Operator to configure Dotmesh on your cluster; that’s the one you’re most likely to need to modify.
The second file actually sets up the components of the cluster, and is less likely to need changing. However, this guide will explain both in detail, to enable customised setups.
We provide pre-customised versions of both YAML files:
ConfigMap
Core YAML
- Core YAML for Kubernetes 1.7
- Core YAML for Kubernetes 1.8
- Core YAML for Kubernetes 1.8 on AKS (Azure)
Getting the YAML ready for customisation
Grab the most appropriate base YAML for your situation, and customise it like so:
$ curl > configmap-default.yaml $ curl > dotmesh-default.yaml $ cp configmap-default.yaml configmap-customised.yaml $ cp dotmesh-default.yaml dotmesh-customised.yaml # ...edit configmap-customised.yaml and/or dotmesh-customised.yaml... $ kubectl apply -f $ kubectl apply -f
The ConfigMap
The ConfigMap has the following keys in its
data section:
nodeSelector: A selector for the nodes that should run Dotmesh. If it’s left as the empty string, then Dotmesh will be installed on every node.
upgradesUrl: The URL of the checkpoint server. The Dotmesh server on each node will periodically ping an API call to this URL to find out if a new version is available; if so, a message will be presented to users when they run
dm version. To turn this off so that we don’t know you’re running Dotmesh, set it to the empty string.
upgradesIntervalSeconds: How many seconds to wait between checks of the checkpoint server.
flexvolumeDriverDir: The directory on the nodes (in the host filesystem) where flexdriver plugins need to be installed. This varies between cloud providers; this is the line that changes between the vanilla, GKE and AKS versions of the ConfigMap YAML.
poolName: The name of the ZFS pool to use for backend storage.
logAddress: The IP address of a syslog server to send log messages to. If left as the empty string, then logging will go to standard output (which means is recommended).
storageMode: This controls how Dotmesh obtains the actual underlying storage used to store the dots. Valid values are
localand
pvcPerNode.
local.poolSizePerNode: (
storageMode=
localonly) How large a pool file to create on each node. Defaults to
10Gfor a ten gigabyte pool.
local.poolLocation: (
storageMode=
localonly) The location on the host filesystem where the pool file will be created. Defaults to
/var/lib/dotmesh.
pvcPerNode.pvSizePerNode: (
storageMode=
pvcPerNodeonly) How large a PVC to request for each node. Defaults to
10Gfor a ten gigabyte PVC.
pvcPerNode.storageClass: (
storageMode=
pvcPerNodeonly) What
storageClassto use when creating PVCs. Defaults to
standard.
local storage mode
In this storage mode, Dotmesh will store the dots in a pool file (with size
local.poolSizePerNode) located in the directory named in
local.poolLocation on each node.
This means that destroying the node will throw away that file. This isn’t a problem if there was no dirty data on this node all the commits on this node have been replicated to other nodes - but if not, then some data will be lost. That’s not nice, so please use
pvcPerNode mode in production clusters. However,
local mode just works out of the box; there’s no requirement for a dynamic provisioner to create PersistentVolume objects when the Dotmesh Operator creates a PersistentVolumeClaim to request storage.
In this mode, the Dotmesh server pods are named
server-(NODE NAME), so it’s easy to tell which one is associated with the storage on which node.
pvcPerNode storage mode
In this storage mode, Dotmesh will store the dots in a Kubernetes PersistentVolume obtained by creating a PersistentVolumeClaim for a volume of size
pvcPerNode.pvSizePerNode and storageClass
pvcPerNode.storageClass, for each node.
For this to work, the PersistentVolumeClaims it creates need to be matched with PersistentVolumes. You can do this manually by creating PersistentVolumes, but it’s intended to be used in a cloud environment with automatic dynamic provisioning.
The PersistentVolumeClaims are created in the
dotmesh namespace, with names of the form
pvc-(SOME UNIQUE ID). The Dotmesh server pods are created with names of the form
server-(PVC NAME)-node-(NODE NAME), making it clear which PVC and which node the server pod is associated with.
Switching modes
We don’t currently officially support switching modes in a running cluster - although it is a feature we could add if there’s demand. For now, we’d recommend creating a new cluster and pulling all your dots across. If you’d like Dotmesh to automatically do this for you by simply changing the ConfigMap and sitting back, please comment on this issue!
Components of the Core YAML.
The YAML is a List, composed of a series of different objects. We’ll summarise them, then look at each in detail.
All namespaced objects are in the
dotmesh namespace; but
ClusterRoles, ClusterRoleBindings and StorageClasses are not
namespaced in Kubernetes.
The following objects comprise the core Dotmesh cluster:
- The
dotmeshServiceAccount
- The
dotmesh-operatorServiceAccount
- The
dotmeshClusterRole
- The
dotmeshClusterRoleBinding
- The
dotmesh-operatorClusterRoleBinding
- The
dotmeshService
- The
dotmesh-operatorDeployment
Then the following comprise the Dynamic Provisioner:
- The
dotmesh-provisionerServiceAccount
- The
dotmesh-provisioner-runnerClusterRole
- The
dotmesh-provisionerClusterRoleBinding
- The
dotmesh-dynamic-provisionerDeployment
- The
dotmeshStorageClass
The
dotmesh-etcd-cluster etcd cluster.
We use the coreos etcd
operator to set
up our own dedicated etcd cluster in the
dotmesh namespace. Please
consult the operator
documentation
to learn how to customise it.
The
dotmesh ServiceAccount.
This is the ServiceAccount that will be used to run the Dotmesh server. You shouldn’t need to change this.
The
dotmesh-operator ServiceAccount.
This is the ServiceAccount that will be used to run the Dotmesh operator. You shouldn’t need to change this.
The
dotmesh ClusterRole.
This is the role Dotmesh will run under. You shouldn’t need to change this file.
If you are running Kubernetes >=
1.8 then RBAC is probably enabled and you need to create a
cluster-admin role for your cluster.
Here is an example of adding that role for a gcloud user running a GKE cluster:
$ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole cluster-admin \ --user "$(gcloud config get-value core/account)"
The
--user can be replaced with a local user (e.g.
root) or another user depending on where your cluster is deployed.
The
dotmesh ClusterRoleBinding.
This simply binds the
dotmesh ServiceAccount to the
dotmesh
ClusterRole. You shouldn’t need to change this.
The
dotmesh-operator ClusterRoleBinding.
This simply binds the
dotmesh ServiceAccount to the
cluster-admin
ClusterRole (so that it can manage pods and PVCs). You shouldn’t need
to change this.
The
dotmesh Service.
This is the service used to access the Dotmesh server on port 32607. It’s needed both for internal connectivity between nodes in the cluster, and to allow other clusters to push/pull from this one.
The
dotmesh-operator Deployment.
This runs the Dotmesh operator, which then creates a Dotmesh server
pod for every node in your cluster (that matches the
nodeSelector in
the ConfigMap, at any rate).
The operator references the
dotmesh-operator ServiceAccount in order
to be able to create and destroy pods and PVCs.
The Dotmesh server pods it creates reference the
dotmesh
ServiceAccount in order to obtain the privileges it needs.
They also refer to the
dotmesh secret (also in the
dotmesh
namespace) to configure the initial API key and admin password, which
is not created by the YAML - you have to provide these secrets
yourself; we wouldn’t dream of shipping you a default API key! This
gets mounted into the container filesystem at
/secret.
The
dotmesh-provisioner ServiceAccount.
This is the ServiceAccount that will be used to run the Dotmesh provisioner. You shouldn’t need to change this.
The
dotmesh-provisioner-runner ClusterRole.
This is the role the Dotmesh provisioner will run under. You shouldn’t need to change this.
The
dotmesh-provisioner ClusterRoleBinding.
This simply binds the
dotmesh-provisioner ServiceAccount to the
dotmesh-provisioner-runner ClusterRole. You shouldn’t need to change
this.
The
dotmesh-dynamic-provisioner Deployment.
This actually runs the dynamic provisioner. Only one replica of it needs to run somewhere in the cluster; it just looks for Dotmesh PVCs and creates corresponding PVs, so it doesn’t need to actually run on very node.
It references the
dotmesh secret from the
dotmesh namespace, in
order to obtain the cluster’s admin API key so it can communicate with
the Dotmesh server.
The
dotmesh StorageClass.
This defines a default
dotmesh StorageClass that, when referenced
from a PersistentVolumeClaim, will cause Dotmesh to manage the
resulting volume.
In the
parameters section, there’s a single configurable
item.
dotmeshNamespace is the default namespace for dots accessed
through this StorageClass. You probably don’t need to change it unless
you’re doing something interesting.
Dotmesh PersistentVolumeClaims.
A PersistentVolumeClaim using a Dotmesh StorageClass has the following structure:
kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-pvc annotations: dotmeshNamespace: admin dotmeshName: example dotmeshSubdot: logging_db spec: storageClassName: dotmesh accessModes: - ReadWriteOnce resources: requests: storage: 1Gi
The interesting parts:
spec.storageClassNamemust reference a suitably configured Dotmesh StorageClass, or Dotmesh won’t manage this volume.
metadata.annotations.dotmeshNamespacecan be used to override the namespace in which the dot is kept. The default is inherited from the StorageClass, or defaults to
adminif none is specifed there, so it needn’t be specified here unless you’re doing something strange.
metadata.annotations.dotmeshNameis the name of the dot. If you don’t specify it, then the name of the PVC (in this case,
example-pvc) will be used.
metadata.annotations.dotmeshSubdotis the name of the subdot to use. If left unspecified, then the default of
__default__will be used. Use
__root__to reference the root of the dot, or any other name to use a specific subdot. | https://docs.dotmesh.com/references/kubernetes/ | 2018-11-13T01:30:58 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.dotmesh.com |
Tutorial: Configure G Suite for automatic user provisioning
The objective of this tutorial is to show you how to automatically provision and de-provision user accounts from Azure Active Directory (Azure AD) to G Suite..
Assign users to G Suite
Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user account provisioning, only the users and groups that have been "assigned" to an application in Azure AD are synchronized.
Before you configure and enable the provisioning service, you need to decide which users or groups in Azure AD need access to your app. After you've made this decision, you can assign these users to your app by following the instructions in Assign a user or group to an enterprise app.
Important
We recommend that a single Azure AD user be assigned to G Suite to test the provisioning configuration. You can assign additional users and groups later.
When you assign a user to G Suite, select the User or Group role in the assignment dialog box. The Default Access role doesn't work for provisioning.
Enable automated user provisioning
This section guides you through the process of connecting your Azure AD to the user account provisioning API of G Suite. It also helps you configure the provisioning service to create, update, and disable assigned user accounts in G Suite based on user and group assignment in Azure AD.
Tip
You might also choose to enable SAML-based single sign-on for G Suites, by following the instructions in the Azure portal. Single sign-on can be configured independently of automatic provisioning, though these two features compliment each other.
Configure automatic user account provisioning
Note
Another viable option for automating user provisioning to G Suite is to use Google Apps Directory Sync (GADS). GADS provisions your on-premises Active Directory identities to G Suite. In contrast, the solution in this tutorial provisions your Azure Active Directory (cloud) users and email-enabled groups to G Suite.
Sign in to the Google Apps Admin console with your administrator account, and then select Security. If you don't see the link, it might be hidden under the More Controls menu at the bottom of the screen.
On the Security page, select API Reference.
Select Enable API access.
Important
For every user that you intend to provision to G Suite, their user name in Azure Active Directory must be tied to a custom domain. For example, user names that look like [email protected] are not accepted by G Suite. On the other hand, [email protected] is accepted. You can change an existing user's domain by editing their properties in Azure AD. We've included instructions for how to set a custom domain for both Azure Active Directory and G Suite in the following steps.
If you haven't added a custom domain name to your Azure Active Directory yet, then take the following steps:
a. In the Azure portal, on the left navigation pane, select Active Directory. In the directory list, select your directory.
b. Select Domain name on the left navigation pane, and then select Add.
c. Type your domain name into the Domain name field. This domain name should be the same domain name that you intend to use for G Suite. Then select the Add Domain button.
d. Select Next to go to the verification page. To verify that you own this domain, edit the domain's DNS records according to the values that are provided on this page. You might choose to verify by using either MX records or TXT records, depending on what you select for the Record Type option.
For more comprehensive instructions on how to verify domain names with Azure AD, see Add your own domain name to Azure AD.
e. Repeat the preceding steps for all the domains that you intend to add to your directory.
Note
For user provisioning, the custom domain must match the domain name of the source Azure AD. If they do not match, you may be able to solve the problem by implementing attribute mapping customization.
Now that you have verified all your domains with Azure AD, you must verify them again with Google Apps. For each domain that isn't already registered with Google, take the following steps:
a. In the Google Apps Admin Console, select Domains.
b. Select Add a domain or a domain alias.
c. Select Add another domain, and then type in the name of the domain that you want to add.
d. Select Continue and verify domain ownership. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see Verify your site ownership with Google Apps.
e. Repeat the preceding steps for any additional domains that you intend to add to Google Apps.
Warning
If you change the primary domain for your G Suite tenant, and if you have already configured single sign-on with Azure AD, then you have to repeat step #3 under Step 2: Enable single sign-on.
In the Google Apps Admin console, select Admin Roles.
Determine which admin account you want to use to manage user provisioning. For the admin role of that account, edit the Privileges for that role. Make sure to enable all Admin API Privileges so that this account can be used for provisioning.
Note
If you are configuring a production environment, the best practice is to create an admin account in G Suite specifically for this step. These accounts must have an admin role associated with them that has the necessary API privileges.
In the Azure portal, browse to the Azure Active Directory > Enterprise Apps > All applications section.
If you have already configured G Suite for single sign-on, search for your instance of G Suite by using the search field. Otherwise, select Add, and then search for G Suite or Google Apps in the application gallery. Select your app from the search results, and then add it to your list of applications.
Select your instance of G Suite, and then select the Provisioning tab.
Set the Provisioning Mode to Automatic.
Under the Admin Credentials section, select Authorize. It opens a Google authorization dialog box in a new browser window.
Confirm that you want to give Azure Active Directory permission to make changes to your G Suite tenant. Select Accept.
In the Azure portal, select Test Connection to ensure that Azure AD can connect to your app. If the connection fails, ensure that your G Suite account has Team Admin permissions. Then try the Authorize step again.
Enter the email address of a person or group who should receive provisioning error notifications in the Notification Email field. Then select the check box.
Select Save.
Under the Mappings section, select Synchronize Azure Active Directory Users to Google Apps.
In the Attribute Mappings section, review the user attributes that are synchronized from Azure AD to G Suite. The attributes that are Matching properties are used to match the user accounts in G Suite for update operations. Select Save to commit any changes.
To enable the Azure AD provisioning service for G Suite, change the Provisioning Status to On in Settings.
Select Save.
This process starts the initial synchronization of any users or groups that are assigned to G Suite in the Users and Groups section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes while the service is running. You can use the Synchronization Details section to monitor progress and follow links to provisioning activity logs. These logs describe all actions that are performed by the provisioning service on your app.
For more information on how to read the Azure AD provisioning logs, see Reporting on automatic user account provisioning. | https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/google-apps-provisioning-tutorial | 2018-11-13T00:37:37 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.microsoft.com |
Secret Handling¶
Starting with version 0.11.1, goiardi can use external services to store secrets like public keys, the signing key for shovey, and user password hashes. As of this writing, only Hashicorp’s vault is supported. This is very new functionality, so be aware.
NB: If goiardi has been compiled with the
novault build tag, none of this will be available.
Configuration¶
The relevant options for secret configuration on goiardi’s end are:
--use-external-secrets: Turns on using an external secret store.
--vault-addr=<address>: Address of vault server. Defaults to the value the
VAULT_ADDRenvironment variable, but can be specified here. Optional.
--vault-shovey-key=<path>: Optional path for where shovey’s signing key will be stored in vault. Defaults to “keys/shovey/signing”. Only meaningful, unsurprisingly, if shovey is enabled.
Each of the above command-line flags may also be set in the configuration file, with the
-- removed.
Additionally, the
VAULT_TOKEN environment variable needs to be set. This can either be set in the configuration file in the
env-vars stanza in the configuration file, or exported to goiardi in one of the many other ways that’s possible.
To set up vault itself, see the intro and the general documentation for that program. For goiardi to work right with vault, there will need to be a backend mounted with
-path=keys before goiardi is started.
Populating¶
A new goiardi installation won’t need to do anything special to use vault for secrets - assuming everything’s set up properly, new clients and users will work as expected.
Existing goiardi installations will need to transfer their various secrets into vault. A persistent but not DB backed goiardi installation will need to export and import all of goiardi’s data. With MySQL or Postgres, it’s much simpler.
For each secret, get the key or password hash from the database for each object and make a JSON file like this:
{ "secretType": "secret-data\nwith\nescaped\nnew\nlines\nif-any" }
(Once everything looks good with the secrets being stored in vault, those columns in the database should be cleared.)
The “secretType” is “pubKey” for public keys, “passwd” for password hashes, and “RSAKey” for the shovey signing key.
Optionally, you can add a
ttl (with values like “60s”, “30m”, etc) field to that JSON, so that goiardi will refetch the secret after that much time has passed.
Now this JSON needs to be written to the vault. For client and user public keys, the path is “keys/clients/<name>” for clients and “keys/users/<name>” for users. User password hashes are “keys/passwd/users/<name>”. The shovey signing key is more flexible, but defaults to “keys/shovey/signing”. If you save the shovey key to some other path, set
--vault-shovey-key appropriately. | https://goiardi.readthedocs.io/en/latest/features/secrets.html | 2018-11-13T00:25:34 | CC-MAIN-2018-47 | 1542039741176.4 | [] | goiardi.readthedocs.io |
API Gateway 7.5.3 Policy Developer Guide Scripting language filter Overview Write a custom script To write a custom script, you must implement the invoke() method. This method takes a com.vordel.circuit.Message object as a parameter and returns a boolean result. The API Gateway provides a Script Library that contains a number of prewritten. See also Policy Studio preferences. Configure a script filter You can write or edit the JavaScript, Groovy, or Jython.JavaScript:. Further information For more details on using scripts to extend API Gateway, see the API Gateway Developer Guide. Related Links | https://docs.axway.com/bundle/APIGateway_753_PolicyDevGuide_allOS_en_HTML5/page/Content/PolicyDevTopics/utility_scripting.htm | 2018-11-13T00:45:06 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.axway.com |
Antispam Tools
To protect your users from spam, you can use the following tools with your Plesk:
- SpamAssassin spam filter. It is a powerful spam filter that uses a wide variety of local and network tests to identify spam signatures.
You can configure the spam filter so as to either delete suspicious messages when they come to your mail server, or change the subject line and add "X-Spam-Flag: YES" and "X-Spam-Status: Yes" headers to the messages. The latter can be useful for users who prefer to filter mail with mail filtering programs installed on their own computers.
To learn more about SpamAssassin, visit.
To configure and switch on the SpamAssassin filter, proceed to the section SpamAssassin Spam Filter.
- DomainKeys. DomainKeys is a spam protection system based on sender authentication. When an email claims to originate from a certain domain, DomainKeys provides a mechanism by which the recipient system can credibly determine that the email did in fact originate from a person or system authorized to send email for that domain. If the sender verification fails, the recipient system discards such email messages. To configure the DomainKeys system on your server, refer to the section DomainKeys Protection.
- '550' error, or rejection of the requested connection.
To configure your mail server for working with DNSBL databases, proceed to the section DNS Blackhole Lists.
- Sender Policy Framework .
To learn more about SPF, visit.
To enable filtering based on SPF, proceed to the section Sender Policy Framework System (Linux).
- Server-wide.
To set up server-wide black and white lists, proceed to the section Server-wide Black and White Lists.
-.
The greylisting protection system also takes into account the server-wide and per-user black and white lists of email senders: email from the white-listed senders is accepted without passing through the greylisting check, and mail from the black-listed senders is always rejected.
When the greylisting support components are installed on the server, then greylisting is automatically switched on for all domains. You can switch off and on greylisting protection for all domains at once (at Tools & Settings > Spam Filter ), or for individual subscriptions (in Customer Panel > Mail section > Change Settings). | https://docs.plesk.com/en-US/12.5/administrator-guide/mail/antispam-tools.59431/ | 2018-11-13T01:23:03 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.plesk.com |
Bigcommerce
Bigcommerce is an e-commerce platform for SMBs. You can make your Bigcommerce store multilingual using Transifex Live and sell your products worldwide. We have a live example of a localized Bigcommerce store. Use the language drop-down at the bottom right to switch languages.
Below, you'll find instructions for translating your Bigcommerce store.
Note
Transifex Live only works with Bigcommerce storefronts. For example, the home page and product pages. The shopping cart and check out pages will be only partially localized because those pages live under a subdomain different from your storefront's. This is done in order to protect the shopper's security. Before you begin, you must have a Transifex account and a project you will be associating with your Bigcommerce:
In your Bigcommerce control panel, go to Setup & Tools → Set up your store → Web Analytics.
Check the box next to Google Analytics, then click Save.
Click the Google Analytics tab next to the General Settings tab and paste your Transifex Live JavaScript snippet into the Tracking Code field.
Click Save and you're on to the next step!
An alternative way to add the JavaScript Snippet to your Bigcommerce store is through the theme. From your control panel, go to Design → Edit HTML/CSS. Then click on HTMLHead.html (this file may have another name depending on the theme) and paste the snippet right after the comment
<-- visitor tracking code (if any) -->. Click Save to finish updating the theme head.. | https://docs.transifex.com/integrations/bigcommerce/ | 2018-11-13T01:38:04 | CC-MAIN-2018-47 | 1542039741176.4 | [] | docs.transifex.com |
Agent Group Membership Details Report
From Genesys Documentation
This topic is part of the manual Historical Reporting with Genesys CX Insights for version Current of Reporting.
Contents
Examine the distribution of agents among agent groups.
Related pages:
Understanding the Agent Group Membership Details Report
Use the (Details folders) Agent Group Membership Details Report to understand how agents are distributed among agent groups, and learn when each agent entered and exited each group.
You can specify the Date, Agent Group, and Agent.To get a better idea of what this report looks like, view sample output from the report:
HRCXIAgentGroupMembershipDetails.pdf
The following tables explain the prompts you can select when you generate the report, and the metrics and attributes that are represented in the report: | https://all.docs.genesys.com/PEC-REP/Current/RPRT/HRCXIAgentGrpMbrshpReport | 2021-01-16T03:20:34 | CC-MAIN-2021-04 | 1610703499999.6 | [] | all.docs.genesys.com |
Backward compatibility risk areas in the API
BMC Cloud Lifecycle Management version 3.0 contains several major changes to the API. BMC has a policy of maintaining backward compatibility with previous versions of the API. However, this policy does not guarantee that existing customizations that use the API will not need changes to take advantage of new features. This topic provides an overview of the changes and highlights the risk areas.
Cloud API changes
In version 3.0, some of the core model and, therefore, some API assumptions, have changed. As a result of these changes, old parts of the model have been deprecated. Deprecated APIs work with old objects, but might not work with new objects. This is a core risk for field use of the API. Because core assumptions have changed, code written against the old assumptions might need to be adapted to the new assumptions. Preexisting objects satisfy the old assumptions, so the APIs are guaranteed to work against those objects.
To better understand the deprecation issues, consider the following example: suppose you have a customization that includes API calls to find the names of all networks in a network container. To find network containers that do not have zones — a new capability in version 3.0 — this customization must be modified. However, this customization continues to function correctly when used with network containers that were created in BMC Cloud Lifecycle Management 2.1 and upgraded to version 3.0.
Version 3.0 maintains the old relationships and attributes for each API and returns them when an object is retrieved by search or by
GET requests. However, because the API implementation ignores old relationships and attributes during
PUT and
POST requests, any customizations that use these features must be modified to call the equivalent new APIs (see Model changes).
Firewall and load balancer management changes
Existing customizations that involve firewall and load balancer management might need to be modified. In version 2.1,
FirewallRule,
LoadBalancerPool, and
LBPoolEntry class objects are not persisted in the CloudDB and can be read only from the object provider. In version 3.0, these objects are persistent first-class objects. Because of this change, search requests involving these objects might need to be modified.
The sequence of calls that you must use to support concurrency during firewall modifications has changed. The following table shows the version 2.1 and version 3.0 flows for firewall modifications. Because the
sessionGuid attribute of the
Firewall class is deprecated, you must now acquire the
sessionGuid attribute from the
NetworkContainer object. The
POST /csm/Firewall/<guid>/replaceRulesForIP request is deprecated, but functions for version 2.1 network containers that have been upgraded. However, to use any newly-created containers, you must use the new APIs (see Model changes).
Version 2.1 versus version 3.0 firewall modification flow
Model changes
The following table summarizes the changes to the model, and lists the deprecated APIs and corresponding new APIs.
Tip
If you can't see the rightmost column of the following table, click Hide left columnor type [. If you are using Safari on a Mac and still cannot see the rightmost column, try using Firefox.
Summary of model changes
Provider API changes
The approach to backward compatibility for the Provider API is different from the approach for the Cloud API because the Provider API is a southbound API. Version 3.0 contains many enhanced features, which cause the resource providers to have new behaviors. Most of the changes belong to the following categories: resource onboarding, container provisioning, and service provisioning. The primary feature responsible for these changes is the networking feature, in which the introduction of dynamic containers has changed the features that are expected from network and compute providers. The following sections describe changes in those categories, and changes to some algorithms that call the Provider API.
Resource onboarding changes
The version 3.0 resource onboarding flow is similar to the version 2.1 flow, but includes some new return values. The following table shows the differences (new features are listed in italics):
Version 2.1 versus version 3.0 resource onboarding flow
The key changes in resource onboarding are:
- A pod must have at least one access switch.
- Each compute resource (cluster or physical server) must be connected to the same switches used in the pod, and the switch names must match the names used in the pod.
- A container blueprint must have a template network container.
The following table lists the changes to the associated Provider APIs:
Changes in the Provider API for resource onboarding
Network container provisioning changes
The changes in network container provisioning are due to the introduction of dynamic network containers and changes in the object model. Dynamic network containers affect the API input parameters because the software uses cloud administrator input for IP address information, enabled and disabled networks, and so on. Changes were made to the object model to reduce the importance of zones. For example, firewalls are no longer in a single zone and no longer represent a single real interface. As such, both the input and the output of the Provider API calls have changed.
Changes in the Provider API for network container provisioning
SOI provisioning changes
Changes to service offering instance (SOI) provisioning are due to the introduction of dynamic containers, zone relaxation, and multi-ACL firewalls. The following table shows the differences between the version 2.1 and version 3.0 provisioning flows (new actions are listed in italics).
Version 2.1 versus version 3.0 SOI provisioning flow
Changes in the Provider API for SOI provisioning
Callout API changes
The core piece of the Callout API, registration of a new callout, does not have any changes in version 3.0. However, because callouts receive data from either the Cloud API or the Provider API, depending on the API call to which they are attached, some risk exists in backward compatibility for callouts. This section summarizes the risks and highlights the risk mitigation factors.
High-risk areas for callouts attached to Cloud APIs
Most callouts attached to Cloud APIs remain unaffected, except for callouts that, in their implementation, depend on the network container topology. For example, a callout attached to
POST /csm/ServiceOfferingInstance/bulkCreate that looks up all network containers for the tenant will work. However, if the callout then looks into the network containers to find all load balancers, the callout might fail. If a version 3.0 network container exists for the tenant, the callout might not function correctly. Similarly, for callouts attached to the network container creation APIs, all input arguments are exactly the same. However, if the callout needs to introspect a network container in the system via the Cloud API, it might not function correctly. Success or failure of the callout depends on what you are looking for.
Low-risk areas for callouts attached to Cloud APIs
Callouts that do not look into the network container part of the system and look only in the compute part of the system have lower risk. For example, a callout attached to the
POST csm/ComputeContainer/start API that looks up the IP Address of the second NIC in the compute container functions correctly in version 3.0. Similarly, a callout attached to
POST /csm/ServiceOfferingInstance/bulkCreate that looks up all the IP addresses of all the compute containers continues to function.
High-risk areas for callouts attached to Provider APIs
The changes to the Provider APIs are dramatic in version 3.0. Thus, most of the callouts attached to Provider API calls are at risk. Some of the risk is syntactical and some of the risk is semantic. The biggest risk areas are around networking, however the risk is present in all areas of the Provider API.
For an example of a callout that might seem on the surface to be unaffected syntactically, consider a callout that is attached to the
POST /csm/VirtualGuest API. The callout introspects all the NICs to see all the IPs that the VM has acquired, either through IPAM or DHCP (it can do this if it is a post-callout to the API). In version 2.1, when the virtual guest create API call finishes, all IP addresses are reserved. However, in version 3.0, all IP addresses are not reserved when the call finishes because a new Provider API call that handles NAT is invoked after the virtual guest create API call finishes. Thus, a callout that depends on having a complete picture of the reserved IP addresses does not function correctly in version 3.0, even though it still can see all the IP addresses on the VM.
Low-risk areas for callouts attached to Provider APIs
At the Provider API level, low-risk areas are few and far between. However, in some cases, such as finding information about VMs or physical servers (for example, the host name and IP addresses), and in the absence of any use of NAT features, these callouts function correctly. For example, a callout that registers a VM in Active Directory can find the VM IP addresses in version 3.0 and does not require modification. As long as the VM IP does not use NAT, this callout continues to function as expected. | https://docs.bmc.com/docs/cloudlifecyclemanagement/40/developing/restful-api/backward-compatibility-risk-areas-in-the-api | 2021-01-16T02:43:10 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.bmc.com |
This section instructs you on how to perform the OpenStack performance (load)
testing of your deployment using the
CVP - Performance tests Jenkins
pipeline.
Note
Clone the cvp-configuration to your local Gerrit and use it locally to add new or adjust the existing Rally scenarios.
To perform the OpenStack performance testing:
In your local
cvp-configuration repository, inspect and modify the
following items as required:
The Rally scenarios (
rally/rally_scenarios* files)
The
configure.sh setup script.
In the global view, find the CVP - Performance tests pipeline.
Select the Build with Parameters option from the drop-down menu of the pipeline.
Configure the following parameters as required:
Click Build.
Verify the job status:
GREEN, SUCCESS
Testing has beed performed successfully, no errors found.
YELLOW, UNSTABLE
Some errors occurred during the test run. Proceed to Review the CVP - Performance pipeline tests results.
RED, FAILURE
Testing has failed due to issues with the framework or/and pipeline configuration. Review the console output. | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/cvp/cvp-perf/execute-cvp-perf-pipeline.html | 2021-01-16T02:43:38 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.mirantis.com |
Here you can find a list of setup hints for your website.
Use a static page for homepage and add a nice slider in the header of this.
Read more about how to add a slider to your page from this link about page options.
Create a page for your blog
Set it in
Dashboard > Settings > Reading > Reading Settings
Disable AJAX add to cart to see animated add-to-cart banners on your website
Configure the right amount of products per row and page.
To do this navigate to
Appearance > Customize > WooCommerce > Product Catalog
Configure the right image size and proportion for your products
Here you can check values used in our live demo
Take care of the Thumbnail cropping if you want to achieve our design.
Add a search widget to your site header
Leave widget title empty for design purposes and to achieve our design.
Add a cart widget to your site header
Leave widget title empty for design purposes and to achieve our design.
Use sticky header
Use a Proteo child theme
If you want to modify some parts of the theme, use a child theme is highly recommended.
You can read more about Proteo child themes in this documentation. | https://docs.yithemes.com/yith-proteo/setup-hints/general-hints/ | 2021-01-16T03:05:38 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['https://docs.yithemes.com/yith-proteo/wp-content/uploads/sites/128/2020/05/Screenshot_42-e1588683042912.jpg',
None], dtype=object) ] | docs.yithemes.com |
Rechtspraak
Communicate securely with the Rechtspraak
Set additional login method - Authenticator app
Set up an authenticator app on your mobile phone to receive an access code as a second factor (extra login method) when logging in.
What is an authenticator app?
An authenticator app is an app that provides a changing, unique 6-digit code. The app can be set up on your mobile phone. You can link this app to your Zivver account. You can then use the generated code as a “second factor” to access your account.
How do I set up an authenticator app?
- Open the Zivver WebApp.
- Click the settings for your account in the bottom left.
- Go to the Security tab.
- In the top right corner, click the blue SET button. A new window opens.
- Click NEXT.
- Select the Authenticator app option.
- Click NEXT.
- Follow the instructions on the screen.
Which authenticator app is best to use?
We recommend downloading Google Authenticator from the links below: - Android Play Store. - Apple App Store
It is also possible to add an Authenticator as an extension to Google Chrome. Click here for an explanation. | https://docs.zivver.com/en/rechtspraak/aanmelden-voor-zivver/je-account-beveiligen-met-een-authenticator-app.html | 2021-01-16T02:29:30 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.zivver.com |
Lync Server 2013 2013 administrative tools. For procedures to open the tools to perform management tasks, see Open Lync Server 2013 administrative tools.
Ensure that you review infrastructure, operating system, software, and administrator rights requirements before you install or use the Lync Server administrative tools. For details about infrastructure requirements, see Administrative tools infrastructure requirements in Lync Server 2013. For details about operating system and software requirements to install the Lync Server administrative tools, see Server and tools operating system support in Lync Server 2013, Additional software requirements for Lync Server 2013, and Additional server support and requirements in Lync Server 2013. The user rights and permissions required to install and use the tools are described in Administrator rights and permissions required for setup and administration of Lync Server 2013..
Deployment Wizard 2013 administrative tools.
Topology Builder
For details about deployment tasks that you can you perform by using Topology Builder, see the Deployment documentation for each server role.
Lync Server Control Panel 2013 administrative tools.
Important
To configure settings using Lync Server Control Panel, you must be logged in using an account that is assigned to the CsAdministrator role. For details about the predefined administrative roles available in Lync Server 2013, see Planning for role-based access control in Lync Server 2013.
To configure settings using Lync Server Control Panel, you must also use a computer with a minimum screen resolution of 1024 x 768.
Lync Server Management Shell 2013 Management Shell documentation or the command-line help for each cmdlet.
Logging Tool.
Important
The Centralized Logging Service is recommended for all logging collection over the Lync Server Logging Tool in all circumstances. The Lync Server Logging Tool will still work, but it will interfere or be rendered mostly ineffective if the Centralized Logging Service is already running. You should use only the Centralized Logging Service or the Lync Server Logging Tool, but never both concurrently. For more information on the Centralized Logging Service and why you should use it exclusively, see Using the Centralized Logging Service in Lync Server 2013.
In This Section
Administrative tools infrastructure requirements in Lync Server 2013
Server and tools operating system support in Lync Server 2013
Administrative tools software requirements in Lync Server 2013
Administrator rights and permissions required for setup and administration of Lync Server 2013
Requirements to publish a topology in Lync Server 2013
Install Lync Server 2013 administrative tools
Open Lync Server 2013 administrative tools
Troubleshooting Lync Server 2013 Control Panel
Using the Centralized Logging Service in Lync Server 2013
See Also
Lync Server 2013 Management Shell | https://docs.microsoft.com/zh-tw/previous-versions/office/lync-server-2013/lync-server-2013-lync-server-administrative-tools | 2021-01-16T03:40:38 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.microsoft.com |
Cameo Enterprise Data Warehouse 18.2 Documentation
All material contained herein is considered proprietary information owned by No Magic, Inc. and is not to be shared, copied, or reproduced by any means. All information copyright 2010-2018 by No Magic, Inc. All Rights Reserved.
Newer versions of Teamwork Cloud (the new product name of Cameo Enterprise Data Warehouse) documentation are available from the following links. | https://docs.nomagic.com/display/CEDW182/Cameo+Enterprise+Data+Warehouse+Documentation | 2021-01-16T02:05:00 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.nomagic.com |
Virtual Appliance Guide
IMPORTANT
The Virtual Appliance has limited disk space and is only intended for product evaluation purposes.
It is NOT intended for enterprise and production deployments.
- Deploying the Virtual Appliance
- Powering on the Virtual Appliance
- Administering the Virtual Appliance
- Updating the host's operating system
- Logging onto the Security Console
- Frequently Asked Questions
Deploying the Virtual Appliance
Read this section to learn how to deploy the Virtual Appliance in one of the supported environments.
Supported environments
The Virtual Appliance is tested and supported in the following environments:
- VMware Player 6 or later
- VMware Workstation 9 or later
- VMware Fusion 8 or later
- VMware vCenter 5.5, 6.0
- VMware ESXi 5.5, 6.0
Downloading the Virtual Appliance
Rapid7 provides the Virtual Appliance as an Open Virtualization Archive (OVA) file. You can download either a Virtual Appliance Security Console (VA) or the Virtual Appliance Scan Engine (VASE). Download links for both are as follows:
Deploying in VMware Player and VMware Workstation
- In VMware Player and VMware Workstation, click "File" -> "Open".
- In the dropdown list, select the group that includes *.ova.
- Select the Virtual Appliance file, and click "Open".
The "Import Virtual Machine" window will appear.
OPTIONAL
You can rename the Virtual Appliance file name if desired.
- Specify the storage path for the Virtual Appliance.
- Click "Import".
The import process converts the Virtual Appliance file to a Virtual Machine Disk Format (VMDK) file. When the import process is complete, the Virtual Appliance appears on the list of available virtual machines in VMware Player.
- Select the Rapid7 Virtual Appliance, and click "Play" or "Power On this Virtual Machine" if using VMware Workstation.
Deploying in vCenter or VMware ESXi
- In vCenter or VMware ESXi, click File | Deploy OVF Template... The Deploy OVF template window appears.
- Locate the downloaded Virtual Appliance file, and click Next. The OVF Template Details panel appears for configuring Virtual Appliance set- tings.
- Enter a name for the Virtual Appliance.
- Select an inventory location, and click Next.
- Select a host or cluster for the Virtual Appliance, and click Next.
- Select a resource pool, and click Next.
- Select a datastore, and click Next.
- Select Thin or Thick (recommended) Provision for the disk format, and click Next.
- Select a network mapping, and click Next.
- Click Finish.
Powering on the Virtual Appliance
- When the import process is complete, select the Virtual Appliance from the list of available virtual machines.
- Click Power on.
- Click the Console tab to view a terminal window for the Virtual Appliance.
Administering the Virtual Appliance
Log in to the Virtual Appliance after it starts to perform any necessary administrative functions. The operating system for the Virtual Appliance is a CIS hardened, minimal install of Ubuntu Server 16.04 LTS.
When startup is complete, the Virtual Appliance window displays a login prompt. If you are logging in for the first time, you will be asked to change the current UNIX password:
- Enter the default username:
nexpose
- Enter the default password:
nexpose
TIP
Your password keystrokes will not appear in the terminal as you type them. Take care that you input the password accurately.
- When prompted, enter the default password again.
- Enter your new password according to the complexity requirements.
Password Complexity Requirements
Passwords must at least 14 characters long and contain at least one uppercase letter, one lowercase letter, one number, and one special character.
- Enter your new password again to confirm the change.
You need the IP address of the Virtual Appliance in order to login to to the Web interface. Run
ifconfig -a to view the IP address.
Updating the host's operating system
As a security best practice, make sure to keep your operating system current with the latest updates. To apply an update, take the following steps:
- Access the operating system of your Virtual Appliance using SSH or by opening the a virtual console.
- Run the following command to update all operating system packages to the latest versions:
1sudo apt-get update && sudo apt-get upgrade
Note
The unattended-updates package is installed and configured to automatically apply security updates when available. The virtual appliance requires access to us.archive.ubuntu.com and security.ubuntu.com to retrieve updated packages. Unattended update logs can be reviewed in /var/log/unattended-upgrades/unattended-upgrades.log
Logging onto the Security Console
You perform all Security Console operations through a Web-based interface, which supports the browsers listed at.
To log onto the Security Console take the following steps:
- Open a web browser.
- Enter the URL for the Virtual Appliance:
https://<Virtual_Appliance_IP>:3780
- Enter the default username (
nxadmin) and password (
nxpassword).
- Click the Logon button.
Change Password
Upon first login the Security Console will prompt you to change your password. Enter the default username and password: nxadmin and nxpassword. Enter a new password, and confirm the new password.
If you are a first-time user and have not activated your license, the Security Console displays an activation dialog box. Enter your license key. If you do not have a license key, visit to start your 30-day free trail.
After you receive the license key, login and enter the license key in the activation window.
Frequently Asked Questions
How do I set up a static IP?
There are two different ways to set up a static IP.
- Option 1 - edit one file only
Open the
/etc/network/interfaces file in a text editor with the following command:
1sudo nano /etc/network/interfaces
Edit the
/etc/network/interfaces config with the following code. Note that you do not need to do the
/etc/resolvconf/resolv.conf.d/tail section if you add the dns-nameservers to the
/etc/network/interfaces conf file.
1auto ens322iface ens32 inet static3address 192.168.0.164netmask 255.255.255.05gateway 192.168.0.16dns-nameservers 8.8.4.4 8.8.8.8
- Option 2 - Edit two files
Open the
/etc/network/interfaces file in a text editor with the following command:
1sudo nano /etc/network/interfaces
Match the corresponding lines to the following values:
1auto ens322iface ens32 inet static
Add the following address, netmask, and gateway lines and specify the values as desired.
NOTE
Values shown here are only examples.
ens32 is the default network interface for the OVAs. Run
ifconfig to display existing network interfaces to confirm which interface is in use.
1address 10.0.0.1002netmask 255.255.255.03gateway 10.0.0.1
How do I set up DNS?
Create the
/etc/resolvconf/resolv.conf.d/tail file with the following command:
1sudo nano /etc/resolvconf/resolv.conf.d/tail
Add the following lines and specify values according to your configuration requirements:
1nameserver 8.8.8.82nameserver 8.8.4.43search local.company.com internal.company.com
How do I restart networking?
In order for the static IP and DNS changes to take effect, the existing IP must be flushed with the following command:
1sudo ip addr flush ens32
To restart the networking service, use the following command:
1sudo systemctl restart networking.service
How do I set the system time?
The virtual appliance comes preinstalled with chrony. To check the current system time, run the
chronyc tracking command.
Chrony can be configured by editing the
/etc/chrony/chrony.conf file.
Please see for complete documentation.
To manually sync the time, run the following commands:
1sudo service chrony stop2sudo chronyd -q 'pool pool.ntp.org iburst'3sudo service chrony start
To change the timezone, run
sudo dpkg-reconfigure tzdata and select the desired timezone.
How do I start, stop, and check the status of the console and engine services?
Console
1sudo systemctl status nexposeconsole.service2sudo systemctl start nexposeconsole.service3sudo systemctl stop nexposeconsole.service
Engine
1sudo systemctl status nexposeengine.service2sudo systemctl start nexposeengine.service3sudo systemctl stop nexposeengine.service
What is the OS account lockout policy?
Accounts will get locked out after 5 invalid login attempts. Accounts will get automatically unlocked after 15 minutes. | https://docs.rapid7.com/nexpose/virtual-appliance-guide/ | 2021-01-16T01:58:41 | CC-MAIN-2021-04 | 1610703499999.6 | [array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/4ff4b5f-ova1.png',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/978efd4-ova2.png',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/f0ce0ec-ova3.png',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/f4e8e68-deploy-ova-template.png',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/338bc8a-ova-template-details.png',
None], dtype=object)
array(['/areas/docs/_repos//product-documentation__master/b157b8886c548d94cd89fa31b5cbbad9e6d0c00d/nexpose/images/9482710-new_password.png',
None], dtype=object) ] | docs.rapid7.com |
Provider::ExecQuery method (provider.h)
[The Provider class is part of the WMI Provider Framework which is now considered in final state, and no further development, enhancements, or updates will be available for non-security related issues affecting these libraries. The MI APIs should be used for all new development.]
The ExecQuery method is called by WMI to process a WMI Query Language (WQL) query.
Syntax
HRESULT ExecQuery( MethodContext *pMethodContext, CFrameworkQuery & cQuery, long lFlags );
Parameters
pMethodContext
Pointer to the context object for this call. This value contains any IWbemContext properties specified by the client. Also, this pointer must be used as a parameter to any calls back into WMI.
cQuery
Pointer to a query that has already been parsed by the provider framework.
lFlags
Bitmask of flags with information about the execute query operation. This is the value specified by the client in the IWbemServices::ExecQuery method.
The following flags are handled by (and filtered out) by WMI:
- WBEM_FLAG_ENSURE_LOCATABLE
- WBEM_FLAG_FORWARD_ONLY
- WBEM_FLAG_BIDIRECTIONAL
- WBEM_FLAG_USE_AMENDED_QUALIFIERS
- WBEM_FLAG_RETURN_IMMEDIATELY
- WBEM_FLAG_PROTOTYPE
Return value
The default framework provider implementation of this method returns WBEM_E_PROVIDER_NOT_CAPABLE to the calling method. The IWbemServices::ExecQuery method lists the common return values, although you can choose to return any COM return code.
Remarks
WMI often calls ExecQuery in response to a client call to IWbemServices::ExecQuery, where the client passes in either a list of selected properties or a WHERE clause. WMI can also call ExecQuery if the client query contains an "ASSOCIATORS OF" or "REFERENCES OF" statement describing your class. If your implementation of ExecQuery returns WBEM_E_NOT_SUPPORTED, the client relies on WMI to handle the query.
WMI handles a query by calling your implementation of CreateInstanceEnum to provide all the instances. WMI then filters the resulting instances before returning the instances to the client. Therefore, any implementation of ExecQuery you create must be more efficient than CreateInstanceEnum.
The following describes a common implementation of ExecQuery:
- Create an empty instance of your class using Provider::CreateNewInstance.
- Determine the subset of instances that you should create.
You can use methods such as IsPropertyRequired to see what properties are required, and GetValuesForProp to see what instances WMI requires. Other methods that deal with requested properties include CFrameworkQuery::GetRequiredProperties, CFrameworkQuery::AllPropertiesAreRequired, and CFrameworkQuery::KeysOnly.
- Populate the properties of the empty instance using the Set methods of the CInstance class, such as CInstance::SetByte or CInstance::SetStringArray.
- Send the instance back to the client using CInstance::Commit.
- Return the appropriate return values.
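Put together, a bare-bones C++ sketch of that flow might look like the following. This is illustrative only: the class name CMyProvider and the property name are invented, instance selection and memory management are omitted, and only the framework calls named above (KeysOnly, CreateNewInstance, SetByte, Commit) are used.

// Illustrative skeleton only -- not a complete or official implementation.
HRESULT CMyProvider::ExecQuery(
    MethodContext *pMethodContext,
    CFrameworkQuery &cQuery,
    long lFlags)
{
    HRESULT hr = WBEM_S_NO_ERROR;
    bool keysOnly = cQuery.KeysOnly();   // did the client request key properties only?

    // For each instance that matches the query (selection logic omitted):
    {
        CInstance *pInstance = CreateNewInstance(pMethodContext);

        if (!keysOnly)
        {
            // Populate only the requested properties, e.g.:
            // pInstance->SetByte(L"SomeProperty", someValue);
        }

        // Send the instance back to the client.
        hr = pInstance->Commit();
        // (Instance cleanup / smart-pointer handling omitted from this sketch.)
    }

    return hr;  // or WBEM_E_PROVIDER_NOT_CAPABLE to let WMI do the filtering
}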
The default ExecQuery framework provider implementation returns WBEM_E_PROVIDER_NOT_CAPABLE. If you implement ExecQuery, you should use the common return values listed in IWbemServices::ExecQuery. If necessary, however, you can return any COM return code.
However, you should keep the following in mind when writing your framework provider:
- Make sure you support standard queries in your association class, especially queries where the reference properties are used in a WHERE clause. For more information, see CFrameworkQuery::GetValuesForProp.
- In your association class support, when you check to see if the endpoints exist, ensure you use the CWbemProviderGlue::GetInstanceKeysByPath or CWbemProviderGlue::GetInstancePropertiesByPath methods.
These methods allow the endpoints to skip populating resource-intensive or unneeded properties.
- Make sure any association endpoint classes support per-property Get methods. For more information, see Supporting Partial-Instance Operations. For more information about the query parameter, see CFrameworkQuery. | https://docs.microsoft.com/en-us/windows/win32/api/provider/nf-provider-provider-execquery | 2021-02-24T21:25:19 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.microsoft.com |
attribute
ndarray.base
Base object if memory is from some other object.
The base of an array that owns its memory is None:
>>> x = np.array([1,2,3,4])
>>> x.base is None
True
Slicing creates a view, whose memory is shared with x:
>>> y = x[2:]
>>> y.base is x
True
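On recent NumPy versions the base chain is collapsed, so a view of a view still reports the owning array as its base; transposes behave the same way because they are views too:

>>> z = y[1:]          # a view of the view y
>>> z.base is x        # base points at the array that owns the memory
True
>>> x.T.base is x
True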
© 2005–2019 NumPy Developers
Licensed under the 3-clause BSD License. | https://docs.w3cub.com/numpy~1.17/generated/numpy.ndarray.base | 2021-02-24T21:16:03 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.w3cub.com |
Open vSwitch Library ABI Updates¶
This file describes the manner in which the Open vSwitch shared library manages different ABI and API revisions. This document aims to describe the background, goals, and concrete mechanisms used to export code-space functionality so that it may be shared between multiple applications.
Overview¶
C and C++ applications often use ‘external’ functionality, such as printing specialized data types or parsing messages, which has been exported for common use. There are many possible ways for applications to call such external functionality, for instance by including an appropriate inline definition which the compiler can emit as code in each function it appears. One such way of exporting and importing such functionality is through the use of a library of code.
When a compiler builds object code from source files to produce object code, the results are binary data arranged with specific calling conventions, alignments, and order suitable for a run-time environment or linker. This result defines a specific ABI.
As a library of code develops and its exported interfaces change over time, the resulting ABI may change as well. Therefore, care must be taken to ensure the changes made to libraries of code are effectively communicated to applications which use them. This includes informing the applications when incompatible changes are made.
The Open vSwitch project exports much of its functionality through multiple such libraries of code. These libraries are intended for multiple applications to import and use. As the Open vSwitch project continues to evolve and change, its exported code will evolve as well. To ensure that applications linking to these libraries are aware of these changes, Open vSwitch employs libtool version stamps.
ABI Policy¶
Open vSwitch will export the ABI version at the time of release, such that the library name will be the major.minor version, and the rest of the release version information will be conveyed with a libtool interface version.
The intent is for Open vSwitch to maintain ABI stability for each minor revision only (so that Open vSwitch release 2.5 carries a guarantee for all 2.5.ZZ micro-releases). This means that any porting effort to stable branches must take care not to disrupt the existing ABI.
In the event that a bug must be fixed in a backwards-incompatible way, developers must bump the libtool ‘current’ version to inform the linker of the ABI breakage. This will signal that libraries exposed by the subsequent release will not maintain ABI stability with the previous version.
Coding¶
At build time, if building shared libraries by passing the --enable-shared argument to ./configure, version information is extracted from the $PACKAGE_VERSION automake variable and formatted into the appropriate arguments. These get exported for use in Makefiles as $OVS_LTINFO, and passed to each exported library along with other LDFLAGS.
Therefore, when adding a new library to the build system, these version flags should be included with the $LDFLAGS variable. Nothing else needs to be done.
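For illustration, a hypothetical automake fragment for a newly added library might look like this; the library name and source file are made up, and the relevant detail is simply passing $(OVS_LTINFO) through the library's LDFLAGS:

# Hypothetical example only -- not an actual Open vSwitch library.
lib_LTLIBRARIES += lib/libmynewlib.la
lib_libmynewlib_la_SOURCES = lib/mynewlib.c
lib_libmynewlib_la_LDFLAGS = $(OVS_LTINFO)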
Changing an exported function definition (from a file in, for instance lib/*.h) is only permitted from minor release to minor release. Likewise changes to library data structures should only occur from minor release to minor release. | https://docs.ovn.org/en/latest/internals/contributing/libopenvswitch-abi.html | 2021-02-24T20:13:33 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.ovn.org |
16.3.1. shp2pgsql¶
shp2pgsql is a commandline tool to import ESRI Shapefile to the database. Under Unix, you can use the following command for importing a new PostGIS table:
shp2pgsql -s <SRID> -c -D -I <path to shapefile> <schema>.<table> | \ psql -d <databasename> -h <hostname> -U <username>
If you use the -I parameter, you may encounter the following error:
ERROR: operator class "gist_geometry_ops" does not exist for access method "gist"
This is a known issue regarding the creation in situ of a spatial index for the data you’re importing. To avoid the error, exclude the -I parameter. This will mean that no spatial index is being created directly, and you’ll need to create it in the database after the data have been imported. (The creation of a spatial index will be covered in the next lesson.)
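For reference, creating the spatial index afterwards is a single SQL statement run against the imported table; the schema, table, and index names below are placeholders, and the geometry column is commonly geom (or the_geom on older PostGIS versions):

CREATE INDEX mytable_geom_idx
  ON myschema.mytable
  USING GIST (geom);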
16.3.2. pgsql2shp¶
pgsql2shp is a commandline tool to export PostGIS Tables, Views or SQL select queries. To do this under Unix:
pgsql2shp -f <path to new shapefile> -g <geometry column name> \ -h <hostname> -U <username> <databasename> <table | view>
To export the data using a query:
pgsql2shp -f <path to new shapefile> -g <geometry column name> \ -h <hostname> -U <username> "<query>"
16.3.3. ogr2ogr¶
ogr2ogr is a very powerful tool to convert data into and from postgis to many data formats. ogr2ogr is part of the GDAL/OGR Software and has to be installed separately. To export a table from PostGIS to GML, you can use this command:
ogr2ogr -f GML export.gml PG:'dbname=<databasename> user=<username> host=<hostname>' <Name of PostGIS-Table>
16.3.4. DB Manager¶
You may have noticed another option in the Database menu labeled DB Manager. This is a tool that provides a unified interface for interacting with spatial databases including PostGIS. It also allows you to import and export from databases to other formats. Since the next module is largely devoted to using this tool, we will only briefly mention it here.
16.3.5. In Conclusion¶
Importing and exporting data to and from the database can be done in many various ways. Especially when using disparate data sources, you will probably use these functions (or others like them) on a regular basis. | https://docs.qgis.org/3.10/en/docs/training_manual/spatial_databases/import_export.html | 2021-02-24T20:11:20 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.qgis.org |
Struct ed25519_dalek::PublicKey
An ed25519 public key.
Implementations
impl PublicKey[src]
pub fn to_bytes(&self) -> [u8; 32][src]
Convert this public key to a byte array.
pub fn as_bytes<'a>(&'a self) -> &'a [u8; 32][src]
View this public key as a byte array.
pub fn from_bytes(bytes: &[u8]) -> Result<PublicKey, SignatureError>[src]
Construct a PublicKey from a slice of bytes.
Warning
The caller is responsible for ensuring that the bytes passed into this method actually represent a curve25519_dalek::curve::CompressedEdwardsY and that said compressed point is actually a point on the curve.
Example
use ed25519_dalek::PublicKey; use ed25519_dalek::PUBLIC_KEY_LENGTH; use ed25519_dalek::SignatureError; let public_key_bytes: [u8; PUBLIC_KEY_LENGTH] = [ 215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, 114, 243, 218, 166, 35, 37, 175, 2, 26, 104, 247, 7, 81, 26]; let public_key = PublicKey::from_bytes(&public_key_bytes)?;
Returns
A Result whose okay value is an EdDSA PublicKey or whose error value is an SignatureError describing the error that occurred.
pub fn verify_prehashed<D>(
    &self,
    prehashed_message: D,
    context: Option<&[u8]>,
    signature: &Signature
) -> Result<(), SignatureError> where
    D: Digest<OutputSize = U64>,
[src]
Verify a signature on a prehashed_message using the Ed25519ph algorithm.
Inputs
- prehashed_message is an instantiated hash digest with 512-bits of output which has had the message to be signed previously fed into its state.
- context is an optional context string, up to 255 bytes inclusive, which may be used to provide additional domain separation. If not set, this will default to an empty string.
- signature is a purported Ed25519ph [Signature] on the prehashed_message.
Returns
Returns true if the signature was a valid signature created by this Keypair on the prehashed_message.
pub fn verify_strict(
    &self,
    message: &[u8],
    signature: &Signature
) -> Result<(), SignatureError>
[src]
Strictly verify a signature on a message with this keypair's public key.
On The (Multiple) Sources of Malleability in Ed25519 Signatures
This version of verification is technically non-RFC8032 compliant. The following explains why.
- Scalar Malleability
The authors of the RFC explicitly stated that verification of an ed25519 signature must fail if the scalar s is not properly reduced mod \ell:
To verify a signature on a message M using public key A, with F being 0 for Ed25519ctx, 1 for Ed25519ph, and if Ed25519ctx or Ed25519ph is being used, C being the context, first split the signature into two 32-octet halves. Decode the first half as a point R, and the second half as an integer S, in the range 0 <= s < L. Decode the public key A as point A'. If any of the decodings fail (including S being out of range), the signature is invalid.
All verify_*() functions within ed25519-dalek perform this check.
- Point malleability
The authors of the RFC added in a malleability check to step #3 in §5.1.7, for small torsion components in the R value of the signature, which is not strictly required, as they state:
Check the group equation [8][S]B = [8]R + [8][k]A'. It's sufficient, but not required, to instead check [S]B = R + [k]A'.
History of Malleability Checks
As originally defined (cf. the "Malleability" section in the README of this repo), ed25519 signatures didn't consider any form of malleability to be an issue. Later the scalar malleability was considered important. Still later, particularly with interests in cryptocurrency design and in unique identities (e.g. for Signal users, Tor onion services, etc.), the group element malleability became a concern.
However, libraries had already been created to conform to the original definition. One well-used library in particular even implemented the group element malleability check, but only for batch verification! Which meant that even using the same library, a single signature could verify fine individually, but suddenly, when verifying it with a bunch of other signatures, the whole batch would fail!
"Strict" Verification
This method performs both of the above signature malleability checks.
It must be done as a separate method because one doesn't simply get to change the definition of a cryptographic primitive ten years after-the-fact with zero consideration for backwards compatibility in hardware and protocols which have it already have the older definition baked in.
Return
Returns Ok(()) if the signature is valid, and Err otherwise.
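As an illustrative sketch (assuming the rand 0.7 OsRng and the Signer/Verifier traits re-exported by the crate), verify_strict is called exactly like the ordinary verification:

use ed25519_dalek::{Keypair, Signature, Signer, Verifier};
use rand::rngs::OsRng;

fn main() {
    let mut csprng = OsRng;
    let keypair: Keypair = Keypair::generate(&mut csprng);

    let message: &[u8] = b"important transaction";
    let signature: Signature = keypair.sign(message);

    // RFC 8032-style check (the scalar range check always applies):
    assert!(keypair.public.verify(message, &signature).is_ok());

    // Stricter check that also rejects small-torsion components in R:
    assert!(keypair.public.verify_strict(message, &signature).is_ok());
}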
Trait Implementations
impl AsRef<[u8]> for PublicKey[src]
impl Clone for PublicKey[src]
impl Copy for PublicKey[src]
impl Debug for PublicKey[src]
impl Default for PublicKey[src]
impl Eq for PublicKey[src]
impl<'a> From<&'a ExpandedSecretKey> for PublicKey[src]
fn from(expanded_secret_key: &ExpandedSecretKey) -> PublicKey[src]
Derive this public key from its corresponding ExpandedSecretKey.
impl<'a> From<&'a SecretKey> for PublicKey[src]
fn from(secret_key: &SecretKey) -> PublicKey[src]
Derive this public key from its corresponding SecretKey.
impl PartialEq<PublicKey> for PublicKey[src]
impl StructuralEq for PublicKey[src]
impl StructuralPartialEq for PublicKey[src]
impl Verifier<Signature> for PublicKey[src]
Auto Trait Implementations
impl RefUnwindSafe for PublicKey
impl Send for PublicKey
impl Sync for PublicKey
impl Unpin for PublicKey
impl UnwindSafe for PublicKey | https://docs.rs/ed25519-dalek/1.0.1/ed25519_dalek/struct.PublicKey.html | 2021-02-24T20:42:00 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.rs
The Deployment of the Management Domain in the Second Region documentation provides general guidance on the deployment of the management domain in Region B based on VMware Validated Design and step-by-step instructions for extending the single-region software-defined data center (SDDC) to dual-region by using NSX-T Federation that spans the software-defined network between the regions.
Intended Audience
The Deployment of the Management Domain in the Second Region documentation is intended for cloud architects, infrastructure administrators, and cloud administrators who are familiar with and want to use VMware software to deploy in a short time and manage a dual-region software-defined data center (SDDC) that meets the requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery support.
Supported VMware Cloud Foundation Version
Deployment of the Management Domain in the Second Region is compatible with VMware Cloud Foundation 4.2.
Required VMware Software
The Deployment of the Management Domain in the Second Region documentation is compliant and validated with certain product versions. See VMware Validated Design Release Notes.
Before You Apply This Guidance
The sequence of the documentation of this design follows the stages for implementing and maintaining an SDDC.
To deploy the management domain in Region B and configure NSX-T Federation Second Region, you must:
Complete the respective Planning and Preparation Workbook with your deployment options included.
Optionally, read Architecture and Design for the Management Domain.
Deploy a single-region SDDC management domain. See Deployment of the Management Domain in the First Region.
See Documentation Map for VMware Validated Design.
The same requirement applies if you are following the VMware Cloud Foundation documentation to deploy a virtual infrastructure workload domain. See the VMware Cloud Foundation documentation.
Using VMware Cloud Foundation for Deployment of the Management Domain in the Second Region. | https://docs.vmware.com/en/VMware-Validated-Design/6.2/sddc-deployment-of-the-management-domain-in-the-second-region/GUID-1DACEE4B-320B-44FB-A25C-FF7B40890BC0.html | 2021-02-24T20:19:18 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com
Free applications for web development from Microsoft!
A Toolkit has now been launched that helps ASP.NET web developers carry out standardized tasks quickly. More information in English below.
What’s new?
1. Web Application Toolkits have a new landing page:
2. We are launching three new Web Application Toolkits: (1) Calendars 1.0, (2) Bing Maps 1.0, (3) Freemium Apps 1.0.
In this Web Application Toolkit you will find a set of reusable custom controls built in Silverlight, which integrated with the Bing Maps Silverlight Control, make a perfect fit for some of the most common location-aware scenarios. With this Toolkit, you will also find a sample Silverlight application showing how to use those controls when implementing a “store locator” scenario on a Web site.
| https://docs.microsoft.com/en-us/archive/blogs/isvnorge/gratis-applikasjoner-til-webutvikling-fra-microsoft | 2020-01-17T20:31:03 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.microsoft.com
the catalina-server.xml file in the <IS_HOME>/repository/conf/tomcat directory. It is essential for this transport to be properly configured in this file for the Management Console to be accessible to users. For information on how to access the management console, see Running the Product.
The following screen depicts the full overview of the management console.
...
The main menu in the Management Console includes the main list of features that the WSO2 Identity Server provides. The main menu is divided into different sections.
Identity section
Entitlement section
Manage section
Monitor menu
The monitor menu includes a list of features focused on providing logs and statistics related to monitoring the Identity Server. For more information on these features and their usage, see the topics on on monitoring the Identity Server.
Configure menu
The configure menu is mainly a list of administration features which can help you customize and configure the Identity Server to suit your specific requirements.
Tools menu
The tools menu includes SAML and XACML tools. For more details on each of these tools and their usage, see the topics on working with tools. | https://docs.wso2.com/pages/diffpages.action?pageId=80724398&originalId=75108399 | 2020-01-17T19:56:08 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.wso2.com |
Requirements¶
Binary Releases¶
Pre-compiled binaries of the latest release of Salmon for a number of different platforms are available under the Releases tab of Salmon's GitHub repository. You should be able to get started quickly by finding a binary from the list that is compatible with your platform. Additionally, you can obtain a Docker image of the latest version from DockerHub using:
> docker pull combinelab/salmon
Requirements for Building from Source¶
Installation¶
After downloading the Salmon source distribution and unpacking it, change into the top-level directory:
> cd salmon
Then, create an out-of-source build directory and change into it:
> mkdir build
> cd build
Then, run CMake to configure the build. You can pass -DCMAKE_INSTALL_PREFIX=<install_dir> to choose the directory where you wish Salmon to be installed. If you don't specify this option, it will be installed locally in the top-level directory (i.e. the directory directly above "build").
There are a number of other libraries upon which Salmon depends, but CMake should fetch these for you automatically.
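Putting the configure, build, and install steps together, a typical sequence from inside the build directory looks roughly like this (the install prefix is a placeholder):

> cmake -DCMAKE_INSTALL_PREFIX=/path/to/install ..
> make
> make install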
You can test the installation by running
> make test
This should run a simple test and tell you if it succeeded or not. | https://salmon.readthedocs.io/en/latest/building.html | 2020-01-17T19:03:26 | CC-MAIN-2020-05 | 1579250590107.3 | [] | salmon.readthedocs.io |
Configuring Word Cloud Components for Build Your Own Reports
When you add a word cloud, at the top of the page, click Worldwide Reports > Report Manager, and on the Reports tab, next to the Report Name under Actions, click Edit.
- Optional: If your report has multiple pages, click the tab for the page that you want to edit.
- Drag Word Cloud to the Drop components to build the report box.
- From the Data Sets list, drag a text-based field to the component; on the Properties tab, click General, and configure the settings:
The following table lists the properties you can change:
- On the Properties tab, click Fields, expand the Field Name, and then configure the available settings:
- To specify the type of numeric value that determines the size of the text, in the Aggregate list, select one of the available options, such as Distinct or Avg.
- To remove a field from the component, click Remove Measure.
This image is an example of the word cloud component with a title configured:
- To save this version of your report specification in the Reports Manager, at the top of the Report Builder page, click Save.
- To make the report available to end users on the Reports page, at the top of the page, click Deploy.
Related Topics | http://docs.snapprotect.com/netapp/v11/article?p=features/reports/custom/t_rptbyo_configure_tag_cloud_component.htm | 2020-01-17T20:37:20 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.snapprotect.com |
Create a new page
Contents
Create a new page.
All requests to the API need the API Token; you can find the token in the plugin settings.
For all requests to the API to write content, you'll need to provide the Authorization Token. To get this token, you need a user with Administrator role. Get the Authorization Token from the user profile.
Request
- Endpoint: /api/pages
- Method: POST
- Content-Type: application/json
Below is the list of parameters allowed for this endpoint.
Response
- HTTP Code: 200
- Content-Type: application/json
- Content
{ "status": "0", "message": "Page created.", "data": { "key": "<PAGE-KEY>" } }
CURL command example
Here is an example that shows you how to create a new page via the command line with the curl command. The data.json file has the basic data needed to create a new page.
Content of file data.json:
{ "token": "24a8857ed78a8c89a91c99afd503afa7", "authentication": "193569a9d341624e967486efb3d36d75", "title": "My dog", "content": "Content of the page here, support Markdown code and HTML code." }
Execute the command and attach the data.json file:
$ curl -X POST \ -H "Content-Type: application/json" \ -d @data.json \ ""
Output:
{ "status": "0", "message": "Page edited.", "data": { "key": "my-dog" } } | https://docs.bludit.com/en/api/create-a-new-page | 2020-01-17T19:36:14 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.bludit.com |
Host Maps visualize hosts together on one screen, with metrics made comprehensible via color and shape.
Filter by limits the Host Map to a specific subset of an infrastructure. The filter input bar in the top left enables filtering of the Host Map by tags as well as Datadog-provided attributes.
If the filter input bar is empty, the map displays all hosts that are reporting the selected metric to Datadog.
Example: if you tag your hosts by the environment they are in, you can filter by 'production' to remove hosts in your staging and other environments from the map. If you want to eliminate all but one host role in production, then add that role to the filter, too—the filters are ANDed together.
Note: there is a distinction between filtering for tag:value and "tag:value"—filtering for tag:value strictly matches the tag, while filtering for "tag:value" performs a search on that text.
Group hosts by tags spatially arranges your hosts into clusters. Any host in a group shares the tag or tags you group by.
A simple example is grouping your hosts by AWS availability zone. If you add a second grouping tag, such as instance type, then the hosts are further subdivided into groups, first by availability zone and then by instance type, as seen below.
Note: Your Datadog host map is automatically grouped by availability-zone. If you would like to change the default grouping, contact Datadog support.
Tags may be applied automatically by Datadog integrations or manually applied. You can use these to filter your hosts.
For example, if some of your hosts are running on AWS, the following AWS-specific tags are available to you right now:
When you’ve identified a host that you want to investigate, click it for details. It zooms in and displays up to six integrations reporting metrics from that host. If there are more than six integrations, they are listed under the “Apps” header in the host’s detail pane, as in the screenshot below.
Click the name of an integration for a condensed dashboard of metrics for that integration. In the screenshot below, “system” has been clicked to get system metrics such as CPU usage, memory usage, disk latency, etc.
By default the color of each host is set to represent the percentage of CPU usage on that host, where the color ranges from green (0% utilized) to orange (100% utilized). You can select different metrics from the Color by selector.
Host Maps can also communicate an additional, optional metric with the size of the hexagon: use the Size by selector.
In the screenshot below, the size of the hexagons is the 15 minute average load, normalized so that machines’ workloads can be compared even if they have different numbers of cores.
Note: The “% CPU utilized” metric uses the most reliable and up-to-date measurement of CPU utilization, whether it is being reported by the Datadog Agent, or directly by AWS or vSphere.
By default, the Host Map only shows hosts that are reporting the selected metric, which can then be used to set a color or size for the individual hexagon within the grid.
If a host is not reporting the selected metric, it can still appear within the Host Map by selecting the “gear” icon on the top-right of the map and enabling “Show hosts with no metrics” in the Host Map settings:
Data in the Host Map is refreshed about once a minute—unless you are continuously interacting with the map. The bottom right of the screen tells you when data was last updated.
If you are an AWS user, you probably use a variety of instance types. Some instances are optimized for memory, some for compute, some are small, some are big.
If you want to reduce your AWS spend, you might start by figuring out what the expensive instances are used for. First group by instance-type and then group by role or name. Take a look at your expensive instance types, such as c3.8xlarge. Are there any host roles whose CPU is underutilized? If so, those roles are candidates for a smaller or cheaper instance type, while other roles may be genuinely heavily loaded.
As seen below, by clicking on the c3.2xlarge group and then sub-grouping by role, you can find that only some of the roles are loaded, while others are nearly idling. If you downgraded those 7 green nodes to a c3.xlarge, you would save almost $13K per year. ($0.21 saved per hour per host x 24 hr/day * 365 days/year * 7 hosts = $12,877.20 / year)
Host maps enable you to see distributions of machines in each of your availability zones (AZ). Filter for the hosts you are interested in, group by AZ, and you can immediately see whether resources need rebalancing.
In the example seen below, there is an uneven distribution of hosts with role:daniels across availability zones. (Daniels is the name of an internal application.)
Imagine you are having a problem in production. Maybe the CPUs on some of your hosts are pegged, which is causing long response times. Host Maps can help you quickly see whether there is anything different about the loaded and not-loaded hosts. You can rapidly group by dimensions you would like to investigate, and visually determine whether the problem servers belong to a certain group.
For example, you can group by availability zone, region, instance type, image, or any tags that you use within your system.
Below is a screenshot from a recent issue at Datadog. Some hosts have much less usable memory than others, despite being part of the same cluster. Grouping by machine image reveals that there were two different images in use, and one of them is overloaded.
Additional helpful documentation, links, and articles: | https://docs.datadoghq.com/infrastructure/hostmap/ | 2020-01-17T18:44:24 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.datadoghq.com |
Thank you for purchasing our theme, Electro. WordPress Toolkit — This toolkit plugin establishes an Envato Marketplace API connection to take advantage of the new wp-list-themes & wp-download methods created specifically for this plugin.
-.
-.
- WooCommerce Quantity Increment — Allows you to add quantity button.
- WPBakery Visual Composer – This is a drag and drop frontend and backend page builder plugin that will save you tons of time working on the site content. This is a premium plugin and comes bundled with your theme.
-. | https://docs.madrasthemes.com/blog/topics/wordpress-themes/electro/getting-started/introduction/ | 2020-01-17T19:44:28 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.madrasthemes.com |
Migrating. | https://docs.plesk.com/en-US/onyx/migration-guide/migrating-from-supported-hosting-platfoms.75497/ | 2020-01-17T18:43:22 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.plesk.com
Planning for the Solaris IPS Package Creation
Before you run the Solaris IPS Packager, you must collect the information that is required during the IPS package creation.
Use the following checklist before creating the IPS package. Record the information so you can refer to it during the package creation.
Select a SnapProtect instance
Select an instance to be used by the package during installation. The instance must already exist in your CommCell environment, because the package does not allow the creation of new instances during installation. In addition, the instance that you select should not be already installed on the clients where the package will be installed.
Determine the IPS package repository
Collect the name and URL of the repository where you plan to host the package. You can use a File System or HTTP repository.
- If you are using a remote HTTP repository, on the remote host, run the following command:
svccfg -s application/pkg/server setprop pkg/readonly=false
The command ensures that the repository is not set to read-only.
- If you do not have a repository ready, you can create a repository during the creation of the package.
- For a File System repository, select a directory for hosting the repository and record its path. The directory must be local directory or an NFS share with at least 1.15 GB of free space available.
- For an HTTP repository, select a subdirectory under rpool/export, a port number, and a publisher name.
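For example, if you choose to create a File System repository yourself before running the packager, a typical Solaris 11 sequence looks like the following; the path and publisher prefix are placeholders:

pkgrepo create /export/ipsrepo
pkgrepo set -s /export/ipsrepo publisher/prefix=mypublisher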
Determine the installation directory
By default, during the package installation, the software is installed under /opt. You can configure the package to install the software on a different directory.
Determine the IPS staging directory
By default, during the package creation, SnapProtect binaries are staged under /opt.
You can stage the binaries on a different directory, but it must be a local directory or an NFS share with at least 1 GB of free space.
Gather installation data for CommCell options
During the package creation, the Solaris IPS Packager requests information related to the CommCell environment, which includes the following items:
- CommServe name
Note: If you want users to specify the CommServe name during installation, configure the package to install the software in decoupled mode.
- Firewall services
- Log directory
- UNIX group and permissions
- Client groups
- Subclient and storage policy
For more information on the above items, see the Gather Installation Data section in Preinstallation Checklist for the File System Agent (UNIX, Linux, or Macintosh). | http://docs.snapprotect.com/netapp/v11/article?p=deployment/install/solaris_ips/r_solaris_ips_plan.htm | 2020-01-17T20:40:58 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.snapprotect.com |
Outlier detection is an algorithmic feature that allows you to detect when a specific group is behaving different compared to its peers. For example, you could detect that one web server in a pool is processing an unusual number of requests, or significantly more 500 errors are happening in one AWS availability zone than the others.
To create an outlier monitor in Datadog, use the main navigation: Monitors –> New Monitor –> Outlier.
Any metric currently reporting to Datadog is available for monitors. For more information, see the Metric Monitor page.
The outlier monitor requires a metric with a group (hosts, availability zones, partitions, etc.) that has three or more members, which exhibit uniform behavior.
When creating the monitor you set, for example:
- Group: <GROUP>
- Time window: 5 minutes, 15 minutes, 1 hour, etc.
- Algorithm: MAD, DBSCAN, scaledMAD, or scaledDBSCAN
- Tolerance: 0.33, 1.0, 3.0, etc.
- %: 10, 20, 30, etc. (only for MAD algorithms)
When setting up an outlier monitor, the time window is an important consideration. If the time window is too large, you might not be alerted in time. If the time window is too short, the alerts are not as resilient to one-off spikes.
To ensure your alert is properly calibrated, set the time window in the preview graph and use the reverse (<<) button to look back in time at outliers that would have triggered an alert. Additionally, you can use this feature to tune your parameters to a specific outlier algorithm.
Datadog offers two types of outlier detection algorithms: DBSCAN/scaledDBSCAN and MAD/scaledMAD. It is recommended to use the default algorithm, DBSCAN. If you have trouble detecting the correct outliers, adjust the parameters of DBSCAN or try the MAD algorithm. The scaled algorithms may be useful if your metrics are large scale and closely clustered.
DBSCAN (density-based spatial clustering of applications with noise) is a popular clustering algorithm. Traditionally, DBSCAN takes:
- a parameter 𝜀 that specifies a distance threshold under which two points are considered to be close;
- the minimum number of points that have to be within a point's 𝜀-radius before that point can start agglomerating.
Datadog uses a simplified form of DBSCAN to detect outliers on timeseries. Each group is considered to be a point in d-dimensions, where d is the number of elements in the timeseries. Any point can agglomerate, and any point not in the largest cluster is considered an outlier. The initial distance threshold is set by creating a new median timeseries by taking the median of the values from the existing timeseries at every time point. The Euclidean distance between each group and the median series is calculated. The threshold is set as the median of these distances, multiplied by a normalizing constant.
Parameters
This implementation of DBSCAN takes one parameter, tolerance, the constant by which the initial threshold is multiplied to yield DBSCAN's distance parameter 𝜀. Set the tolerance parameter according to how similarly you expect your groups to behave—larger values allow for more tolerance in how much a group can deviate from its peers.
MAD (median absolute deviation) is a robust measure of variability, and can be viewed as the robust analog for standard deviation. Robust statistics describe data in a way that is not influenced by outliers.
Parameters
To use MAD for your outlier monitor, configure the parameters tolerance and %.
Tolerance specifies the number of deviations a point needs to be away from the median for it to be considered an outlier. This parameter should be tuned depending on the expected variability of the data. For example, if the data is generally within a small range of values, then this should be small. Otherwise, if points can vary greatly, then set a higher scale so the variabilities do not trigger false positives.
Percent refers to the percentage of points in the series considered as outliers. If this percentage is exceeded, the whole series is marked as an outlier.
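To make the idea concrete, here is a simplified Python sketch of MAD-style group outlier detection. It only illustrates the concept; it is not Datadog's implementation, and the tolerance/% semantics are approximated.

import numpy as np

def mad_outliers(series_by_group, tolerance=3.0, pct=20.0):
    # series_by_group: dict of group name -> 1-D array of equal length.
    # A group is flagged when more than `pct` percent of its points lie
    # further than `tolerance` * MAD from the per-timestamp median series.
    names = list(series_by_group)
    data = np.array([series_by_group[n] for n in names], dtype=float)

    median_series = np.median(data, axis=0)   # median across groups at each time
    deviations = np.abs(data - median_series)
    mad = np.median(deviations)               # overall median absolute deviation

    flagged = {}
    for name, dev in zip(names, deviations):
        outlier_points = dev > tolerance * mad
        flagged[name] = 100.0 * outlier_points.mean() > pct
    return flagged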
DBSCAN and MAD have scaled versions (scaledDBSCAN and scaledMAD). In most situations, the scaled algorithms behave the same as their regular counterparts. However, if DBSCAN/MAD algorithms are identifying outliers within a closely clustered group of metrics, and you would like the outlier detection algorithm to scale with the overall magnitude of the metrics, try the scaled algorithms.
So which algorithm should you use? For most outliers, any algorithm performs well at the default settings. However, there are subtle cases where one algorithm is more appropriate.
In the following image, a group of hosts is flushing their buffers together, while one host is flushing its buffer slightly later. DBSCAN picks this up as an outlier whereas MAD does not. This is a case where you might prefer to use MAD since the synchronization of the group is just an artifact of the hosts being restarted at the same time. On the other hand, if instead of flushed buffers, the metrics represented a scheduled job that should be synchronized across hosts, DBSCAN would be the correct choice.
For detailed instructions on the Say what’s happening and Notify your team sections, see the Notifications page.
To create outlier monitors programmatically, see the Datadog API reference. Datadog recommends exporting a monitor’s JSON to build the query for the API.
The outlier algorithms are set up to identify groups that are behaving differently from their peers. If your groups exhibit “banding” behavior as shown below (maybe each band represents a different shard), Datadog recommends tagging each band with an identifier, and setting up outlier detection alerts on each band separately. | https://docs.datadoghq.com/ja/monitors/monitor_types/outlier/ | 2020-01-17T19:25:45 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.datadoghq.com |
Integrating with GitLab
Dotscience allows you to hook into GitLab to give you more control over your model builds.
Integrating with GitLab
Dotscience allows you to integrate with GitLab to give you more control over your model builds than you get with our in-built model builder. By getting GitLab to do your model builds for you, you can control exactly what goes into the Docker image and easily put custom code in there. It also allows you to host your model images on a container registry of your choosing. This is great, for example, with sklearn models that need to have custom model serving logic.
Fork example project
For this example we will be using a public gitlab.com account and the (free) gitlab.com public runners to build our model images. You can also follow along using an account on your own GitLab instance, just be sure to change the settings where needed.
Fork the example project at to your own gitlab.com account.
Configure container registry
The example project is configured to push to the GitLab container registry that is automatically configured with a gitlab project.
If you want to change the Docker registry gitlab will use - you can change these variables in the .gitlab-ci.yml file:
- DOCKER_REGISTRY - e.g. quay.io
- DOCKER_REGISTRY_USERNAME
- DOCKER_REGISTRY_PASSWORD
- DOCKER_IMAGE - e.g. quay.io/myusername/myrepo
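As an illustration, those variables might be wired into .gitlab-ci.yml roughly as below; the build job is hypothetical, and in practice the username and password would come from protected CI/CD variables rather than being written into the file:

variables:
  DOCKER_REGISTRY: "quay.io"
  DOCKER_IMAGE: "quay.io/myusername/myrepo"

build-model:
  stage: build
  script:
    - docker login -u "$DOCKER_REGISTRY_USERNAME" -p "$DOCKER_REGISTRY_PASSWORD" "$DOCKER_REGISTRY"
    - docker build -t "$DOCKER_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$DOCKER_IMAGE:$CI_COMMIT_SHORT_SHA"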
GitLab authentication
We need to create a new Access Token for your GitLab profile with API scope. This will allow Dotscience to communicate with GitLab’s API.
You can set up Access Tokens in your GitLab User Settings which you can get to by clicking on your avatar in the top right corner. Whilst in the User Settings section, click on Access Tokens in the menu.
When creating the Access Token, be sure to enable the api scope. Your new token will be displayed at the top of the page - be sure to copy it somewhere safe as you’ll need this soon.
Creating the GitLab configuration
You’ll need to make a note of the following 3 things:
- The URL of the GitLab instance (e.g.);
- The name of the GitLab project (e.g. <YOUR_USERNAME>/gitlab-model-builder-v2).
- The Access Token you created above;
Note: If you’re using the CLI you can use the project’s ID instead of its name. This can be found in the project’s General settings page under the General project heading.
Using the GUI
Create a free Dotscience account. Once logged in, you’ll find a link to the CI Configurations page in the navigation menu. This page allows you to easily create, view, modify and remove CI integrations.
Begin by clicking the Add new button in the top right of the page and selecting GitLab from the dropdown menu. You’ll now be presented with a form where you can enter the following:
- Name - A custom configuration name used to identify the configuration when configuring builds.
- Description - An optional description of the configuration.
- Access Token - Your GitLab Access Token.
- GitLab URL - The URL of the GitLab instance.
- GitLab Project - The name of the project on GitLab.
- Reference- A custom reference.
Upon clicking Continue you will now see your new integration in the configurations list, giving you the control to make modifications or remove it if you so desire:
You’ll notice that the initial dotscience-default configuration is still marked as Default. This means that all model builds will currently be using this configuration and not the one we’ve just created. To make the new configuration the default for Dotscience to use, simply click on the tick ✓ icon on the right of the configuration you’ve just created:
Upon confirming this change, you will notice the ‘Default’ label is now next to your new configuration. All future model builds will now be run using this configuration.
Train and build a model
Now we have our GitLab configured to perform model builds - let’s put it to use by training a model then building it with a GitLab pipeline.
We will use an example Dotscience notebook to train a Tensorflow model and then our GitLab pipeline to build a Tensorflow serving image and push it to a container registry.
It’s easy to use another machine learning framework, for example, sklearn models that need to have custom model serving logic.
There are two components involved in this:
- notebook - produces model files using any framework you want
- gitlab - downloads the model files and creates/pushes a container image
You are free to change both the notebook and the gitlab config to suit your needs.
Before we begin, make sure you have selected your GitLab configuration as the Default using the instructions above.
Fork sample project
Login to your dotscience account and scroll to the bottom of the projects page towards the Public projects section.
You will notice a project called MNIST Example - click this project and then click the Fork this project button.
Launch Jupyter
We will now launch this project on a Jupyter instance. Click the Settings button on our forked project and make sure we have a runner.
Your managed runner might still be starting, in which case wait a short time for it to be ready.
Otherwise you will need to add a runner (you can add a managed runner easily).
Once our runner is online - click the Open button for Jupyter.
Train model
Once Jupyter has loaded - you should see a notebook called mnist-model.ipynb in the file tree.
Open this notebook and Run all cells.
You should notice the Dotscience plugin on the right hand side pick up the run.
Build model
Click on the Runs button top left and you should see your run.
Click the Models button on the top and you should see your model.
Click the Build button for your model. This will have triggered a gitlab pipeline on your gitlab account.
Wait for the build to finish and then click the Logs button - this will open the gitlab pipeline.
Deploy model
Now we can deploy this model using a managed deployer. Click Deploy for the model we just built.
Choose a managed deployer (in our case eu-west) and choose Create new deployment.
Then enter a name without dashes (e.g. mnisttest) then click the Deploy button.
Your model server is now being deployed to a Kubernetes cluster. Copy the hostname from the deployments page using the copy to clipboard button.
Open our example application at.
This allows us to test our model running in production. Paste the URL you copied above into the text field, click on some numbers and your Dotscience trained, gitlab built model, is now serving predictions!
Take a look in the sample gitlab project to understand how the model files from Dotscience are turned into a container image.
Further reading:
Using the CLI
The following command can be used to create the GitLab configuration from the CLI:
ds ci create gitlab --name {NAME} --url {URL} --project {PROJECT} --ref {REFERENCE} --token {TOKEN}
- {NAME} - A custom configuration name used to identify the configuration by the dotscience-python library.
- {URL} - The URL of the GitLab instance.
- {PROJECT} - The name of the GitLab project.
- {TOKEN} - Your GitLab Access Token.
- {REFERENCE} - A git ref, such as a branch name.
Using our sample project, you’d use something similar to:
ds ci create gitlab --name my-gitlab-configuration --url --project <YOUR_USERNAME>/gitlab-model-builder-v2 --token 9RX-3CvNmX7voy1cszt_ --ref master
To confirm that this went through successfully, you can now run ds ci ls and you should see something like:
ID NAME TYPE AGE e3d46e96-01e9-40e7-a229-abe4811f513b my-gitlab-configuration gitlab 2 seconds | https://docs.dotscience.com/tutorials/integrating-with-gitlab/ | 2020-01-17T20:01:56 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.dotscience.com
DateTimeOffset.AddTicks Method
Microsoft Silverlight will reach end of support after October 2021. Learn more.
Adds a specified number of ticks to the current DateTimeOffset object.
Namespace: System
Assembly: mscorlib (in mscorlib.dll)
Syntax
'Declaration
Public Function AddTicks ( _
    ticks As Long _
) As DateTimeOffset
public DateTimeOffset AddTicks( long ticks )
Parameters
- ticks
Type: System.Int64
A number of 100-nanosecond ticks. The number can be negative or positive.
Return Value
Type: System.DateTimeOffset
An object whose value is the sum of the date and time represented by the current DateTimeOffset object and the number of ticks represented by ticks.
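For example (C#), adding one second expressed as ticks; TimeSpan.TicksPerSecond equals 10,000,000 because each tick is 100 nanoseconds:

using System;

class AddTicksExample
{
    static void Main()
    {
        DateTimeOffset start = new DateTimeOffset(2024, 1, 1, 0, 0, 0, TimeSpan.Zero);
        // One second = 10,000,000 ticks of 100 nanoseconds each.
        DateTimeOffset later = start.AddTicks(TimeSpan.TicksPerSecond);
        Console.WriteLine(later);   // 1/1/2024 12:00:01 AM +00:00
    }
}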
Exceptions
Remarks | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/bb340945%28v%3Dvs.95%29 | 2020-01-17T19:32:06 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.microsoft.com |
Gamepad and remote control interactions
Many interaction experiences are shared between gamepad, remote control, and keyboard
Build interaction experiences in your Universal Windows Platform (UWP) applications that ensure your app is usable and accessible through both the traditional input types of PCs, laptops, and tablets (mouse, keyboard, touch, and so on), as well as the input types typical of the TV and Xbox 10-foot experience, such as the gamepad and remote control.
See Designing for Xbox and TV for general design guidance on UWP applications in the 10-foot experience.
Overview
In this topic, we discuss what you should consider in your interaction design (or what you don't, if the platform looks after it for you), and provide guidance, recommendations, and suggestions for building UWP applications that are enjoyable to use regardless of device, input type, or user abilities and preferences.
Bottom line, your application should be as intuitive and easy to use in the 2-foot environment as it is in the 10-foot environment (and vice versa). Support the user's preferred devices, make the UI focus clear and unmistakable, arrange content so navigation is consistent and predictable, and give users the shortest path possible to what they want to do.
Note
Most of the code snippets in this topic are in XAML/C#; however, the principles and concepts apply to all UWP apps. If you're developing an HTML/JavaScript UWP app for Xbox, check out the excellent TVHelpers library on GitHub.
Optimize for both 2-foot and 10-foot experiences
At a minimum, we recommend that you test your applications to ensure they work well in both 2-foot and 10-foot scenarios, and that all functionality is discoverable and accessible to the Xbox gamepad and remote-control.
Here are some other ways you can optimize your app for use in both 2-foot and 10-foot experiences and with all input devices (each links to the appropriate section in this topic).
Note
Because Xbox gamepads and remote controls support many UWP keyboard behaviors and experiences, these recommendations are appropriate for both input types. See Keyboard interactions for more detailed keyboard info.
Note
If the B button is used to go back, then don't show a back button in the UI. If you're using a Navigation view, the back button will be hidden automatically. For more information about backwards navigation, see Navigation history and backwards navigation for UWP apps.
This property has three possible values: Never (the default value), WhenEngaged, and WhenFocused.
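The property being described appears to be RequiresPointer; that name is an inference from the listed values, not something stated in the text above. A minimal, hypothetical XAML usage would be:

<!-- Assumption: the Never/WhenEngaged/WhenFocused property is RequiresPointer;
     the Grid name and its content are made up for illustration. -->
<Grid x:
    <!-- content that needs free-cursor (mouse mode) interaction -->
</Grid>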
Reveal focus
Reveal focus is a lighting effect that animates the border of focusable elements, such as a button, when the user moves gamepad or keyboard focus to them. By animating the glow around the border of the focused elements, Reveal focus gives users a better understanding of where focus is and where focus is going.
Reveal focus is off by default. For 10 foot experiences you should opt-in to reveal focus by setting the Application.FocusVisualKind property in your app constructor.
if(AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.Xbox") { this.FocusVisualKind = FocusVisualKind.Reveal; }
For more information, see the guidance for Reveal focus.
Summary
You can build UWP applications that are optimized for a specific device or experience, but the Universal Windows Platform also enables you to build apps that can be used successfully across devices, in both 2-foot and 10-foot experiences, and regardless of input device or user ability. Using the recommendations in this article can ensure that your app is as good as it can be on both the TV and a PC.
Related articles
Feedback | https://docs.microsoft.com/en-us/windows/uwp/design/input/gamepad-and-remote-interactions?redirectedfrom=MSDN | 2020-01-17T19:36:44 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.microsoft.com
You must migrate Virtual Storage Console for VMware vSphere (VSC) hosts and backup jobs to SnapCenter Plug-in for VMware vSphere.
For SnapCenter 3.0 and later, you must migrate your VSC backups (which includes backup jobs, resource groups, and policies) to SnapCenter for data protection.
There are two paths for migrating backup jobs from VSC to the Plug-in for VMware vSphere:
If you are using VSC in conjunction with SnapCenter 2.0, you can use the migrate feature when you update SnapCenter. You can access the migrate feature in two ways:
You can update and migrate backup jobs from SnapCenter 2.x to any later version of SnapCenter.
If you are using VSC with SMVI, you can use the NetApp Import Utility for SnapCenter and Virtual Storage Console tool to migrate the backups, backup jobs, and storage connections. You can download the Import Utility from the NetApp Support Toolchest. The utility includes a ReadMe file that describes how to use it.
You can use the NetApp Import Utility for SnapCenter and Virtual Storage Console to migrate from any VSC 6.x version to SnapCenter Plug-in for VMware vSphere 3.0.1 or later.
The migration procedure installs the Plug-in for VMware vSphere and migrates VSC backup policies, backups, and backup jobs (called resource groups in SnapCenter) so they are available for use in the Plug-in for VMware vSphere GUI in vCenter.
NetApp Interoperability Matrix Tool | http://docs.netapp.com/ocsc-41/topic/com.netapp.doc.ocsc-isg/GUID-C9DB49CB-98EB-4500-9E7F-044EB1AC4BAB.html | 2020-01-17T18:51:05 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
Universal Ads Advanced
Universal Ads Basics Guide
Please refer to our Universal Ads Basics guide before using any of the advanced options below.
Setup¶.
Testing with setDebug¶.
Server to Server Ad Links¶
If you have a server to server integration you must provide specific requirements for attribution. Make sure to append the following mandatory key-values into tracking ad links to ensure they are not rejected or blocked:
Server to Server Parameter: Add server-to-server click macro URL parameter at the end of your link, so we know it's a server to server link:
%24s2s=true
Device ID Macro Value: Pass user Advertising Identifier via click macro URL parameter:
%24idfa={IDFA} for iOS devices
%24aaid={AAID} for Android devices
IP address: Pass user IP information in the header OR click macro URL parameter to override on click:
- HTTP header
x-ip-override: {IP_ADDRESS}
- Click macro URL parameter:
device_ip={IP_ADDRESS}
User Agent: Pass User Agent information in the header OR click macro URL parameter to override on click:
- HTTP header
User-Agent: {USER_AGENT}
- Click macro URL parameter:
user_agent={USER_AGENT}
Update Partner-specific URL macros
Please make sure that you are using your macros instead of {IDFA}, {AAID}, {IP_ADDRESS}, {USER_AGENT}.
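Putting the pieces together, a server-to-server click URL for an iOS campaign would look something like the following; the base link is a placeholder and the {...} macros must be replaced with your ad partner's own macros:

https://example.app.link/campaign?%24s2s=true&%24idfa={IDFA}&device_ip={IP_ADDRESS}&user_agent={USER_AGENT}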
Sending All Events¶
If you want to send
Privacy Implications
As this setting will send
Granting Agency Access¶
Tracking Link Parameters¶
Branch Tracking links allow tracking many parameters about the performance of your ad campaigns and individual ads. You can see each partner's specific link Parameters under the
.
Attribution Windows¶_3<<_4<<. | https://docs.branch.io/deep-linked-ads/branch-universal-ads-advanced/ | 2020-01-17T20:25:19 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['/_assets/img/pages/partner-management/postback-add.gif', 'image'],
dtype=object)
array(['/_assets/img/pages/deep-linked-ads/branch-universal-ads/all-events.png',
'image'], dtype=object)
array(['/_assets/img/ingredients/deep-linked-ads/link-parameters.png',
'image'], dtype=object)
array(['/_assets/img/pages/dashboard/people-based-attribution/attribution-windows.png',
'image'], dtype=object)
array(['/_assets/img/pages/deep-linked-ads/branch-universal-ads/anaw_clear.png',
'image'], dtype=object)
array(['/_assets/img/pages/deep-linked-ads/branch-universal-ads/install-by-secondary-pub.png',
'image'], dtype=object) ] | docs.branch.io |
This documentation is for a previous release of Cloud Manager. Go to the docs for the latest release.
Launching ONTAP Cloud in Azure
You can launch a single ONTAP Cloud system in Azure by creating an ONTAP Cloud working environment in Cloud Manager.
You should have prepared by choosing a configuration and by obtaining Azure networking information from your administrator. For details, see Planning your ONTAP Cloud configuration.
If you want to launch an ONTAP Cloud BYOL instance, you must have the 20-digit serial number (license key) and you must have credentials for a NetApp Support Site account, if the tenant is not already linked with an account.
When Cloud Manager creates an ONTAP Cloud system in Azure, it creates a resource group that includes the security group, network interfaces, and two storage accounts: one for Azure Standard Storage and one for Premium Storage.
On the Working Environments page, click Add environment.
Under Create, select ONTAP Cloud.
On the Details and Credentials page, optionally change the Azure subscription, specify a cluster name and resource group name, add tags if needed, and then specify credentials.
The following table describes fields for which you might need guidance:
On the Location page, enter the network information that you recorded in the worksheet, select the checkbox to confirm network connectivity, and then click Continue.
On the ONTAP Cloud BYOL License page, specify whether you want to enter a license for this ONTAP Cloud system.
On the Preconfigured Packages page, select one of the packages to quickly launch an ONTAP Cloud system, or click Create my own configuration.
If you choose one of the packages, you only need to specify a volume and then review and approve the configuration.
On the Licensing page, change the ONTAP Cloud version as needed, select a license and a virtual machine type, and then click Continue.
If your needs change after you launch the system, you can modify the license or virtual machine type later.
On the Azure Marketplace page, follow the steps if Cloud Manager could not enable programmatic deployments of ONTAP Cloud.
If the NetApp Support Site credentials page is displayed, enter your NetApp Support Site credentials.
Credentials are required for BYOL instances. For details, see Why you should link a tenant to your NetApp Support Site account.
On the Underlying storage resources page, choose either Premium Storage or Standard Storage.
The disk type is for the initial volume. You can choose a different disk type for subsequent volumes. For help choosing a disk type, see Choosing an Azure disk type.
On the Disk Size page, select the default disk size for all disks in the initial aggregate and for any additional aggregates that Cloud Manager creates when you use the simple provisioning option.
You can create aggregates that use a different disk size by using the advanced allocation option.
For help choosing a size, see Choosing disk size.
On the Write Speed page, choose Normal or High.
For help choosing between the options, see Choosing a write speed.
On the Create Volume page, enter details for the new volume, and then click Continue.
You should skip this step if you want to use iSCSI. Cloud Manager enables you to create volumes for NFS and CIFS only.
Some of the fields in this page are self-explanatory. The following table describes fields for which you might need guidance:
The following image shows the Volume page filled out for the CIFS protocol:
If you chose the CIFS protocol, set up a CIFS server on the ONTAP Cloud CIFS Setup page:
On the Review & Approve page, review and confirm your selections:
Review details about the configuration.
Click More information to review details about support and the Azure resources that Cloud Manager will purchase.
Select the I understand… check boxes.
Click Go.
Cloud Manager deploys the ONTAP Cloud system. You can track the progress in the timeline.
If you experience any issues deploying the ONTAP Cloud system, review the failure message. You can also select the working environment and click Re-create environment.
For additional help, go to NetApp ONTAP Cloud Support.
If you deployed an ONTAP Cloud pay-as-you-go system and the tenant is not linked to a NetApp Support Site account, manually register the system with NetApp to enable support. For instructions, see Registering ONTAP Cloud instances.
Support from NetApp is included with your ONTAP Cloud system. To activate support, you must first register the system with NetApp.
If you provisioned a CIFS share, give users or groups permissions to the files and folders and verify that those users can access the share and create a file.
If you want to apply quotas to volumes, use System Manager or the CLI.
Quotas enable you to restrict or track the disk space and number of files used by a user, group, or qtree. | https://docs.netapp.com/us-en/occm34/task_deploying_otc_azure.html | 2020-01-17T19:26:35 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
When added before a collection type class member, indicates that the value of any of its members can not be null.
Any member whose value is null will be highlighted with red color in the inspector view.
Attribute Target
Field, property, indexer, method return value or method parameter.
Target type must be a collection of some sort (e.g. Array or List).
Example
using UnityEngine; using Sisus.Attributes; public class MissingScriptDetector : MonoBehaviour { [NotNullMembers, ReadOnly, ShowInInspector] private Component[] components; // called when the script is loaded or a value is changed in the inspector void OnValidate() { components = GetComponents<Component>(); for(int n = 0, count = components.Length; n < count; n++) { if(components[n] == null) { Debug.LogWarning("Missing component found on GameObject "+name, gameObject); } } } } | https://docs.sisus.co/power-inspector/attributes/notnullmembers/ | 2020-01-17T18:21:49 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.sisus.co |
Advertising and searching
Adding boards and CV databases
- Adding job boards for advert multiposting
- Job board set up - Additional option fields
- Updating job board settings
- Adding job boards for Aggregated Searching
- Setting up your posting currencies
- Connecting Twitter: Job board settings area
- Connecting Twitter: During advert posting
- Connecting Linkedin for advert posting
- Facebook: An introduction to setting up and posting
- Facebook: Setting up users to post to company pages
- Facebook: Finding your Facebook Page ID
- Facebook: Installing the optional idibu job feed application
- Facebook: Connecting your account for advert posting
- Facebook: Adding an image to your advert posting
- Setting up application pages: Getting started
- Setting up landing pages: Different question types
- Setting up landing pages: Activating and editing your auto-rejection email.
- Setting up landing pages: Creating bespoke themes
- Setting up landing pages: Adding an applicant redirect webpage
- Posting errors | https://v3-docs.idibu.com/category/527-advertising-and-searching | 2020-01-17T18:52:20 | CC-MAIN-2020-05 | 1579250590107.3 | [] | v3-docs.idibu.com |
Devices¶
The
Device class is an abstract class which encapsulates constraints
(or lack thereof) that come when running a circuit on actual hardware.
For instance, most hardware only allows certain gates to be enacted
on qubits. Or, as another example, some gates may be constrained to not
be able to run at the same time as neighboring gates. Further the
Device class knows more about the scheduling of
Operations.
Here for example is a
Device made up of 10 qubits on a line:
import cirq from cirq.devices import GridQubit class Xmon10Device(cirq.Device): def __init__(self): self.qubits = [GridQubit(i, 0) for i in range(10)] def validate_operation(self, operation): if not isinstance(operation, cirq.GateOperation): raise ValueError('{!r} is not a supported operation'.format(operation)) if not isinstance(operation.gate, (cirq.CZPowGate, cirq.XPowGate, cirq.PhasedXPowGate, cirq.YPowGate)): raise ValueError('{!r} is not a supported gate'.format(operation.gate)) if len(operation.qubits) == 2: p, q = operation.qubits if not p.is_adjacent(q): raise ValueError('Non-local interaction: {}'.format(repr(operation))) def validate_circuit(self, circuit): for moment in circuit: for operation in moment.operations: self.validate_operation(operation)
This device, for example, knows that two qubit gates between next-nearest-neighbors is not valid:
device = Xmon10Device() circuit = cirq.Circuit() circuit.append([cirq.CZ(device.qubits[0], device.qubits[2])]) try: device.validate_circuit(circuit) except ValueError as e: print(e) # prints something like # ValueError: Non-local interaction: Operation(cirq.CZ, (GridQubit(0, 0), GridQubit(2, 0))) | https://cirq.readthedocs.io/en/latest/devices.html | 2020-01-17T18:43:43 | CC-MAIN-2020-05 | 1579250590107.3 | [] | cirq.readthedocs.io |
The Type Initializers options page enables you to specify the default values for a certain type of initialization.
The options page includes the following options.
Take descendant initializers into account
Specifies whether the Initialize code provides only the current type initializers or descendant type initializers as well.
Use type initializers
Specifies whether CodeRush Classic uses the type initializers specified via this options page or initializes variables, fields and properties with the default type values.
The field at the bottom of the options page lists types and values used to initialize these types.
The table at the left side of the field lists the types for which you want to define default initializer values. Use the New and Delete buttons to add a type or remove the selected type appropriately.
The table at the right part of the field specifies the list of initializer values for the selected type. To add a value, click the button and specify the value via the text box located at the bottom. Press the button to delete the selected value.
The order of the values on the options page affects their order in the Initialize code provider sub menu and in the Intellisense. Use the arrow buttons to move the selected value up and down within the value list.
This product is designed for outdated versions of Visual Studio. Although Visual Studio 2015 is supported, consider using the CodeRush extension with Visual Studio 2015 or higher. | https://docs.devexpress.com/CodeRush/15173/coderush-options/editor/code-modification/type-initializers | 2020-01-17T19:01:00 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
Hyperparameter Optimization
We look at how you can use Dotscience to explore relationships between hyperparameters & metrics
Create a new project in Dotscience and open JupyterLab. See this tutorial on creating projects and working with the Dotscience hosted JupyterLab for more information.
This tutorial is in 2 parts:
- Tune hyperparameters to optimise the precision and recall on a scikit-learn dataset of digits
- Use H2O AutoML to automatically tune the hyperparamers of a model on product backorders
Hyperparameter optimization with scikit-learn
The notebook for this tutorial can be found at .
Download our
demos Git repository with
git clone
Navigate to your project on Dotscience, open a JupyterLab session and upload the notebook file
grid-search.ipynb from the git repository above. It can be found at
demos/sklearn-gridsearch/grid-search.ipynb
At the start of the notebook, we import the dotscience python library and instrument our training with it. And if you look closely at the notebook, you will notice that as we iterate though a collection of scores to optimise them, we record the summary statistics with
ds.add_summary("param", value) for all the parameters involved.
for mean, std, params in zip(means, stds, clf.cv_results_['params']): ds.start() ds.add_parameters(**params) ds.add_summary("%s-stdev" % (score,), std) ds.add_summary("%s-mean" % (score,), mean) ds.publish() print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params)) print()
Run the notebook by clicking run -> all cells
When the run completes, navigate to the Runs tab to see the summary of all the runs. Clicking on each run will show the provenance of the run.
Now, go to the Explore tab, you can see a graphical representation of the optimisation we did earlier.
The screen capture above shows the behaviour of the summary statistic
precision-mean. From this we can draw conclusions about how the each change to the hyperparameters affected the summary statistic. Clicking on an individual data point, takes us to the run that was associated with that change.
You can also toggle the views between multiple optimisations by selecting it from the Summary statistic field.
We have now demonstrated Hyperparameter tuning on a simple machine learning model using an scikit-learn grid search. You can visualise the effect of tuning the params on the graph and specifically zoom into runs where the summary statistics go off the charts.
Note the problem is in a sense too easy: the hyperparameter plateau is too flat and the output model is the same for precision and recall with ties in quality.
Nevertheless, the principle is demonstrated of using Python code for hyperparameter optimization, augmented by the ds functions of the Dotscience Python library to automatically record versioning, provenance, parameters, and metrics within the system.
AutoML optimization with H2O
The setup for this is the same as tutorial 1, except that the notebook is in
demos/h2o/automl_binary_classification_product_backorders.ipynb
Running the notebook shows how H2O can be used within Dotscience, and the model performances in the AutoML process tracked using the Dotscience Python library ds functions.
Note that the AutoML step takes a few minutes to complete.
In this case, we see that the stacked ensemble model combining the top individual models (mostly XGBoost and H2O’s GBM gradient boosted decision trees) outperforms the individual models.
Dotscience hyperparameter optimization in the future
Viewing the outputs of the tutorials in the Explore tab of your project will show that Dotscience’s current visualizations and integrations of scikit-learn and H2O are quite basic. These integrations and visualizations will be improved in the future. Nevertheless, the tracking and versioning of everything from the run is available, as with our other tutorials. | https://docs.dotscience.com/tutorials/hyperparam/ | 2020-01-17T19:07:32 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['/hugo/hyperparam/hyperparam-notebook.png', None], dtype=object)
array(['/hugo/hyperparam/hyperparam-runs.png', None], dtype=object)
array(['/hugo/hyperparam/hyperparam-graph.png', None], dtype=object)
array(['/hugo/hyperparam/hyperparam-toggle.png', None], dtype=object)] | docs.dotscience.com |
LDAP / Active Directory is an enterprise authentication solution developed by Microsoft.
{{username}}
uid
(uid={{username}})
If you get errors while trying to login, you can enable a LDAP debugging flag to report internal LDAP error messages to the console (or docker logs).
From the Administration Area, click on Developer Tools in the sidebar, then on Flags. Enable the LDAP Debug flag. | https://docs.requarks.io/en/auth/ldap | 2020-01-17T20:17:26 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.requarks.io |
One of the main advantages of integrating with RHQ is the ability to leverage the shared inventory model across all managed products. All plugins deployed in your RHQ system store their auto and manually discovered resources in the RHQ shared inventory subsystem. Almost all the other subsystems in RHQ (Configuration, Operations, Alerts, etc) are related to or use the Inventory subsystem in some manner. The Inventory subsystem, is therefore, a critical component to RHQ.
RHQ's Inventory domain model maintains information about all your resources that are currently managed by your agents. A Resource is an abstract concept that simply refers to a "thing" being managed, such as a JBossAS server, an Apache Web Server, a Tomcat Webapp application, a servlet, a Python script, etc. Resources are categorized by ResourceCategory, which can be one of the following:
Platform
Represents a computer or an operating system
e.g. Linux, Windows, AIX, etc...
Server
Represents an application or a running process, typically running on a platform.
e.g. JBossAS 4.2 Application Server, Tomcat Web Container, Apache Web Server, etc...
Service
Represents a component that typically runs in a server
e.g. Tomcat Web application, EJB, Servlet, .sh script, etc...
Indicating if a resource is a platform, server and service is a very generic form of categorization, but doesn't tell you what type of resource it is. This is where ResourceType comes in. As with the Resource domain object, ResourceType is an abstract concept that tells you what kind of resource it represents. Plugins define concrete types within their plugin descriptor; for example, a PostgreSQL plugin could define resource types of "Database", "Table", "User", etc. A JBossAS plugin could define "JBossAS Server", "EJB Stateless Bean", "DataSource", etc. Note that the plugin descriptor, as part of the resource type definition, implicitly defines the type's category. For example, the JBossAS plugin would categorize its "JBossAS Server" type as a "server", but its "EJB Stateless Bean" and "DataSource" types would each be categorized as a "service".
The RHQ inventory model not only contains the types, categories and the resources themselves, but it also manages the relationships with one another. Resource types can have parent and child types; the same holds true for resources themselves. Typically the "platform" resources have no parents - they are at the top of the hierarchy. "server" resources are typically children of "platform" resources (as an example, a Linux platform resource can have a child JBossAS Server resource). "service" resources are typically children of "server" resources.
Plugin descriptors define this hierarchy by implicitly embedding <platform>, <server> and <service> tags within each other. The XML hierarchy within the plugin descriptor therefore defines the resource type hierarchy. When a plugin discovers resources, the plugin container will place those new resources in their proper place within the hierarchy of existing resources. | https://docs.jboss.org/author/display/RHQ/Design-Inventory.html | 2020-05-25T09:01:02 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.jboss.org |
Automations
Automatic deletion of files or folders
The Automatic Deletes sub-tab allows you to adjust the workflow for automatic deletion of files or folders that are older than a specified period of time. To add a rule for auto-deletion, follow these steps:
- Open the Automatic Deletes sub-tab on the Automations tab.
- Click on the Add rule icon and specify settings for the automatic deletion:
- Click on the Select button at the top right to choose the folder where you would like to activate the deletion. 1 folder for 1 rule!
- Specify the time interval the deletion should be performed.
- Type in emails for file deletion notifications (your email is listed by default).
- Keep folder structure - tick this check box if you want to save the structure with folders and delete the contents of these folders.
- Skip home folders - tick if you intend to skip files of inactive users.
- Skip shared folders - tick to leave Project Folders.
- Remove files permanently without sending them to Trash.
- Active - tick if you would like to activate the rule.
- Save the rule.
If there is an overlapping of auto-deletion rules, the rule created first will apply.
Your added rules will be displayed in the list and will contain brief information about the adjusted deletion. All your listed rules can be easily managed by ticking the check box next to the rule and selecting an appropriate option from the top menu. You can edit, delete, activate or deactivate auto-deletion rules. The edition of the rule presupposes the change of previously specified settings. If you delete the rule, it won't be either available, or applicable. The deletion cannot be restored back.
To make the rule inactive, click on the Deactivate icon and this grays the rule out in the list. Whilst the Activate rule icon allows to turn the rule on again.
Automatic generation of activity exports
The Auto-generation of activity exports provides a possibility to automatically receive logs in CSV format (a comma-separated values file that allows data to be saved in a tabular format) with the account activity on a regular basis.
The automation rules created by the owners cannot be updated or deleted by the administrators.
To add a rule, follow the steps below:
- Open the Activity Export sub-tab of the Automations tab.
- Click on the Add rule icon and adjust settings for the rule:
- Select the folder where you would like to store your activity logs.
- Type in emails for automatic activity log notifications.
- Pick the start date for the generation of future logs.
- Choose the frequency of logs: daily/weekly/monthly.
- Active - tick if you would like to activate the rule.
- Save the rule.
The logs will be stored in your specified folder in CSV format and sent to the listed recipients in the email notifications.
If you would like to retrieve a log for the past period, go to the Activity Log tab and manually generate a report there. In order to save your time in future, adjust the rule with the above instructions and receive required logs with the specified frequency. | https://docs.maytech.net/quatrix/quatrix-administration-guide/automations?selectedPageVersions=36&selectedPageVersions=37 | 2020-05-25T06:50:26 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.maytech.net |
Sharing.
Make your workbook look great.
Using the workbook in the browser.
Other views.
Creating new reports based on your workbook.
Troubleshooting the icons in the gallery.
What’s next
The other great feature that you get when you publish your workbook to SharePoint is the ability to schedule a regular data refresh. Look for a follow-up post on this. | https://docs.microsoft.com/en-us/archive/blogs/analysisservices/sharing-workbooks-using-powerpivot-gallery | 2020-05-25T09:12:36 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
Exchange Server 2013 Performance Recommendations
Applies to: Exchange Server 2013
Exchange Server 2013 performance tuning and troubleshooting is most effective when your environment has been properly sized and planned. While Exchange 2013 was designed to simplify the underlying resource infrastructure, it can still consume a large amount of system resources, such as memory, storage capacity, and CPU capacity.
The articles in this section were written by the Exchange performance team. They contain expertise from the Exchange product group, as well as best practices learned from customer support cases. The goal of these articles is to help you understand the impact of changes introduced in Exchange 2013, and the importance of appropriately sizing your Exchange 2013 infrastructure. We've also included recommended optimizations and guidance on identifying performance issues.
Architectural Changes in Exchange 2013 and other Resources
The architectural changes in Exchange 2013 are already documented on TechNet and in the Exchange Team Blog. We'll first touch upon a few high level changes you should consider in order to better understand performance cost and sizing. Then, below, we've included a list of recommended references to provide further context and background in these important areas.
Note
Please see Exchange 2013 virtualization for performance optimization guidance about deploying Exchange Server 2013 in a virtualized environment.
In Exchange 2013, the Client Access server role is a stateless proxy server. Now the Client Access server role's primary responsibility is to authenticate incoming requests and then proxy each request to the appropriate Mailbox server, the one hosting the active copy of the user mailbox. This means it's no longer necessary to configure affinity between the Client Access server and load balancer for specific protocols.
Another noteworthy change in Exchange 2013 is in the Information Store. The Information Store now consists of two kinds of processes: host and worker. Each database instance is associated with its own Microsoft.Exchange.Store.Worker.exe process. This allows for better isolation of problematic database issues, and can reduce the performance impact of a database problem to just the one worker instance for that database.
The Microsoft Exchange Replication service is responsible for all high availability services related to the Mailbox Server role. This replication service hosts the Active Manager component, which is responsible for monitoring failures and taking corrective actions.
A great post on architectural changes, including the impact to re-sizing an Exchange 2013 environment from earlier versions, can be found in Exchange 2013 Server Role Architecture.
More about Exchange 2013 architectural changes, and background information on other relevant areas, can be found in the following:
Exchange Server 2013 Architecture
Plan it the right way: Exchange Server 2013 sizing scenarios
Monitoring and Tuning Microsoft Exchange Server 2013 Performance
Implementing Exchange Server 2013: (01) Upgrade and Deploy Exchange Server 2013
Implementing Exchange Server 2013: (02) Plan It the Right Way: Exchange Server 2013 Sizing
Implementing Exchange Server 2013: (03) Exchange Server 2013 Virtualization Best Practices
Implementing Exchange Server 2013: (04) Exchange Architecture: High Availability and Site Resilience
Implementing Exchange Server 2013: (05) Outlook Connectivity
The Preferred Architecture
Exchange 2013 Client Access Server Role
Exchange Server 2013 Virtualization Best Practices
Exchange Server Updates: build numbers and release dates
Release notes for Exchange 2013
Updates for Exchange 2013
ASP.NET Thread Usage on IIS 7.5, IIS 7.0, and IIS 6.0 | https://docs.microsoft.com/en-us/exchange/exchange-server-2013-performance-recommendations-exchange-2013-help?redirectedfrom=MSDN | 2020-05-25T08:55:09 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
Communication Discourse Forum. This is the main place for Open States discussion. The core team and many other contributors are present there, and we’re usually able to answer questions in a timely fashion.
Have a private or financial question, or a security concern?
Have you found an error or issue in the Open States data?
File an issue on our bug tracker. And before you do, quickly check whether anyone else there has already reported the same bug.
Have a technical issue not related to the data itself?
Try to find the appropriate repository in our GitHub organization, and file an issue there. For example:
- openstates.org:
- our documentation:
Want to contribute to the project more regularly?
We also have a private Slack that we can invite you to. If you’re interested in an invite, introduce yourself in an email to [email protected].. | https://docs.openstates.org/en/latest/contributing/communication.html | 2020-05-25T08:31:01 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.openstates.org |
public class PropertiesLauncher extends Launcher
Launcherfor archives with user-configured classpath and main class via a properties file. This model is often more flexible and more amenable to creating well-behaved OS-level services than a model based on executable jars.
Looks in various places for a properties file to extract loader settings, defaulting to
loader.properties either on the current classpath or in the current working
directory. The name of the properties file can be changed by setting a System property
loader.config.name (e.g.
-Dloader.config.name=foo will look for
foo.properties. If that file doesn't exist then tries
loader.config.location (with allowed prefixes
classpath: and
file: or any valid URL). Once that file is located turns it into Properties and
extracts optional values (which can also be provided overridden as System properties in
case the file doesn't exist):
loader.path: a comma-separated list of directories (containing file resources and/or nested archives in *.jar or *.zip or archives) or archives to append to the classpath.
BOOT-INF/classes,BOOT-INF/libin the application archive are always used
loader.main: the main method to delegate execution to once the class loader is set up. No default, but will fall back to looking for a
Start-Classin a
MANIFEST.MF, if there is one in
${loader.home}/META-INF.
createArchive, createClassLoader, createMainMethodRunner, launch, launch
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static final String MAIN
Start-Class.
public static final String PATH
BOOT-INF/classes,BOOT-INF/libin the application archive are always used.
public static final String HOME
loader path. Defaults to current working directory (
${user.dir}).
public static final String ARGS
public static final String CONFIG_NAME
loader config locationis provided instead.
public static final String CONFIG_LOCATION
public static final String SET_SYSTEM_PROPERTIES
public PropertiesLauncher()
protected File getHomeDirectory()
protected String[] getArgs(String... args) throws Exception
Exception
protected String getMainClass() throws Exception
getMainClassin class
Launcher
Exception- if the main class cannot be obtained
protected ClassLoader createClassLoader(List<Archive> archives) throws Exception
createClassLoaderin class
Launcher
archives- the archives
Exception- if the classloader cannot be created
protected List<Archive> getClassPathArchives() throws Exception
getClassPathArchivesin class
Launcher
Exception- if the class path archives cannot be obtained
public static void main(String[] args) throws Exception
Exception
public static String toCamelCase(CharSequence string) | https://docs.spring.io/spring-boot/docs/2.0.x/api/org/springframework/boot/loader/PropertiesLauncher.html | 2020-05-25T06:45:19 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.spring.io |
How do I get the theme license key?
After you purchase the theme, you can get your license key at checkout confirmation and copy it.
OR
- Click on My Account from our site's main menu( here )
- Then click on View Details and Downloads
- There you will find the license key.
How to activate the theme license?
Note: Using theme without license key will result in not getting the future security updates and patches
To activate the license key for the theme:
- Go to Appearance->Theme License
Enter the license key in the input field as shown in the image below:
The second step is to activate the license key of your theme which can not be skipped. Input the theme license key you got while buying theme and click Activate
How do I renew my license key?
You will be notified if your license key is expired through mail and details will be provide to renew the license.
If you want to renew the license manually, follow the steps below:
Click on My Account from our site's main menu
Then click on View Licenses
Under the Expiration column, click Renew license
Then follow the processes shown there. | https://docs.codemanas.com/code-manas-pro/theme-license/ | 2020-05-25T08:32:18 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.codemanas.com |
Released on:
Thursday, March 14, 2019 - 10:00
Notes
A new version of the agent has been released. Follow standard procedures to update your Infrastructure agent.
Improvements
- Added replacement of on-host integration's remote entity-names; when a loopback address is found, it will be replaced with agent entity-name. This change will be applied in the entity key and the hostname metric field (if present), when the data comes from an integration using protocol V3. See protocol V3 documentation for further details.
Bug fixes
- Fixed unreported processes issue caused by the inability to parse a different format of /proc//stat.
- Fixed a problem that caused the Windows agent to submit
os:"unknown".
Security updates
- Fixed a low security issue that caused the Windows agent to periodically access
C:\etcfolder. | https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-1225 | 2020-05-25T09:05:10 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com |
TOPICS×
Performance Guidelines
This page provides general guidelines on how to optimize the performance of your AEM deployment. If you are new to AEM, please go over the following pages before you start reading the performance guidelines:
Illustrated below are the deployment options available for AEM (scroll to view all the options):
The performance guidelines apply mainly to AEM Sites.
When to Use the Performance Guidelines
You should use the performance guidelines in the following situations:
- First time deployment : When planning to deploy AEM Sites or Assets for the first time, it is important to understand the options available when configuring the Micro Kernel, Node Store, and Data Store (compared to the default settings). For example, changing the default settings of the Data Store for TarMK to File Data Store.
- Upgrading to a new version : When upgrading to a new version, it is important to understand the performance differences compared to the running environment. For example, upgrading from AEM 6.1 to 6.2, or from AEM 6.0 CRX2 to 6.2 OAK.
- Response time is slow : When the selected Nodestore architecture is not meeting your requirements, it is important to understand the performance differences compared to other topology options. For example, deploying TarMK instead of MongoMK, or using a File Data Sore instead of an Amazon S3 or Microsoft Azure Data Store.
- Adding more authors : When the recommended TarMK topology is not meeting the performance requirements and upsizing the Author node has reached the maximum capacity available, it is important to understand the performance differences compared to using MongoMK with three or more Author nodes. For example, deploying MongoMK instead of TarMK.
- Adding more content : When the recommended Data Store architecture is not meeting your requirements, it’s important to understand the performance differences compared to other Data Store options. Example: using the Amazon S3 or Microsoft Azure Data Store instead of a File Data Store.
Introduction
This chapter gives a general overview of the AEM architecture and its most important components. It also provides development guidelines and describes the testing scenarios used in the TarMK and MongoMK benchmark tests.
The AEM Platform
The AEM platform consists of the following components:
For more information on the AEM platform, see What is AEM .
The AEM Architecture
There are three important building blocks to an AEM deployment. The Author Instance which is used by content authors, editors, and approvers to create and review content. When the content is approved, it is published to a second instance type named the Publish Instance from where it is accessed by the end users. The third building block is the Dispatcher which is a module that handles caching and URL filtering and is installed on the webserver. For additional information about the AEM architecture, see Typical Deployment Scenarios .
Micro Kernels
Micro Kernels act as persistence managers in AEM. There are three types of Micro Kernels used with AEM: TarMK, MongoDB, and Relational Database (under restricted support). Choosing one to fit your needs depends on the purpose of your instance and the deployment type you are considering. For additional information about Micro Kernels, see the Recommended Deployments page.
Nodestore
In AEM, binary data can be stored independently from content nodes. The location where the binary data is stored is referred to as the Data Store , while the location of the content nodes and properties is called the Node Store .
Adobe recommends TarMK to be the default persistence technology used by customers for both the AEM Author and the Publish instances.
The Relational Database Micro Kernel is under restricted support. Contact Adobe Customer Care before using this type of Micro Kernel.
Data Store
When dealing with large number of binaries, it is recommended that an external data store be used instead of the default node stores in order to maximize performance. For example, if your project requires a large number of media assets, storing them under the File or Azure/S3 Data Store will make accessing them faster than storing them directly inside a MongoDB.
For further details on the available configuration options, see Configuring Node and Data Stores .
Adobe recommends to choose the option of deploying AEM on Azure or Amazon Web Services (AWS) using Adobe Managed Services, where customers will benefit from a team who has the experience and the skills of deploying and operating AEM in these cloud computing environments. Please see our additional documentation on Adobe Managed Services ..
For additional details also see the technical requirements page.
Search
Listed in this section are the custom index providers used with AEM. To know more about indexing, see Oak Queries and Indexing .
For most deployments, Adobe recommends using the Lucene Index. You should use Solr only for scalability in specialized and complex deployments.
Development Guidelines
You should develop for AEM aiming for performance and scalability . Presented below are a number of best practices that you can follow:
DO
- Apply separation of presentation, logic, and content
- Use existing AEM APIs (ex: Sling) and tooling (ex: Replication)
- Develop in the context of actual content
- Develop for optimum cacheability
- Minimize number of saves (ex: by using transient workflows)
- Make sure all HTTP end points are RESTful
- Restrict the scope of JCR observation
- Be mindful of asynchronous thread
DON'T
- Don’t use JCR APIs directly, if you can
- Don’t change /libs, but rather use overlays
- Don’t use queries wherever possible
- Don’t use Sling Bindings to get OSGi services in Java code, but rather use:
- @Reference in a DS component
- @Inject in a Sling Model
- sling.getService() in a Sightly Use Class
- sling.getService() in a JSP
- a ServiceTracker
- direct access to the OSGi service registry
For further details about developing on AEM, read Developing - The Basics . For additional best practices, see Development Best Practices .
Benchmark Scenarios
All the benchmark tests displayed on this page have been performed in a laboratory setting.
The testing scenarios detailed below are used for the benchmark sections of the TarMK, MongoMk and TarMK vs MongoMk chapters. To see which scenario was used for a particular benchmark test, read the Scenario field from the Technical Specifications table.
Single Product Scenario
AEM Assets:
- User interactions: Browse Assets / Search Assets / Download Asset / Read Asset Metadata / Update Asset Metadata / Upload Asset / Run Upload Asset Workflow
- Execution mode: concurrent users, single interaction per user
Mix Products Scenario
AEM Sites + Assets:
- Sites user interactions: Read Article Page / Read Page / Create Paragraph / Edit Paragraph / Create Content Page / Activate Content Page / Author Search
- Assets user interactions: Browse Assets / Search Assets / Download Asset / Read Asset Metadata / Update Asset Metadata / Upload Asset / Run Upload Asset Workflow
- Execution mode: concurrent users, mixed interactions per user
Vertical Use Case Scenario
Media:
- Read Article Page (27.4%), Read Page (10.9%), Create Session (2.6%), Activate Content Page (1.7%), Create Content Page (0.4%), Create Paragraph (4.3%), Edit Paragraph (0.9%), Image Component (0.9%), Browse Assets (20%), Read Asset Metadata (8.5%), Download Asset (4.2%), Search Asset (0.2%), Update Asset Metadata (2.4%), Upload Asset (1.2%), Browse Project (4.9%), Read Project (6.6%), Project Add Asset (1.2%), Project Add Site (1.2%), Create Project (0.1%), Author Search (0.4%)
- Execution mode: concurrent users, mixed interactions per user
TarMK
This chapter gives general performance guidelines for TarMK specifying the minimum architecture requirements and the settings configuration. Benchmark tests are also provided for further clarification.
Adobe recommends TarMK to be the default persistence technology used by customers in all deployment scenarios, for both the AEM Author and Publish instances.
For more information about TarMK, see Deployment Scenarios and Tar Storage .
TarMK Minimum Architecture Guidelines
The minimum architecture guidelines presented below are for production enviroments and high traffic sites. These are not the minimum specifications needed to run AEM.
To establish good performance when using TarMK, you should start from the following architecture:
- One Author instance
- Two Publish instances
- Two Dispatchers
Illustrated below are the architecture guidelines for AEM sites and AEM Assets.
Binary-less replication should be turned ON if the File Datastore is shared.
Tar Architecture Guidelines for AEM Sites
Tar Architecture Guidelines for AEM Assets
TarMK Settings Guideline
For good performance, you should follow the settings guidelines presented below. For instructions on how to change the settings, see this page .
TarMK Performance Benchmark
Technical Specifications
The benchmark tests were performed on the following specifications:
Performance Benchmark Results
The numbers presented below have been normalized to 1 as the baseline and are not the actual throughput numbers.
MongoMK.
For more information about TarMK, see Deployment Scenarios and Mongo Storage .
MongoMK Minimum Architecture Guidelines
To establish good performance when using MongoMK, you should start from the following architecture:
- Three Author instances
- Two Publish instances
- Three MongoDB instances
- Two Dispatchers
In production environments, MongoDB will always be used as a replica set with a primary and two secondaries. Reads and writes go to the primary and reads can go to the secondaries. If storage is not available, one of the secondaries can be replaced with an arbiter, but MongoDB replica sets must always be composed of an odd number of instances.
Binary-less replication should be turned ON if the File Datastore is shared.
MongoMK Settings Guidelines
For good performance, you should follow the settings guidelines presented below. For instructions on how to change the settings, see this page .
MongoMK Performance Benchmark
Technical Specifications
The benchmark tests were performed on the following specifications:
Performance Benchmark Results
The numbers presented below have been normalized to 1 as the baseline and are not the actual throughput numbers.
TarMK vs MongoMK
The basic rule that needs to be taken into account when choosing between the two is that TarMK is designed for performance, while MongoMK is used for scalability. Adobe recommends TarMK to be the default persistence technology used by customers in all deployment scenarios, for both the AEM Author and Publish instances. generally results from the fact that the CPU and memory capacity of a single server, supporting all concurrent authoring activities, is no longer sustainable.
For further details on TarMK vs MongoMK, see Recommended Deployments .
TarMK vs MongoMk Guidelines
Benefits of TarMK
- Purpose-built for content management applications
- Files are always consistent and can be backed up using any file-based backup tool
- Provides a failover mechanism - see Cold Standby for more details
- Provides high performance and reliable data storage with minimal operational overhead
- Lower TCO (total cost of ownership)
Criteria for choosing MongoMK
- Number of named users connected in a day: in the thousands or more
- Number of concurrent users: in the hundreds or more
- Volume of asset ingestions per day: in hundreds of thousands or more
- Volume of page edits per day: in hundreds of thousands or more
- Volume of searches per day: in tens of thousands or more
TarMK vs MongoMK Benchmarks
The numbers presented below have been normalized to 1 as the baseline and are not actual throughput numbers.
Scenario 1 Technical Specifications
Scenario 1 Performance Benchmark Results
Scenario 2 Technical Specifications
To enable the same number of Authors with MongoDB as with one TarMK system you need a cluster with two AEM nodes. A four node MongoDB cluster can handle 1.8 times the number of Authors than one TarMK instance. An eight node MongoDB cluster can handle 2.3 times the number of Authors than one TarMK instance.
Scenario 2 Performance Benchmark Results
Architecture Scalability Guidelines For AEM Sites and Assets
Summary of Performance Guidelines
The guidelines presented on this page can be summarized as follows:
- TarMK with File Datastore is the recommended architecture for most customers:
- Minimum topology: one Author instance, two Publish instances, two Dispatchers
- Binary-less replication turned on if the File Datastore is shared
- MongoMK with File Datastore is the recommended architecture for horizontal scalability of the Author tier:
- Minimum topology: three Author instances, three MongoDB instances, two Publish instances, two Dispatchers
- Binary-less replication turned on if the File Datastore is shared
- Nodestore should be stored on the local disk, not a network attached storage (NAS)
- When using Amazon S3 :
- The Amazon S3 datastore is shared between the Author and Publish tier
- Binary-less replication must be turned on
- Datastore Garbage Collection requires a first run on all Author and Publish nodes, then a second run on Author
- Custom index should be created in addition to the out of the box index based on most common searches
- Lucene indexes should be used for the custom indexes
- Customizing workflow can substantially improve the performance , for example, removing the video step in the “Update Asset” workflow, disabling listeners which are not used, etc.
For more details, also read the Recommended Deployments page. | https://docs.adobe.com/content/help/en/experience-manager-65/deploying/configuring/performance-guidelines.html | 2020-05-25T08:38:44 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-1a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-2a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-3a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-4a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-5a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-6a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-7a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-8a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-9a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-10a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-11a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-12a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-13a.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-deploying/assets/chlimage_1-14a.png',
None], dtype=object) ] | docs.adobe.com |
화제×
Bulk Offline Update
This section covers the following topics on Bulk Offline Update:
- Overview
- Using Bulk Offline Update
This AEM Screens functionality is only available, if you have installed AEM 6.3 Feature Pack 3 or AEM 6.4 Screens Feature Pack 1.
Overview
Bulk Offline Update, allows you to update all the channel in bulk. It avoids the hassle of navigating to a particular channel and update the content. Rather, you can update all the content in channels for one specific project in one instant.
You can also schedule this activity for a time of lower network traffic.
The Bulk Offline Update feature is optimized to update only those channels that have been modified.
Using Bulk Offline Update
You can manually use bulk offline update from the User Interface (UI) or schedule the bulk update from OSGi services.
Using AEM Screens User Interface
Follow the steps below to use bulk offline update for an AEM Screens project:
- Navigate to your AEM Screens project.
- Select the project and click Update Offline Content from the action bar to manually update the channel content.
Adobe Experience Manager Web Console Configuration
Follow the steps below to use bulk offline update for an AEM Screens project:
- Adobe Experience Manager Web Console Configuration.
- Add the following properties:Project Path Specify the path of your AEM Screens project. The path is usually /content/screens/<Name of your project> .For example , /content/screens/we-retail . You can find this path in the URL by selecting any project under AEM Screens (do not click the icon).Specify the project path relative to your channel.Schedule Frequency Specify a time, for example, 5:00 pm or 17:00 at which this service should update offline content. | https://docs.adobe.com/content/help/ko-KR/experience-manager-screens/user-guide/authoring/product-features/bulk-offline-update.html | 2020-05-25T07:18:24 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.adobe.com |
#include <switch_arg.hpp>
A simple switch argument. If the switch is set on the command line, then the getValue method will return the opposite of the default value for the switch.
Definition at line 31 of file switch_arg.hpp.
Definition at line 108 of file switch_arg.hpp.
Definition at line 117 of file switch_arg.hpp.
Checks a string to see if any of the chars in the string match the flag for this Switch.
Definition at line 131 of file switch_arg.hpp.
Returns bool, whether or not the switch has been set.
Definition at line 129 of file switch_arg.hpp.
Handles the processing of the argument. This re-implements the Arg version of this method to set the _value of the argument appropriately.
Reimplemented in ecl::MultiSwitchArg.
Definition at line 160 of file switch_arg.hpp.
The value of the switch.
Definition at line 38 of file switch_arg.hpp. | http://docs.ros.org/kinetic/api/ecl_command_line/html/classecl_1_1SwitchArg.html | 2020-05-25T08:14:07 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.ros.org |
We think over the logic of work, determine the schema and model of the database
The preparatory work has been completed, and today we proceed directly to the development of the component.
We write the component mailings, so you need to think about the basic logic of the work. I draw your attention to the fact that our goal is to learn how to write components for MODX, and not to write the best newsletter in the world. Therefore, I ask you to restrain your ambitions right away and not to suggest adding mega-functionality.
The logic of work seems to me like this:
- We have a Newsletter object - it holds everything needed to build the emails: subject, template, sender, etc.
- A Subscriber object subscribes to the newsletter. For now we will assume this has to be an authorized user, but we keep in mind that guests could be added later.
- When certain events occur, the Subscription object is called and executes code that generates the emails and saves them as Queue objects. For greater flexibility, this code can be moved into a separate snippet.
- The server runs the mailing script on a schedule, at least every 5 minutes, and sends the emails from the Queue. If the queue is empty, then everything has already been sent.
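To make the last point a bit more concrete, here is a rough sketch of what such a cron script could look like. The class, field names and paths here are only placeholders - we have not designed the schema yet:

```php
<?php
// cron sender - a sketch only; class, field names and paths are assumptions for now
define('MODX_API_MODE', true);
require '/path/to/site/index.php'; // adjust to your install

$modx->getService('mail', 'mail.modPHPMailer');
$modx->addPackage('sendex', $modx->getOption('core_path') . 'components/sendex/model/');

// take the unsent emails from the queue
$queue = $modx->getCollection('sxQueue', array('sent' => 0));
foreach ($queue as $item) {
    $modx->mail->set(modMail::MAIL_SUBJECT, $item->get('subject'));
    $modx->mail->set(modMail::MAIL_BODY, $item->get('body'));
    $modx->mail->set(modMail::MAIL_FROM, $modx->getOption('emailsender'));
    $modx->mail->address('to', $item->get('email'));
    if ($modx->mail->send()) {
        $item->set('sent', 1);
        $item->save();
    }
    $modx->mail->reset();
}
```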
Accordingly, in the admin we will have a page for managing subscriptions, adding subscribers to them, and viewing the message queue, with the ability to delete or send something right now. Perhaps we will add a message verification page to send test messages for debugging.
We have decided on the functionality, now we need to write a database schema to store our data.
DB table schema
The schema in MODX is an XML file in which all objects and their relationships are described. It does not take part in the component's work and is not used anywhere at runtime - it is only needed to generate the model.
The scheme may change as the supplement develops. You can add or remove objects, indexes, and relationships. No need to try to provide all the columns in the tables at once - you can add them at any time.
The basic principles of the xPDO schema can be read in the official documentation; here I'll just show you the finished file and explain what is in it and how it works.
We open the scheme in my repository on GitHub and look.
Each object is described in the object tag. In the attributes of an object, you specify its name and the database table in which it will be stored. The table is specified without a site prefix - it will be added automatically by MODX itself when needed.
<object class="sxNewsletter" table="sendex_newsletters" extends="xPDOSimpleObject">
Our object must extend an existing MODX object; usually xPDOSimpleObject is used. From it we inherit the id column as the primary key - that is why id is not declared anywhere in the schema for our objects.

If we extended xPDOObject instead, we would have to define it ourselves. If you do not want problems, just always use xPDOSimpleObject.

Next, the object's fields are described:
<field key="name" dbtype="varchar" precision="100" phptype="string" null="false" default="" /> <field key="description" dbtype="text" phptype="text" null="true" default="" /> <field key="active" dbtype="tinyint" precision="1" phptype="boolean" attributes="unsigned" null="true" default="1" />
- key - the field name.
- dbtype - the field type in the database: int, varchar, text, etc.
- precision - the precision, or field size. It is required for fixed-length types such as int and varchar; for text fields it is not specified.
- phptype - the variable type in PHP, according to which xPDO will cast the value: integer, string, float, json, array. Note that json and array are MODX's own inventions. Array is for serialized data with type preservation, and json is ordinary JSON. When such a field is saved, its value is run through serialize() or json_encode(), and when it is read - through unserialize() or json_decode(). This makes it convenient to store arrays in the database.
- null - may the field be empty? If you specify false here and do not pass a value when working with the object, there will be an error in the log.
- default - the default value; it will be used if the field can be null and no data for it is supplied on save.
- attributes - additional properties passed on to the database. They are exactly the same as in MySQL.
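For example, a field with phptype "json" lets you work with a plain PHP array while xPDO handles the conversion itself. A small sketch (the properties field here is just an illustration - it is not part of the schema above, and the model is assumed to be already loaded):

```php
// assuming a field like this were added to the object:
// <field key="properties" dbtype="text" phptype="json" null="true" default="" />
$newsletter = $modx->newObject('sxNewsletter');
$newsletter->set('name', 'Weekly digest');
$newsletter->set('properties', array('interval' => 'week', 'limit' => 100));
$newsletter->save(); // the array is stored in the database as a JSON string

$properties = $newsletter->get('properties'); // and comes back as an array again
echo $properties['interval']; // week
```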
These are only the basic properties; MODX's own schema contains many undocumented features, so I recommend looking through it carefully and simply copying what you need.
After describing the columns of the table, you need to specify indexes so that the table works quickly. In most cases it is enough to index the fields that will be used in selections:
<index alias="name" name="name" primary="false" unique="false" type="BTREE">
    <column key="name" length="" collation="A" null="false" />
</index>
<index alias="active" name="active" primary="false" unique="false" type="BTREE">
    <column key="active" length="" collation="A" null="false" />
</index>
The important attributes here are:
- primary - is this the primary index? Usually not: the primary index is on the `id` field we inherit from `xPDOSimpleObject`.
- unique - is the index unique? In other words, may two or more rows in the table contain the same value in this field? Again, the only unique index we have is on the `id` column.

And finally, the relationships between objects:
<composite alias="Subscribers" class="sxSubscriber" local="id" foreign="newsletter_id" cardinality="many" owner="local" />
<aggregate alias="Template" class="modTemplate" local="template" foreign="id" cardinality="one" owner="foreign" />
<aggregate alias="Snippet" class="modSnippet" local="snippet" foreign="id" cardinality="one" owner="foreign" />
- Composite - this object is the parent in relation to the other one. When you delete it, all child objects linked through this relation are deleted as well.
- Aggregate - this object is subordinate to the other one. When it is deleted, nothing happens to the object it is related to.
- alias - the name of the relation, used in calls like `$object->getMany('Subscribers');` or `$object->getOne('Template');`.
- class - the actual class name of the object the current object is related to.
- local - the field of the current object used for the relation.
- foreign - the field of the related object.
- cardinality - the type of relation: one-to-one or one-to-many. Typically an aggregate relation is `one` and a composite relation is `many`, i.e. the parent has many children while each child has only one parent, though there are exceptions. If the relation is `many`, you work with it through `addMany()` and `getMany()`; if it is `one`, through `addOne()` and `getOne()` (see the sketch below).
For a visual representation of the schema, I advise using the service from Jeroen Kenters.
The schema will change several times over the course of development, so things should become clearer as we go.
Model generation
As I said, the schema by itself gives us nothing; we need a working model. What is a database model in MODX? It is a set of PHP files consisting of the base objects plus extensions for a specific database.
Let's generate a model and see what happens there:
1. Copy the current schema into your project and save it; the changes should be synchronized with the server.
2. Delete the old, no longer needed files of the modExtra model.
3. Execute the file `build.model.php` on the server. In my case it lives at `c2263.paas2.ams.modxcloud.com/Sendex/_build/build.model.php`. On the first generation the output is just "done"; on later runs you will also see messages that existing objects will not be overwritten. A rough sketch of what this script does is shown after this list.
4. New files have been created on the server, so you need to synchronize the model directory (click the two green arrows at the top).
5. The model is now pulled down into our project. The files are still brown because they have not yet been added to Git; add them through the context menu and they turn green (new files).
6. Add the creation of the new objects on component installation in `/_build/resolvers/resolve.tables.php`. See the white bar to the left of the line numbers? That is the version control system showing which lines were changed. The file immediately turns blue: it contains changes that have not yet been committed to Git.
7. Push the changes to the repository. In the lower left pane you see the old file and in the lower right the new one, so you can review all changes before committing.
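For reference, the heart of such a generation script is just a couple of xPDO calls. The sketch below is simplified, and the exact paths and bootstrap code will differ in the real `_build` scripts.

```php
<?php
/* Simplified sketch of what build.model.php does; the real script
   defines its own paths and boots MODX from config.core.php. */
require_once dirname(dirname(__FILE__)) . '/config.core.php';
require_once MODX_CORE_PATH . 'model/modx/modx.class.php';

$modx = new modX();
$modx->initialize('mgr');
$modx->setLogLevel(modX::LOG_LEVEL_INFO);
$modx->setLogTarget('ECHO');

$root = dirname(dirname(__FILE__)) . '/';
$sources = array(
    'schema' => $root . '_build/schema/sendex.mysql.schema.xml',
    'model'  => $root . 'core/components/sendex/model/',
);

$manager = $modx->getManager();
$generator = $manager->getGenerator();
// Parses the XML schema and writes the class and map files into the model directory.
$generator->parseSchema($sources['schema'], $sources['model']);

echo 'done';
```

The table-creation resolver from step 6 works in a similar spirit: during installation it typically adds the package and calls `$modx->getManager()->createObjectContainer('sxNewsletter')` (and the same for the other classes) so the tables are created in the database.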
Here is today's commit with all of this work, and here is the list of all commits if you want to track the progress.
Now let's take a closer look at what this model actually is, using the `sxNewsletter` object as an example.
So, we have new files:
- `/model/sendex/metadata.mysql.php` - general information about which objects the component contains.
- `/model/sendex/sxnewsletter.class.php` - the `sxNewsletter` object itself; all of its main methods live here.
- `/model/sendex/mysql/sxnewsletter.class.php` - the extension of `sxNewsletter` for the MySQL database. It holds the methods needed to make the object work with this particular database.
- `/model/sendex/mysql/sxnewsletter.map.inc.php` - the map of the `sxNewsletter` object, used only for MySQL. It contains all the fields, indexes, and relationships we specified in the XML schema.
As you might guess, if we created another schema for the MsSQL database and generated a model from it, the base `/model/sendex/sxnewsletter.class.php` file would remain the same, and a `/model/sendex/mssql/` directory would be added with its own `sxnewsletter.class.php` and `sxnewsletter.map.inc.php` files.
This is how MODX supports any database through xPDO: it creates one common object that is extended when working with a particular database system.
We never need to touch the files in `/model/sendex/mysql/`; besides, they are overwritten every time the model is regenerated from the updated schema by the build script. But in `/model/sendex/sxnewsletter.class.php` we will later write our own methods and call them like this:
if ($newsletter = $modx->getObject('sxNewsletter', 1)) {
    echo $newsletter->nameMethod('options');
}
Open, for example, the modUser object and look at the familiar methods `isAuthenticated()` and `joinGroup()`: this is exactly how MODX itself works =)
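To make that concrete, here is a hedged example of the kind of method we could add to `/model/sendex/sxnewsletter.class.php` later. The method name and its logic are purely illustrative and not part of the component yet; only the `newsletter_id` column comes from the schema above.

```php
<?php
class sxNewsletter extends xPDOSimpleObject
{
    /**
     * Illustrative helper: count the subscribers of this newsletter.
     * Relies on the newsletter_id foreign key declared in the schema.
     */
    public function getSubscribersCount()
    {
        return $this->xpdo->getCount('sxSubscriber', array(
            'newsletter_id' => $this->get('id'),
        ));
    }
}
```

Once such a method is in place, it is called exactly like in the snippet above: `$newsletter->getSubscribersCount();`.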
Now you know how easy it is to find out what any object can do, whether in the core engine or in its add-ons.
Our other two objects sxSubscriber and sxQueue work the same way.
Conclusion
So, today we settled on the main logic that we will be programming later, and sketched out the first version of our xPDO model for the MySQL database.
In the next lesson we will assemble the component into a transport package, install it on the site, and configure it for convenient further development. After that we will deal with the controllers for the custom manager pages and get ready to build the interface with ExtJS.
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-2.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-3.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-4.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-5.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-6.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-7.png',
None], dtype=object)
array(['/2.x/ru/extending-modx/creating-components/work-logic/work-logic-8.png',
None], dtype=object) ] | docs.modx.org |
Domino environment variables¶
Domino automatically injects several environment variables whenever it runs your code, as part of the context of your run.
If you’re looking to define your own environment variables, please see Environment variables for secure credential storage.— username
AWS_SHARED_CREDENTIALS_FILE- path to your AWS credential file for connecting to addition AWS resources (e.g. S3, Redshift). Learn more.
DOMINO_TOKEN_FILE- path to a jwt token signed by Domino useful for authenticating with the Domino API or other third party services. Learn more.
Note: These variables are not available in the Model Manager
Usage¶
Here are some examples on retrieving an environment variable within your code:
R
Sys.getenv("DOMINO_RUN_ID")
Python
import os
os.environ['DOMINO_RUN_ID'] | https://docs.dominodatalab.com/en/4.3.3/reference/runs/advanced/Domino_environment_variables.html | 2021-11-27T05:01:52 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.dominodatalab.com |
- ApplicationWorker
- Retries
- Dedicated Queues
- Queue Namespaces
- Versioning
- Idempotent Jobs
- Limited capacity worker
- Job urgency
- Job size
- Job data consistency strategies
- Trading immediacy for reduced primary load
- Jobs with External Dependencies
- CPU-bound and Memory-bound Workers
- Declaring a Job as CPU-bound
- Determining whether a worker is CPU-bound
- Feature category
- Job weights
- Worker context
- Arguments logging
- Tests
- Sidekiq Compatibility across Updates
Sidekiq Style Guide
This document outlines various guidelines that should be followed when adding or modifying Sidekiq workers.
ApplicationWorker
All workers should include
ApplicationWorker instead of
Sidekiq::Worker,
which adds some convenience methods and automatically sets the queue based on
the worker’s name.
Retries
Sidekiq defaults to using 25 retries, with back-off between each retry. 25 retries means that the last retry would happen around three weeks after the first attempt (assuming all 24 prior retries failed).
For most workers - especially idempotent workers - the default of 25 retries is more than sufficient. Many of our older workers declare 3 retries, which used to be the default within the GitLab application. 3 retries happen over the course of a couple of minutes, so the jobs are prone to failing completely.
A lower retry count may be applicable if any of the below apply:
- The worker contacts an external service and we do not provide guarantees on delivery. For example, webhooks.
- The worker is not idempotent and running it multiple times could leave the system in an inconsistent state. For example, a worker that posts a system note and then performs an action: if the second step fails and the worker retries, the system note will be posted again.
- The worker is a cronjob that runs frequently. For example, if a cron job runs every hour, then we don’t need to retry beyond an hour because we don’t need two of the same job running at once.
Each retry for a worker is counted as a failure in our metrics. A worker which always fails 9 times and succeeds on the 10th would have a 90% error rate.
Dedicated.
data_consistencyattribute set to
:stickyor
:delayed. The reason for this is that deduplication always takes into account the latest binary replication pointer into account, not the first one. There is an open issue to improve this.
Ensuring
GitLab supports two deduplication strategies:
until_executing
until_executed
More deduplication strategies have been suggested. If you are implementing a worker that could benefit from a different strategy, please comment in the issue.
Until
This concern exposes three Prometheus metrics of gauge type with the worker class name as label:
limited_capacity_worker_running_jobs
limited_capacity_worker_max_running_jobs
limited_capacity_worker_remaining_work_count
Job urgency
Jobs can have an
urgency attribute set, which can be
:high,
:low, or
:throttled. These have the below targets:
To set a job’s urgency, use the
urgency class method:
class HighUrgencyWorker include ApplicationWorker urgency :high # ... end
Latency.
In general, latency-sensitive jobs perform operations that a user could reasonably expect to happen synchronously, rather than asynchronously in a background worker. A common example is a write following an action..
Job size
GitLab stores Sidekiq jobs and their arguments in Redis. To avoid excessive memory usage, we compress the arguments of Sidekiq jobs if their original size is bigger than 100KB.
After compression, if their size still exceeds 5MB, it raises an
ExceedLimitError
error when scheduling the job.
If this happens, rely on other means of making the data available in Sidekiq. There are possible workarounds such as:
- Rebuild the data in Sidekiq with data loaded from the database or elsewhere.
- Store the data in object storage before scheduling the job, and retrieve it inside the job.
Job data consistency strategies
In GitLab 13.11 and earlier, Sidekiq workers would always send database queries to the primary database node, both for reads and writes. This ensured that data integrity is both guaranteed and immediate, since in a single-node scenario it is impossible to encounter stale reads even for workers that read their own writes. If a worker writes to the primary, but reads from a replica, however, the possibility of reading a stale record is non-zero due to replicas potentially lagging behind the primary.
When the number of jobs that rely on the database increases, ensuring immediate data consistency
can put unsustainable load on the primary database server. We therefore added the ability to use
database load balancing for Sidekiq workers.
By configuring a worker’s
data_consistency field, we can then allow the scheduler to target read replicas
under several strategies outlined below.
Trading immediacy for reduced primary load
We require Sidekiq workers to make an explicit decision around whether they need to use the
primary database node for all reads and writes, or whether reads can be served from replicas. This is
enforced by a RuboCop rule, which ensures that the
data_consistency field is set.
When setting this field, consider the following trade-off:
- Ensure immediately consistent reads, but increase load on the primary database.
- Prefer read replicas to add relief to the primary, but increase the likelihood of stale reads that have to be retried.
To maintain the same behavior compared to before this field was introduced, set it to
:always, so
database operations will only target the primary. Reasons for having to do so include workers
that mostly or exclusively perform writes, or workers that read their own writes and who might run
into data consistency issues should a stale record be read back from a replica. Try to avoid
these scenarios, since
:always should be considered the exception, not the rule.
To allow for reads to be served from replicas, we added two additional consistency modes:
:sticky and
:delayed.
When you declare either
:sticky or
:delayed consistency, workers become eligible for database
load-balancing. In both cases, jobs are enqueued with a short delay.
This minimizes the likelihood of replication lag after a write.
The difference is in what happens when there is replication lag after the delay:
sticky workers
switch over to the primary right away, whereas
delayed workers fail fast and are retried once.
If they still encounter replication lag, they also switch to the primary instead.
If your worker never performs any writes, it is strongly advised to apply one of these consistency settings,
since it will never need to rely on the primary database node.
The table below shows the
data_consistency attribute and its values, ordered by the degree to which
they prefer read replicas and will wait for replicas to catch up:
In all cases workers read either from a replica that is fully caught up, or from the primary node, so data consistency is always ensured.
To set a data consistency for a worker, use the
data_consistency class method:
class DelayedWorker include ApplicationWorker data_consistency :delayed # ... end
For idempotent jobs, the deduplication is not compatible with the
data_consistency attribute set to
:sticky or
:delayed.
The reason for this is that deduplication always takes into account the latest binary replication pointer into account, not the first one.
There is an open issue to improve this.
feature_flag property
The
feature_flag property allows you to toggle a job’s
data_consistency,
which permits you to safely toggle load balancing capabilities for a specific job.
When
feature_flag is disabled, the job defaults to
:always, which means that the job will always use the primary database.
The
feature_flag property does not allow the use of
feature gates based on actors.
This means that the feature flag cannot be toggled only for particular
projects, groups, or users, but instead, you can safely use percentage of time rollout.
Note that since we check the feature flag on both Sidekiq client and server, rolling out a 10% of the time,
will likely results in 1% (
0.1
[from client]*0.1
[from server]) of effective jobs using replicas.
Example:
class DelayedWorker include ApplicationWorker data_consistency :delayed, feature_flag: :load_balancing_for_delayed_worker # ... end
Jobs
This example shows how to declare a job as being CPU-bound.
class CPUIntensiveWorker include ApplicationWorker # Declares that this worker will perform a lot of # calculations on-CPU. worker_resource_boundary :cpu # ... end
Determining
Free, newly-added
workers do not need to have weights specified. They can use the
default weight, which is 1.
Worker
Tests
Each Sidekiq worker must be tested using RSpec, just like any other class. These
tests should be placed in
spec/workers.
Side
Jobs need to be backward and forward compatible between consecutive versions of the application. Adding or removing an argument may cause problems during deployment before all Rails and Sidekiq nodes have the updated code.
Dep
This approach doesn’t require multiple releases if an existing worker already uses a parameter hash.
Use a parameter hash in the worker to allow future flexibility.
class ExampleWorker def perform(object_id, params = {}) # ... end end
Removing[1.0]. | https://docs.gitlab.com/14.3/ee/development/sidekiq_style_guide.html | 2021-11-27T04:42:00 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.gitlab.com |
Manage Disk Space
LIMIT LARGE FILE DOWNLOADS
You want to limit large file downloads because your hard drive storage is not as unlimited as your cloud storage.
PREMIUM FEATURE
The best way to limit large files from downloading is to download nothing at all! odrive's Progressive Sync allows you to do this by simply setting your auto download limit to "Never download". You can browse all your files and folders in the cloud without taking up any local hard disk space.
The auto download limit applies across your entire odrive - it is a global setting - whenever you expand a placeholder folder. Conversely, placeholder folders that have not been expanded, will not have any content downloaded.
See the autodownload limit documentation for more information.
PREMIUM FEATURE
You can also decide what is automatically downloaded on a per-folder basis by using the right-click Sync option on a folder. Be sure to choose the "Save and apply to new folders and files" option. Without that checkmark, the requested sync command will proceed with those settings as a one-time operation. If you want to exclude files over a certain size, use the slider to set the size limit.
See the Folder Sync Rule documentation for more information.
UNSYNC OLD FILES
PREMIUM FEATURE
For a quick run-through of the Unsync features, take a look at this video.
For any files or folders that you want to turn back into a placeholder, right click and select "Unsync".
Your computer storage is restored by the same amount of the folder content that has been unsynced. Files remain safe in the cloud and can be re-synced any time you need them.
To read more about how unsync makes unlimited cloud storage work without limits - Unsync is the Missing Link to Cloud Storage
We also have a short video on our YouTube channel which describes how to use unsync and auto unsync features.
Unsync is only available with an odrive Premium subscription
CONFIGURE AUTO UNSYNC
PREMIUM FEATURE
If you are like most people, you likely don't need or want a file after you have seen or used them. odrive lets you set a global policy that automatically unsyncs files after a day, week, or month.
If you have auto unsync turned on, you will get a notification that your computer storage is restored by the same amount of the file content that has been auto unsynced.".
To read more about how Unsync makes your unlimited cloud storage work without limits: Unsync is the Missing Link to Cloud Storage
We also have a short video on our YouTube channel which describes how to use unsync and auto unsync features.
Updated about 1 year ago | https://docs.odrive.com/docs/manage-disk-space | 2021-11-27T06:03:49 | CC-MAIN-2021-49 | 1637964358118.13 | [array(['https://www.odrive.com/images/usage-guide/odrive-mac-syncall-limited.png',
'Choose the file size limit on the fly'], dtype=object)
array(['https://www.odrive.com/images/usage-guide/odrive-mac-syncall-limited.png',
'Click to close... Choose the file size limit on the fly'],
dtype=object)
array(['https://www.odrive.com/images/usage-guide/odrive-mac-unsync-singlefolder.png',
"Unsync files you're no longer using to reclaim local disk space"],
dtype=object)
array(['https://www.odrive.com/images/usage-guide/odrive-mac-unsync-singlefolder.png',
"Click to close... Unsync files you're no longer using to reclaim local disk space"],
dtype=object)
array(['https://www.odrive.com/images/usage-guide/odrive-mac-unsync-folder.png',
'Unsyncing a folder removes reclaims disk space in bulk on your local computer'],
dtype=object)
array(['https://www.odrive.com/images/usage-guide/odrive-mac-unsync-folder.png',
'Click to close... Unsyncing a folder removes reclaims disk space in bulk on your local computer'],
dtype=object) ] | docs.odrive.com |
This is the Marker Class
Performance should be favored at the possible cost of drawing markers at a smaller size than requested.
Marker Size Units
A unitless linear scaling factor. A value of 2.0 will cause markers to be rendered twice as large. A value of 1.0 will result in a visually pleasing device-dependent marker size that is approximately 3% of the height of the outer window. A value of 0 will result in a single pixel marker for display-devices, or the smallest size supported by any other device.
Object space units ignoring any scaling components in modelling matrices.
Fraction of the height of the outermost window.
Fraction of the height of the local window.
Object space units including any scaling components in modelling matrices and cameras.
Points units typically used for text size. 1 point corresponds to 1/72 inch.
Number of pixels. | https://docs.techsoft3d.com/hps/latest/build/api_ref/cs/class_h_p_s_1_1_marker.html | 2021-11-27T05:51:15 | CC-MAIN-2021-49 | 1637964358118.13 | [] | docs.techsoft3d.com |
Metadata File Structure¶
The root of a metadata file is a key-value map. LOOT will recognise the following keys, none of which are required. Other keys may also be present, but are not processed by LOOT.
bash_tags
string list
A list of Bash Tags that are supported by the masterlist’s game. These Bash Tags are used to provide autocomplete suggestions in LOOT’s metadata editor.
globals
message list
A list of message data structures for messages that are displayed independently of any plugin.
plugins
plugin list and plugin set
The plugin data structures that hold all the plugin metadata within the file. It is a mixture of a list and a set because no non-regex plugin value may be equal to any other non-regex plugin value , but there may be any number of equal regex plugin values, and non-regex plugin values may be equal to regex plugin values.If multiple plugin values match a single plugin, their metadata is merged in the order the values are listed, and as defined in Merging Behaviour.
The message and plugin data structures are detailed in the next section.
Example¶
bash_tags: - 'C.Climate' - 'Relev' globals: - type: say content: 'You are using the latest version of LOOT.' condition: 'version("LOOT", "0.5.0.0", ==)' plugins: - name: 'Armamentarium.esm' tag: - Relev - name: 'ArmamentariumFran.esm' tag: - Relev - name: 'Beautiful People 2ch-Ed.esm' tag: - Eyes - Graphics - Hair - R.Relations | http://loot-api.readthedocs.io/en/latest/metadata/file_structure.html | 2017-12-11T07:20:20 | CC-MAIN-2017-51 | 1512948512584.10 | [] | loot-api.readthedocs.io |
This Data Transport Target, forms a “For Each” loop around the data set retrieved, and executes a linked Component.
The column values
for each row are represented as parameters and can be assigned directly to target columns in the linked component and sub-components. Also the values may be used as arguments in calculations and variables in PowerShell scripts.
Any supported connector can generate the data set for the
for each row loop .
Connectors include SharePoint, CSV, SQL Server, ODBC, OData etc.
Feedback
Thanks for your feedback.
Post your comment on this topic. | http://docs.surfray.com/ontolica-fusion/1/en/topic/for-each-row-execute-component | 2017-12-11T07:25:20 | CC-MAIN-2017-51 | 1512948512584.10 | [array(['http://manula.r.sizr.io/large/user/760/img/foreachrow.png', None],
dtype=object) ] | docs.surfray.com |
Real Time Models
The result of the ModelService.open(...) method is a Promise that resolves to a
RealTimeModel object. This is the main entry point for working with the model. The
RealTimeModel contains all of the data and metadata associated with the model. As models are changed over time, every individual mutation to the model is tracked. Every model has a version that starts at zero and is incremented each time a mutation occurs. Therefore, every state that the model moved through can be referenced by a version number.
Metadata
The main metadata methods for the model are:
// Returns the collection the model belongs to. var collectionId = model.collectionId(); // Returns the unique model id within the collection. var modelId = model.modelId(); // Returns the current version of the model. var version = model.version(); // Returns the most recent time the model was modified. var modifiedTime = model.time(); // Returns the time when the model was first created. var createdTime = model.createdTime();
Model Data
The data in the model is represented by an object tree much like a JSON object. All of the values in the model are subclasses of the RealTimeElement class. Each subclass corresponds to one of the supported data types. These include:
- RealTimeObject
- RealTimeArray
- RealTimeNumber
- RealTimeString
- RealTimeBoolean
- RealTimeDate
- RealTimeNull
- RealTimeUndefined
Each of these types have various methods for accessing and manipulating data. The root of every model is always a
RealTimeObject. This can be obtained as follows:
var rootObject = model.root();
Depending on the structure of your data you may also wish to directly access data that is nested more deeply in the object structure. To do this you can use the model path to directly access a nested element:
var childObject = model.elementAt("emails", 0);
The
elementAt() method will return the element at the specified path, or undefined if no such element exists. [is this true or does it return RealTimeUndefined???]
Each element also has a unique id within the model. An element can be accessed directly by its id (without using the path) using the
element(id) method.
const elementId = model.elementAt("emails", 0).id(); // later const element = model.element(elementId);
Closing
When you are done working with a model, you must call the close method to release the resources and to stop being notified of changes to the model. You can also tell if the model has already been closed with the
isOpen() method.
model.close().then(() => { console.log(isOpen()); // false });
Collaborators
You can determine which other user sessions have a model open as follows:
model.collaborators();
Events
See the API documentation for full details of the methods of the RealTimeModel. | https://docs.convergence.io/guide/models/real-time-models.html | 2017-12-11T07:38:55 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.convergence.io |
GitHub¶
Sentry provides two ways to integrate with GitHub.
GitHub SSO¶
GitHub Issues¶
Create issues in GitHub based on Sentry events.
- Go to the project settings page in Sentry that you’d like to link with GitHub
- Click All Integrations, find the GitHub integration in the list, and click configure
- Click Enable Plugin
- Fill in the required information and save
- The option to create and link GitHub issues will be displayed from your Sentry issue pages | https://docs.sentry.io/integrations/github/ | 2017-12-11T07:30:15 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.sentry.io |
Customize activities You can customize which fields appear in the activity formatter. You can add or remove fields from the list of activities that users can select when they open the activity filter. Before you beginRole required: personalize_form About this task Figure 1. Customize the fields that appear in the activity filter Procedure Scroll to the activity stream and. Note: The activities appear in alphabetical order, regardless of the order in the Selected column. made through Configure Activities. Related TasksAdd the activity formatter to a formEnable the Live Feed-Activity toggleCreate an activity formatterConfigure the email property for activity formatter | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/form_administration/task/t_CustomizeActivities.html | 2017-12-11T07:43:54 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.servicenow.com |
Unlock-Cs
Client Pin
Syntax
Unlock-CsClientPin [-Identity] <UserIdParameter> [-Force] [ a problem. Skype for Business Skype for Business Server.
That means that you will not be able to run the
Unlock
Unlock-CsClientPin cmdlet remotely against a Standard Edition server you will need to manually enable the firewall exceptions for SQL Server Express.
Examples
-------------------------- Example 1 --------------------------
Unlock-CsClientPin -Identity "litwareinc\kenmyer"
In Example 1, the
Unlock-CsClientPin cmdlet is used to unlock the PIN belonging to the user litwareinc\kenmyer.
-------------------------- Example 2 --------------------------
Get-CsUser | Get-CsClientPinInfo | Where-Object {$_.IsLockedOut -eq $True} | Unlock-CsClientPin
In Example 2, the
Unlock-CsClientPin cmdlet is used to unlock all the PINs that are currently locked.
To do this, the
Get-CsUser cmdlet is first used to return a collection of all the users who have been enabled for Skype for Business Server.
That collection is then piped to the
Get-CsClientPinInfo cmdlet, which is used in conjunction with the
Where-Object cmdlet to select only those users where the IsLockedOut property is equal to (-eq) to True ($True).
The resulting filtered collection is then piped to the
Unlock-CsClientPin cmdlet, which unlocks the PIN for each user whose PIN was previously locked.
Required Parameters
Identity of the user account for which the PIN should be unlocked.). User Identities can also be referenced by using the user's Active Directory distinguished name.
In addition,.
Suppresses the display of any non-fatal error message that might occur when running the command.
Describes what would happen if you executed the command without actually executing the command.
Inputs
String value or Microsoft.Rtc.Management.ADConnect.Schema.ADUser object.
The
Unlock-CsClientPin cmdlet accepts pipelined input of string values representing the Identity of a user account.
The cmdlet also accepts pipelined input of user objects.
Outputs
The
Unlock-CsClientPin cmdlet does not return a value or object.
Instead, the cmdlet configures one or more instances of the Microsoft.Rtc.Management.UserPinService.PinInfoDetails object. | https://docs.microsoft.com/en-us/powershell/module/skype/Unlock-CsClientPin?view=skype-ps | 2017-12-11T08:08:29 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.microsoft.com |
Recursive Functions
Any function in a C program can be called recursively; that is, it can call itself. The number of recursive calls is limited to the size of the stack. See the /STACK (Stack Allocations) (/STACK) linker option for information about linker options that set stack size. Each time the function is called, new storage is allocated for the parameters and for the auto and register variables so that their values in previous, unfinished calls are not overwritten. Parameters are only directly accessible to the instance of the function in which they are created. Previous parameters are not directly accessible to ensuing instances of the function.
Note that variables declared with static storage do not require new storage with each recursive call. Their storage exists for the lifetime of the program. Each reference to such a variable accesses the same storage area.
Example
This example illustrates recursive calls:
int factorial( int num ); /* Function prototype */ int main() { int result, number; . . . result = factorial( number ); } int factorial( int num ) /* Function definition */ { . . . if ( ( num > 0 ) || ( num <= 10 ) ) return( num * factorial( num - 1 ) ); } | https://docs.microsoft.com/en-us/cpp/c-language/recursive-functions | 2017-12-11T07:57:18 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.microsoft.com |
All classes to support kinematic families. More...
All classes to support kinematic families.
The Kinematic Families classes range from the basic building blocks (KDL::Joint and KDL::Segment) and their interconnected kinematic structures (KDL::Chain, KDL::Tree and KDL::Graph), to the solver algorithms for the kinematics and dynamics of particular kinematic families.
A kinematic family is a set of kinematic structures that have similar properties, such as the same interconnection topology, the same numerical or analytical solver algorithms, etc. Different members of the same kinematic family differ only by the concrete values of their kinematic and dynamic properties (link lengths, mass, etc.).
Each kinematic structure is built from one or more Segments (KDL::Segment). A KDL::Chain is a serial connection of these segments; a KDL:Tree is a tree-structured interconnection; and a KDL:Graph is a kinematic structure with a general graph topology. (The current implementation supports only KDL::Chain.)
A KDL::Segment contains a KDL::Joint and an offset frame ("link length", defined by a KDL::Frame), that represents the geometric pose between the KDL::Joint on the previous segment and its own KDL::Joint.
A list of all the classes is available on the modules page: KinFam | http://docs.ros.org/indigo/api/orocos_kdl/html/group__KinematicFamily.html | 2017-12-11T07:31:50 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.ros.org |
ARS: Python robotics simulator¶
Note
This software and its documentation are currently under development so they will be subject to changes. Contributions are welcome!
Introduction¶
Welcome! This is the documentation of ARS 0.5, last updated on February 28, 2017.
ARS is written in a marvelous programming language called Python. One of the many features that make it great (and popular) is its documentation. Taking that into consideration, many sections herein were taken from the official Python documentation.
So simple¶()
Official website¶.
Contents¶
- Installation
- About ARS
- External links
- Software reference
- Changelog
- Language election
- Python
- Developers FAQ
- About the documentation | https://ars-project.readthedocs.io/en/latest/ | 2020-10-20T06:42:41 | CC-MAIN-2020-45 | 1603107869933.16 | [] | ars-project.readthedocs.io |
Editing Android XML Layout Files in Android Studio
Elements in Visual Studio provides integration with Android Studio, Google's official IDE for Android development that is used by Java language developers, to allow you to edit XML Layout Files and other Android-specific XML resource files (such as
AndroidManifest.xml or
strings.xml).
This allows you take full advantage of Google's visual designers for Android.
With an Android project open, simply right-click the project node in the Solution Explorer and choose "Edit User Interface Files in Android Studio":
Behind the scenes, Elements will create a wrapper Android Studio project folder with the relevant files, and then launch the Android Studio application (provided it is installed).
In Android Studio, you will see all the projects's XML files and can edit them with all the tools Android Studio provides, including the visual designers. As you make changes and save them in Android Studio, they will automatically reflect back into your Elements project inside Visual Studio.
Connecting Code and UI
The way Android user interfaces work, the UI you design in Android Studio has no direct knowledge of the classes in your code. Instead, your code has a one-way connection to all the resources in your XML files via the static
R Class, covered in more detail in the The
R Class topic.
As you design your UI and add new controls or other resources, the
R class will gain new properties that allow you to access these resources from your code.
See Also
- Working with Android Layout Files
- The
RClass
- Android XML Files
Version Notes
- Integration with Android Studio for visual design of Android Layout files is new in Version 8.1. | https://docs.elementscompiler.com/VisualStudio/Designers/EditingAndroidXMLFilesInAndroidStudio/ | 2020-10-20T05:47:52 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['../../../VisualStudio/Designers/LaunchAndroidStudio.png', None],
dtype=object) ] | docs.elementscompiler.com |
Differences in Columns Between SQL and Data Warehouse Manager
There are two key differences between columns created in the SQL Report Builder and those created using the Data Warehouse Manager: one is the dependency on update cycles, the other is how columns are saved in your account.
Columns in the SQL Report Builder
Columns are not dependent on update cycles, so you no longer have to wait for one to complete before you can iterate on your column. If you make a mistake, it only takes a few keystrokes to correct it - no more waiting for two updates to wrap up before you can get back to work.
It’s important to note that the columns you create using the SQL editor are not saved to your Data Warehouse. You’ll always have access to the query containing the column, but if you want to use the column in more than one report, you’ll have to recreate it in the query for each report. This means that columns created using the SQL editor cannot be used in the traditional Report Builder.
Columns in the Data Warehouse Manager
Columns are dependent on update cycles, so a full cycle must complete before they can be edited. These columns are saved to the Data Warehouse Manager and can be used in the traditional Report Builder or SQL Report Builder. | https://docs.magento.com/mbi/data-analyst/dev-reports/columns-sql-dwm.html | 2020-10-20T06:28:50 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.magento.com |
, macOS, React Native, tvOS, UWP, WPF/WinForms, and Xamarin., Windows, macOS, and tvOS.
Export
Continuously Export all your Analytics data into Azure Blob Storage or Application Insights. This will allow you to keep your data as long as you need, as well as provide you with further insights into your data with powerful filtering, data visualizations, and query capabilities. | https://docs.microsoft.com/en-gb/appcenter/analytics/ | 2020-10-20T07:34:19 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.microsoft.com |
...
- A join is a data operation in which two or more tables or datasets are merged into one based on the presence of matching values in one or more key columns that you specify.
- In , the joined-in dataset is unmodified by the operation.
- There are multiple types of joins, which generate very different results. For more information, see Join Types.
...
- In the Transformer page, open the recipe panel. See Recipe Panel.
- In the Search panel, enter
join datasets.
- Joins are specified through a wizard in a special panel. For more information, see Join PanelWindow. | https://docs.trifacta.com/pages/diffpagesbyversion.action?pageId=109906179&selectedPageVersions=3&selectedPageVersions=4 | 2020-10-20T05:49:25 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.trifacta.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.