3.16. Getting notified when selectors cannot find a value
The plugin can send you an email when a CSS selector’s content is empty. To configure the notifications, do the following:
- Go to Site Settings Page
- If you want to get notifications for category pages, enable Category Tab and go to CSS selectors for empty value notification setting. If you want to get notifications for post pages, enable Post Tab and go to CSS selectors for empty value notification setting.
- Define CSS selectors. When the elements found by these CSS selectors do not contain any value, a notification email will be sent to you.
- For notifications to be sent, they should be activated. Go to General Settings Page and activate Notifications Tab.
- Check Notifications are active? setting’s checkbox.
- In the Email addresses setting, enter the email addresses to which the notifications should be sent.
- Save the general settings.
After these steps are done, the plugin will send email notifications when the elements found by the defined CSS selectors do not contain any value.
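For illustration, the values entered into this setting are plain CSS selectors. The selectors below are hypothetical and would need to match the markup of the target site:

```
.entry-content p
h1.post-title
.post-meta .author
```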
Tip
If you already activated the notifications by checking Notifications are active? setting’s checkbox before, you do not need to do it again. You can just configure the CSS selectors. | https://docs.wpcontentcrawler.com/1.11/guides/getting-notified-when-empty-selectors.html | 2022-06-25T07:37:35 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.wpcontentcrawler.com |
Electrochemical Properties of Peat Particulate Organic Matter on a Global Scale: Relation to Peat Chemistry and Degree of Decomposition
Teickner, Henning; Gao, Chuanyu; Knorr, Klaus‐Holger, 2022: Electrochemical Properties of Peat Particulate Organic Matter on a Global Scale: Relation to Peat Chemistry and Degree of Decomposition. In: Global Biogeochemical Cycles, Band 36, 2, DOI: 10.1029/2021GB007160.
Methane production in peatlands is controlled by the availability of electron acceptors for microbial respiration, including peat dissolved organic matter (DOM) and particulate organic matter (POM). Despite the much larger mass of POM in peat, knowledge on the ranges of its electron transfer capacities—electron accepting capacity (EAC) and electron donating capacity (EDC)—is scarce in comparison to DOM and humic and fulvic acids. Moreover, it is unclear how peat POM chemistry and decomposition relate to its EAC and EDC. To address these knowledge gaps, we compiled peat samples with varying carbon contents from mid to high latitude peatlands and analyzed their EAC_POM and EDC_POM, element ratios, decomposition indicators, and relative amounts of molecular structures as derived from mid infrared spectra. Peat EAC_POM and EDC_POM are smaller (per gram carbon) than EAC and EDC of DOM and terrestrial and aquatic humic and fulvic acids and are highly variable within and between sites. Both are small in highly decomposed peat, unless it has larger amounts of quinones and phenols. Element ratio-based models failed to predict EAC_POM and EDC_POM, while mid infrared spectra-based models can predict peat EAC_POM to a large extent, but not EDC_POM. We suggest a conceptual model that describes how vegetation chemistry and decomposition control polymeric phenol and quinone contents as drivers of peat EDC_POM and EAC_POM. The conceptual model implies that we need mechanistic models or spatially resolved measurements to understand the variability in peat EDC_POM and EAC_POM and thus its role in controlling methane formation.
Plain Language Summary: Peatlands accumulated large amounts of carbon via photosynthesis and slow decomposition of senesced plant material. Microorganisms within the peat form methane. For this reason, peatlands are important global sources of the greenhouse gas methane and therefore can contribute to climate change. In order to produce methane, the microorganisms have to transfer electrons between compounds in respiration processes. Only recently has it been found that the peat itself can reversibly transfer electrons and that its capacities to reversibly accept (electron accepting capacity, EAC) and reversibly donate (electron donating capacity, EDC) electrons are large. We investigated which conditions favor large or small EAC and EDC of peat so that we can better explain methane formation. We argue that vegetation and decomposition control the amount of phenols and quinones—molecules in the peat that presumably are responsible for most of the peat's EAC and EDC. The EAC and EDC probably are largest for peat formed from vegetation rich in quinones and phenols, such as shrubs, and smaller for other vegetation types, for example, certain mosses. Intense decomposition may reduce both the EAC and EDC.
Key Points:
- Peat particulate organic matter electron accepting and donating capacities per gram of carbon are smaller than for humic and fulvic acids.
- Both capacities are small in highly decomposed peat, unless it has larger amounts of quinones and phenols.
- We explain these patterns with parent vegetation chemistry and conditions during decomposition.
Subjects: peat chemistry, electron accepting capacity, electron donating capacity, particulate organic matter, decomposition, mid infrared spectroscopy
This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. | https://e-docs.geo-leo.de/handle/11858/9972 | 2022-06-25T08:32:13 | CC-MAIN-2022-27 | 1656103034877.9 | [] | e-docs.geo-leo.de |
Decentralized Finance (DeFi)
The Vault
Project SEED introduces The Vault to create a balance between the supply side and demand. We allocate a portion of Project SEED's community token supply for Staking Rewards and LP Staking Rewards. This system provides incentives for holders to hold their tokens from the early stages of the project.
We use The Vault in our Project SEED Ecosystem as our mechanism to maintain the circulation supply of the token while maximising the utility of the token for the users.
There are two types of Vault:
1. Single-Token: SHILL Token Staking
2. Liquidity Token: Liquidity Provider Token Staking (e.g., SHILL-SOL, SHILL-BNB, SHILL-USDC, SHILL-BUSD, and others).
Liquidity Provider Token can be obtained by putting liquidity in the supported Decentralized Exchanges (DEXs).
Both Vault types have two types of Staking Periods:
Flexible Period
Users can withdraw (unstake) their tokens on demand whenever they need them. Unstaking from the Flexible Period still involves an unlocking (withdrawal) time.
Locked Period
To incentivise strong holders, players, and believers who are willing to hold for a longer period of time, tokens can be locked for up to 12 months. This locking mechanism gives additional rewards depending on how long the token is locked.
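As a rough sketch of how a lock-duration bonus can work (the multiplier values and APR below are hypothetical, not Project SEED's actual parameters):

```python
# Hypothetical sketch of a lock-duration reward bonus; all values are illustrative only.
LOCK_BONUS = {0: 1.00, 3: 1.10, 6: 1.25, 12: 1.50}  # months locked -> reward multiplier

def yearly_staking_reward(staked_amount: float, base_apr: float, months_locked: int) -> float:
    """Yearly reward for a stake, scaled by how long the tokens are locked."""
    multiplier = LOCK_BONUS.get(months_locked, 1.0)
    return staked_amount * base_apr * multiplier

print(yearly_staking_reward(1_000, 0.12, 12))  # 180.0 with these example values
```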
Staking
In the simplest terms, staking can be understood as earning passive income from holding cryptocurrency, which in the Project SEED ecosystem is the SHILL token. It is a financial instrument that lets players gain benefits even without entering the game. Players who stake their SHILL Token will be rewarded with various benefits within the game:
Random “Supplies” distribution
DAO Proposal
Dungeon Creator Capability
Increase players’ level to keep active Zeds
APR for staking rewards
SEEDex
SEEDex (the Project SEED Decentralized Exchange) will be built with an AMM (Automated Market Maker) implementation on the blockchain, combined with a fully decentralized CLOB (Central Limit Order Book).
We implement the CLOB concept as the backbone for the AMM mechanism to provide high-speed trades and access to on-chain liquidity across multiple DEXs.
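As a minimal sketch of the AMM side only (a generic constant-product pool; SEEDex's actual pricing also involves its on-chain CLOB, and the numbers below are illustrative):

```python
# Generic constant-product AMM quote (x * y = k); illustrative only.
def amm_quote(reserve_in: float, reserve_out: float, amount_in: float, fee: float = 0.003) -> float:
    """Output tokens received for amount_in of the input token."""
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    # Keep the product of the reserves constant to derive the new output reserve.
    new_reserve_out = (reserve_in * reserve_out) / new_reserve_in
    return reserve_out - new_reserve_out

print(amm_quote(100_000, 50_000, 1_000))  # ~493.6 output tokens with these example reserves
```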
As needed, you can modify the logging levels for the following services of the platform via configuration parameters. These settings should only be modified when you are debugging an issue. After the issue is resolved, you should set the logging level back to its original value.
For more information on these services, see System Services and Logs.
For more information on logging levels, see the documentation for the underlying logging framework.
You can test the configuration using the following command:
WebApp
The WebApp manages loading of data from the supported connections into the front-end web interface.
Supported levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
JSData
The JSData logging options do not apply to a specific service. Instead, they are used by various services to log activities related to the platform and its interactions with various connections and running environments.
[ aws . accessanalyzer ]
Retrieves a list of analyzers.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-analyzers is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument.
When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: analyzers
list-analyzers [--type <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--type (string)
The type of analyzer.
Possible values:
- ACCOUNT
- ORGANIZATION
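For example, the following invocation restricts the listing to account-level analyzers; the fields returned for each analyzer are described below:

```
aws accessanalyzer list-analyzers --type ACCOUNT
```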
analyzers -> (list)
The analyzers retrieved.
(structure)
Contains information about the analyzer.
arn -> (string) The ARN of the analyzer.
createdAt -> (timestamp) A timestamp for the time at which the analyzer was created.
lastResourceAnalyzed -> (string) The resource that was most recently analyzed by the analyzer.
lastResourceAnalyzedAt -> (timestamp) The time at which the most recently analyzed resource was analyzed.
name -> (string) The name of the analyzer.
status -> (string) The status of the analyzer. An Active analyzer successfully monitors supported resources and generates new findings. The analyzer is Disabled when a user action, such as removing trusted access for Identity and Access Management Access Analyzer from Organizations, causes the analyzer to stop generating new findings. The status is Creating when the analyzer creation is in progress and Failed when the analyzer creation has failed.
statusReason -> (structure)
The statusReason provides more details about the current status of the analyzer. For example, if the creation for the analyzer fails, a Failed status is returned. For an analyzer with organization as the type, this failure can be due to an issue with creating the service-linked roles required in the member accounts of the Amazon Web Services organization.
code -> (string) The reason code for the current status of the analyzer.
tags -> (map)
The tags added to the analyzer.
key -> (string)
value -> (string)
type -> (string) The type of analyzer, which corresponds to the zone of trust chosen for the analyzer.
nextToken -> (string)
A token used for pagination of results returned. | https://docs.aws.amazon.com/cli/latest/reference/accessanalyzer/list-analyzers.html | 2022-06-25T08:15:36 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.aws.amazon.com |
Databinding the Slider Control (VB)
The Slider control in the AJAX Control Toolkit provides a graphical slider that can be controlled using the mouse. It is possible to bind the current position of the slider to another ASP.NET control.
Overview
The Slider control in the AJAX Control Toolkit provides a graphical slider that can be controlled using the mouse. It is possible to bind the current position of the slider to another ASP.NET control.
Steps
In order to activate the functionality of ASP.NET AJAX and the Control Toolkit, the ScriptManager control must be put anywhere on the page (but within the <form> element):
<asp:ScriptManager ID="ScriptManager1" runat="server" />
Next, add two TextBox controls to the page. One will be transformed into a graphical slider, and the other one will hold the position of the slider.
<asp:TextBox ID="TextBox1" runat="server" />
<asp:TextBox ID="TextBox2" runat="server" />
The next step is already the final step. The SliderExtender control from the ASP.NET AJAX Control Toolkit makes a slider out of the first text box and automatically updates the second text box when the slider position changes. In order for that to work, the SliderExtender's TargetControlID attribute must be set to the ID of the first text box; the BoundControlID attribute must be set to the ID of the second text box.
<ajaxToolkit:SliderExtender ID="SliderExtender1" runat="server" TargetControlID="TextBox1" BoundControlID="TextBox2" />
As you can see in the browser, the data binding works in both directions: entering a new value in the text box updates the slider's position. If you make the second text box read only, you may add a weak protection to the text field so that it is harder for the user to manually update the value in there.
Slider and text box are in sync
[mrpt-graphs]
Graphs data structures (directed graphs, trees, graphs of pose constraints), graphs algorithms
Library mrpt-graphs
This C++ library is part of MRPT and can be installed in Debian-based systems with:
sudo apt install libmrpt-graphs-dev
Read also how to import MRPT into your CMake scripts.
mrpt::graphs::CNetworkOfPoses2D -> Edges are 2D poses (x,y,phi), without uncertainty.
mrpt::graphs::CNetworkOfPoses3D -> Edges are 3D poses (x,y,z,yaw,pitch,roll), without uncertainty.
mrpt::graphs::CNetworkOfPoses2DInf -> Edges are 2D poses (x,y,phi), with an inverse covariance (information) matrix.
mrpt::graphs::CNetworkOfPoses3DInf -> Edges are 3D poses (x,y,z,yaw,pitch,roll), with an inverse covariance (information) matrix.
Library contents
// namespaces
namespace mrpt::graphs::detail;
namespace mrpt::graphs;
namespace mrpt::graphs::detail;

// structs
struct mrpt::graphs::TGraphvizExportParams;
template <class GRAPH_T>
struct mrpt::graphs::detail::THypothesis;
struct mrpt::graphs::detail::TMRSlamEdgeAnnotations;
struct mrpt::graphs::detail::TMRSlamNodeAnnotations;
struct mrpt::graphs::detail::TNodeAnnotations;
struct mrpt::graphs::detail::TNodeAnnotationsEmpty;

// classes
template <typename T>
class mrpt::graphs::CAStarAlgorithm;
template <class TYPE_GRAPH, class MAPS_IMPLEMENTATION = mrpt::containers::map_traits_stdmap>
class mrpt::graphs::CDijkstra;
template <class TYPE_EDGES, class EDGE_ANNOTATIONS = detail::edge_annotations_empty>
class mrpt::graphs::CDirectedGraph;
template <class TYPE_EDGES = uint8_t>
class mrpt::graphs::CDirectedTree;
template <class CPOSE, class MAPS_IMPLEMENTATION = mrpt::containers::map_traits_stdmap,
          class NODE_ANNOTATIONS = mrpt::graphs::detail::TNodeAnnotationsEmpty,
          class EDGE_ANNOTATIONS = mrpt::graphs::detail::edge_annotations_empty>
class mrpt::graphs::CNetworkOfPoses;

// global functions
void mrpt::graphs::registerAllClasses_mrpt_graphs();
Global Functions
void mrpt::graphs::registerAllClasses_mrpt_graphs()
Forces manual RTTI registration of all serializable classes in this namespace.
Should never be required to be explicitly called by users, except if building MRPT as a static library. | https://docs.mrpt.org/reference/latest/group_mrpt_graphs_grp.html | 2022-06-25T07:57:47 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.mrpt.org |
Administration
The Django framework offers a rich administration (or short: admin) interface, which allows you to directly manipulate most of the entries in the database. Obviously, only users with the correct permissions are allowed to use this interface. The user created during the installation process using ./manage.py createsuperuser has this superuser status.
The admin interface is available under the link Admin in the navigation bar. It will only be needed on rare occasions, since most of the configuration of the questionnaire and the other RDMO functions can be done using the more user-friendly Management interface described in the following chapter of this documentation.
That being said, the admin interface is needed, especially after installation, to set the title and URL of the site, to configure users and groups, to configure the connection to OAUTH providers, and to create tokens to be used with the API. | https://rdmo.readthedocs.io/en/latest/administration/index.html | 2022-06-25T07:59:12 | CC-MAIN-2022-27 | 1656103034877.9 | [] | rdmo.readthedocs.io |
user: "{{ env_var('DBT_USER') }}"
password: "{{ env_var('DBT_PASSWORD') }}"
env_var accepts a second, optional argument for a default value, like so:
...
models:
  jaffle_shop:
    +materialized: "{{ env_var('DBT_MATERIALIZATION', 'view') }}"
This can be useful to avoid compilation errors when the environment variable isn't available.
Special env var prefixes
If an environment variable is named with one of two prefixes, it will have special behavior in dbt:
- DBT_ENV_CUSTOM_ENV_: Any env var named with this prefix will be included in dbt artifacts, in a metadata.env dictionary, with its prefix-stripped name as its key.
- DBT_ENV_SECRET_: Any env var named with this prefix will be scrubbed from dbt logs and replaced with *****, any time its value appears in those logs (even if the env var was not called directly). While dbt already avoids logging database credentials, this is useful for other types of secrets, such as git tokens for private packages, or AWS keys for querying data in S3.
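As a small sketch, the values captured from DBT_ENV_CUSTOM_ENV_-prefixed variables can be read back from a generated artifact such as target/manifest.json (the variable name MY_VAR below is hypothetical):

```python
# Minimal sketch: read the env metadata that dbt writes into its artifacts.
# Assumes dbt was run with DBT_ENV_CUSTOM_ENV_MY_VAR set (hypothetical name).
import json

with open("target/manifest.json") as f:
    manifest = json.load(f)

# Prefix-stripped names appear as keys, e.g. {"MY_VAR": "some-value"}.
print(manifest["metadata"]["env"])
```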
dbt Cloud Usage
If you are using dbt Cloud, you must adhere to the naming conventions for environment variables. Environment variables in dbt Cloud must be prefixed with DBT_ (including DBT_ENV_CUSTOM_ENV_ or DBT_ENV_SECRET_). Environment variable keys are uppercased and case sensitive. When referencing {{ env_var('DBT_KEY') }} in your project's code, the key must match exactly the variable defined in dbt Cloud's UI.
dgl.knn_graph
dgl.knn_graph(x, k)
Construct a graph from a set of points according to k-nearest-neighbor (KNN) and return.
The function transforms the coordinates/features of a point set into a directed homogeneous graph. The coordinates of the point set is specified as a matrix whose rows correspond to points and columns correspond to coordinate/feature dimensions.
The nodes of the returned graph correspond to the points, where the predecessors of each point are its k-nearest neighbors measured by the Euclidean distance.
If x is a 3D tensor, then each submatrix will be transformed into a separate graph. DGL then composes the graphs into a large graph of multiple connected components.
- Parameters
  - x (Tensor) – The point coordinates; rows correspond to points and columns to coordinate/feature dimensions.
  - k (int) – The number of nearest neighbors per node.
- Returns
  The constructed graph. The node IDs are in the same order as x.
  The returned graph is on CPU, regardless of the context of input x.
- Return type
  DGLGraph
Examples
The following examples use the PyTorch backend.
>>> import dgl
>>> import torch
When x is a 2D tensor, a single KNN graph is constructed.
>>> x = torch.tensor([[0.0, 0.0, 1.0],
...                   [1.0, 0.5, 0.5],
...                   [0.5, 0.2, 0.2],
...                   [0.3, 0.2, 0.4]])
>>> knn_g = dgl.knn_graph(x, 2)  # Each node has two predecessors
>>> knn_g.edges()
(tensor([0, 1, 2, 2, 2, 3, 3, 3]), tensor([0, 1, 1, 2, 3, 0, 2, 3]))
When x is a 3D tensor, DGL constructs multiple KNN graphs and then composes them into a graph of multiple connected components.
>>> x1 = torch.tensor([[0.0, 0.0, 1.0],
...                    [1.0, 0.5, 0.5],
...                    [0.5, 0.2, 0.2],
...                    [0.3, 0.2, 0.4]])
>>> x2 = torch.tensor([[0.0, 1.0, 1.0],
...                    [0.3, 0.3, 0.3],
...                    [0.4, 0.4, 1.0],
...                    [0.3, 0.8, 0.2]])
>>> x = torch.stack([x1, x2], dim=0)
>>> knn_g = dgl.knn_graph(x, 2)  # Each node has two predecessors
>>> knn_g.edges()
(tensor([0, 1, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 6, 6, 7, 7]),
 tensor([0, 1, 1, 2, 3, 0, 2, 3, 4, 5, 6, 7, 4, 6, 5, 7]))
Date: Sun, 20 Oct 1996 23:16:12 +0100
From: [email protected] (Nik Clayton)
To: [email protected]
Subject: Building a release tape
Message-ID: <[email protected]>
Could someone (Jordan?) explain how the files on a release tape should be set up?

I realise the method is documented in the INSTALL.TXT file, but scanning the questions archive shows the following message on the subject. And there is no followup indicating whether

a) Dennis had the right idea, but gave the wrong args to tar
b) He did the right thing, something else is causing his SCSI tape installation to fail
c) The instructions in INSTALL.TXT are wrong
d) Something else.

Any advice gratefully received.

N

> Date: Sun, 28 Jul 1996 14:07:21 -0500 (CDT)
> From: "Dennis R. Conley" <[email protected]>
> Subject: how to make the scsi install tape
>
> Installation from scsi tape always fails. The drive moves the tape around
> from quite awhile, so I assume the failure is due to my having made the
> tape incorrectly. From INSTALL.TXT:
> [...]
> | cd /where/you/have/your/dists
> | tar cvf /dev/rwt0 (or /dev/rst0) dist1 .. dist2
>
> I assume "dist1", "dist2" are directories, e.g.
>
> % cd /src/tmp/freebsd/2.1.5-RELEASE
> % tar c ./bin ./des ./doc ./src
>
> Unfortunately, this didn't work. Nor did:
>
> % cd /src/tmp/freebsd/2.1.5-RELEASE
> % cd bin ; tar c .
> % cd ../des ; tar r .
> % cd ../doc ; tar r .
>
> So I'm lost: which "files" are tar'd onto the tape ( or, rather, from which
> part of the 2.1.5-RELEASE path is the tar performed )?

--
--+=[ Blueberry Hill Blueberry New Media ]=+--
--+=[ 1/9 Chelsea Harbour Design Centre, ]=+--
--+=[ [email protected] London, England, SW10 0XE ]=+--
--+=[ This isn't much of a .sig, but then, that wasn't much of a message ]ENTP
Configuring the transformation
When you configure the web services transformation, you connect a source object, configure properties for the transformation, map incoming fields to requested fields for the web service, and map the response to output fields to create one or more success groups. Data Integration creates a fault group automatically, but you can choose whether to map it to the output fields.
1. Create a mapping and add the source objects you want to work with.
2. Add a Web Services transformation to the canvas.
3. Connect the source to the Web Services transformation.
4. Select the business service and operation in the Web Service tab.
5. On the Request Mapping and Response Mapping tabs, create the field mappings between the source fields and the web service request. For an illustration of the mapping process, see Web Services transformation example.
6. On the Output Fields tab, review the success groups, fault group, and field details. You can edit the field metadata, if needed. The success groups contain the SOAP response from the web service. The fault group contains SOAP faults with the fault code, string, and object name that caused the fault to occur.
7. Define the advanced properties.
8. Save and run the mapping.
For additional information about the mapping process, see the following sections:
Advanced properties
The following table describes the properties available for the Web Services transformation from the Advanced tab:
Property
Description
Cache Size
Memory available for the web service request and response. If the web service request or response contains a large number of rows or columns, you might want to increase the cache size. Default is 100 KB.
Allow Input Flush
The mapping task creates XML when it has all of the data for a group. When enabled, the mapping task flushes the XML after it receives all of the data for the root value. When not enabled, the mapping task stores the XML in memory and creates the XML after it receives data for all the groups.
You cannot select the option to allow input flush if you are connecting to multiple source objects.
Transaction Commit Control
Control to commit or roll back transactions based on the set of rows that pass through the transformation. Enter an IIF function to specify the conditions to determine whether the mapping task commits, rolls back, or makes no transaction changes to the row. Use the transaction commit control if you have a large amount of data and you want to control how it is processed.
You cannot configure a transaction commit control if you are connecting to multiple source objects.
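As an illustration only, a condition entered for this property could look like the following; the NewOrderFlag field is hypothetical, and the sketch assumes the standard Informatica transaction control constants (TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION) apply to this property:

```
IIF(NewOrderFlag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)
```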
Mapping incoming fields
When you define the request mapping, you can configure relationships between more than one source object. You configure relationships between the response mapping and the output fields separately.
When you map incoming fields, note the following guidelines:
If you need to apply an expression to incoming fields, use an Expression transformation upstream of the Web Services transformation.
To ensure that a web service request has all the required information, map incoming derived type fields to fields in the request structure.
You can map the incoming fields to the request mapping as shown in the following image:
Drag each incoming field onto the node in the request structure where you want to map it.
Working with multiple source objects
If you have multiple sources, note the following requirements:
Any source fields you want to designate as primary key and foreign key must use the data type Bigint or String. If needed, you can edit the metadata in the Source transformation.
If the Bigint data type is not available for a source, you can convert the data with an Expression transformation upstream of the Web Services transformation.
Ensure that the source data is sorted on the primary key for the parent object and sorted on the foreign key and primary key for child objects.
Map one of the fields or a group of fields to the recurring elements. In the incoming fields, you can see where each recurring element is mapped.
Map at least one field from each child object to the request structure.
You must map fields from the parent object to fields in the request structure that are higher in the hierarchy than fields from the child object.
For child objects, select a primary key and a foreign key.
On the Incoming Fields tab, select the source object you want to designate as the parent object.
Right-click on an incoming field in the tree to designate the primary key and foreign key.
For the foreign key, specify the parent object.
Do not choose a foreign key for the parent object.
Mapping outgoing fields
On the Response Mapping tab, you map the response structure to the output fields you want to use. You can choose relational or denormalized format for the output fields.
When you choose Relational, the transformation generates the following output groups:
One output group for the parent element.
One output group for each element in which the cardinality is greater than one.
FaultGroup, if it is supported by the connection type you are using.
When you choose Denormalized, the transformation generates the following output groups:
Output group for the parent element. In denormalized output, the element values from the parent group repeat for each child element.
FaultGroup, if it is supported by the connection type you are using.
Right-click the node in the response structure where you want to map the response to output fields. You can choose to map all descendants or only the immediate children, as shown in the following image:
XS2A App
The Open Banking XS2A App handles all consumer interactions. It is used whenever the API flow requires an input from the consumer, such as selecting the consumer bank or the strong customer authentication towards the bank. In addition, depending on the selected flow, the XS2A App may be used to select a specific bank account or to authorize an account-to-account payment.
Usage
Initializing the App and setting the Configuration
To open the XS2A App, the provided library has to be included via a script tag.
<script>
  window.onXS2AReady = XS2A => {
    // configure once for all flows (optional)
    window.XS2A.configure({
      autoClose: true,
      hideTransitionOnFlowEnd: true,
      openPoliciesInNewWindow: true,
      psd2RedirectMethod: 'newWindow',
      theme: 'default',
      unfoldConsentDetails: false,
      onLoad: () => {
        console.log('onLoad called')
      },
      onReady: () => {
        console.log('onReady called')
      },
      onAbort: () => {
        console.log('onAbort called')
      },
      onError: error => {
        console.log('onError called', error)
      },
      onFinished: () => {
        console.log('onFinished called')
      },
      onClose: () => {
        console.log('onClose called')
      },
    })
  }
</script>
<script src=""></script>
A preconfigured window.onXS2AReady callback will be called once the launcher has been set up. Thereafter the XS2A object is attached to the window (window.XS2A). It's recommended to define a default configuration, which will be used for all startFlow function calls, in this callback.
Starting a Flow
A Flow can be started using the startFlow method. After retrieving a client token from our XS2A API it has to be passed to the library.
try {
  // Pass the client token retrieved from the XS2A API (placeholder below).
  window.XS2A.startFlow('<client-token>', {
    onLoad: () => {
      console.log('onLoad called #2')
    },
  })
} catch (e) {
  // Handle error.
}
If a configuration object was provided as the optional second parameter, it's merged with the default configuration.
Closing the App
The App can be closed at any time using the close() function.
// close the xs2a app modal with this function immediately
window.XS2A.close()
Configuration and Callbacks
All configuration options in detail:
Callbacks that are available in the configuration object in detail:
The error object has the following structure:
{
  "category": String, // represents the category of the error
  "message": ?String, // text to be displayed to the user (optional)
  "reason": ?Error, // caught `error`-object if there was an internal js-error (optional)
}
All possible error categories can be found here.
Examples
Starting multiple consecutive flows:
try {
  // Start the first flow with its client token (placeholder below).
  window.XS2A.startFlow('<client-token>', {
    hideTransitionOnFlowEnd: false, // keep transition on screen while you can do things in the onFinished callback
    onFinished: () => {
      // start second flow after first flow
      window.XS2A.startFlow(
        'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXoifQ.jhYdghVeKsi7Y-tdsiSpwuplzjZaSseUcmdoudKbdnQ'
      )
    },
  })
} catch (e) {
  // Handle error.
}
In this example we want to start another flow after the first has finished.
For a smooth transition between the flows the success screen should not be shown after the first flow.
Using the hideTransitionOnFlowEnd option it is possible to keep the transition screen open until the next flow starts. For the second startFlow call the option hideTransitionOnFlowEnd is not overridden, so the default value (true) is kept.
Register the Firewall
Use your active Palo Alto Networks® Customer Support account to register your firewalls on our Customer Support Portal and then automatically configure your firewall with our recommended Day 1 configuration.
Before you can activate support and other licenses and subscriptions, you must first register the firewall. Before you can register a firewall, though, you must first have an active support account. Perform one of the following tasks depending on whether you have an active support account:
- If you don’t have an active support account, then Create a New Support Account and Register a Firewall.
- If your firewall uses line cards such as an NPC (Network Processing Card), then Register the Firewall Line Cards.
If you are registering a VM-Series firewall, refer to the VM-Series Deployment Guide for instructions.
Create a New Support Account and Register a Firewall
If you do not already have an active Palo Alto Networks support account, then you need to register your firewall when you create your new support account.
- Go to the Palo Alto Networks Customer Support Portal.
- Click Create my account.
- Enter Your Email Address, check I’m not a robot, and click Submit.
- Select Register device using Serial Number or Authorization Code and click Next.
- Complete the registration form.
- Enter the contact details for the person in your organization who will own this account. Required fields are indicated by red asterisks.
- Create a UserID and Password for the account. Required fields are indicated by red asterisks.
- Enter the Device Serial Number or Auth Code.
- Enter your Sales Order Number or Customer Id.
- To ensure that you are always alerted to the latest updates and security advisories, Subscribe to Content Update Emails, Subscribe to Security Advisories, and Subscribe to Software Update Emails.
- Select the check box to agree to the End User Agreement and Submit.
Register a Firewall
If you already have an active Palo Alto Networks Customer Support account, perform the following task to register your firewall.
- Log in to the firewall web interface. Using a secure connection (HTTPS) from your web browser, log in using the new IP address and password you assigned during initial configuration (https://<IP address>).
- Locate your serial number and copy it to the clipboard. On the Dashboard, locate your Serial Number in the General Information section of the screen.
- Register the firewall.
- On the Support Home page, click Register a Device.
- Select Register device using Serial Number or Authorization Code, and then click Next.
- Enter the firewall Serial Number (you can copy and paste it from the firewall Dashboard).
- (Optional) Enter the Device Name and Device Tag.
- (Optional) If the device will not have a connection to the internet, select the Device will be used offline check box and then, from the drop-down, select the OS Release you plan to use.
- Provide information about where you plan to deploy the firewall including the Address, City, Postal Code, and Country.
- Read the End User License Agreement (EULA) and the Support Agreement, then Agree and Submit. You can view the entry for the firewall you just registered under Devices.
- (Firewalls with line cards) To ensure that you receive support for your firewall’s line cards, make sure to Register the Firewall Line Cards.
(Optional) Perform Day 1 Configuration
After you register your firewall, you have the option of running Day 1 Configuration. The Day 1 Configuration tool provides configuration templates informed by Palo Alto Networks best practices, which you can use as a starting point to build the rest of your configuration.
The benefits of Day 1 Configuration templates include:
- Faster implementation time
- Reduced configuration errors
- Improved security posture
Perform Day 1 Configuration by following these steps:
- From the page that displays after you have registered your firewall, select Run Day 1 Configuration. If you’ve already registered your firewall but haven’t run Day 1 Configuration, you can also run it from the Customer Support Portal home page by selecting Tools > Run Day 1 Configuration.
- Enter the Hostname and PAN-OS Version for your new device, and optionally, the Serial Number and Device Type.
- Under Management, select either Static or DHCP Client for your Management Type. Selecting Static will require you to fill out the IPV4, Subnet Mask, and Default Gateway fields. Selecting DHCP Client only requires that you enter the Primary DNS and Secondary DNS. A device configured in DHCP client mode will ensure the management interface receives an IP address from the local DHCP server, or it will fill out all the parameters if they are known.
- Fill out all fields under Logging.
- Click Generate Config File.
- To import and load the Day 1 Configuration file you just downloaded to your firewall:
- Log into your firewall web interface.
- Select Device > Setup > Operations.
- Click Import named configuration snapshot.
- Select the file.
Register the Firewall Line Cards
The following firewalls use line cards that must be registered to receive support with troubleshooting and returns:
- PA-7000 Series firewalls
- PA-5450 firewall
If you do not have a Palo Alto Networks Customer Support account, create one by following the steps at Create a New Support Account and Register a Firewall. Return to these instructions after creating your Customer Support account and registering your firewall.
- Select Assets > Line Cards/Optics/FRUs.
- Register Components.
- Enter the Palo Alto Networks Sales Order Number of the line cards into the Sales Order Number field to display the line cards eligible for registration.
- Register the line cards to your firewall by entering its chassis serial number in the Serial Number field. The Location Information below auto-populates based on the registration information of your firewall.
- Click Agree and Submit to accept the legal terms. The system updates to display the registered line cards under Assets > Line Cards/Optics/FRUs.
Connections Quickstart
This guide will explain the different components that need to be configured when creating a connection. By the end of this guide, you will have used Skyflow Studio to create a connection.
Prerequisites
For this guide, you will need to be assigned the Vault Owner role for the vault that you will create a connection for (Only Vault Owners can create connections).
Overview
Skyflow Studio simplifies the connection creation process with a wizard that walks you through setting up the different components of a connection in 4 steps:
- Connection configuration
- Route configuration
- Authentication: Connection-level service account
- (Optional) Outbound signature
Step 1: Connection Configuration
In the top menu, click the Settings tab > the Vault tab in the left menu > Connections as shown below. Then click Create Connection.
Name the connection, provide a description, and select the connection mode as either inbound or outbound. For outbound connections, you’ll also need to configure the outbound base URL. Then, click Save & Continue.
Here’s an example:
Step 2: Route Configuration
Routes specify a combination of the relative path and the actions that need to be performed on the configured field. This is how you let the connection know which fields should be tokenized or detokenized in the request and response. You can configure one or more routes to each connection.
In the following example, the connection is configured to perform an exact match on the relative path and then detokenize the token representing PAN contained in the request body. VISA DPS accepts the request, creates a new card ID, and responds with the card ID. This card ID from the VISA DPS response is then tokenized and stored in the “cards” table under the “card_id” column.
You also have the option to configure fields to be tokenized in the URL parameters, headers, and query parameters in addition to the request body and the response body. Additionally, you can configure more than one route and their respective relative paths to be associated with a single connection.
Once all routes are entered, click Save & Continue.
Step 3: Connection Service Account Configuration
In order to authenticate to a connection endpoint and invoke it, Skyflow requires you to create a dedicated service account with the Connection Invoker role assigned to it. This keeps the identity of the client consuming the connection endpoint different from the identity of the service account or the user creating the connection in order to follow the principle of separation of concerns.
Enter a name for your new service account, add a description, and select the Connection Invoker role. This role allows the service account to make requests only to the specified connection. It has no direct read or write access to the data in the vault. This service account is meant to be used in your environment (for example, the backend service for your customer-facing web app) to invoke the connection endpoint that is running in Skyflow’s secure and compliant environment.
Once the fields are complete, click Save & Continue.
Note: This service account should not be given access to talk to the vault directly through any other vault role.
Here’s an example:
Step 4: Outbound Signature Configuration
This is an optional configuration fully dependent on the signing requirements from the third party service. If not needed, you may skip this step or select None.
When configured in the outbound mode, the connection currently supports 3 options for signing outbound requests:
- None
- mTLS: Upload the private key and public key associated with the mTLS certificate from the third party service.
- Shared Key: Upload a shared key provided by the third party service.
Then, click Finish Setup.
You’ve set up your first connection!
In your connections list (under Settings > Vault > Connections), you can click Sample request for any connection to view a sample request to invoke the connection.
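The Sample request dialog shows the exact call for your connection; as a rough sketch, invoking a connection endpoint from your backend usually amounts to an authenticated HTTPS request like the one below (the URL, route path, and payload are placeholders, and the bearer token must be generated from the Connection Invoker service account's credentials):

```python
# Rough sketch only: URL, route path, and payload are placeholders.
# The bearer token must be generated from the Connection Invoker service account.
import os
import requests

CONNECTION_URL = "https://<connection-endpoint>/<route-path>"  # placeholder
token = os.environ["SERVICE_ACCOUNT_BEARER_TOKEN"]

response = requests.post(
    CONNECTION_URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"example_field": "value"},  # placeholder payload for the configured route
)
print(response.status_code, response.text)
```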
Purpose
The CursorHost parcel returns to the server a cursor row identifier and request number that returned the CursorDBC parcel.
Usage Notes
This parcel is generated by the application.
Parcel Data
The following table lists field information for CursorHost parcel.
Fields
Processor identifies the location of the row.
Row identifies the row associated with the cursor.
Request specifies the request number associated with the CursorDBC parcel from which the processor and row were obtained. | https://docs.teradata.com/r/bh1cB~yqR86mWktTVCvbEw/nFjgmFuklmRMa1OWcvPaPQ | 2021-11-27T03:47:44 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.teradata.com |
Migrating from 2017 LTS to 2018 LTS
Migrating your project from Unity 2017 to 2018 LTS isn't as difficult as you might think! Here's a guide providing each step in detail.
Deprecated Guide
This guide is deprecated and no longer used! You're probably in the wrong place, but we're leaving this here in case there's some info you need in here.
You might be looking for the Migrating from 2018 LTS to 2019 LTS guide.
If you are starting a new project you can just follow the instructions below to install Unity Hub and Unity 2018.4.x and then consult the doc Choosing your SDK to get the SDK.
For more information about VRChat's upgrade to Unity 2018, read our blog post!
Install Unity Hub (Really!)
Unity Hub is a separate application that allows you to seamlessly install and work with multiple Unity versions at one time. We strongly recommend using it!
- Grab Unity Hub - Download Unity Hub from the Download Unity page. Click the green "Download Unity Hub" button to download only Unity Hub.
- Install Unity Hub - Run the downloaded installer. Once installed you are ready to get Unity 2018.4.x Installed!
You can learn more about Unity Hub in the official Unity documentation.
Install Unity 2018.4.x LTS
Now that you have Unity Hub installed, you are ready to install the correct version of Unity 2018!
- Install Unity - Head over to the doc Current Unity Version to learn what the current version of Unity is, and follow the instructions to install via Unity Hub.
If you're having trouble finding the right version, refer to the Direct Downloads section of the Current Unity Version doc for instructions. But keep in mind you'll have to manually add the install to Unity Hub.
Prepare Your Project for Migration
Before you jump into migrating a project with your fresh install of Unity 2018, you need to prepare your project using the previous version of Unity.
Make a copy of your project! - When migrating an old project from Unity 2017 the first step is to duplicate the whole project folder and give it a new name. Adding "-2018" would work well. Do not export your old project as a UnityPackage, that can take forever/have other errors. Keeping a backup of your project before migration is important! Importing a project into a newer version of Unity will make it very difficult/impossible to port it back to older versions. We cannot help you with reverting your projects.
Clean up assets/scripts! - Some assets/scripts don't work in Unity 2018. Open the new copy of your project inside of Unity 2017 and take care of the following:
Remove Post Processing Stack v1 - If your project uses Post Processing Stack v1, you must remove it before migrating. PPv1 is no longer supported by Unity (or VRChat) in Unity 2018 and will cause issues with importing the new SDK due to script errors. Remove PPv1 by deleting its folder from your assets.
Once you have your project migrated in Unity 2018 you should switch to Post Processing v2. It can be installed via the new [Package Manager]([email protected]/manual/Installation.html) in Unity 2018.
In addition, you must move your global
Post Process Volume component to a game object other than your Refrence Camera. You should keep the
Post Process Layer component on your Reference Camera.
The Reference Camera is specified in the
Scene Descriptor component. The camera you specify is disabled at runtime and components on disabled GameObjects do not run. The Reference Camera is only used to copy various view settings to the player's view camera.
The VRChat SDK will warn you if your Post Processing Volume is on the Reference Camera.
Remove Shader Forge - Shader Forge has been discontinued and may have script errors in Unity 2018. These errors will cause issues importing the new SDK. Remove Shader Forge by deleting its folder from your assets.
Remove Dynamic Fog & Mist - This asset is no longer supported by VRChat in Unity 2018. Remove it.
Update Dynamic Bone - You'll want to update your version of Dynamic Bone to the latest version on the Asset Store. Doing so is pretty easy-- just find Dynamic Bone on the Asset Store window inside of Unity, click Update, and then import the update. Done!
Make sure other assets/scripts work in Unity 2018 - You may have other older assets or scripts that are not compatible with Unity 2018. Either update these or remove them to prepare for the migration to Unity 2018. If upon opening your project in 2018 you find your scene is filled with missing references, you should go to your Console to see what assets/scripts are causing errors and remove them from your project.
The Migration Process
Now that you have installed Unity Hub, the correct version of Unity 2018 and prepared your project, you are ready to begin migrating!
- Grab the latest VRChat SDK - Consult the doc Choosing your SDK for the latest download links to get the latest version of VRCSDK2.
Don't use VRCSDK3 for Migration
We do not support the migration of VRCSDK2 worlds to VRCSDK3.
If you're migrating a project, you should use VRCSDK2.
Don't attempt to use VRCSDK3 for migration-- it is meant for new projects.
Add your project to Unity Hub - In Unity Hub, click the
Addbutton on the main screen, then find the directory of the project copy you prepared.
Set your project to the correct Unity version in Unity Hub - Ensure that you select the current version as the Unity Version for your project in Unity Hub.
Open your project - Click your project in Unity Hub to open it. This step may take a while as Unity reimports assets and updates your project for Unity 2018. Be patient! The process may take 30min+ for large projects! Once your project is open, you are ready to continue.
Did you prepare your project?
As noted above, if you have editor scripts or add-ons that are not compiling correctly, they can get in the way of Unity 2018 importing your assets during the upgrade.
Create an empty scene - To be extra careful, we are going to create a new empty scene and save our project with it open. From the File menu, select New Scene. Then save your scene/project.
Close Unity - We are going to remove the old VRChat SDK next, and this should be done while Unity is closed.
Remove the old VRChat SDK - There are a few things to remove:
a. Navigate to your project's
Assetsoutside of Unity.
b. Delete the folder
VRCSDKand the file
VRCSDK.metalocated in
Assets.
c. Navigate to the
Pluginsfolder inside of
Assets.
d. Delete the
VRCSDKfolder and
VRCSDK.metafile located here too.
Reopen your project, and stay in the empty scene - Stay in the empty scene for the next two steps.
Import VRCSDK2 - Import the new SDK you downloaded earlier as normal. From the Assets menu, select Import Package and then Import Custom Package.
Update the Unity Scripting Runtime version - Press Play at the top of Unity to enter Play Mode. Press it again to exit Play Mode. This will cause Unity to ask you to Restart the editor to update your project's Scripting Runtime version. Click Restart and Unity will reload your project. This will take a moment while Unity recompiles scripts. Unity may also ask you to re-login to your Unity account at this point.
If you can't enter Play Mode because of compile errors, you may need to go to
Project Settings > Player > Other Settingsand set the Scripting Runtime version to ".NET 4.x Equivalent".
- Open your scene - Once Unity is back up, open the scene for your world/avatar. You should be good to go. Get back to creating!
Issues Migrating
If you are having issues installing the SDK (Steps 6-9 above) and you are sure you removed all broken assets/scripts from your project, we recommend trying the following process that includes a few more steps:
Close Unity - The following steps should be done while Unity is closed.
Remove the old VRChat SDK - There are a few things to remove:
a. Navigate to your project's
Assetsfolder outside of Unity.
b. Delete the folder
VRCSDKand the file
VRCSDK.metalocated in
Assets.
c. Navigate to the
Pluginsfolder inside of
Assets.
d. Delete the
VRCSDKfolder and
VRCSDK.metafile located here too.
Open "Regedit" - You can do this by typing
regeditin your Start menu.
Set the path in Regedit - If you're running Windows 10, paste this into the top bar:
Computer\HKEY_CURRENT_USER\Software\Unity Technologies\Unity Editor 5.x
If you're not running Windows 10, go to the path manually. Yes, this path is correct even for Unity 2018.
Delete keys - Delete all keys starting with
VRCin that directory. Highlight them by dragging a box over them, right-click, and click delete. ONLY DELETE the keys starting with
VRC. Very important.
Close Regedit
Reopen your project
Import VRCSDK2 - Import the new SDK you downloaded as normal. From the Assets menu, select Import Package and then Import Custom Package.
The SDK should now be properly installed.
Do I need to migrate and reupload my content?
That depends. Below is a list of things you should consider when assessing if you should migrate and reupload your world or avatars:
Post Processing
You must remove Post Processing Stack v1 - It is no longer supported by Unity or VRChat. See the prepare your project for migration section above for important details.
Crunch Compression
Unity didn't upgrade Crunch Compression for 2018, so you should be fine if you're using Crunch.
UVs and Wrapping
We've seen rare issues where UVs on materials may have become corrupted or mangled. Re-uploading fixes this. You might also want to ensure your texture is set to the proper wrapping mode (Clamp, Repeat, etc).
HDR Colors
Unity 2018 requires that you do a bit of work to fix HDR color selections. Due to a change in HDR values being considered linear instead of gamma space in Unity 2018, your colors in materials may not be correct.
The
VRChat SDK menu contains a tool to swap color space on materials. It is located in the
Utilities sub-menu.
Lights Using Light Temperature
Unity 2018 removes the "Use Color Temperature" mode from all lights. This means you will need to convert all light temperatures to RGB. This script provided by the community member ScruffyRules will convert all of the lights in a scene to RGB. This script is not authored by VRChat.
Note: Projects migrated from Unity 2017 may still show the "Use Color Temperature" controls on your lights. These are non-functional and show in an upgraded project due to the quirks of Unity.
Dynamic Fog & Mist
The asset "Dynamic Fog & Mist" is not supported.
Shaders
Unity has fixed a huge amount of shader bugs with the Unity engine that affected us during the 2018 Beta process! Most (if not all) 2017 content should appear with little to no issue.
Our advice for content is to always migrate and reupload if you are having shader issues.
Keywords
Clearing Keywords
When you change or upgrade your shader, ensure that you remove old, unused keywords from your materials. Having excessive keywords in use is very bad for performance and optimization. Not only will it cause issues with your own avatar, but it may prevent others from seeing all shaders properly.
The VRChat SDK contains a tool to remove keywords from materials on your avatar. This tool can also remove keywords you need, so be careful!
Usually, it is best to check the keywords with this tool-- if you've got too many keywords, you probably need to find another shader. Swap to Standard, clear keywords, then swap to your new shader.
Note for Shader Authors
You may want to consider using the keywords reserved by the Standard shader as your own keywords. These are essentially guaranteed to already be reserved, so if you must use keywords, use the ones already defined by Standard and Post Processing v2. Here's a list of recommended keywords to use.
Shader Compatibility
Check the following tables to see if it is a good idea to re-upload your content.
This list is not comprehensive or complete, and we don't plan on maintaining a full list of shaders you'll need to re-upload your content for . However we know many users use these shaders, so we tried to cover the most common ones and list what steps may be necessary to get them working in Unity 2018.
Community-Provided Information
The information below this notification has been provided in whole or in part by our Community and may contain links to software or files that have not been authored by VRChat.
Ready to go for 2018!
These shaders are mostly good to go. If you run into problems, it might be best to try updating these and re-uploading, just in case.
Unsupported
These shaders meet one of the following conditions:
- The shader does not work in Unity 2018.
- The author has explicitly stated support has ended for the shader.
- The shader's project appears to be abandoned and no updates have occurred in a large amount of time (2 LTS releases)
You must replace following shaders with another shader and reupload using the latest Unity version.
* Video was not produced by VRChat, and may present information or opinions that are not held by VRChat as an organization. This video has been provided as-is to provide information to creators.
Updated 4 months ago | https://docs.vrchat.com/docs/migrating-from-2017-lts-to-2018-lts | 2021-11-27T02:37:27 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.vrchat.com |
VRC_PlayerMods
Deprecated
This component is deprecated. It is not available in the latest VRChat SDK, and is either non-functional, or will no longer receive updates. It may be removed at a later date.
Used to controls player settings in a room such as speed and jump.
Warning!
PlayerMods is an old system. Due to this, it is advised only to use it for altering speed and jump.
Player mods can be added by pressing the
Add Mods button at the bottom of the component.
After player mods are set, you cannot change them via animation/trigger.
Updated about 1 month ago
Did this page help you? | https://docs.vrchat.com/docs/vrc_playermods | 2021-11-27T02:14:39 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.vrchat.com |
PySINDy¶
PySINDy is a sparse regression package with several implementations for the Sparse Identification of Nonlinear Dynamical systems (SINDy) method introduced in Brunton et al. (2016a), including the unified optimization approach of Champion et al. (2019) and SINDy with control from Brunton et al. (2016b). A comprehensive literature review is given in de Silva et al. (2020).
TrappingSINDy optimizer:
de Silva et al., (2020). PySINDy: A Python package for the sparse identification of nonlinear dynamical systems from data. Journal of Open Source Software, 5(49), 2104,athleen, Peng Zheng, Aleksandr Y. Aravkin, Steven L. Brunton, and J. Nathan Kutz. A unified sparse optimization framework to learn parsimonious physics-informed models from data. arXiv preprint arXiv:1906.10612 (2019). [arXiv]
Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz. Sparse identification of nonlinear dynamics with control (SINDYc). IFAC-PapersOnLine 49.18 (2016): 710-715. [DOI]
Kaptanoglu, Alan A., Jared L. Callaham, Christopher J. Hansen, Aleksandr Aravkin, and Steven L. Brunton. Promoting global stability in data-driven models of quadratic nonlinear dynamics. arXiv preprint arXiv:2105.01843 (2021). [arXiv]
Contributors¶
Thanks to the members of the community who have contributed to PySINDy!
User Guide
Useful links | https://pysindy.readthedocs.io/en/latest/ | 2021-11-27T02:04:08 | CC-MAIN-2021-49 | 1637964358078.2 | [] | pysindy.readthedocs.io |
Troubleshooting for Amazon RDS
Use the following sections to help troubleshoot problems you have with DB instances in Amazon RDS and Aurora.
Topics
- Can't connect to Amazon RDS DB instance
- Amazon RDS security issues
- Resetting the DB instance owner password
- Amazon RDS DB instance outage or reboot
- Amazon RDS DB parameter changes not taking effect
- Amazon RDS DB instance running out of storage
- Amazon RDS insufficient DB instance capacity
- MySQL and MariaDB issues
- Can't set backup retention period to 0
For information about debugging problems using the Amazon RDS API, see Troubleshooting applications on Amazon RDS.
Can't connect to Amazon RDS DB instance
When you can't connect to a DB instance, the following are common causes:
Inbound rules – The access rules enforced by your local firewall and the IP addresses authorized to access your DB instance might not match. The problem is most likely the inbound rules in your security group.
By default, DB instances don't allow access. Access is granted through a security group associated with the VPC that allows traffic into and out of the DB instance. If necessary, add inbound and outbound rules for your particular situation to the security group. You can specify an IP address, a range of IP addresses, or another VPC security group.
Note
When adding a new inbound rule, you can choose My IP for Source to allow access to the DB instance from the IP address detected in your browser.
For more information about setting up security groups, see Provide access to your DB instance in your VPC by creating a security group.
Note
Client connections from IP addresses within the range 169.254.0.0/16 aren't permitted. This is the Automatic Private IP Addressing Range (APIPA), which is used for local-link addressing.
Public accessibility – To connect to your DB instance from outside of the VPC, such as by using a client application, the instance must have a public IP address assigned to it.
To make the instance publicly accessible, modify it and choose Yes under Public accessibility. For more information, see Hiding a DB instance in a VPC from the internet.
Port – The port that you specified when you created the DB instance can't be used to send or receive communications due to your local firewall restrictions. To determine if your network allows the specified port to be used for inbound and outbound communication, check with your network administrator.
Availability – For a newly created DB instance, the DB instance has a status of
creatinguntil the DB instance is ready to use. When the state changes to
available, you can connect to the DB instance. Depending on the size of your DB instance, it can take up to 20 minutes before an instance is available.
Internet gateway – For a DB instance to be publicly accessible, the subnets in its DB subnet group must have an internet gateway.
To configure an internet gateway for a subnet
Sign in to the AWS Management Console and open the Amazon RDS console at
.
In the navigation pane, choose Databases, and then choose the name of the DB instance.
In the Connectivity & security tab, write down the values of the VPC ID under VPC and the subnet ID under Subnets.
Open the Amazon VPC console at
.
In the navigation pane, choose Internet Gateways. Verify that there is an internet gateway attached to your VPC. Otherwise, choose Create Internet Gateway to create an internet gateway. Select the internet gateway, and then choose Attach to VPC and follow the directions to attach it to your VPC.
In the navigation pane, choose Subnets, and then select your subnet.
On the Route Table tab, verify that there is a route with
0.0.0.0/0as the destination and the internet gateway for your VPC as the target.
Choose the ID of the route table (rtb-xxxxxxxx) to navigate to the route table.
On the Routes tab, choose Edit routes. Choose Add route, use
0.0.0.0/0as the destination and the internet gateway as the target.
Choose Save routes.
For more information, see Working with a DB instance in a VPC.
For engine-specific connection issues, see the following topics:
Testing a connection to a DB instance
You can test your connection to a DB instance using common Linux or Microsoft Windows tools.
From a Linux or Unix terminal, you can test the connection by enteringfake0.us-west-2.rds.amazonaws.com 8299 Connection to postgresql1.c6c8mn7fake0.us-west-2.rds.amazonaws.com 8299 port [tcp/vvr-data] succeeded!
Windows users can use Telnet to test the connection to a DB instance. Telnet actions aren't supported other than for testing the connection. If a connection is successful, the action returns no message. If a connection isn't successful, you receive an error message such as the following.
C:\>telnet sg-postgresql1.c6c8mntfake0.us-west-2.rds.amazonaws.com 819 Connecting To sg-postgresql1.c6c8mntfake0.us-west-2.rds.amazonaws.com...Could not open connection to the host, on port 819: Connect failed
If Telnet actions return success, your security group is properly configured.
Amazon RDS doesn Modifying an Amazon RDS DB instance.
Amazon RDS security issues
To avoid security issues, never use your master AWS user name and password for a user account. Best practice is to use your master AWS account to create AWS Identity and Access Management ."
You can get this error for several reasons. It might be because your account is missing permissions, or your account hasn't been properly set up. If your account is new, you might not have waited for the account to be ready. If this is an existing account, you might lack permissions in your access policies to perform certain actions such as creating a DB instance. To fix the issue, your IAM administrator needs to provide the necessary roles to your account. For more information, see the IAM documentation.
Resetting the DB instance owner password
If you get locked out of your DB instance, you can log in as the master user. Then you can reset the credentials for other administrative users or roles. If you can't log in as the master user, the AWS account owner can reset the master user password. For details of which administrative accounts or roles you might need to reset, see Master user account privileges.
You can change the DB instance password by using the Amazon RDS console, the AWS CLI command modify-db-instance, or by using the ModifyDBInstance API operation. For more information about modifying a DB instance, see Modifying an Amazon RDS DB instance.
Amazon RDS DB instance outage or reboot
A DB instance outage can occur when a DB instance is rebooted. It can also occur when the DB instance is put into a state that prevents access to it, and when the database is restarted. A reboot can occur when you either manually reboot your DB instance or change a DB instance setting that requires a reboot before it can take effect.
A DB instance reboot doesn't take effect until the DB instance associated with the parameter group is rebooted. The change requires a manual reboot. The DB instance isn't automatically rebooted during the maintenance window.
To see a table that shows DB instance actions and the effect that setting the Apply Immediately value has, see Modifying an Amazon RDS DB instance.
Amazon RDS DB parameter changes not taking effect
In some cases, you might change a parameter in a DB parameter group but don't see the changes take effect. If so, you likely need to reboot the DB instance associated with the DB parameter group. When you change a dynamic parameter, the change takes effect immediately. When you change a static parameter, the change doesn't take effect until you reboot the DB instance associated with the parameter group.
You can reboot a DB instance using the RDS console or explicitly calling the
RebootDBInstance API operation (without failover, if the
DB instance is in a Multi-AZ deployment). The requirement to reboot the associated
DB instance
after a static parameter change helps mitigate the risk of a parameter misconfiguration
affecting an API call. An example of this might be calling
ModifyDBInstance
to change the make sure that your DB instance has enough free storage space.
If your database instance runs out of storage, its status changes to
storage-full. For example, a call to the
DescribeDBInstances API operation for a DB instance that has used up
its storage outputs the following.
aws rds describe-db-instances --db-instance-identifier
mydbinstanceDB SECGROUP default active PARAMGRP default.mysql8.0 in-sync
To recover from this scenario, add more storage space to your instance using the
ModifyDBInstance API operation or the following AWS CLI command.
For Linux, macOS, 60 SECGROUP default active PARAMGRP default.mysql8.0 in-sync
Now, when you describe your DB instance, you see that your DB instance has
modifying status, which indicates the storage is being scaled.
aws rds describe-db-instances --db-instance-identifier
mydbinstance
DBINSTANCE mydbinstance 2009-12-22T23:06:11.915Z db.m5.large mysql8.0 50 sa modifying mydbinstance.clla4j4jgyph.us-east-1.rds.amazonaws.com 3306 us-east-1b 3 60 SECGROUP default active PARAMGRP default.mysql8.0 in-sync
After storage scaling is complete, your DB instance status changes to
available.
aws rds describe-db-instances --db-instance-identifier
mydbinstance
DBINSTANCE mydbinstance 2009-12-22T23:06:11.915Z db.m5.large mysql8.0 60 sa available mydbinstance.clla4j4jgyph.us-east-1.rds.amazonaws.com 3306 us-east-1b 3 SECGROUP default active PARAMGRP default.mysql8.0 in-sync
You can receive notifications when your storage space is exhausted using the
DescribeEvents operation. For example, in this scenario, if you make a
DescribeEvents call after these operations
The
InsufficientDBInstanceCapacity error can be returned when you try to create,
start or modify a DB instance, or when you try to restore a DB instance from a
DB snapshot.
When this error is returned, the following are common causes:
The specific DB instance class isn't available in the requested Availability Zone. You can try one of the following to solve the problem:
Retry the request with a different DB instance class.
Retry the request with a different Availability Zone.
Retry the request without specifying an explicit Availability Zone.
For information about troubleshooting instance capacity issues for Amazon EC2, see Insufficient instance capacity in the Amazon Elastic Compute Cloud User Guide.
The DB instance is on the EC2-Classic platform and therefore isn't in a VPC. Some DB instance classes require a VPC. For example, if you're on the EC2-Classic platform and try to increase capacity by switching to a DB instance class that requires a VPC, this error results. For information about Amazon EC2 instance types that are only available in a VPC, see Instance types available in EC2-Classic.
MySQL and MariaDB issues
You can diagnose and correct issues with MySQL and MariaDB DB instances.
Topics
- Maximum MySQL and MariaDB connections
- Diagnosing and resolving incompatible parameters status for a memory limit
- Diagnosing and resolving lag between read replicas
- Diagnosing and resolving a MySQL or MariaDB read replication failure
- Creating triggers with binary logging enabled requires SUPER privilege
- Diagnosing and resolving point-in-time restore failures
- Replication stopped error
- Read replica create fails or replication breaks with fatal error 1236
Maximum MySQL and MariaDB connections
The maximum number of connections allowed to an RDS for MySQL or RDS for MariaDB DB instance is based on the amount of memory available for its DB instance class. A DB instance class with more memory available results in a larger number of connections available. For more information on DB instance classes, see DB instance classes.
The connection limit for a DB instance is set by default to the maximum for the DB
instance class. You can limit the number of concurrent
connections to any value up to the maximum number of connections allowed. Use
the
max_connections
parameter in the parameter group for the DB instance. For more information, see
Maximum number of database connections and Working with DB parameter groups.
You can retrieve the maximum number of connections allowed for a MySQL or MariaDB DB instance by running the following query.
SELECT @@max_connections;
You can retrieve the number of active connections to a MySQL or MariaDB DB instance by running the following query.
SHOW STATUS WHERE `variable_name` = 'Threads_connected';
Diagnosing and resolving incompatible parameters status for a memory limit
A MariaDB or MySQL DB instance can be placed in incompatible-parameters status for a memory limit when both of the following conditions are met:
The DB instance is either restarted at least three time in one hour or at least five times in one day, or an attempt to restart the DB instance fails.
The potential memory usage of the DB instance exceeds 1.2 times the memory allocated to its DB instance class.
When a DB instance is restarted for the third time in one hour or for the fifth time in one day, Amazon RDS for MySQL performs a check for memory usage. The check makes the a calculation of the potential memory usage of the DB instance. The value returned by the calculation is the sum of the following values:
Value 1 – The sum of the following parameters:
innodb_additional_mem_pool_size
innodb_buffer_pool_size
innodb_log_buffer_size
key_buffer_size
query_cache_size(MySQL version 5.6 and 5.7 only)
tmp_table_size
Value 2 – The
max_connectionsparameter multiplied by the sum of the following parameters:
binlog_cache_size
join_buffer_size
read_buffer_size
read_rnd_buffer_size
sort_buffer_size
thread_stack
Value 3 – If the
performance_schemaparameter is enabled, then multiply the
max_connectionsparameter by
257700.
If the
performance_schemaparameter is disabled, then this value is zero.
So, the value returned by the calculation is the following:
Value 1 + Value 2 + Value 3
When this value exceeds 1.2 times the memory allocated to the DB instance class used by the DB instance, the DB instance is placed in incompatible-parameters status. For information about the memory allocated to DB instance classes, see Hardware specifications for DB instance classes .
The calculation multiplies the value of the
max_connections parameter by the sum of several parameters.
If the
max_connections parameter is set to a large value, it might cause the check to
return an inordinately high value for the potential memory usage of the DB instance.
In this case, consider
lowering the value of the
max_connections parameter.
To resolve the problem, complete the following steps:
Adjust the memory parameters in the DB parameter group associated with the DB instance so that the potential memory usage is lower than 1.2 times the memory allocated to its DB instance class.
For information about setting parameters, see Modifying parameters in a DB parameter group.
Restart the DB instance.
For information about setting parameters, see Starting an Amazon RDS DB instance that was previously stopped.
Diagnosing and resolving lag between read replicas
After you create a MySQL or MariaDB read replica and
the replica is available, Amazon RDS first replicates the changes made to the
source DB
instance from the time the read replica create operation started. During this
phase,
the replication lag time for the read replica is greater than 0. You can monitor
this lag time in Amazon CloudWatch by viewing the Amazon RDS
ReplicaLag
metric.
The
ReplicaLag metric reports the value of the
Seconds_Behind_Master field of the MariaDB or MySQL
SHOW REPLICA STATUS command. For more information, see
SHOW REPLICA STATUS can't be
determined or is
NULL.
Previous versions of MariaDB and MySQL used
SHOW SLAVE STATUS instead of
SHOW REPLICA STATUS. If you are using a MariaDB version before 10.5 or a MySQL
version before 8.0.23, then use
SHOW SLAVE STATUS.
The
ReplicaLag
metric returns -1 during a network outage or when a patch is
applied during the maintenance window. In this case, wait for network connectivity
to be restored or for the maintenance window to end before you check the
ReplicaLag
metric again.
The MySQL and MariaDB read replication technology is
asynchronous. Thus, you can expect occasional increases for the
BinLogDiskUsage metric on the source DB instance and for the
ReplicaLag
metric on
the read replica. For example, consider a situation where a high volume of write
operations to the source DB instance occur in parallel. At the same time, write
operations to the read replica are serialized using a single I/O thread. Such
a
situation can lead to a lag between the source instance and read replica.
For more information about read replicas and MySQL, see Replication implementation details
You can reduce the lag between updates to a source DB instance and the subsequent updates to the read replica by doing the following:
Set the DB instance class of the read replica to have a storage size comparable to that of the source DB instance.
Make sure or MariaDB. For example, suppose that you have a small set of tables that are being updated often and you're using the InnoDB or XtraDB table schema. In this case, dump those tables on the read replica. Doing this causes the database engine to scan through the rows of those tables from the disk and then cache them in the buffer pool. This approach can reduce replica lag. The following shows an example.
For Linux, macOS,, check the
error in the MySQL. The
max_allowed_packetparameter is used to specify the maximum size of data manipulation language (DML) that can be run on the database. If the
max_allowed_packetvalue for the source DB instance is larger than the
max_allowed_packetvalue're creating indexes on a read replica, you need to have the
read_onlyparameter set to 0 to create the indexes. If you're writing to tables on the read replica, it can break replication.
Using a nontransactional storage engine such as MyISAM. Read replicas require a transactional storage engine. Replication is only supported for the following storage engines: InnoDB for MySQL or MariaDB.
You can convert a MyISAM table to InnoDB with the following command:
alter table <schema>.<table_name> engine=innodb;
Using unsafe nondeterministic queries such as
SYSDATE(). For more information, see Determination of safe and unsafe statements in binary logging
in the MySQL documentation. binary log (binlog) position issue, you can change the replica replay position with the
mysql_rds_next_master_logcommand. Your MySQL or MariaDB DB instance must be running a version that supports the
mysql_rds_next_master_logcommand to change the replica. If you do for MySQL or RDS for MariaDB DB instance, you might receive the following error.
"You do not have the SUPER privilege and binary logging is enabled"
To use triggers when binary logging is enabled requires the SUPER privilege, which
is
restricted for RDS for MySQL and RDS for MySQL or RDS, macOS, or Unix:
aws rds create-db-parameter-group \ --db-parameter-group-name
allow-triggers\ --db-parameter-group-family
mysql8.0\ --description "
parameter group allowing triggers"
For Windows:
aws rds create-db-parameter-group ^ --db-parameter-group-name
allow-triggers^ --db-parameter-group-family
mysql8.0^ --description "
parameter group allowing triggers"
Modify the DB parameter group to allow triggers.
For Linux, macOS, or Unix:
aws rds modify-db-parameter-group \ --db-parameter-group-name
allow-triggers\ --parameters "
ParameterName=log_bin_trust_function_creators, ParameterValue=true, ApplyMethod=pending-reboot"
For Windows:
aws rds modify-db-parameter-group ^ --db-parameter-group-name
allow-triggers^ --parameters "
ParameterName=log_bin_trust_function_creators, ParameterValue=true, ApplyMethod=pending-reboot"
Modify your DB instance to use the new DB parameter group.
For Linux, macOS, or Unix:
aws rds modify-db-instance \ --db-instance-identifier
mydbinstance\ --db-parameter-group-name
allow-triggers\ --apply-immediately
For Windows:
aws rds modify-db-instance ^ --db-instance-identifier
mydbinstance^ --db-parameter-group-name
allow-triggers^ --apply-immediately
For the changes to take effect, manually reboot the DB instance.
aws rds reboot-db-instance --db-instance-identifier binary logs 're using in-memory tables with replicated DB instances, you might need to recreate the read replicas after a restart. This might be necessary if a read replica reboots and can't restore data from an empty in-memory table.
For more information about backups and PITR, see Working with backups and Restoring a DB instance to a specified time.
Replication stopped error
When you call the
mysql.rds_skip_repl_error command, you might receive an error message
stating that replication is down or disabled.
This error message appears because replication is stopped and can source. After you have increased the binlog retention
time, you can restart replication and call the
mysql.rds_skip_repl_error command as needed.
To set the binlog retention time, use the mysql.rds_set_configuration procedure. can't create a read replica for the DB instance.
Replication fails with
fatal error 1236.
Some default parameter values for MySQL and MariaDB DB instances help to make sure that the database is ACID compliant and read replicas are crash-safe. They do this by making sure that each commit is fully synchronized by writing the transaction to the binary log before it's committed. Changing these parameters from their default values to improve performance can cause replication to fail when a transaction hasn't been written to the binary log.
To resolve this issue, set the following parameter values:
sync_binlog = 1
innodb_support_xa = 1
innodb_flush_log_at_trx_commit = 1
Can't set backup retention period to 0
There are several reasons why you might need to set the backup retention period to 0. For example, you can disable automatic backups immediately by setting the retention period to 0.
In some cases, you might set the value to 0 and receive a message saying that the retention period must be between 1 and 35. In these cases, check to make sure that you haven't set up a read replica for the instance. Read replicas require backups for managing read replica logs, and therefore you can't set a retention period of 0. | https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html | 2021-11-27T02:41:22 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.aws.amazon.com |
Third-Party Repositories
The Fedora Workstation Third Party repositories provide access to additional desktop software that is not included in Fedora’s own repos. The repositories are selected and managed by the Fedora Workstation Working Group, in accordance with FESCo’s third party repository policy.
The third-party repositories exist to provide access to additional software that may be necessary or important for users to have access to. This includes some proprietary software.
While facilitating the use of a small selected set of proprietary software, the Fedora project continues to strongly believe in and promote free and open source software. As a result, third party repositories must be enabled by the user in order to be used, and open source alternatives are suggested below.
How to use
The following are basic instructions for how to use the third-party repositories.
Enabling third-party repositories
To install software from the third-party repositories, they must first be enabled. The easiest way to do this is in the Third-Party Repositories page of initial setup.
Alternative methods to enable the third-party repositories include:
Through the info bar that is shown in the Software app when third-party repositories are not enabled.
Enabling Third-Party Repositories in the Software app’s Software Repository settings.
Installing from third-party repositories
Once the repos are enabled, the software they contain can be installed in the usual way. The repos can also be searched and installed using the dnf or flatpak commands, depending on the packaging format used. | https://docs.fedoraproject.org/en-US/workstation-working-group/third-party-repos/ | 2021-11-27T03:35:39 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.fedoraproject.org |
The access log provides information about all user activity within Flywheel.
The report contains the following information:
Before you generate a report, you will need to filter the contents of the Access Log because viewing and downloading is limited to the first 10,000 results. An export containing all results can be provided by contacting [email protected]
Once you have generated the report, download the results as a CSV file. The CSV file includes additional information, such as Flywheel hierarchy labels and IDs, which are not exposed on the Access Log page.
Now that you have some of the basics, here's some next steps for administering Flywheel: | https://docs.flywheel.io/hc/en-us/articles/4403390317843-Access-Log | 2021-11-27T03:21:03 | CC-MAIN-2021-49 | 1637964358078.2 | [array(['/hc/article_attachments/4405384438803/1611a8ff0c83e7.png',
'AccessLog.png'], dtype=object) ] | docs.flywheel.io |
Important
The named entities feature is rolling out and will appear in your tenant when it is available to you. Check for them in content explorer and in the data loss prevention (DLP) policy authoring flow.
Named entities are sensitive information types (SIT). They're complex dictionary and pattern-based classifiers that you can use to detect person names, physical addresses, and medical terms and conditions. You can see them in the Compliance Center > Data classification > Sensitive info types. Here is a partial list of where you can use SITs:
- Data loss prevention policies (DLP)
- Sensitivity labels
- Insider risk management
- Microsoft Defender for Cloud Apps
DLP makes special use of named entities in enhanced policy templates, which are pre-configured DLP policies that you can customize for your organizations needs. You can also create your own DLP policies from a blank template and use a named entity SIT as a condition.
Examples of named entity SITs
Named entity SITs come in two flavors, bundled and unbundled
Bundled named entity SITs detect all possible matches. Use them as broad criteria in your DLP policies for detecting sensitive items.
Unbundled named entity SITs have a narrower focus, like a single country. Use them when you need a DLP policy with a narrower detection scope.
Here are some examples of named entity SITs. You can find all 52 of them in the Compliance Center > Data classification > Sensitive info types.
Examples of enhanced DLP policies
Here are some examples of enhanced DLP policies that use named entity SITs. You can find all 10 of them in the Compliance Center > Data loss prevention > Create policy. Enhanced templates can be used in DLP and auto-labeling.
Next steps
For further information
- Create a custom sensitive information type
- Create a custom sensitive information type in PowerShell
- Data loss prevention policies (DLP)
- Sensitivity labels
- Retention labels
- Communication compliance
- Autolabeling policies
- Create, test, and tune a DLP policy
- Create a DLP policy from a template | https://docs.microsoft.com/en-us/microsoft-365/compliance/named-entities-learn?view=o365-worldwide | 2021-11-27T04:02:38 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.microsoft.com |
The following sections overview the main workflow of implementing indoor navigation and indoor tracking from scratch using the Navigine Indoor Locations Services.
To get started with Navigine Indoor Location Services, you need to acquire infrastructure components first. The following table provides brief information about the components that you need to set up your location's infrastructure.
Register a Navigine Account to get access to
Navigine Indoor Location Services.
You will need physical access to the location once you are ready to
deploy the infrastructure components to their places.
The building parameters are required at the stage of measuring
sub-locations. The more precise this data is, the better is
the navigation accuracy you get.
You need iBeacon-compatible BLE beacons to set up your target location’s
infrastructure. 8-15 beacons should be enough for a location of
1000 square meters.
As an alternative to using BLE beacons for indoor navigation, you can
use a set of Wi-Fi routers, which rather brings less accuracy and
supports Android devices only.
Which might be an Android smartphone or tablet with Bluetooth 4.0. Make sure
that the device supports Bluetooth LE 4.0 and iBeacon protocol.
You also need a Linux or Windows OS machine for integrating the
Navigine SDK into your navigations Application for Android devices.
To integrate the Navigine SDK into iOS apps, you definitely need
a Mac OS machine and the corresponding developer’s account.
When deciding on the infrastructure components, keep in mind that Navigine provides navigation algorithms, which provide different navigation accuracy. Besides that, the algorithms have different infrastructure requirements as well as provide different methods of linking the actual location to the binary map on the server. For example:
Trilateration algorithm enables you to use the Bluetooth beacon infrastructure to link the real location to the map online. The algorithm is easy to implement - you only need to deploy beacons across the location, and then specify the beacon locations in the online map. In case of using the trilateration algorithm, navigation is performed via processing data from the mobile device and iBeacons. The algorithm relies on the exactly known locations of the iBeacons and calculates the coordinates according to the mathematical models of signal distribution.
For detailed instructions on linking locations of the beacons from real world to online map in Navigine CMS, refer to Using Trilateration.
Fingerprinting algorithm enables you to use both - the iBeacon infrastructure and Wi-Fi infrastructure on Android devices. This algorithm performs navigation through using either data from Wi-Fi infrastructure only or through using both types of infrastructure simultaneously. To use the full capabilities of the fingerprinting algorithm, you need to implement steps described for the trilateration algorithm (above) and measure the location radiomap. The fingerprinting algorithm is more accurate if compared to trilateration as besides the emitter's location the radiomap comprises data about peculiar properties of the signal (such as reflections and dispersion).
For detailed instructions on measuring the radiomap, refer to section Measuring Radiomap of this guide. For the radiomap measurement "Golden Rules", refer to section "Golden Rules" for Radiomap Measurement.
The rest of the sections in this chapter provide complete guidelines on deploying and setting up the indoor navigation infrastructure.
The following section provides guidelines on setting up the iBeacon infrastructure for your indoor navigation app if you use Kontakt.io iBeacons. In case you use another iBeacon provider please refer to it's site for details.
Prior to installing beacons in the target locations, you need to configure them. Use Kontact.io mobile application for iBeacon configuration.
Make sure that the beacons are in iBeacon mode and the signal transmit power is set to`` -12dbm``.
By default the Kontakt.io beacons are set to travel mode and have minimal transmit power for power saving purposes. In the beacon transmit power options choose the 3-rd value, which corresponds to -12dbm and range up to 40 meters. Refer to Transmission power, Range and RSSI article for details on the kontact.io beacons' configuration.
-12dbm
One more way to configure your iBeacons is via the Kontakt.io application. Consider this approach if you want to configure beacons' advanced settings. This approach let's you configure each your beacons one-by-one.
Make sure you have Internet connection.
Launch the kontakt.io Application on your iOS device.
Switch to the Settings tab and enter your kontakt.io login information.
In the Settings tab, scroll down to the Administrator section, and tap Enter administrator mode.
In the Beacons tab, find the ID of the beacon that you want configure, and tap it to connect. Beacon's ID is written on it's back.
Specify the following parameters for each of the beacons you need to configure:
You can find information on configuring iBeacons on the official kontakt.io website, specifically
Take into account the following golden rules during the beacon installation procedure:
The following figure demonstrates the optimal settlement of 18 beacons for a single facility with multiple rooms inside. | https://docs.navigine.com/en/Getting_Started | 2021-11-27T02:05:15 | CC-MAIN-2021-49 | 1637964358078.2 | [] | docs.navigine.com |
Welcome to Django-Qanda’s documentation!¶
Qanda is a simple FAQ app for Django projects.
Here are the main features:
- Published questions can be made public, restricted to logged-in users, or only visible to site staff.
- Topics are self-hiding depending on the access level of the questions they contain.
- Qanda installs with a fully working set of templates so you can start playing straight away, and an example project is provided.
Getting started¶
The first thing you’ll need to do is check out the installation guide and requirements.
If you’re familiar with installing Django apps then the installation is totally standard, with no additional dependencies.
License¶
Django-Qanda is released under the MIT License.
Contribute¶
- Issue Tracker:
- Source Code: | https://django-qanda.readthedocs.io/en/latest/ | 2021-06-12T17:09:15 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['_images/qanda-threepage.png',
'Demo screenshot of provided templates'], dtype=object)] | django-qanda.readthedocs.io |
Selective ONTAP to further restrict access of certain targets to certain initiators. When using SLM with portsets, LUNs will be accessible on the set of LIFs in the portset on the node that owns the LUN and on that node's HA partner.
SLM is enabled by default on all new LUN maps. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sanag/GUID-62ABF745-6017-40B0-9D65-CE9F7FF66AB3.html?lang=en | 2021-06-12T18:02:20 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.netapp.com |
Interana 2.21 Release Notes
Interana release 2.21 includes the following new features and resolved issues.
New features
Global filters
Use the new global filters to build a set of filters that you can apply to every query. You must have the Publisher or Admin role to create and edit global filters. However, if a Publisher or Admin has defined Global Filters, any user can see and add or remove the filters in the Query Builder. (If no Global Filters have been defined, the Global Filters field will not appear in the Query Builder.)
You can also set them as default, which means they will be applied automatically to any queries your users run (global filters are defined and applied per dataset). As with regular filters, you can use the basic or advanced syntax to define the filter parameters.
To create or edit global filters on your Interana instance, go to
https://<myinstance>/?globalFilters.
See Create and use Global Filters for more information.
If the filters applied in a Global Filter conflict with those in the Explorer, arguments will be treated as AND (and therefore return no results).
First and Last aggregators
We added First and Last aggregators that return the value of a given column with the smallest or largest event timestamp.
You can use the First and Last aggregators when creating named expressions. When working with int columns, the aggregators are included in the Measure list for the following views:
- Time
- Table
- Number
- Bar
- Pie
- Stacked Area Time
For string columns, you can use the aggregators as measures with the Table view.
See Metrics, Measures, and Aggregators for more information about the first/last aggregators.
Advanced filters in named expression builders
We've added an advanced filter option to the named expression builders. You can filter using advanced syntax, including OR statements, when building cohorts, metrics, sessions, funnels, and global filters.
Your advanced filter will be partially validated for correctness when you save a named expression. We will notify you about syntax errors immediately, but referring to an invalid column name will not show up until you attempt to run the named expression.
Group By for sets
You can now group by integer and string sets when building queries. Grouping by a set column will aggregate individual elements of sets across events in the query. Sets are assumed to be unordered and can be comprised of integers and strings.
Because event data can exist in multiple sets, the sum of the events of the set elements will be greater than the actual number of events.
See Group By Sets examples for more information.
Data tooltips
We added tooltips to column names and named expressions in lists. The tooltip pop-ups include more information about the data, based on the Description field of the columns and named expressions. This will help you select the right data when building your queries.
You can also click Edit in the tooltip to open the column detail dialog and add or edit a description.
Embeddable charts
We added an Embed Chart option that lets you use HTML to embed a dashboard chart in a web page or tool that supports HTML. In the Explorer window, select More > Embed Chart, then copy the HTML code to a web page.
This feature is disabled by default. Contact Interana Support if you want to enable this feature.
Improved display of columns in Table view
We have improved the display of information in Table view. In previous releases, if you selected multiple Compare Groups, we included that information in a single column. Now we display separate columns for each group.
New Order By chart controls
We added an Order By chart control that lets you set how data is displayed in Bar, Pie, and Stacked Bar views. You can select to order by the measure or by any groups that you selected, and in either ascending or descending order.
When Ordering By any Measure in Time View, the group labels below the chart are sorted by the first value in the time range.
Editing restrictions for datasets
Only users with Publisher or Admin roles can edit dataset properties, including column descriptions and familiar names.
Decimal support for derived columns
You can now create derived columns that return decimal (double) values.
"Last x" support for time ranges
We support both precise ("now", "1 day ago") and calendar-aligned relative time ("today", "yesterday", "last week"). See Time query syntax for more information.
Support for the Apache Avro file format
We added a new ingest transformer (avro_load) to support ingestion of files in the Apache Avro format ().
Interana Install creates the "customer" by default
In prior releases, after installing Interana it was necessary to run the "create_customer.py" script as a final part of the setup process. Starting in release 2.21, an "interana" customer is pre-created during the install, and instead there are a few distinct steps needed to bootstrap the initial admin user and configure auto-registration rules.
# provision the initial admin user /opt/interana/backend/deploy/user_admin.py signup -c1 -u [email protected] -p test --base_url "" /opt/interana/backend/scripts/rbac/user_role.py create -c1 -r admin -u [email protected] # enable e-mail auto-registration if desired /opt/interana/backend/deploy/update_customer.py -c interana --add_email_suffix interana.com
upgrade_cluster.py improvements
In previous releases, the uncompressed tarball for upgrading could be located in the home folder (~/). But in version 2.21, the uncompressed tarball must be in the
/opt/interana/backend/ folder of the push node.
Resolved issues
- Can't explore off a dashboard chart that reads "current not available"
- Using multiple "text contains" arguments in a cohort will return an error
- Typeaheads for cohorts don't work when Publishing features are enabled
- Average aggregation inside metric returns integer instead of float
- Decimal columns show as data type "int" in the Dataset Settings page
- The "text contains" filter does not accept regexes
- Scrolling is broken on dashboards when scrolling over Table charts with scroll bar
- Sorting by "Last Modified Date" in named expressions uses string instead of date sorting
- delete_columns can take 1 minute to execute for single shard
Known issues
- Tooltips are not available for custom measures
- Changing the familiar name of column doesn't show up in type-ahead until refresh
- Order By is case-sensitive for the Stacked Area Time view. Items that begin with capital letters are sorted before items with lowercase letters (Interana sorts A-Z, then a-z).
- Viewing sets in Samples View does not work for sets with > 10 values
- Unable to use count unique with set columns
- Decimal average fails with error in A/B view
- Cannot run unsampled queries in Chrome v43
- We calculate 6 digits of precision but cut off trailing zeroes
- Cannot change the resolution when you visit open a query from a dashboard if that query is using a Group By
- Cannot filter to or filter out a decimal value if the query groups by decimal values
- Order By does not work with custom metrics
- Cannot edit named expressions if the name includes an apostrophe
- Cannot use derived columns in global filters
- When adding multiple filters that use the same column and filter syntax to global filters, they are combined with the AND syntax instead of OR
- Firefox only: cannot scroll large tables. The workaround is to hover your mouse over where the scroll bar should be on the right side of the window, start to scroll until the bar appears, then click and drag the scroll bar to scroll through the table.
- The chart legend cannot be toggled when using the Distribution or Stacked Bar views
Release 2.21.1
This maintenance release fixes the following issues:
Dashboard charts with stacked bar graphs appear as empty charts in email reports
Switching from Basic to Compare filters in the Explore window does not save the filters to compare group A or B
Ratio metrics
Order By does not correctly sort Custom Metrics in the display
Queries using a custom ratio with "Count Unique <string column>" in the numerator return no results in Table view
Can't use Make Fraction of Total with Count Unique in the numerator when creating a ratio metric
Removing a "true" or "false" filter values does not update the query correctly. New queries still use the true/false value.
Release 2.21.2
This maintenance release fixes an issue that could occur when tiering data. | https://docs.scuba.io/2/Guides/Interana_release_notes/Interana_2.21_Release_Notes | 2021-06-12T17:50:50 | CC-MAIN-2021-25 | 1623487586239.2 | [array(['https://docs.scuba.io/@api/deki/files/3005/221_data_tooltips.png?revision=1&size=bestfit&width=800&height=439',
None], dtype=object) ] | docs.scuba.io |
:
/opt/LifeKeeper/bin/lkstart
When executing this command, the LifeKeeper service will be started and LifeKeeper will be set to start automatically at system startup.
Following the delay of a few seconds, an informational message is displayed.
See the LCD help page by entering man LCD at the command line for details on the lkstart command.
To start only the LifeKeeper process without enabling automatic startup, execute the following command:
service lifekeeper start (or, systemctl start lifekeeper)
Enabling Automatic LifeKeeper Restart
While the above command will start LifeKeeper, it will need to be performed each time the system is re-booted. If you would like LifeKeeper to start automatically when server boots up, type the following command:
chkconfig lifekeeper on (or, systemctl enable lifekeeper)
See the chkconfig man page for further information.
Post your comment on this topic. | https://docs.us.sios.com/spslinux/9.5.0/en/topic/starting-lifekeeper | 2021-06-12T18:16:29 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.us.sios.com |
WithdrawByoipCidr
Stops advertising an. AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_WithdrawByoipCidr.html | 2021-06-12T18:51:17 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.aws.amazon.com |
Tile reference
Your View, Activity, and News pages all contain tiles. Here you can find references for these tiles to help you understand which tiles you can access and from where.
For more information about tiles, see Using tiles.
Tiles support in Jive Daily Hosted
The tiles you add to pages are displayed in the Jive Daily Hosted app in one column regardless of the tile layout selected for the page. You should also note that the app supports not all tiles; you can find the details in this section. | https://docs.jivesoftware.com/9.0_on_prem_int/end_user/jive.help.core/user/TileReferences.html | 2021-06-12T18:31:20 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.jivesoftware.com |
Contents:
Contents:
The Trifacta® platform supports multiple types of platform environments. Through the Deployment Manager, you can deploy your flows from a development instance to a production instance.
Tip: When you initially set up a platform instance, you should decide whether it is a Dev instance, a Prod instance or both. Details are below.
NOTE: Assignment of roles must be executed through the Admin Settings page. You cannot assign roles through API commands.
Enable Dev-Only Environment
Deployment Manager configuration is required.
NOTE: Do not include the Deployment role in any users accounts. See Manage Users.
Enable Prod-Only Environment
If you are installing separate instances of the Trifacta platform to serve as Dev/Test and Prod environments, you can configure the Prod environment to serve only production purposes. Users who are permitted access to this environment can create and manage deployments, releases within them, and jobs triggered for these releases.
Tip: Separate Dev and Prod platform instances is recommended.
By default, the installed instance of the platform is configured as a Development instance. To configure the installed platform to operate as a Production instance, please complete the following steps.
NOTE: If you are enabling a Production-only instance of the platform, you should verify that you have deployed sufficient cluster resources for executing jobs and have sufficient nodes and users in your Trifacta license to support it. For more information, see Overview of Deployment Manager.
Steps:
- You can apply this change through the Admin Settings Page (recommended) or
trifacta-conf.json. For more information, see Platform Configuration Methods.
Configure the following setting to be
true:
"deploymentManagement.enabled" : true,
- Save your changes and restart the platform.
User Management for Prod-only
You must create accounts in the Prod instance for users who are to be permitted to create and manage deployments.
Tip: You should limit the number of users who can access a Production environment.
NOTE: Any user who has access to a Production-only instance of the platform can perform all deployment-related actions in the environment. The Deployment role does not apply. For more information, see Manage Users.
Enable All-in-One Environment
In this environment, individual user accounts may access development and testing features of the platform or the Deployment Manager, but not both. A user is a development user or a production user, based upon roles in the user's account.
Steps:
- You can apply this change through the Admin Settings Page (recommended) or
trifacta-conf.json. For more information, see Platform Configuration Methods.
Configure the following setting to be
false:
"deploymentManagement.enabled" : false,
- Save your changes and restart the platform.
User management for All-in-One environment
In this environment, access to Deployment Manager is determined by the presence of the Deployment role in a user's account:
When
deploymentManagement.enabled=false:
Switching roles
- Administrators should not apply these permission changes to admin accounts; use a separate account instead.
- If you switch the Deployment role on a single account, changes that you make to a Dev version of a flow are not automatically applied to a Prod version of the same flow, and vice-versa. You must still export the flow from one environment and import into the other to see any changes.
This page has no comments. | https://docs.trifacta.com/display/r068/Configure+Deployment+Manager | 2021-06-12T17:59:43 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.trifacta.com |
Setting row-level security for variables
BMC Remedy Smart Reporting KPI reports require that you set up row level security for variables.
For real-time data, flashboard grants access to statistical data based on user login information.
For historical and summary data in multi-tenant environments, you can grant access to flashboards variables by implementing row-level security. If you do not set permissions for a variable, only the administrator who created the variable can see the information in the flashboard.
This row-level security enables you to show and hide information based on a user's permissions. For example, if you place a flashboard on a Help Desk form, you can display one set of data to support representatives and a different set of data to requesters.
To set row-level security for variables
- In BMC Remedy Mid Tier, open the FB:User Privilege form in New mode.
- From the Variable list, select the variable.
From the User list, select a user with group permissions to the data.
Warning
You can select any user in the group, but select a user that will not be deleted. If the user is deleted, permissions to the variable are lost, and you must select a new user in the FB:User Privilege form.
- From the Group list, select the group to which you want to give row-level security.
You can enter more than one group in this field by selecting from the Group list multiple times.
- Click Save.
- Repeat these steps for each group.
If you delete a variable, the corresponding records you created in the FB:User Privilege form are automatically deleted. | https://docs.bmc.com/docs/itsm91/setting-row-level-security-for-variables-608491110.html | 2021-06-12T18:14:13 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.bmc.com |
.
factory
- This is a
buildbot.process.factory.BuildFactoryinstance. is set to run one build at a time or ensure this is fine to run multiple builds from the same directory simultaneously.
- If provided, this is a list of strings that identifies tags for the builder. Status clients can limit themselves to a subset of the available tags. A common use for this is to add new builders to your setup (for a new module, or for a new worker) that do not work correctly yet and allow you to integrate them with the active builders. You can tag these new builders with a
testtag, make your main status clients ignore them, and have only private status clients pick them up. As soon as they work, you can move them over to the active tag.
nextWorker
-'s name. The function can optionally return a Deferred, which should fire with the same results.
nextBuild
- If provided, this is a function that controls which build request will be handled next. The function is passed two arguments, the
Builderobject which is assigning a new job, and a list of
BuildRequestobjects of pending builds. The function should return one of the
BuildRequestobjects, or
Noneif none of the pending builds should be started. This function can optionally return a Deferred which should fire with the same results.
canStartBuild
- If provided, this is a function that can veto whether a particular worker should be used for a given build request. The function is passed three arguments: the
Builder, a
Worker, and a
BuildRequest. The function should return
Trueif the combination is acceptable, or
Falseotherwise.steps']), ] | https://docs.buildbot.net/0.9.5/manual/cfg-builders.html | 2021-06-12T18:42:53 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.buildbot.net |
A great deal of information is available via the debug logging system, if you are having issues with minions connecting or not starting run the minion in the foreground:
# salt-minion -l debug
Anyone wanting to run Salt daemons via a process supervisor such as monit,
runit, or supervisord, should omit the
-d argument to the daemons and
run them in the foreground.
No ports need to be opened on the minion, as it makes outbound connections to the master. If you've put both your Salt master and minion in debug mode and don't see an acknowledgment that your minion has connected, it could very well be a firewall interfering with the connection. See our firewall configuration page for help opening the firewall on various platforms.
If you have netcat installed, you can check port connectivity from the minion
with the
nc command:
$ nc -v -z salt.master.ip.addr 4505 Connection to salt.master.ip.addr 4505 port [tcp/unknown] succeeded! $ nc -v -z salt.master.ip.addr 4506 Connection to salt.master.ip.addr 4506 port [tcp/unknown] succeeded!
The Nmap utility can also be used to check if these ports are open:
# nmap -sS -q -p 4505-4506 salt.master.ip.addr Starting Nmap 6.40 ( ) at 2013-12-29 19:44 CST Nmap scan report for salt.master.ip.addr (10.0.0.10) Host is up (0.0026s latency). PORT STATE SERVICE 4505/tcp open unknown 4506/tcp open unknown MAC Address: 00:11:22:AA:BB:CC (Intel) Nmap done: 1 IP address (1 host up) scanned in 1.64 seconds
If you've opened the correct TCP ports and still aren't seeing connections, check that no additional access control system such as SELinux or AppArmor is blocking Salt. Tools like tcptraceroute can also be used to determine if an intermediate device or firewall is blocking the needed TCP ports.
highstates by running.
If the minion seems to be unresponsive, a SIGUSR1 can be passed to the process to display what piece of code is executing. This debug information can be invaluable in tracking down bugs.
To pass a SIGUSR1 to the minion, first make sure the minion is running in the foreground. Stop the service if it is running as a daemon, and start it in the foreground like so:
# salt-minion -l debug
Then pass the signal to the minion when it seems to be unresponsive:
# killall -SIGUSR1 salt-minion
When filing an issue or sending questions to the mailing list for a problem with an unresponsive daemon, be sure to include this information if possible.
As is outlined in github issue #6300, Salt cannot use python's multiprocessing pipes and queues from execution modules. Multiprocessing from the execution modules is perfectly viable, it is just necessary to use Salt's event system to communicate back with the process.
The reason for this difficulty is that python attempts to pickle all objects in memory when communicating, and it cannot pickle function objects. Since the Salt loader system creates and manages function objects this causes the pickle operation to fail.
When a command being run via Salt takes a very long time to return
(package installations, certain scripts, etc.) the minion may drop you back
to the shell. In most situations the job is still running but Salt has
exceeded the set timeout before returning. Querying the job queue will
provide the data of the job but is inconvenient. This can be resolved by
either manually using the
-t option to set a longer timeout when running
commands (by default it is 5 seconds) or by modifying the minion
configuration file:
/etc/salt/minion and setting the
timeout value to
change the default timeout for all commands, and then restarting the
salt-minion service.
Note
Modifying the minion timeout value is not required when running commands from a Salt Master. It is only required when running commands locally on the minion.
If a
state.apply run takes too long, you can find a bottleneck by adding the
--out=profile option. | https://docs.saltproject.io/en/latest/topics/troubleshooting/minion.html | 2021-06-12T18:25:42 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.saltproject.io |
A/B test
A method of testing two variants of something, like a web page, marketing campaign, or even just the label on a login button (or maybe that's Sign In, instead?).
Use Scuba’s A/B view to understand the results of your A/B tests. For example, you can examine the results of tests for new layouts, user flows, email subjects, recommendation algorithms, colors, rankings, or new features. Then use filters to drill down into your data and identify the statistically significant results. | https://docs.scuba.io/lexicon/A_B_test | 2021-06-12T17:20:54 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.scuba.io |
STM32 PWM and dualPWM servos shared functions. More...
#include "arch/stm32/subsystems/actuators/actuators_shared_arch.h"
#include <libopencm3/stm32/timer.h>
#include "arch/stm32/mcu_arch.h"
Go to the source code of this file.
STM32 PWM and dualPWM servos shared functions.
Definition in file actuators_shared_arch.c.
Set PWM channel configuration.
Definition at line 35 of file actuators_shared_arch.c.
Referenced by set_servo_timer().
Set Timer configuration.
Definition at line 59 of file actuators_shared_arch.c.
References actuators_pwm_arch_channel_init(), PWM_BASE_FREQ, and timer_get_frequency().
Referenced by actuators_dualpwm_arch_init(), and actuators_pwm_arch_init(). | http://docs.paparazziuav.org/latest/actuators__shared__arch_8c.html | 2021-06-12T17:02:00 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.paparazziuav.org |
Setting the color depth for captured sessions
Because audit and monitoring service captures user activity as video, you can configure the color depth of the sessions to control the size of data that must be transferred over the network and stored in the database. A higher color depth also increases the CPU overhead on audited computers but improves resolution when the session is played back. A lower color depth decreases the amount of data sent across the network and stored in the database. In most cases, the recommended color depth is medium (16 bit). The CPU and storage estimates in this guide are based on a medium (16 bit) color depth.
To change the color depth for captured sessions:
- Log on to the computer where the Centrify agent for Windows is installed.
- In the list of applications on the Windows Start menu, click Agent Configuration to open the agent configuration panel.
- Click Centrify Auditing and Monitoring Service.
- Click Settings.
- On the General tab, click Configure
- Select the maximum color quality for recorded sessions, then click Next.
- Follow the prompts displayed to change any other configuration settings. | https://docs.centrify.com/Content/auth-admin-win/CapturedSessionsSetColorDepth.htm | 2021-06-12T17:19:35 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.centrify.com |
Creating an organizational unit for Centrify
To isolate the evaluation environment from other objects in Active Directory, you can create a separate organizational unit for all of the Centrify-specific objects that are created and managed throughout the evaluation. You must be the Active Directory administrator or have Domain Admins privileges to perform this task.
To create an organizational unit for Centrify
- Open Active Directory Users and Computers and select the domain.
- Right-click and select New > Organizational Unit.
- Deselect Protect container from accidental deletion.
- Type the name for the organizational unit, for example, Centrify, then click OK.
Create additional organizational units
Additional organizational units are not required for an evaluation. In a production environment, however, you might create several additional containers to control ownership and permissions for specific types of Centrify objects. For example, you might create separate organizational units for UNIX Computers and UNIX Groups.
To illustrate the procedure, the following steps create an organizational unit for the Active Directory groups that will be used in the evaluation to assign user access rights to the Centrify-managed computers within the top-level organizational unit for Centrify-specific objects.
To create an organizational unit for evaluation groups
- In Active Directory Users and Computers, select the top-level organizational unit you created in Creating an organizational unit for Centrify.
- Right-click and select New > Organizational Unit.
- Deselect Protect container from accidental deletion.
Type the name for the organizational unit, for example, UNIX Groups, then click OK.
In later exercises, you will use this organizational unit and add other containers to manage additional types of information. | https://docs.centrify.com/Content/auth-unix-eval/OrgUnitCreate.htm | 2021-06-12T18:15:06 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.centrify.com |
Dashboard Interface
The dashboard is the inferface connected to your wallet and the engine mainnet network, on there you can deploy, manage states and check all logs of your own graphs, the only official url is app.graphlinq.io
You can also buy GLQ token automatically through the interface and manage your balance for the costs of your running graphs, make deposit and withdrawal request from the ethereum smart-contract.> Github open source repo of the Interface | https://docs.graphlinq.io/dashboard/ | 2021-06-12T18:34:07 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.graphlinq.io |
The chapters in this section each describe a part of the application architecture. Together they provide an overview of how requests to the application are authorized, how data is saved and retrieved from the database, and how responses are returned.
These chapters provide an introduction to how the application works. You may need to read the code to learn more about each component. But after reading these chapters you should have an idea of where to look.
Some of the code in OJS and OMP is more than ten years old. You may find parts of the application code that do not conform to coding conventions in this document. This guide describes the architecture which all new contributions should follow.
Each application includes modules in three locations.
ojs │ ├─┬ lib │ ├── pkp # The base library which │ │ # powers all of our applications │ │ │ └── ui-library # The UI component library used │ # for the editorial backend. │ └── plugins # Official and third-party plugins
A class in OJS or OMP will often extend a class in the base library. For example, in OJS we use the
Submission class which extends the
PKPSubmission class.
import('lib.pkp.classes.submission.PKPSubmission'); class Submission extends PKPSubmission { ... }
Both the application and the base library share a similar file structure.
ojs │ ├─┬ classes │ └─┬ submission │ └── Submission.inc.php │ └─┬ lib └─┬ pkp └─┬ classes └─┬ submission └── PKPSubmission.inc.php
The same approach is used in OMP.
We use the term
Context to describe a
Journal (OJS) or
Press (OMP). To reuse code across both applications, you will often see code that refers to the context.
$context = $request->getContext();
This always refers to the
Journal (OJS) or
Press (OMP) object. It is identical to the following code.
$journal = $request->getJournal();
A single instance of OJS can run many journals. It is important to restrict requests for submissions, users and other objects in the system by the context.
$submissions = Services::get('submission')->getMany([ 'contextId' => $request->getContext()->getId(), ]);
Failure to pass a context or context id to many methods will return objects for all contexts.
Usually, the context is taken from the
Request object. Learn more about the Request Lifecycle. | https://docs.pkp.sfu.ca/dev/documentation/en/architecture | 2021-06-12T16:35:12 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.pkp.sfu.ca |
Import Servers -:
Pushing agent ACLs
To push the agent ACLs defined for the imported servers, select Push agent ACLs to successfully imported servers. Selecting this option launches an ACL Push Job after servers have been successfully added to the system. For more information, see Creating ACL Push Jobs.
Where to go from here
Return to Importing servers into the system
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/ServerAutomation/87/installing/component-installation-reference/agent-installation-reference/installing-multiple-agents-using-the-unified-agent-installer-job/importing-servers-into-the-system/import-servers-permissions | 2021-06-12T17:48:11 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.bmc.com |
Gradle 1.7 is the fastest Gradle ever. Significant performance optimizations have been made, particularly in the areas of dependency resolution and build script compilation. All users of Gradle will benefit from these improvements, while large builds stand to gain the most. Improving performance and scalability continues to be a pervasive theme in the evolution of Gradle and 1.7 delivers on this.
In addition to these behind the scenes improvements, Gradle 1.7 is also packed with many exciting new features. The new finalizer task mechanism makes it possible to run a task after another task, regardless of its outcome. This is extremely useful for integration testing where resources (such as application servers) must be shutdown after (possibly failing) tests. Another long awaited feature, the ability to control processing of duplicate files in copy and archive creation operations, has also been added in this release.
The improvements to the Build Setup plugin in Gradle 1.7 build upon the existing functionality (i.e. converting Maven projects to Gradle) to add support for generating projects from a template. Over time this mechanism will be expanded to include custom templates, easing the process of creating a new Gradle project of a certain type.
This release of Gradle also includes major steps forward for building native binaries from C++ source. Support for native binaries in general is an area under heavy development. Expect to see Gradle's capabilities in this area continue to improve in upcoming releases.
Excitingly, Gradle 1.7 contains more contributions from developers outside of the core development team than any other release. This is a steadily increasing trend release on release and is adding significant value to Gradle for all Gradle users. Thank you to all who contributed to Gradle 1.7.
For more information on what's new in Gradle 1.7, please read on.
Testtask implements standard
Reportinginterface
ConfigureableReportrenamed to
ConfigurableReport
Here are the new features introduced in this Gradle release.
Gradle 1.7 is the fastest version of Gradle yet. Here are the highlights:
As always, the performance improvements that you actually see for your build depends on many factors.
With this change, the dependency resolution is much faster. Typically, the larger the project is the more configurations and dependencies are resolved during the build. By caching the artifact meta-data in memory Gradle avoids parsing the descriptor when the same dependency is requested multiple times in a build.
An incremental build for a large project should be tangibly faster with Gradle 1.7. A full build may be much faster, too. The level of performance improvement depends on the build. If a large portion of the build time is taken by slow integration tests, the performance improvements are smaller. Nevertheless, some of the large builds that were used for benchmarking show up to 30% speed increase.
Caching the artifact metadata in-memory is very important for local repositories, such as
mavenLocal() and for resolution of snapshots / dynamic versions. Prior to Gradle 1.7, every time a local dependency was resolved, Gradle would load the dependency metadata directly from the local repository. With the in-memory caching of dependency metadata, this behavior now changes. During a single build, a given dependency will be loaded once only and will not be reloaded again from the repository.
This may be a breaking change for builds that depend the on the fact that certain dependencies are reloaded from the repository during each resolution. Bear in mind that the vast majority of builds will enjoy faster dependency resolution offered by the in-memory caching. If your project requires reloading of snapshots or local dependencies during the build please let us know so that Gradle can better understand your scenario and model it correctly.
To avoid increased heap consumption, the in-memory dependency metadata cache may clear the cached data when there is heap pressure.
This change improves the mechanism that Gradle uses to coordinate multi-process access to the Gradle caches. This new mechanism means that the Gradle process now requires far fewer operations on the file system and can make better use of in-memory caching, even in the presence of multiple Gradle processes accessing the caches concurrently.
The caches used for dependency resolution and for incremental build up-to-date checks are affected by this change, meaning faster dependency resolution and incremental build checks.
Coupled with this change are some improvements to the synchronization of worker threads within a given Gradle process, which means parallel execution mode is now more efficient.
The new mechanism is biased to the case where a single Gradle process is running on a machine. There should not be any performance regressions when multiple Gradle processes are used, but please raise a problem report via the Gradle Forums if you observe a regression.
This change improves build script compilation by adding some caching in critical points in the classloader hierarchy. This affects, for example, first time users of a build, build authors, and those upgrading a build to a new Gradle version.
Gradle 1.7 introduces a new task ordering rule that allows a task to finalize some other task. This feature was contributed by Marcin Erdmann.
Finalizer tasks execute after the task they finalize, and always run regardless of whether the finalized task succeeds or fails. Tasks declare the tasks that finalize them.
configure([integTest1, integTest2]) { dependsOn startAppServer finalizedBy stopAppServer }
In this example, it is declared that the
integTest1 and
integTest2 tasks are finalized by the
stopAppServer task. If either of these tasks are executed during a build, the declared finalizer task will be automatically executed after this. If both tasks are executed during a build, the finalizer task will be executed after both tasks have been executed. The finalizer task,
stopAppServer, does not need to be declared as a task to be run when invoking Gradle.
Finalizer tasks can be used to clean up resources, produce reports, or perform any other mandatory function after a task executes regardless of whether it succeeds or fails.
Gradle has had basic support for C++ projects for some time. This is now expanding with the goal of positioning Gradle as the best build system available for native code projects.
This includes:
Some of these features are included in Gradle 1.7 (see below), while others can be expected in the upcoming releases.
A key part of improving C++ support is an improved component model which supports building multiple binary outputs for a single defined native component. Using this model Gradle can now produce both a static and shared version of any library component.
For any library declared in your C++ build, it is now possible to either compile and link the object files into a shared library, or compile and archive the object files into a static library (or both). For any library 'lib' added to your project, Gradle will create a 'libSharedLibrary' task to link the shared library, as well as a 'libStaticLibrary' task to create the static library.
Please refer to the User Guide chapter and the included C++ samples for more details.
Each binary to be produced from a C++ project is associated with a set of compiler and linker command-line arguments, as well as macro definitions. These settings can be applied to all binaries, an individual binary, or selectively to a group of binaries based on some criteria.
binaries.all { // Define a preprocessor macro for every binary define "NDEBUG" compilerArgs "-fconserve-space" linkerArgs "--export-dynamic" } binaries.withType(SharedLibraryBinary) { define "DLL_EXPORT" }
Each binary is associated with a particular C++ tool chain, allowing settings to be targeted based on this value.
binaries.all { if (toolChain == toolChains.gcc) { compilerArgs "-O2", "-fno-access-control" linkerArgs "-S" } if (toolChain == toolChains.visualCpp) { compilerArgs "/Z7" linkerArgs "/INTEGRITYCHECK:NO" } }
More examples of how binary-specific settings can be provided are in the user guide.
The C++ plugins now support using g++ when running Gradle under Cygwin.
The incremental build support offered by the C++ plugins has been improved in this release, making incremental build very accurate:
It is now even easier to obtain JVM dependencies from Bintray's JCenter Repository, with the
jcenter() repo notation. JCenter is a community repository, that is free to publish to via Bintray.
repositories { jcenter() }
This will add to your repository list, as an Apache Maven repository.
Gradle 1.7 adds the ability to specify fine grained configuration of how certain files should be copied by targeting configuration with “Ant Patterns”. This feature was contributed by Kyle Mahan.
Gradle has a unified API for file copying operations, by way of
CopySpec, which includes creating archives (e.g. zips). This new feature makes this API more powerful.
task copyFiles(type: Copy) { from "src/files" into "$buildDir/copied-files" // Replace the version number variable in only the text files filesMatching("**/*.txt") { expand version: "1.0" } }
The
filesMatching() method can be called with a closure and configures an instance of
FileCopyDetails. There is also an inverse variation,
filesNotMatching(), that allows configuration to be specified for all files that do not match the given pattern.
When copying files or creating archives, it is possible to do so in such a way that effectively creates duplicates at the destination. It is now possible to specify a strategy to use when this occurs to avoid duplicates. This feature was contributed by Kyle Mahan.
task zip(type: Zip) { from 'dir1' from 'dir2' duplicatesStrategy 'exclude' }
There are two possible strategies:
include and
exclude.
The
include strategy is equivalent to Gradle's existing behavior. For copy operations, the last file copied to the duplicated destination is used. However, a warning is now issued when this occurs. For archive creation (e.g. zip, jar), duplicate entries will be created in the archive.
The
exclude strategy effectively ignores duplicates. The first thing copied to a location is used and all subsequent attempts to copy something to the same location are ignored. This means that for copy operations, the first file copied into place is always used. For archive operations, the same is true and duplicate entries will not be created.
It is also possible to specify the duplicates strategy on a very fine grained level using the flexibility of the Gradle API for specifying copy operations (incl. archive operations).
task zip(type: Zip) { duplicatesStrategy 'exclude' // default strategy from ('dir1') { filesMatching("**/*.xml") { duplicatesStrategy 'include' } } from ('dir2') { duplicatesStrategy 'include' } }
It is now possible to Gradle Wrapper enable a project without having to create a
Wrapper task in your build. That is, you do not need to edit a build script to enable the Wrapper.
To Wrapper enable any project with Gradle 1.7, simply run:
gradle wrapper
The Wrapper files are installed and configured to use the Gradle version that was used when running the task.
To customize the wrapper task you can modify the task in your build script:
wrapper { gradleVersion '1.6' }
If there is already an explicitly defined task of type
Wrapper in your build script, this task will be used when running
gradle wrapper; otherwise the new implicit default task will be used.
The
build-setup plugin now supports declaring a project type when setting up a build, laying the foundations for creating different types of project starting points conveniently. Gradle 1.7 comes with the
java-library type, which generates:
To create a new Java library project, you can execute the following in a directory (no
build.gradle needed):
gradle setupBuild --type java-library
See the chapter on the Build Setup plugin for more info, including future directions.
It is now possible to explicitly set the identity of a publication with the new publishing plugins. Previously the identity was assumed to be the same of the project.
For a
MavenPublication you can specify the
groupId,
artifactId and
version used for publishing. You can also set the
packaging value on the
MavenPom.
publications { mavenPub(MavenPublication) { from components.java groupId "my.group.id" artifactId "my-publication" version "3.1" pom.packaging "pom" } }
For an
IvyPublication you can set the
organisation,
module and
revision. You can also set the
status value on the
IvyModuleDescriptor.
publications { ivyPub(IvyPublication) { from components.java organisation "my.org" module "my-module" revision "3" descriptor.status "milestone" } }
This ability is particularly useful when publishing with a different
module or
artifactId, since these values default to the
project.name which cannot be modified from within the Gradle build script itself.
The publishing plugins now allow you to publish multiple publications from a single Gradle project.
project.group "org.cool.library" publications { implJar(MavenPublication) { artifactId "cool-library" version "3.1" artifact jar } apiJar(MavenPublication) { artifactId "cool-library-api" version "3" artifact apiJar } }
While possible, it is not trivial to do the same with the old publishing support. The new
ivy-publish and
maven-publish plugins now make it easy.
TestNG supports parameterizing test methods, allowing a particular test method to be executed multiple times with different inputs. Previously in Gradle's test reports, parameterized methods were listed multiple times (for each parameterized iteration) with no way to differentiate the executions. The test reports now include the
toString() values of each parameter for each iteration, making it easy to identify the data set for a given iteration.
Given a TestNG test case:
import org.testng.annotations.*; public class ParameterizedTest { @Test(dataProvider = "1") public void aParameterizedTestCase(String var1, String var2) { … } @DataProvider(name = "1") public Object[][] provider1() { return new Object[][] { {"1", "2"}, {"3", "4"} }; } }
The test report will show that the following test cases were executed:
aParameterizedTestCase(1, 2)
aParameterizedTestCase(3, 4)
This includes Gradle's own HTML test report and the “JUnit XML” file. The “JUnit XML” file is typically used to convey test execution information to the CI server running the automated build, which means the parameter info is also visible via the CI server.
Testtask implements standard
Reportinginterface
The
Reporting interface provides a standardised way to control the reporting aspects of tasks that produce reports. The
Test task type now implements this interface.
apply plugin: "java" test { reports { html.enabled = false junitXml.destination = file("$buildDir/junit-xml") } }
The
Test task provides a
ReportContainer of type
TestReports, giving control over both the HTML report and the JUnit XML result files (these files are typically used to communicate test results to CI servers and other tools).
This brings the
Test task into line with other tasks that produce reports in terms of API. It also allows you to completely disable the JUnit XML file generation if you don't need it.
The above change (
Test task implements standard
Reporting interface) means that the test reports now appear in the build dashboard.
Also, the
buildDashboard task is automatically executed when any reporting task is executed (by way of the new “Finalizer Task” mechanism mentioned earlier).
This change facilitates better reporting of test execution on CI servers, notably Jenkins.
The JUnit XML file format is a de-facto standard for communicating test execution results between systems. CI servers typically use this file as the source of test execution information. It was originally conceived by the “JUnit Ant Tasks” that quickly appeared after the introduction of JUnit and became widely used, without a specification ever forming.
This file also captures the system output (
System.out and
System.err) that occurs during test execution. Traditionally, the output has been recorded at the class level. That is, output is not associated with the individual test cases (i.e. methods) within the class but with the class as a whole. You can now enable “output per test case” mode in Gradle to get better reporting.
test { reports { junitXml.outputPerTestCase = true } }
With this mode enabled, the XML report will associate output to the particular test case that created it. The Jenkins CI server provides a UI for inspecting the result of a particular test case of class. With
outputPerTestCase = true, output from that test case will be shown on that screen. Previously it was only visible on the page for the test class.
This is also necessary for effective use of the Jenkins JUnit Attachments Plugin that allows associating test attachments (e.g. Selenium screen shots) with test execution in the Jenkins UI.
Thanks to a contribution by Olaf Klischat, the Application Plugin now provides the ability to specify default JVM arguments to include in the generated launcher scripts.
apply plugin: "application" applicationDefaultJvmArgs = ["-Dfile.encoding=UTF=8"]
The OSGi plugin uses the Bnd tool to generate bundle manifests. The version used has changed from
1.50.0 to
2.1.0 with this release.
The most significant improvement obtained through this upgrade is the improved accuracy of generated manifests for Java code that uses the “invokedynamic” byte code instruction.
It is now possible to set enum value properties in the Gradle DSL using the name of the value as a string. Gradle will automatically convert it the string to the corresponding enum value. For example, this can be used for setting the (new in 1.7) duplicate handling strategy for file copy operations.
task copyFiles(type: Copy) { from 'source' into 'destination' duplicatesStrategy 'exclude' }
The
duplicatesStrategy property is being set here via the
CopySpec.setDuplicatesStrategy(DuplicatesStrategy) method, which takes an enum value of type
DuplicatesStrategy In the Gradle DSL, the value can be set using the (case-insensitive) name of the desired enum value..
The
Test task has been updated to implement the standard
Reporting interface. The existing API for configuring reporting and results has now been deprecated. This includes the following methods:
disableTestReport()
enableTestReport()
isTestReport()
setTestReport()
getTestReportDir()
setTestReportDir()
getTestResultsDir()
setTestResultsDir()
All of the deprecated functionality is still available via the new
Reporting mechanism.
As a significant performance optimization, Gradle now caches dependency descriptors in memory across the entire build. This means that if that dependency metadata changes during the build, the changes may not be seen by Gradle. There are no identified usage patterns where this would occur, but it is theoretically possible.
Please see the section on “Faster Gradle Builds” for more information.
Some properties of classes introduced by the JaCoCo code coverage plugin have been renamed with better names.
JacocoTaskExtension.destPathrenamed to
destinationFile
JacocoTaskExtension.classDumpPathrenamed to
classDumpFile
JacocoMerge.destFilerenamed to
destinationFile
The
ConvertMaven2Gradle,
GenerateBuildScript and
GenerateSettingsScript classes have been removed. The respective logic is now part of the
buildSetup task which has now the type
SetupBuild.
The plugin creates different set of tasks, with different types and names depending on the build-setup type.
The
setupWrapper task is now called
wrapper.
For consistency with the maven-publish plugin, the task for generating the ivy.xml file for an Ivy publication has changed. This task is now named
generateDescriptorFileFor${publication.name}Publication.
statusvalue of IvyPublication is
integrationand no longer defaults to
project.status
In order to continue decoupling the Gradle project model from the Ivy publication model, the '
project.status' value is no longer used when publishing with the
ivy-publish plugin.
If no status value is set on the
IvyModuleDescriptor of an
IvyPublication, then the default ivy status ('
integration') will be used. Previously, '
release' was used, being the default value for '
project.status'.
The incubating C++ support in Gradle is undergoing a major update. Many existing plugins, tasks, API classes and the DSL have been being given an overhaul. It's likely that all but the simplest existing C++ builds will need to be updated to accommodate these changes.
If you want your existing C++ build to continue working with Gradle, you have 2 options:
ConfigureableReportrenamed to
ConfigurableReportincubating feature
The incubating class
org.gradle.api.reporting.ConfigureableReport was renamed to
org.gradle.api.reporting.ConfigurableReport as the original name was misspelled.
The test task is now skipped when there are no test classes (GRADLE-2702).
Previously, the test task was still executed even if there were no test classes. This meant that dependency resolution would occur and an empty HTML report was generated. This new behavior results in faster builds when there are no test classes. No negative impacts on existing builds are expected.
The OSGi plugin uses the Bnd tool to generate bundle manifests. The version used has changed from
1.50.0 to
2.1.0 with the 1.7 release.
While this should be completely backwards compatible, it is a significant upgrade.
Gradle now treats the configuration mapping of a dependency declaration in an Ivy descriptor file (
ivy.xml) the same way as Ivy. Previously, there were a number of bugs that affected how Gradle handled configuration mappings in Ivy descriptor files.
These changes mean you may see different resolution results in Gradle 1.7 compared to previous Gradle versions, if you are using Ivy repositories.
On behalf of the Gradle community, the Gradle development team would like to thank the following people who contributed to this version of Gradle:
maxPriorityViolationssetting for the CodeNarc plugin (GRADLE-1742).
jcenter()repo notation for Bintray's JCenter repository.. | https://docs.gradle.org/1.7/release-notes.html | 2021-06-12T17:49:54 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.gradle.org |
To deploy build, you must configure your server and make sure you’re using the correct response headers, so that the browser can receive the proper response and process the response correctly.
There are two main settings in Unity that affect how you set up the server:
Choose the compression type from the WebGL Player Settings window (menu: Edit > Project Settings > Player, then select WebGL and expand the Publishing Settings section):
For more information on browser support for selected compression methods, see documentation on WebGL browser compatibility.
You might need to adjust your server configuration to match your specific build setup. In particular, there might be issues if you already have another server-side configuration to compress hosted files, which could interfere with this setup. To make the browser perform decompression natively while it downloads your application, append a Content-Encoding header to the server response. This header must correspond to the type of compression Unity uses at build time. For code samples, see Server Configuration Code Samples.
The decompression fallback option enables Unity to automatically embed a JavaScript decompressor into your build. This decompressor corresponds to your selected compression method, and decompresses your content if the browser fails to do so.
Enable decompression fallback from the Player Settings window (menu: Edit > Project Settings > Player, then select WebGL and expand the Publishing Settings section).
When you enable decompression fallback, Unity adds a
.unityweb extension to the build files.
You should consider using Decompression Fallback if you have less experience with server configuration, or if server configuration is unavailable to you.
Note: Using this option results in a larger loader size and a less efficient loading scheme for the build files.
The Decompression Fallback option is disabled by default. Therefore, by default, build files have an extension that corresponds to the compression method you select.
There are two compression methods to choose from: gzip or Brotli. For further information see the compression format section.
To enable browsers to natively decompress Unity build files while they’re downloading, you need to configure your web server to serve the compressed files with the appropriate HTTP headers. This is called native browser decompression. It has the advantage of being faster than the JavaScript decompression fallback, which can reduce your application’s startup time.
The setup process for native browser decompression depends on your web server. For code samples, see Server Configuration Code Samples.
A Content-Encoding header tells the browser which type of compression Unity has used for the compressed files. This allows the browser to decompress the files natively.
Set the Content-Encoding response header to the compression method selected in the Player SettingsSettings that let you set various player-specific options for the final game built by Unity. More info
See in Glossary.
WebAssembly streaming allows the browser to compile the WebAssembly code while it is still downloading the code. This significantly improves loading times.
For WebAssembly streaming compilation to work, the server needs to return WebAssembly files with an
application/wasm MIME type.
To use WebAssembly streaming, you need to serve WebAssembly files with the
Content-Type: application/wasm response header.
A Content-Type header tells the server which media type the content is. This value should be set to
application/wasm for WebAssembly files.
Note: WebAssembly streaming does not work together with JavaScript decompression (when the Decompression Fallback option is enabled). In such cases, the downloaded WebAssembly file must first go through the JavaScript decompressor and therefore the browser cannot stream it during download.
If your file contains JavaScript, you should add the
application/javascript Content-Type header. Some servers might include this automatically, while others do not. | https://docs.unity3d.com/Manual/webgl-deploying.html | 2021-06-12T18:19:12 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.unity3d.com |
Configuration/Installation
Before using LifeKeeper to create an SAP resource hierarchy, perform the following tasks in the order recommended below. Note that there are additional non-HA specific configuration tasks that must be performed that are not listed below. Consult the appropriate SAP installation guide for additional details.
The following tasks refer to the “SAP Primary Server” and “SAP Backup Server.” The SAP Primary Server is the server on which the Central Services will run during normal operation, and the SAP Backup Server is the server on which the Central Services will run if the SAP Primary Server fails.
Although it is not necessarily required, the steps below include the recommended procedure of protecting all shared file systems with LifeKeeper prior to using them. Prior to LifeKeeper protection, a shared file system is accessible from both servers and is susceptible to data corruption. Using LifeKeeper to protect the file systems preserves single server access to the data.
Before Installing SAP
The tasks in the following topic are required before installing your SAP software. Perform these tasks in the order given. Please also refer to the SAP document SAP Web Application Server in Switchover Environments when planning your installation in NetWeaver Environments.
Installing SAP Software
These tasks are required to install your SAP software for high availability. Perform the tasks below in the order given. Click on each task for details. Please refer to the appropriate SAP Installation Guide for further SAP installation instructions.
Primary Server Installation
Install the Core Services, ABAP and Java Central Services
Install the Primary Application Server Instance
Install Additional Application Server Instances
Backup Server Installation
Install on the Backup Server
Installing LifeKeeper
Create File Systems and Directory Structure
Move Data to Shared Disk and LifeKeeper
Upgrading From a Previous Version of the SAP Recovery Kit
Configuring SAP with LifeKeeper
Resource Configuration Tasks
The following tasks explain how to configure your recovery kit by selecting certain options from the Edit menu of the LifeKeeper GUI. Each configuration task can also be selected from the toolbar or you may right-click on a global resource in the Resource Hierarchy Tree (left-hand pane) of the status display window to display the same drop down menu choices as the Edit menu. This, of course, is only an option when a hierarchy already exists.
Alternatively, right-click on a resource instance in the Resource Hierarchy Table (right-hand pane) of the status display window to perform all the configuration tasks, except creating a resource hierarchy, depending on the state of the server and the particular resource.
Creating an SAP Resource Hierarchy
Deleting a Resource Hierarchy
Unextending Your Hierarchy
Common Recovery Kit Tasks
Setting Up SAP from the Command Line
To enable the SAP SIOS HA Cluster Connector for an SAP instance, see Activating the SAP SIOS HA Cluster Connector (SSHCC).
For proper administration of the ERS instance in LifeKeeper, the ERS profile must use the Start_Program parameter instead of Restart_Program for starting the ERS process. See the ASCS + ERS Restart_Program Parameter page for details on how to modify this parameter in the ERS instance profile.
Test the SAP Resource Hierarchy
You should thoroughly test the SAP hierarchy after establishing LifeKeeper protection for your SAP software. Perform the tasks in the order given.
Post your comment on this topic. | https://docs.us.sios.com/spslinux/9.5.0/en/topic/sap-installation | 2021-06-12T16:50:26 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.us.sios.com |
Use One-Time Session Token to Authenticate with UEF
In testing with the Google Canary Chrome Browser, one of our clients discovered an issue that was blocking users from logging in to their Blackboard Learn instance. After much troubleshooting, we discovered a multi-layer issue that brings us to, you guessed it, cookies.
This affects clients in SaaS with Ultra Base Navigation enabled using Ultra integrations that rely on UEF
Here is a brief description of the contributing factors:
First, the client had built a custom Ultra login page. The page included code designed to ensure that Learn login pages would never render inside of an iframe within Learn. It looks like this:
if ( top != self ) { top.location.replace( self.location.href ); }
In and of itself there’s nothing wrong with it. We, at Blackboard, have removed it from the default Ultra login page, but many clients use it in Original login pages, and so it’s moved with them into Ultra.
If you are unsure whether you have a custom login page, visit help.blackboard.com for more information. 3LO. In most cases, it’s a process that relies on a session cookie to hold everything together. This impending release of Chrome (and other browsers) will block this cookie because everything is happening across domains and involves the use of iframes.
So what happens is that, even though the integration is configured in Learn to not force the end-user to authorize the integration, the lack of the session cookie means that Learn has no idea that this user is logged in, so it pops open the login page..
Related, this same issue affects Safari users when cross-site tracking is disabled.
So what can you.
If you are using LTI 1.3, there’s a small bug in this. I will share a workaround that will both get around this bug, but not fail when the bug is fixed..
LTI 1.3
In LTI 1.3, you will see the value in the))
LTI 1.1
By now, I hope you are using LTI 1.3, but I know many are not. As a result, we also added a one-time session token to LTI 1.1 launches. This will come in the form POST parameter
ext_one_time_session_token. Just like in the 1.3 example, your application should take this value from the LTI launch, append it to the authorization code request endpoint as
one_time_session_cookie=that_token and redirect them to the authorization code endpoint.
Summary.
Regardless of whether you are an administrator or a developer, please feel free to reach out to us at [email protected] with any questions.
Happy coding! | https://docs.blackboard.com/blog/2021/05/10/use-one-time-session-tokens-instead-of-cookies-for-UEF-authentication.html | 2021-06-12T18:10:45 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.blackboard.com |
Once you have set up your project for Unity Services, you can enable the Analytics service.
To do this, in the Services window, select Analytics and then click the OFF button to toggle it ON. If you haven’t already done so, at this stage you will need to complete the mandatory Age Designation field for your project. (Again, you may have already done this for a different Unity Service, such as Ads). This age designation selection will appear in the Services window.
You must then hit Play in your project to validate the connection to Unity Analytics. The Unity Editor acts as a test environment to validate your Analytics integration. With Analytics enabled, when you press the Play button, the Editor sends data (an “App Start” event) to the analytics service. This means you can test your analytics without having to build and publish your game. Once you have pressed play, you can check that your project was validated by going to the Analytics Dashboard for this project. To get there, in the services window, click Services -> Analytics -> Go To Dashboard.
The Go to Dashboard button in the Analytics section of the Services window opens the dashboard in a web browser. The dashboard contains tools for visualizing, analyzing, and taking action based on your Analytics data. See Analytics Dashboard. | https://docs.unity3d.com/ru/2017.3/Manual/UnityAnalyticsSetup.html | 2021-06-12T18:44:10 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.unity3d.com |
MapEventsManagerService is a util service for managing all the map events (click, mouse up, etc.). It exposes a simple API for entity selection, event priority management and adds custom events (drag and drop, long press).
@Component(...)export class SomeComponent{constructor(private eventManager: MapEventsManagerService){// Input about the wanted eventconst);});}}
In the example above we start listing to Click events. according to
EventRegistration object.
eventManager.register()
Returns RxJs observer of type
DisposableObservable<EventResult> that we can subscribe to.
To remove the event registration just do:
resultObserver.dispose()
event: according to
CesiumEvent enum. All cesium events are supported, includes additional events like drag & drop and long press
entityType: it is possible to register to events on a specific entities types, e.g raise event only when
TrackEntity is Clicked.
AcEntity is the base class for all angular-cesium entities, it is a part of
AcNotification and is required for
MapEventManager to achieve this by setting different priority to each event.
PickOptions: according to the
PickOptions enum, set the different strategies for picking entities on the map:
NO_PICK - Will not pick entities
PICK_FIRST - First entity will be picked . use
Cesium.scene.pick()
PICK_ONE - If few entities are picked plonter is resolved. use
Cesium.scene.drillPick()
PICK_ALL - All entities are picked. use
Cesium.scene.drillPick()
MapEventsManagerService is provided by
<ac-map/>, therefor has few possibilities to reach it:
In any components under
<ac-map/> hierarchy as seen in the example above (recommended).
Using the MapsManagerService.
Using
@viewChild and ac-map reference:
acMapComponent.getMapEventManagerService() .
Meaning that the the callback that you pass to map event manager will be executed outside of angular zone. That is because Cesium run outside of Angular zone in case for performance reasons , kind of
ON_PUSH strategy. For example if you update your html template for every map event and you want it to render, you should use
ChangeDetectorRef or wrap your function with
NgZone.run()
class MyComponent {constructor(eventManager: MapEventsManagerService, ngZone: NgZone){eventManager.register(eventRegistration).subscribe((result) => {ngZone.run(()=>{this.textShownInTheTemplateHtml = result.movment;});});}}
In case a two or more entities are in the same location and both are clicked you have a plonter (which entity should be picked?). This is resolved according to the
PickOptions that we pass to the event registration:
PickOptions.NO_PICK - non of the entities will be picked, you only interested in the map location.
PickOptions.PICK_FIRST - the first(upper) entity will be picked.
PickOptions.PICK_ALL - all entities are picked and returned.
PickOptions.PICK_ONE - only one should be picked, a context will appear allowing the client to choose which entity he wants, selected entity will be passed to the eventcall back.
angular-cesium comes with
ac-default-plonter a basic implementation for the plonter context menu. showing a list of entities names to select from.
It is possible to create your own plonter context menu just take a look at
ac-default-plonter implementation, and disable the default plonter:
<ac-map [disableDefaultPlonter]="true"></ac-map>
stackblitz: | https://docs.angular-cesium.com/guides/map-events | 2021-06-12T16:39:55 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.angular-cesium.com |
Data
Usage
To store values of properties within an index, use the following syntax:
Index name On property_expression_list [ Data = stored_property_list ];
Where stored_property_list is either a single property name or a comma-separated list of properties, enclosed in parentheses.
Details
This keyword specifies a list of properties whose values are to be stored within this index.
You cannot use this keyword with a bitmap index.
Refer to the documentation on indices for more details.
Default
If you omit this keyword, values of properties are not stored within the index.
Example
Index NameIDX On Name [ Data = Name ]; Index ZipIDX On ZipCode [ Data = (City,State) ];
See Also
“Index Definitions” in this book
“Defining and Building Indices” in the SQL Optimization Guide
“Introduction to Compiler Keywords” in Defining and Using Classes | https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=ROBJ_INDEX_DATA | 2021-06-12T18:24:39 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.intersystems.com |
ITicket
Store Interface
Definition
This provides an abstract storage mechanic to preserve identity information on the server while only sending a simple identifier key to the client. This is most commonly used to mitigate issues with serializing large identities into cookies.
public interface class ITicketStore
public interface ITicketStore
type ITicketStore = interface
Public Interface ITicketStore | https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.authentication.cookies.iticketstore?view=aspnetcore-5.0 | 2021-06-12T16:33:41 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.microsoft.com |
A valid value for the URL data type can be composed of the following parts.
Example URL:
NOTE: IP addresses that include the protocol identifier () do not contain domain identifiers and need to be processed using a different set of methods. It might be easier to remove the protocol identifiers and change the data type to IP Address.
The hierarchy of domain names extends from right to left.
This page has no comments. | https://docs.trifacta.com/display/r071/Structure+of+a+URL | 2021-06-12T18:27:36 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.trifacta.com |
Between your refrigerator, dishwasher, stove, microwave, coffee pot, and other appliances, kitchens can be quite the energy hogs. Many people make the mistake of assuming energy efficiency in the kitchen means buying lots of expensive Energy Star appliances.
However, that couldn’t be farther from the truth. With a few easy steps, you can save a lot of energy in the kitchen — as much as $30 annually per appliance! In the average kitchen, energy savings can be as much as $130 a year.
- Use your refrigerator wisely. With some small adjustments, you can save as much as $45 a year by using your refrigerator more efficiently. First things first, take a look at your thermostats in the fridge and freezer — aim for 37-40 degrees Fahrenheit in the fridge, and 0-5 degrees in the freezer. Next, turn off the automatic ice maker. It may be convenient, but it costs you $12 to $18 dollars a year to operate.
- Try your hand at “green” cooking. Using your stove and oven more efficiently won’t save you as much on your bill as your fridge, but every little bit helps. When using the stove, trying turning off a burner a few minutes before you’re done cooking — they stay hot for long enough anyways. Additionally, use pans and burners that correspond in size. Large burners waste heat, while small burners have to work overtime to heat a larger pan.
- Use your dishwasher — it’s more efficient than hand washing. If it can fit in the dishwasher, put it in there. Hand washing might seem more efficient, but it actually uses three times as much water as your dishwasher. For extra efficiency, use your machine only when it’s full. However, leave enough room around the dishes so that they’re completely clean when you pull them out. Finally, try using energy-saving features. The heated dry cycle can be as much as 50% of a dishwasher’s energy use. Many modern dishwashers allow you to turn this off, but alternatively, you can just shut off the machine, open the door and pull out the racks to let them air-dry.
By using your kitchen appliances wisely, experts say you can make 10-year old models just as efficient as newer Energy Star models. The dollars add up!
Call today to ask us about maintenance and tune-ups for optimal efficiency in your household appliances. | https://docsapplianceservice.com/docs-guide-energy-efficiency-kitchen/ | 2021-06-12T16:53:23 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docsapplianceservice.com |
Guide for Teachers of the Visually Impaired
According to the 2017 Disability Statistics Annual Report, fewer than .9% of children aged five through 17 in the United States have a vision disability. While children with visual impairments represent a small proportion of the population of students served, their needs can be quite challenging. This addendum was developed to provide educators with an understanding of some differences among children with visual impairment, as well as tools and resources available to help them to learn and thrive. This document is not meant to be comprehensive, or to duplicate existing training materials and documents.
Visual Impairment Defined
Under the IDEA, visual impairment including blindness means an impairment in vision that, even with correction, adversely affects a child’s educational performance. The term includes both partial sight and blindness. [§300.8(c)(13)] Learn more at
Diseases and Disorders
There are multiple conditions which may lead to visual impairment. These abnormalities may impact visual functioning, and by extension, education of the student, in a variety of ways. The effects on a child’s development depend on the severity, type of loss, age at which the condition appears, and overall functioning level of the child. Many children who have multiple disabilities may also have visual impairments resulting in motor, cognitive, and/or social developmental delays.
The following selected terms include only a few of the many visual disorders found in children:
- Amblyopia (condition in which one eye fails to develop clear vision; commonly called lazy eye.)
- Cataracts (clouding of the lens impacting visual clarity)
- Convergence Insufficiency (eyes drift outward when reading or doing close work.)
- Cortical Visual Impairment (visual dysfunction caused by damage or injury to the brain)
- Glaucoma (damage to the optic nerve, usually caused by fluid build-up and increased pressure inside the eye resulting in peripheral vision loss, and difficulty seeing in dim light)
- Hyperopia (distant objects are seen clearly, but close objects are blurred; commonly called farsightedness)
- Infections, malformations, optic nerve defects, and trauma to the eye (various causes and results)
- Retinoblastoma (cancer that begins in the retina and may result in loss of the eye)
- Myopia (close objects are seen clearly, but objects farther away are blurred; commonly called nearsightedness)
- Nystagmus (repetitive, uncontrolled eye movements, often resulting in reduced vision)
- Strabismus (eyes are not both directed toward the same point simultaneously; commonly referred to as crossed-eyes)
Additional information about conditions which cause visual impairment is available from the following sources:
- American Academy of Ophthalmology,
- American Foundation for the Blind,
- Optometrists Network,
- National Eye Institute at the National Institutes of Health,
Educational Implications
There is not a “one-size-fits-all” solution to working with students who have visual impairment; educators must take the student’s individual needs into account when designing the education program. For instance, Albinism is a condition characterized by lack of pigment in the hair, skin, and eyes. The functional impacts on vision may include low vision, nystagmus (involuntary, rapid and repetitive eye movement), and photophobia (extreme light sensitivity). A student with photophobia must take care to limit exposure to bright light. Conversely, students with other eye conditions, such as retinitis pigmentosa, may benefit from increased lighting to be able to perceive printed materials and objects in their environment.
Children with visual impairments should be assessed as soon as possible following identification to benefit from early intervention programs, when applicable. Technology in the form of computers, braille equipment, and low-vision optical and video aids enable most children with visual impairments to participate in regular class activities. IDEA also requires that schools provide accessible educational materials (AEM) to all students who need them-- this can include audio, braille, digital, and large print materials.
Determining Optimal Mode of Learning
The way a student learns, and the accommodations needed, vary from student to student. Tools such as a Functional Vision Assessment and a Learning or Reading Media Assessment can help in determining the optimal mode of delivery for textbooks and other curricular materials, possible accommodations, and assistive technology.
The following assessments should be conducted by a qualified professional such as a TVI. The Oklahoma School for the Blind,, provides outreach services including conducting assessments and a limited number of site visits for consultation.
Examples of Assessment Tools
- The National Reading Media Assessment (NRMA),, is a research-based tool developed by the Professional Development and Research Institute on Blindness,, to determine the most appropriate reading medium/media for students who are blind/visually impaired considering current and future needs.
- The Paths to Literacy Learning Media Assessment,, offers a framework for selecting appropriate literacy media for a student who is visually impaired. Paths to Literacy recommends that a Functional Vision Assessment (FVA) be done first, in order to determine what the student is able to see and how he or she is using his or her vision.
- The Functional Vision Learning Media Assessment (FVLMA), Kit: Functional Vision and Learning Media Assessment_7-96151-00P_10001_11051, is an American Printing House (APH) assessment tool developed to help practitioners gather, store, track, and analyze information regarding students’ functional vision and appropriate learning media.
If assessment results indicate that the student will benefit from the use of braille, the IEP Team should also consider the braille code(s) a student will learn such as Unified English Braille (UEB), UEB plus NEMETH Code, and/or Music Code. Note: Oklahoma public schools were scheduled to complete the transition from English Braille American Edition (EBAE) to UEB in 2019. Find information about online UEB transition courses at.
Assistive Technology
AT includes devices and services that help a person accomplish a task that might otherwise be difficult or impossible to do without it. Many types of AT are available to help students who have visual impairment. AT for vision ranges from low-tech to high-tech and from free to high-cost. Many times a person will need multiple devices depending on the tasks they wish to accomplish, the locations or environments, and the degree of vision loss/amount of usable vision.
During consideration and selection of AT devices for vision needs, educators should consider whether the individual could comprehend materials or access the environment better if materials were enlarged or visually enhanced, auditory feedback were provided, or braille and/or tactile feedback were provided. The findings should guide educators in determining which types of devices to try with a student.
Examples of AT Devices for Vision
- Adapted learning aids (Adaptations may include enlarged display, high contrast colors, auditory feedback, haptic feedback, and tactile feedback.)
- Calculators
- Games
- Tactile graphics tablets
- Recreational/sporting equipment
- Audiobook readers
- Braille displays/notetakers
- Electronic text readers
- Screen reading software
- Text reading apps and software
- Magnification tools
- Optical magnifiers
- Electronic video magnifiers
- Screen magnification software
- Printers
- Braille/tactile graphics embossers
- 3D printers
AT Services
In addition to AT devices are the services needed to help a person select, acquire, and use the AT. Oklahoma ABLE Tech offers the AT Device Loan Program,, to assist with selection of devices. For links to information about specific products available for trial, see ABLE Tech AT Discovery Vision web page,.
ABLE Tech provides Assistive Technology Support Team Training,, to schools through a contract with the OSDE. ABLE Tech training materials and opportunities assist schools in helping consider and assess students’ AT needs as well as help educators with implementing the AT into the students’ curriculum. An additional resource for assessment information is the textbook Assistive Technology for Students Who are Blind or Visually Impaired | A Guide to Assessment by Ike Presley and Frances Mary D’Andrea published by AFB Press. The book is available for purchase from the American Printing House, aph.org
AT Training
Training is an AT service that must be provided to students to enable them to successfully use assistive technology to meet their educational goals. TVIs and other educators may also require professional development in preparation to assist students. With the wide variety of devices and frequent technological advances, it is difficult for any one person to be an expert on all devices. Oftentimes, it is necessary for schools to obtain the training from multiple sources depending on the need.
Below are a few training sources to consider:
- ABLE Tech,
- American Foundation for the Blind (AFB),
- Assistive Technology Industry Association (ATIA),
- Association for Education and Rehabilitation of the Blind and Visually Impaired (AERVBI),
- Freedom Scientific,
- NanoPac,
- National Federation of the Blind (NFB),
- NewView Oklahoma,
- Oklahoma Department of Rehabilitative Services -Visual Services,
- Oklahoma School for the Blind,
AT Funding
Under IDEA, LEAs must provide AT devices and services to students at no cost to families; however, federal and state governmental agencies provide funding for select devices for use by and with students with visual impairments, so that schools do not have to bear the entire cost. The AIM Center at the Oklahoma Library for the Blind and Physically Handicapped,, provides specialized educational materials and equipment for students who qualify for the Federal Quota Program administered by the American Printing House for the Blind. Liberty Braille,, provides textbooks and other curricular materials in large print and braille, in addition to select devices, free of charge to students through a contract with the OSDE. Find additional funding information in the online guide OK Funding for Assistive Technology,.
For more information on the provision of AT, including documenting AT in the Individualized Education Program (IEP), please see the Technical Assistance Document: Assistive Technology for Children and Youth with Disabilities - IDEA Part B.
Teaching Tips/Instructional Strategies
Students with visual impairment need to learn the same information which students without disabilities learn and be held to the same high standards; however, in addition to learning core subjects such as math, English/language arts, history, and science, students with visual impairment may also need to learn specialized skills. These skills include:
- Braille literacy (reading and writing in braille using a variety of tools)
- Auditory literacy (reading with audio format)
- Strategies and techniques for using AT such as braille equipment, screen reading software, and magnification tools
- Activities of Daily Living i.e. “blindness skills” such as cane travel, cooking, self-care, and dressing
Many of these skills must be taught explicitly, as students with visual impairment are frequently unable to learn through visual observation. Whatever the degree of impairment, students who are visually impaired should be expected to participate fully in classroom activities. Although they may confront limitations, with proper planning and adaptive equipment, their participation can be maximized.
Following are tips for maximizing participation.
The Classroom
- Select optimal seating position based on student’s lighting needs.
- Allow space for seeing eye/guide dog if applicable.
- Assist student in using and storing adaptive equipment.
- Keep aisles clear and drawers and cabinets closed.
The Teacher
- Face the class while speaking.
- Permit lectures to be recorded.
- Provide classroom materials in accessible format(s) used by student.
- Be flexible with assignment deadlines.
- Consider alternative assignments (based on IEP Team Decisions).
- Consider alternative measures of assessing achievements (see note below).
- Be specific with directions.
- Provide “hands-on” learning experiences.
- Make sure materials are properly scaled, i.e. enlarged to the student’s optimal font size.
- Ask the student if they have any suggestions.
- Keep communications open.
The Rest of the Class
- Instruct others to yield the right of way.
- Instruct students to help when asked.
- Instruct students to ask if help is needed.
- Instruct students to be considerate of the seeing eye/guide dog.
Accommodation Resources
For information regarding accommodations, see the following:
- Oklahoma Special Education Services Oklahoma Accommodations Guide, Guide_0.pdf
- Oklahoma School Testing Program (OSTP) Accommodations for Students with an Individualized Education Program (IEP) or Section 504 Plan, %2815-16%29_1.pdf
- Oklahoma State Department of Education Overview: Non-Standard Accommodations,
Instructional Settings and Staffing Considerations
Instruction may be provided to students with visual impairments in a variety of settings, including the general education classroom, pull-out for individualized instruction, resource room, self-contained special education classroom, or in a residential program such as the Oklahoma School for the Blind,.
Schools may provide educational and related services to students with visual impairment by employing or contracting with itinerate service providers. Service and staffing time must be considered on an individual basis by the IEP team. The responsibility for providing such services rests with the Local Education Agency (LEA); however, the Oklahoma School for the Blind may provide a limited number of site visits to schools as a support measure.
A Teacher of the Visually Impaired (TVI) is the primary educator who provides specialized instruction to students with visual impairment. The TVI provides lessons in the use of braille and tactile graphics, strategies and use of assistive technology, and many other skills. Pre-certification training for Teachers of the Visually Impaired may be offered by Northeastern State University,, pending minimum course enrollment requirements are met. Educators wishing to enroll in out-of-state training programs can find reviews from the Association for Education and Rehabilitation of the Blind and Visually Impaired,.
Educators may request to update their Oklahoma Teaching Certificate after receiving a passing score on the Oklahoma Subject Area Test (OSAT) for Blind/Visual Impairment (028) provided by the Certification Examinations for Oklahoma Educators (CEOE™),.
Services may also be provided by a Braille Transcriber, who may prepare worksheets, tactile graphics, and other necessary instructional materials for students to use. Additional professionals which may be involved include Orientation and Mobility Specialists (OMS) and Paraprofessionals.
Students with visual impairment may have co-existing disabilities which require additional services such as speech, occupational, or physical therapy. Deaf-blindness is a category of disability which includes students who have sensory losses in both vision and hearing. For assistance in serving students with deaf-blindness, please contact the Oklahoma Deaf-Blind Technical Assistance Project,.
Conclusion
Education of students with visual impairment can be challenging, but the impact on the future employment and personal success for students can be enormous. The information and resources included in this addendum are provided to help. For additional information contact
ABLE Tech,, or the Oklahoma State Department of Education Special Education Services,.
Background information for this document was excerpted from the Oklahoma State Department of Education Visual Impairment Fact Sheet, Impairment_3.pdf. Other sources and links are included within the document. | https://okabletech-docs.org/homepage/aem-ta-document/addendum-guide-for-providing-assistive-technology-for-students-with-visual-impairments/ | 2019-06-16T02:57:46 | CC-MAIN-2019-26 | 1560627997533.62 | [] | okabletech-docs.org |
Technical Assistance Document
Assistive Technology for Children and Youth
with Disabilities IDEA Part B
Joy Hofmeister
State Superintendent of Public Instruction
Oklahoma State Department of Education
Special Education Services
Oklahoma State Department of Education
2500 North Lincoln Boulevard
Oklahoma City, OK 73105
Phone: 405-522-3248
The Oklahoma State Department of Education (OSDE) does not discriminate on the basis of race, color, sex, national origin, age, disability or religion in its programs and activities and provides equal access to the Boy Scouts and other designated youth groups as required by Title VI and VII of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, the Age Discrimination Act of 1975, Title II of the Americans with Disabilities Act, and the Boy Scouts of America Equal Access Act.
Civil rights compliances inquiries and complaints the Title IX by local school districts should be presented to the local school district Title IX coordinator.
This publication can be located at the following website:
This document was created in collaboration with Oklahoma Assistive Technology Center and Oklahoma ABLE Tech. Revised June 2018.
Purpose. | https://okabletech-docs.org/homepage/at-ta-document-part-b/ | 2019-06-16T03:22:09 | CC-MAIN-2019-26 | 1560627997533.62 | [] | okabletech-docs.org |
Sending Usage and Diagnostic Data to Cloudera
Minimum Required Role: Cluster Administrator (also provided by
- Redaction of Sensitive Information from Diagnostic Bundles
-
To troubleshoot specific problems, or to re-send an automatic bundle that failed to send, you can manually send:
-.. | https://docs.cloudera.com/documentation/enterprise/5/latest/topics/cm_ag_data_collection.html | 2019-11-11T23:05:18 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.cloudera.com |
.
Enabled: Enables/disables current effect: The total number of particles at any one time that will be active.
Timing
Parameters in this tab control the timing of the particles.
Continuous: If false, all particles are emitted at once, and the emitter then dies. If true, particles are emitted gradually over the Emitter Life Time. If true, and Emitter Life Time = 0, particles are emitted gradually, at a rate of Count / Particle Life Time per second, indefinitely.
Spawn Delay: Delays the start of the emitter for the specified time. Useful to delay sub-effects relative to the overall emitter creation time.
Emitter Life Time: If Continuous = true, specifies the life time of the emitter. 0 = infinite life time (default).
Pulse Period: If >0, the emitter will restart repeatedly at this interval.
Particle Life Time: The life time of individual particles. Even after the emitter's life time has expired, spawned particles will live out their own life time.
Remain while Visible: Particles in the effect will not die until the entire emitter is out of view.
Location
Parameters in this tab control the spawning locations of the particles. location of emission when the emitter is attached to geometry, or when the parent particle has geometry:
- None: Particles ignore geometry and emit from emitter center as normal.
- Bounding Box: Deprecated
- Physics: Particles emit from the geometry of the attached physics object (can be a mesh or simple primitive).
- Render: Particles emit from the full mesh of the render object (usually static or animated mesh). Generally more CPU-intensive than emitting from physics.
Attach Form: When Attach Type is), ignoring emitter orientation.
Emit Offset Dir: Forces the particles to emit in all directions from the origin.
Emit Angle::: How many tiles the texture is split into.
- First Tile: The first of the range of tiles used by this particle (numbered from 0).
- Variant Count: How many consecutive tiles in the texture the particle will randomly select from.
- Anims Frame Count: for the particles.: rendered transparent, all alpha values above scaled down to match.
color: Pick the color to apply to the particle.
- Random: How much a particle's initial color will vary (downward) from the default. 0 = no variation, 1 = random black to default.
- Random Hue: Causes the Random color variation to occur separately in the 3 color channels. If false, variation is in luminance only.
- Emitter Strength: Define the color of the particle over the emitter's life time. Add a keyframe color by double clicking inside the panel and assigning a color. You can add multiple keyframes into the timeline by repeated double clicks.
- Particle Life: Define the color of the particle over the particle's life time. Add a keyframe color by double clicking inside the panel and assigning a color. You can add multiple keyframes into the timeline by repeated double clicks.: Causes particles to.
Sound: Browse and choose the sound asset to play with the emitter.
SoundFX Param: Modulate value to apply to the sound. Its effect depends on how the individual sound's "particlefx" parameter is defined. Depending on the sound, this value might affect volume, pitch, or other attributes.
Sound Control Time: control, the world sprite radius. For 3D particles, the scale applied to the geometry.
Stretch: particle, with the specified average speed.
Turbulence Size: Adds a spiral movement to the particles, with the specified radius. The axis of the spiral is set from the particle's velocity.
Turbulence Speed: When Turbulence Size > 0, the angular speed, in degrees/second, of the: The initial angle applied to the particles upon spawning, in degrees. For Facing = Camera particles, only the Y axis is used, and it refers to rotation in screen space. For 3D particles, all 3 axes are used, and refer to emitter local space.
Random Angles: Random variation (bidirectional) to Init Angles , in degrees.
Rotation Rate: Constant particle rotation, in degrees/second. The axes are the same as for Init Angles.
Random Rotation Rate: Random variation (bidirectional) to Rotation Rate ,:).. The coefficient of dynamic friction./Min Distance:.
Fill Rate Cost: Multiplier to this emitter's contribution to total fill rate, which affects automatic culling of large particles when the global limit is reached. Set this > 1 if this effect is relatively expensive or unimportant. Set this < 1, or 0, if the effect is an important one which should not experience automatic culling.
Heat Scale: Multiplier to thermal vision.
Sort Quality: Specifies more accurate sorting of new particles into emitter's list. Particles are never re-sorted after emission, to avoid popping resulting from changing render particle order. They are sorted only when emitted, based on the current main camera's position, as follows:
- 0 (default, fastest): Particle is placed at either the front or back of the list, depending on its position relative to the emitter bounding box center.
- 1 (medium slow): Existing particles are sorted into a temporary list, and new particles do a quick binary search to find an approximate position.
- 2 (slow): Existing particles are sorted into a temporary list, and new particles do a full linear search to find the position of least sort error.
Half Res: Render particles in separate half-res pass, reducing rendering cost.
Streamable: Texture or geometry assets are allowed to stream from storage, as normal.
Configuration
Parameters in this tab control advanced configurations. These settings limit an effect to only be enabled on certain platform configurations. This allows you to create variant effects for different configurations.
Config Min: The minimum system configuration level for the effect. If the config is low than what is set here, the item will not be displayed.
Config Max: The maximum system configuration level for the effect..
Then have a cheaper physicalized version (simple Collision) based sub effect that has a Config Max = High.
DX11:
If_True: Enables the effect only on DX11.
If_False: Enables the effect only on pre-DX11.
Both: Enables the effect to display on both DX versions. | https://docs.cryengine.com/pages/viewpage.action?pageId=11239877 | 2019-11-11T23:22:08 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.cryengine.com |
This section describes how you can install the license file stored on the disc. Before using the license file make sure you have the most recent version of the Hex Editor Neo installed on your computer.
Locate the downloaded license file in the Windows Explorer and double-click it. This will activate your copy of the Hex Editor Neo and unlock all features provided by the purchased license.
Alternatively, run the product, execute the Help » License Management… command and press the Install License… button. Locate the license file and press the Open button. | https://docs.hhdsoftware.com/hex/purchasing-hex-editor-neo/installing-license-files.html | 2019-11-11T23:20:24 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.hhdsoftware.com |
Binding
Context Class
Definition
Provides information about the addresses, bindings, binding elements and binding parameters required to build the channel listeners and channel factories.
public ref class BindingContext
public class BindingContext
type BindingContext = class
Public Class BindingContext
- Inheritance
-
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.bindingcontext?view=netframework-4.8 | 2019-11-11T22:07:16 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.microsoft.com |
Message-ID: <296664087.3888.1573512179953.JavaMail.confluence@ip-10-4-1-203> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_3887_7292759.1573512179948" ------=_Part_3887_7292759.1573512179948 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Your Address Book does not contain any cross cert= ificates capable of authenticating the server
If you see this in a failed promotion log:
11/03/2006 05:06:49 PM EST: : Calling sign utility.= ..
11/03/2006 05:06:51 PM EST: : Error: Your Address B= ook does not contain any cross certificates capable of authenticating the s= erver.
11/03/2006 05:06:51 PM EST: : Action Failed: actSIG= N
Because the signing utility in Build Manager is detached = to your local data directory to run, this error indicates that the ID being= used by the signing utility does not have a cross certificate in your loca= l name and address book for the server where the promotion was targeted. Ha= ve your administrator create a cross certificate in your local name and add= ress book.
Unable to find path to server
If you see this in a failed promotion log:
11/03/2006= 04:22:22 PM EST: : Calling sign utility...
11/03/2006 04:22:29 = PM EST: : Error: Unable to find path to server
11/03/2006 04:22:2= 9 PM EST: : Action Failed: actSIGN
Because the signing utility in Build Manager is deta= ched to your local data directory to run, this error indicates that the ID = being used by the signing utility cannot find the correct connection docume= nt for the target promotion server in your local name and address book. Thi= s is exacerbated by the fact there could be multiple local name an= d address books. Have your administrator create the correct connection docu= ments in all of your local name and address books. Or, use your standard No= tes client for promotions
.
You are not authorized to promote database
If you see this in a failed promotion log:
11/03/2006 03:17:28 PM EST: : You are not authorize= d to promote this database with the selected promotion path.
11/03/2006 03:17:28 PM EST: : Promotion failed.
The ID you have used to sign onto the Notes client is not= present in the promote authority field on the promotion path document in B= uild Manager for the database you are attempting to promote. This field che= ck to see if you have rights to press the promote button for this database.= This field supports groups.
Have your administrator add your name explicitly to the p= romotion authority field or to one of the groups (in the domino directory) = that are present in the promote authority filed on the stored server docume= nt for the promotion target server.
Database properties could not be set
If you see this in a failed promotion log:
11/03/2006 03:43:42 PM EST: : Set Database inherit from t= o:
11/03/2006 03:43:42 PM EST: : Database properties c= ould not be set. Most likely cause is that the master template name was alr= eady in use. Check the log.nsf database on the destination server.
11/03/2006 03:43:42 PM EST: : Action Failed: actDBP= ROP
There are two potential reasons for this issue.
First, the ID being used for changing the database propert= ies in the database properties step does not have permissions on the target= server to create master templates. If the promotion path does not switch t= o another ID to perform the promotion (i.e. Perform Promotion as someone el= se) then it is the ID you used to sign onto the Notes client. If the promot= ion path does switch to another ID, then it is the ID used to by Build Mana= ger.
Have your administrator add the name of the ID being used= to the Create master templates field on the target server document i= n the domino directory or to one of the group documents in the Domino Direc= tory that are already present in the Create master templates field on the t= arget server document in the domino directory.
Second, the master template name the program is attemptin= g to implement on the promoted database is already in use. This applies onl= y for Notes 6.x.x servers and above.
Investigate where the master template name is currently b= eing used and change it or change the master template name you are currentl= y attempting to implement.
CIAO Must be installed to use this feature
If you see this in a failed promotion log:
10/04/2006 03:22:01 PM EDT: : Database copied to : = Templates\Colonypo.ntf On JBXNU01/APPS/SRV/FPLDEV 10/04/2006 03:22:01 PM ED= T: : Error: CIAO Must be installed to use this feature. on line
20 in MAKEVERSION
10/04/2006 03:22:01 PM EDT: : = Action Failed: actMAKEVERSION
Teamstudio CIAO is not installed or implemented prop= erly on the workstation attempting the promotion.
This only happens if = there is a make version action in the promotion process.
There are mul= tiple reasons for this the least of which is that CIAO is not installed. If= CIAO! is not installed, install it or deactivate the Make Version step in = the promotion process.
If CIAO! is installed and not implemented corre= ctly you should contact Teamstudio support.
If you see this in a failed promotion log:
10/20/2009 11:12:50 AM CDT: The log document does not have any version n= umbers. using initial values from config document
10/20/2009 11:12:50 AM CDT: WARNING: Version was successful, but failed = to bump version number. (@1)
10/20/2009 11:12:50 AM CDT: Action Failed: actMAKEVERSION
The Teamstudio CIAO! Log, as defined on the Teamstudio CIAO! Configurati= on document, does not contain any =E2=80=9CDatabase Version History=E2=80= =9D for the database being promoted. Open the database in Teamstudio = CIAO!, and make a version of it.
Incorrect Stored or Attached ID Being Used
If you see this in a failed promotion log: 10/04/2006 03:= 29:50 PM EDT: : Found stored id: FPL Template Signer' 10/04/2006 03:2= 9:50 PM EDT: : Error: Type mismatch on line 65 in DETACHID 10/04/2006= 03:29:50 PM EDT: : Error: Could not detatch id file 10/04/2006 03:29= :50 PM EDT: : Action Failed: actSIGN
This typically indicates that the password entered for th= e stored or attached ID being used by the signing utility is incorrect.
Have your administrator check the signing step in the pro= motion process being used to see if it is an attached ID or a stored ID and= then go to the document for the ID being used and reenter the password.
Unchanged Documents Being Promoted
If you see this in a failed promotion log:
10/05/2006= 03:41:21 PM EDT: : Calling sign utility...
10/05/2006 03:41:21 = PM EDT: : Error: No documents have been modified since specified time.
= 10/05/2006 03:41:21 PM EDT: : Action Failed: actSIGN
10/05/2006 0= 3:41:21 PM EDT: : Promo
You are attempting to promote documents that have not changed since the = last promotion.
No ID File Attached
If you see this in a failed promotion log:
01/17/2007 10:54:06 AM EST: : Found stored id: Test= ID'
01/17/2007 10:54:06 AM EST: : Error: Type mismatch = on line 65 in DETACHID
01/17/2007 10:54:06 AM EST: : Error: Could not deta= tch id file
01/17/2007 10:54:06 AM EST: : Error: Could not swit= ch to 'Promot As' ID. Promotion Failed.
This error is due to the fact that there is no ID file at= tached to the promotion path document in the =E2=80=9CPromote As=E2=80=9D a= rea or if that area is using a stored ID there is no ID file attached to th= e stored ID document. This could be due to forgetting to attach the ID file= , in which case simply attach the appropriate ID to the appropriate documen= t. It could also be due to attempting to attach an ID to the appropriate do= cument while signed in on a Notes R 6.5.4 client. The method we use to atta= ch IDs to documents so that you cannot get to them is not supported in the<= /p>
R 6.5.4 client. Two ways to overcome this issue. Upgrade = the client you are working on or use another Notes client on another machin= e to perform the initial attachment of the ID file.
Promoting Database with Design Notes Still Checke= d Out in CIAO!
If you see this in a failed promotion log:
02/06/2007 03:57:23 PM EST: : Error: CIAO!: Some design n=
otes are still checked out.
You must check in all design notes b= efore performing that action.
02/06/2007 03:57:24 PM EST: : Action Fail= ed: actMAKEVERSION
02/06/2007 03:57:24 PM EST: : Promotion failed.
There are some design elements still checked out in Teamstudio CIAO= . Go back to the Teamstudio CIAO UI and check all design elements in. | http://docs.teamstudio.com/exportword?pageId=35390446 | 2019-11-11T22:42:59 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.teamstudio.com |
The SDK provides a query mechanism and object-relational mapping layer to provide robust access to the platform's clinical data.
Entity Managers
The entity manager is a channel to the database which additionally provides a simple caching layer. An entity manager is required to perform any query. Create and close an entity manager with each request that will perform a query against the database. The simplest way to do this in a Rails application is by building two filters and creating an instance variable called @entity_manager (referred to as @entity_manager throughout this documentation).
An example set of filters to build and close the entity manager in a Rails application controller:
class ApplicationController < ActionController::Base
  # ... other typical stuff

  before_filter :get_entity_manager
  after_filter :close_entity_manager

  def get_entity_manager
    @entity_manager ||= Java::HarbingerSdk::DataUtils.getEntityManager
  end

  def close_entity_manager
    @entity_manager.close if !@entity_manager.nil?
  end

  # rest of application controller
end
Not every request requires an entity manager, so this technique is not ideal for all applications, but it serves the purpose of this tutorial and works for many common types of applications.
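If only some of your actions touch the database, another option is to open the entity manager only where it is used and close it in an ensure block. The sketch below is an assumption about how you might structure that; it uses only the DataUtils and query calls shown in this document, and the action name and id parameter are hypothetical.

# Hypothetical action that opens and closes its own entity manager.
def show
  entity_manager = Java::HarbingerSdk::DataUtils.getEntityManager
  begin
    # Look up a single exam by its id
    @exam = Java::HarbingerSdkData::RadExam.createQuery(entity_manager).
              where({".id" => params[:id].to_i}).first()
  ensure
    # Always close the channel, even if the query raises
    entity_manager.close
  end
end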
Querying
Queries should be done through the harbinger.sdk.Query library. The query library is a wrapper around the JPA query standard. It provides encapsulation for multiple variables that would need to be tracked independently across multiple locations to perform a typical query in JPA. It also provides a means to reference data in other related models without the burden of manually performing joins and tracking aliases.
To build a query object:
# Building the query object directly
query = Java::HarbingerSdk::Query.new(Java::HarbingerSdkData::RadExam.java_class,@entity_manager)

# Using the method on the desired class
query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)
Using the query object to fetch a set of results using the list method:
results = query.limit(10).list()
results = results.to_a # Makes the returned object behave like a Ruby list
Excluding list, there are 4 types of methods on the query object: filter builders, select builders, getters, and setters.
Filtering (SQL WHERE CLAUSE)
The where clause of a select SQL statement can be very complex, and adding joins only increases the verbosity. Filters are broken down into expressions in the query library. Expressions are built and combined using various methods attached to the query object, which include mathematical operators (equal, greaterThan, lessThan, etc.), string matching (like, ilike), and boolean operators (and, or, not).
Example filter expressions:
# SQL equivalent "accession = '12345'"
acc_equals = query.equal(".accession","12345")

# SQL equivalent "from rad_exams r left outer join patient_mrns pm ON pm.id = r.patient_mrn_id"
mrn_like = query.ilike(".patientMrn.mrn","%987%")

# Expressions passed as a list to the where method (nulls removed automatically)
# are combined with a SQL "AND"
query.where([acc_equals,mrn_like]).limit(10).list()
These filters are unlike conventional JPA because instead of finding the property object to represent a field in the database, the query library accepts a string with special syntax to reference the property. Furthermore, there was an implicit join from rad_exams to patient_mrns through the patient_mrn_id foreign key in the rad_exams table. The special syntax states that dot notation in a string is a property. The details of this are explored further in the Properties and Explicit Joins section below.
There is more syntactic sugar that can be used with the filter builders: whenever you have many filters that share the same operator and need AND between each of them, you can pass in a hash instead of building each filter individually.
filters = query.equal({".site.site" => "VHS", ".accession" => "12345"})
query.where(filters).limit(10).list()
The syntactic sugar can be stretched further: if the only filters will be an equal comparison with 'AND' between them, simply pass a hash directly to the where method.
query.where({".site.site" => "VHS", ".accession" => "12345"}).limit(10).list()
Almost every filter takes either 2 arguments (the left and right side of the operator) or a hash that represents a set of 2 arguments. Further details about the available filter methods are in the SDK documentation.
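For instance, the comparison and boolean builders named above can be combined the same way as equal and ilike. The sketch below is an assumption that greaterThan and not follow the two-argument and single-expression patterns described here; the site value "TEST" is made up.

query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)

# Exams that finished within the last week
recent   = query.greaterThan(".radExamTime.endExam", 7.days.ago.to_time)
# Exclude a hypothetical test site
not_test = query.not(query.equal(".site.site", "TEST"))

# Passing the list to where combines the expressions with a SQL "AND", as above
query.where([recent, not_test]).limit(10).list()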
Properties and Explicit joins (SQL FROM clause)
The SDK special syntax above is a powerful tool to query the database because the data model is normalized. The rules are simple and will work for most query use cases; however, there is a way to circumvent them if necessary (see Using JPA directly through the Query library below). The rules are:
- To be a property it must be a string
- To be a property it must have only characters (A-z) and include at least one "."
- If it is a property and the string starts with "." then it's implied anything following that "." is part of the base class of the query (ex: exam_query.property(".id") assumes seeking the id field of the RadExam class).
- If it is a property and the string does not start with "." then the first alpha string is the name of the class being referenced as the base class.
Examples of valid property notation:
query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)
query.property(".id")            # the id field of the rad exam
query.property(".site.site")     # the site field of the site record associated with the rad exam
query.property("site.updatedAt") # the updatedAt field of the site
Whenever a property is more than a simple field of the base class, it will be an implicit left outer join. Once that join is performed, you can reference the joined class at the start of the property string as in the above example with site. If the example had referenced site.updatedAt before .site an exception would have been thrown. There are two other cases that can raise an exception:
- Referencing something that doesn't exist (e.g.: .fakeField).
- Referencing something with an ambiguous path (e.g.: externalSystem.externalSystem after joining multiple classes containing an externalSystem relationship in the query).
The externalSystem use case is an example of the need to declare explicit joins. Other frequent use cases for explicit joins are to either ensure the order of the joins for performance reasons (join order can have substantial performance impact on a query) or to use a join type other than LEFT OUTER.
Examples of using the join method for explicit joins:
query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)
query.join(".siteClass")             # joins the site_classes table with a left outer join
query.join(".externalSystem","left") # joins the external_systems table with a left outer join
query.join(".site","inner")          # joins the sites table via an inner join

# joins the currentStatuses table with a :right join. This is not going to be
# useful if you are still selecting the base class
query.join(".currentStatus","right")
These queries can be combined into one statement using a list of lists:
query.join([".siteClass",[".externalSystem","left"],[".site","inner"],[".currentStatus","right"]])
Classes that have already been joined directly can be referenced in subsequent joins. Joins can also be chained like a property if they share the same join type:
query.join(".siteClass") # initial join query.join("siteClass.patientType") # referencing the siteClass directly now that it's joined query.join(".siteClass.patientType") # joins siteClass and patientType with a left outer join
Selecting and Aggregates (SQL SELECT and GROUP BY clause)
Creating a query object also creates a default select clause. That clause is the class of the query.
query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)
query.limit(10).list() # returns a list of RadExam Objects
query.first()          # returns a RadExam Object
select is the method for overriding the default selection. It takes an expression, property, or root class or a list of any of those. All of these are valid calls to select:
query.select(query.property(".id")) # query.first() will now return an integer
query.select(query.count(".id"))    # query.first() will now return the count of rad exam records
query.select(query.root("RadExam")) # query.first() will return a RadExam object
query.select([query.root("RadExam"),".site.site"])                 # select both the RadExam object and the site field
query.select([query.count(".id"),".site.site",".currentStatusId"]) # select an aggregate and two properties
When you pass a list to the select method the results coming back are also lists. So for instance if I passed [".id",".siteId"] then I'd get a list of results that would look like this: [[1,1],[2,3],[3,1]...] where each item in the result list is a list with the first item of that list being id and the second being siteId.
Remember, when using aggregate functions in conjunction with traditional fields you'll need to have those fields in the group by clause. You'd use the group by clause just as you'd expect. It takes either an expression, property, or property string or a list of those. Here is a complete aggregate query as an example:
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager)
total = query.count(".id")
query.select([total,".currentStatus.universalEventTypeId"]).group(".currentStatus.universalEventTypeId").order(query.desc(total))
totals = query.list()
Ordering (SQL ORDER BY clause)
The order by clause can take 3 types of arguments. The first and most common is a property string with or without a direction:
query.order(".updatedAt") # Orders by the updated_at field in ascending order (assumes asc) query.order(".updatedAt asc") # Orders by the updated_at field in ascending order explicitly query.order(".updatedAt desc") # Orders by the updated_at field in descending order query.order(".site.site desc") # Complex properties also work and the join will be done automatically
Lists of ordering property strings can also be passed:
query.order([".site.site",".updatedAt desc"])
The third argument possibility is directly passing a JPA Order expression (javax.persistence.criteria.Order) into the order method.
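A minimal sketch of that third form, assuming the standard JPA CriteriaBuilder asc/desc methods (query.builder is described in the next section):

# Build a javax.persistence.criteria.Order with the JPA CriteriaBuilder
# and hand it directly to the query library's order method.
jpa_order = query.builder.desc(query.property(".updatedAt"))
query.order(jpa_order)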
Warning: be sure to explicitly join anything in the order clause if it's not referenced in the where clause. Not doing so can give odd results.
Using JPA directly through the Query library
The query library was designed to make common use cases for data access fast and powerful, however it does not cover all scenarios. For certain purposes it makes more sense to bypass the query library and work directly with the underlying JPA objects.
The two primary JPA objects needed are the criteria and the builder.
query.builder  # for the builder object
query.criteria # for the criteria object
Those objects can be used by referencing the JPA criteria docs to access special functions like builder.function. The property method can still be used to get fields and pass those into builder functions (e.g.: query.builder.equal(query.property(".accession"),"982374")).
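As a slightly fuller sketch (not from the SDK docs — upper is a standard JPA CriteriaBuilder method, and the accession value is made up), mixing a builder expression with a query-library property:

query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)

# Case-insensitive accession match built with the raw CriteriaBuilder rather
# than the query library's ilike helper.
upper_accession = query.builder.upper(query.property(".accession"))
query.where(query.builder.equal(upper_accession, "A12345"))
exam = query.first()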
Debugging and Common Mistakes
There are a few tips and common mistakes to be aware of when using the query library. Please read through this section entirely as it will almost certainly save you pain in the future.
Using toSQL to debug
It is not uncommon during development for queries to succeed, but not return the expected results, and this can be difficult to debug. One tool to assist in understanding the query generated is to use the query.toSQL method to examine the SQL emitted by the library. This will return a string (meant for printing) of the SQL being generated by the query library. Although a little difficult to read due to the auto-generated aliases, it can often reveal obvious mistakes in logic or erroneous parameters.
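For example (a sketch — where you send the output is up to you):

query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager)
query.where({".site.site" => "VHS"})
puts query.toSQL # print the generated SQL before actually calling list()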
Forgetting the dot
Another common mistake is omitting the leading "." in a property string passed to a method that expects a property:
query.order("updatedAt desc") # This will throw an exception query.order(".updatedAt desc") # This is correct
Don't re-use query objects
Query objects are mutable because the underlying JPA criteria object is mutable and that object is used to store information (methods are run on it). This makes it easy to get unexpected results when attempting to re-use a query object:
query = HarbingerTransforms::Models.from_table(model).createQuery(entity_manager)
filters = query.equal(".site.site", "VHS")
count_filters = query.equal(".currentStatus.universalEventType","final")
results = query.where(filters).list()
count = query.where(count_filters).select(query.count(".id")).first()
Frequently this will yield SQL grammar errors associated with generated aliases that don't exist. To prevent these types of errors, each query should be its own object.
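A corrected version of the sketch above, giving each statement its own query object:

results_query = HarbingerTransforms::Models.from_table(model).createQuery(entity_manager)
results = results_query.where(results_query.equal(".site.site", "VHS")).list()

count_query = HarbingerTransforms::Models.from_table(model).createQuery(entity_manager)
count = count_query.where(count_query.equal(".currentStatus.universalEventType","final")).
                    select(count_query.count(".id")).first()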
Large queries and heap space
Sometimes you will create queries that will return a large number of results. A query that returns thousands of results at once is likely to eat a large amount of memory and even crash the web server. If you see an error with ran out of heap space as a result of a query then this is likely the cause. Sometimes you'll be able to load your query, but you run out of memory while looping through the results. This is likely because within the loop you are making calls on each object that load more information into memory. An example of this would be querying for exams and then running exam.procedure inside your loop. This would then load the procedure object and store it in the exam object. The more calls to such things in your loop the more memory you have to take up for the duration of your loop and beyond if you've stored the results of your query in a variable.
If you intend to display this list to the user then typical pagination using limit and offset will resolve this issue. You'd simply not show so many results to the user at a given time. But what if you have a need to iterate over the entire list? Then you need to break the query down into chunks but still process each item. To do this you could define a function like this one:
def query_each(query,batch_size=25,offset=0,&block) result_set = query.offset(offset).limit(batch_size).list().to_a if result_set.size > 0 result_set.each(&block) batch_query(query,batch_size,offset+batch_size,&block) else nil end end
This will run your query in batches made by offset and limit and so long as there are results it will run the given code block on each result. Here is how you'd use it:
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) # This query would give me all the radiology exams in the database and would surely # die if I just ran list() query_each(query) do |exam| puts exam.accession, exam.procedure.code end
Query Examples
Combining all of the knowledge described above, this section provides several query examples that you can use to guide your initial querying. All of these queries assume you've created an entity manager that you've stored in the
entity_manager variable.
Querying with time
A simple query for finding a field that is between two given timestamps
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) query.where(query.between(".radExamTime.endExam", 2.days.ago.to_time, 1.day.ago.to_time)).list()
Selecting the year, month, and day as separate field for an exam with the id of 12345.
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) query.where({".id" => 12345}).select([query.year(".radExamTime.endExam"), query.month(".radExamTime.endExam"), query.day(".radExamTime.endExam")) exam = query.first()
Selecting all the exams that either began today or have an appointment time of today.
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) query.where(query.or([query.equal(query.date(".radExamTime.beginExam"),Date.today.to_time), query.equal(query.date(".radExamTime.appointment"),Date.today.to_time)])) exams = query.list()
or what is likely much faster
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) query.where(query.or([query.between(".radExamTime.beginExam",Date.today.to_time,Date.tomorrow.to_time), query.between(".radExamTime.appointment",Date.today.to_time,Date.tomorrow.to_time)])) exams = query.list()
Querying with likes and regex
Here is how to query for a particular procedure based on a simple insensitive like or a pattern (regex).
query = Java::HarbingerSdkData::Procedure.createQuery(entity_manager) # starts with CT, ends with MOD1, must have at least one character in between query.where(query.or([query.ilike(".code","MR%"), query.regex(".code","^CT.+MOD1$")])) procedures = query.list()
Many equals and a like
Assuming everything is going to be combined with an AND then here is a way to take a lot of equals and a like or two and add them as the where clause. This would be common for something like a search in a ruby controller.
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) equals = query.equals({".site.site" => params[:site], ".procedure.code" => params[:procedure_code], ".patientMrn.mrn" => params[:mrn], ".siteClass.patientType.patientType" => params[:patient_type], ".externalSystemId" => params[:external_system_id].to_i }.delete_if {|k,v| v.blank? or v == 0}) likes = query.ilike({".patientMrn.patient.name" => params[:patient_name], ".accession" => params[:accession] }.delete_if {|k,v| v.blank? }) exams = query.where([equals,likes]).limit(10).list()
Cancelled Exams
Sometimes you want all exams that are not cancelled. Just saying that the universal event type isn't cancelled will exclude any exam with a status that hasn't been mapped to a trip status. So you'd probably want to do something more like this:
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) query.where(query.or([query.notEqual(".currentStatus.universalEventType.eventType","cancelled"), query.isNull(".currentStatus.universalEventTypeId")])) exams = query.limit(10).list()
Aggregate: Count by universal_event_type
Here is a query that will count all the radiology exams in the system and group them by universal event type
query = Java::HarbingerSdkData::RadExam.createQuery(entity_manager) total = query.count(".id") query.select([total,".currentStatus.universalEventTypeId"]).group(".currentStatus.universalEventTypeId").order(query.desc(total)) totals = query.list()
Interacting with ORM objects
If the select clause is not explicitly set, the result of a query is a list of objects. Those objects represent a row in a given table (represented by the class) in the database. Those objects provide access to its fields and relationships through dot notation.
query = Java::HarbingerSdkData::RadExam.createQuery(@entity_manager) exam = query.first() exam.getId # returns the the value in the id field exam.accession # returns the value in the accession field exam.site # returns the corresponding site object linked to this exam exam.site.site # returns the site field of the site object # return the value of the patient_type field in the patient_types table # linked to this exam through the site_classes table exam.siteClass.patientType.patientType
This makes traversing the highly normalized structure much easier, however it is extremely important to note that many foreign keys can be null, so it is easily possible to attempt to call an additional method on a null object. To avoid this error, perform a check in advance:
# currentStatus (a required field for a radExam) and universalEventType (not required for externalSystemStatuses). exam.currentStatus.universalEventType.eventType if exam.currentStatus.universalEventTypeId
When referencing a relationship of an object, the object will query the database to get information about the record that relationship references. Running
exam.siteClass performs a query to
site_classes using the primary key (very fast). The exam object will cache the result of that query, so there is no need to set the return of
exam.siteClass to a variable.
exam.siteClass # A query is run to get the siteClass record exam.siteClass # A query is NOT run exam.siteClass.trauma # A query is NOT run since it's a field and that information has already been retrieved exam.siteClass.patientType # A query is run to get the patientType record
Custom methods have been added to the ORM objects to facilitate returning information more easily, but these methods cannot be referenced/used in a query, only on the ORM object. An example of this is
employee.primarySpecialty. This method performs a query to get the
employeeSpecialtyMapping that is flagged as
primary (a boolean column on the mapping table) associated with the
employee. It returns the
specialty associated with that
employeeSpecialtyMapping. This method simplifies a common use case for a complex relationship model.
It is important to note that while interacting with objects (as opposed to a more primitive data type) is great from a programmers perspective, it has an overhead price in terms of performance and memory usage. It is easy to accidentally instantiate thousands of objects if you run
list() without a limit set. The
limit and
offset functions are important safeguards to prevent these types of runaway behavior. It is strongly recommend to use them frequently, particularly because data across facilities often scales differently and test and development systems frequently contain less data than production systems. Apps ignoring these safeguards can create a hazardous environment on the applications server and are unlikely to pass an application review before deploying an app into production.
Shortcut Methods
While the Query library allows you to build complex queries there are several common query types that can be done without instantiating a new query object. Much of the time you will simply want to get the row associated with a primary key on a given model. The SDK provides a special method for doing this:
# This will give you the exam record with the id of 1 # while the second argument is optional it is recommended you pass in an entity manager exam = Java::HarbingerSdkData::RadExam.withId(1,@entity_manager) # Now you can access the data from that record as described in the "Interacting with ORM Objects" accession = exam.accession begin_exam = exam.radExamTime.beginExam
The shortcut methods are described below. Each takes an optional (but recommended) last argument of an entity manager.
withId(id,[entity_manager])- takes an integer and returns the a single record where the primary key matches the given id.
allWithLimit(limit,[entity_manager])- takes an integer limit returns a list of all the records in the table up to that limit (order is not reliable).
rowsWith(hash,[entity_manager])- takes a key value hash of property to value and will return a list of all records that match the given hash with a where clause consisting of
property = valuejoined by
AND's.
firstWith(hash,[entity_manager])- takes a key value hash of property to value and will return a list of the first record (order not reliable) that matches the given hash with a where clause consisting of
property = valuejoined by
AND's. | https://developer-docs.analytical.info/sdk/query/ | 2019-11-11T23:15:44 | CC-MAIN-2019-47 | 1573496664439.7 | [] | developer-docs.analytical.info |
Redactor v3 Features Source
The source mode is shown when clicking the HTML icon in the toolbar and lets you see/change the raw HTML source for the content.
Enable source mode
To enable it, make sure "Allow Source Mode" is enabled in the Source section of the configurator, and add
html to your toolbar buttons under Toolbar where you want the button to be available.
Syntax highlighting
If desired, you can enable CodeMirror syntax highlighting for easier use of the source. Under the "Source" section of the configurator, make sure "Enable CodeMirror" is enabled. | https://docs.modmore.com/en/Redactor/v3.x/Features/Source.html | 2019-11-11T22:37:30 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.modmore.com |
Using badges
From Carleton Moodle Docs, Activity completion must be enabled in the site and the course.
It's possible to add a description of the criterion/criteria to provide more information or relevant links.
- Once criteria have been set, you are returned to the Manage badges screen where you must "enable access" for the badge to be available:
Awarding the badge
Badges may be awarded manually from Course administration > Badges > Manage badges > Recipients and clicking the "Award badge" button.
For information on the Overview, Edit details, Message and Recipients tab, see Managing badges.
Tip: If your site has a large number of users, it's easier to search for email addresses than names.
Revoking a badge
If a badge is awarded my mistake, it may be revoked from the 'Badge recipients' page. Click the badge in question, click the Award button, select the person whose badge you wish to revoke and click 'Revoke'.
Only badges which were awarded manually may be revoked., Activity completion needs to be enabled in the site and relevant courses. In each course, activity completion must be set for the chosen activities, which must be then checked in the course completion settings.
Earning badges
- Once all criteria are set and badge creator is happy with badge details and settings, site users can start earning it. For users to be able to earn a badge, a badge creator/administrator needs to enable access to this badge on a badge overview page or "Manage badges" page (as shown on the picture).
- Normally badges are awarded to users automatically based on their actions in the system. The completion criteria of an active badge are re-calculated every time an event such as completion of a course or activity, or updating user profile happens. If a user has completed all necessary requirements they are issued a badge and sent an email notification. | https://docs.moodle.carleton.edu/index.php?title=Using_badges&oldid=17637&printable=yes | 2019-11-11T22:41:05 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.moodle.carleton.edu |
Install the ML-SPL Performance App
Machine learning requires compute resources and disk space. Each algorithm has a different performance cost, which can be complicated by the number of input fields you select and the total number of events processed. Model files are lookups and do increase bundle replication costs.
For each algorithm implemented in ML-SPL, run time, CPU utilization, memory utilization, and disk activity are measured when fitting models on up to 1,000,000 search results, and applying models on up to 10,000,000 search results, each with up to 50 fields.
Through the Settings tab of the MLTK, users with admin access can configure the settings of the
fit and
apply commands. Changes can be made across all algorithms, or for an individual algorithm.
For more information, see Configure algorithm performance costs.
The ML-SPL Performance App for the Machine Learning Toolkit enables users to:
- Ensure you know the impact of making changes to the default performance cost Settings.
- Access performance results for guidance purposes.
- Access performance results for bench-marking purposes.
To learn more about this add-on and to download, see Splunkbase for the ML-SPL Performance App for the Machine Learning Toolkit.
This documentation applies to the following versions of Splunk® Machine Learning Toolkit: 4.4.0, 4.4.1, 4.4.2, 5.0.0
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/MLApp/4.4.1/User/InstallPerfApp | 2019-11-11T22:46:40 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Service Fabric and containers
Introduction
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers.
Service Fabric is Microsoft's container orchestrator for deploying microservices across a cluster of machines. Service Fabric benefits from the lessons learned during its years running services at Microsoft at massive scale.
Microservices can be developed in many ways from using the Service Fabric programming models, ASP.NET Core, to deploying any code of your choice. Or, if you just want to deploy and manage containers, Service Fabric is also a great choice.
By default, Service Fabric deploys and activates these services as processes. Processes provide the fastest activation and highest density usage of the resources in a cluster. Service Fabric can also deploy services in container images. You can also mix services in processes, and services in containers, in the same application.
To jump right in and try out containers on Service Fabric, try a quickstart, tutorial, or sample:
Quickstart: Deploy a Linux container application to Service Fabric
Quickstart: Deploy a Windows container application to Service Fabric
Containerize an existing .NET app
Service Fabric Container Samples
What are containers
Containers solve the problem of running applications reliably in different computing environments by providing an immutable environment for the application to run in. Containers wrap an application and all of its dependencies, such as libraries and configuration files, into its own isolated 'box' that contains everything needed to run the software inside the container. Wherever the container runs, the application inside it always has everything it needs to run such as the right versions of its dependent libraries, any configuration files, and anything else it needs to run.
Containers run directly on top of the kernel and have an isolated view of the file system and other resources. An application in a container has no knowledge of any other applications or processes outside of its container. Each application and its runtime, dependencies, and system libraries run inside a container with full, private access to the container's own isolated view of the operating system. In addition to making it easy to provide all of your application's dependencies it needs to run in different computing environments, security and resource isolation are important benefits of using containers with Service Fabric--which otherwise runs services in a process..
Container types and supported environments
Service Fabric supports containers on both Linux and Windows, and supports Hyper-V isolation mode on Windows.
Docker containers on Linux
Docker provides APIs to create and manage containers on top of Linux kernel containers. Docker Hub provides a central repository to store and retrieve container images. For a Linux-based tutorial, see Create your first Service Fabric container application on Linux.
Windows Server containers
Windows Server 2016 provides two different types of containers that differ by level of isolation. Windows Server containers and Docker containers are similar because both have namespace and file system isolation, while sharing the kernel with the host they are running on. On Linux, this isolation has traditionally been provided by cgroups and namespaces, and Windows Server containers behave similarly.
Windows containers with Hyper-V support provide more isolation and security because no container shares the operating system kernel with any other container, or with the host. With this higher level of security isolation, Hyper-V enabled containers are targeted at potentially hostile, multi-tenant scenarios. For a Windows-based tutorial, see Create your first Service Fabric container application on Windows.
The following figure shows the different types of virtualization and isolation levels available.
Scenarios for using containers
Here are typical examples where a container is a good choice:
IIS lift and shift: You can put an existing ASP.NET MVC app in a container instead of migrating it to ASP.NET Core. These ASP.NET MVC apps depend on Internet Information Services (IIS). You can package these applications into container images from the precreated IIS image and deploy them with Service Fabric. See Container Images on Windows Server for information about Windows containers.
Mix containers and Service Fabric microservices: Use an existing container image for part of your application. For example, you might use the NGINX container for the web front end of your application and stateful services for the more intensive back-end computation..
Service Fabric support for containers
Service Fabric supports the deployment of Docker containers on Linux, and Windows Server containers on Windows Server 2016, along with support for Hyper-V isolation mode.
Service Fabric provides an application model in which a container represents an application host in which multiple service replicas are placed. Service Fabric also supports a guest executable scenario in which you don't use the built-in Service Fabric programming models but instead package an existing application, written using any language or framework, inside a container. This scenario is the common use-case for containers.
You can also run Service Fabric services inside a container. Support for running Service Fabric services inside containers is currently limited.
Service Fabric provides several container capabilities that help you build applications that are composed of containerized microservices, such as:
- Container image deployment and activation.
- Resource governance including setting resource values by default on Azure clusters.
- Repository authentication.
- Container port to host port mapping.
- Container-to-container discovery and communication.
- Ability to configure and set environment variables.
- Ability to set security credentials on the container.
- A choice of different networking modes for containers.
For a comprehensive overview of container support on Azure, such as how to create a Kubernetes cluster with Azure Kubernetes Service, how to create a private Docker registry in Azure Container Registry, and more, see Azure for Containers.
Next steps
In this article, you learned about the support Service Fabric provides for running containers. Next, we will go over examples of each of the features to show you how to use them.
Create your first Service Fabric container application on Linux
Create your first Service Fabric container application on Windows
Learn more about Windows Containers
Feedback | https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-containers-overview | 2019-11-11T21:55:42 | CC-MAIN-2019-47 | 1573496664439.7 | [array(['media/service-fabric-containers/service-fabric-types-of-isolation.png',
'Service Fabric platform'], dtype=object) ] | docs.microsoft.com |
This is an old revision of the document!
Clock. Consult the Text Attribute Markup manual of Pango so see what attributes are supported. As an example you can have the following custom format:
%R%n%d-%m w%V. | https://docs.xfce.org/xfce/xfce4-panel/clock?rev=1555002042 | 2019-11-11T23:05:30 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.xfce.org |
dellos9 – Use dellos9 cliconf to run command on Dell OS9 platform¶
New in version 2.5.
Synopsis¶
- This dellos9 plugin provides low level abstraction apis for sending and receiving CLI commands from Dell OS9 network devices.
Status¶
- This cliconf is not guaranteed to have a backwards compatible interface. [preview]
- This cliconf is maintained by the Ansible Community. [community] | https://docs.ansible.com/ansible/latest/plugins/cliconf/dellos9.html | 2019-11-11T23:44:22 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.ansible.com |
Hex Editor Neo Documentation Definitive Guide Structure Viewer Overview Language Reference User-Defined Types Attributes
Hex Editor Neo supports a number of useful attributes that change the default behavior for individual bound fields. You should put attributes before a field you want them affect. The following syntax is supported:
attribute-list: [ *attribute-decl ] attribute-decl: noindex | noautohide | read(expr) | format(expr) | description(expr) | color_scheme(expr)
See the following sections for attribute descriptions.
In addition to field attributes, a single display attribute is supported on types. It allows the user to change the default visualization for a type. | https://docs.hhdsoftware.com/hex/definitive-guide/structure-viewer/language-reference/user-defined-types/attributes/overview.html | 2019-11-11T22:26:06 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.hhdsoftware.com |
Pachctl diff file
pachctl diff file¶
Return a diff of two file trees.
Synopsis¶
Return a diff of two file trees.
pachctl diff file <new-repo>@<new-branch-or-commit>:<new-path> [<old-repo>@<old-branch-or-commit>:<old-path>]
Examples¶
# Return the diff of the file "path" of the repo "foo" between the head of the # "master" branch and its parent. $ pachctl diff file foo@master:path # Return the diff between the master branches of repos foo and bar at paths # path1 and path2, respectively. $ pachctl diff file foo@master:path1 bar@master:path2
Options¶
--diff-command string Use a program other than git to diff files. --full-timestamps Return absolute timestamps (as opposed to the default, relative timestamps). --name-only Show only the names of changed files. --no-pager Don't pipe output into a pager (i.e. less). -s, --shallow Don't descend into sub directories.
Options inherited from parent commands¶
--no-color Turn off colors. -v, --verbose Output verbose logs | https://docs.pachyderm.com/latest/reference/pachctl/pachctl_diff_file/ | 2019-11-11T22:18:32 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.pachyderm.com |
Product Documentation
- LED Products
- Arduino Products
- Audio Products
- Timekeeping Products
- Power Products
- Wire Products
The Satellite Module S OctoBar an OctoBar, which is preset to 100mA current per channel.
Important: The Satellite Module S-001 does not have current control resistors installed. It is intended to be used with a current controlled sink driver, such as the OctoBar.
If a current controlled source is not available, then resistors will need to be added to the external circuit. To drive the Satellite Module S S-001 is to use an OctoBar to drive it. The OctoBar controls up to 150mA on three channels with 10-bit PWM, and easily connects to the Satellite Module S-001. The OctoBar can control up to eight Satellite S-001 devices, connected via common modular jack phone cables. | http://docs.macetech.com/doku.php/satellite_module_s-001 | 2019-11-11T23:16:13 | CC-MAIN-2019-47 | 1573496664439.7 | [] | docs.macetech.com |
Export-BCCache
Package
Syntax
Export-BCCachePackage -Destination <String> [-Force] [[-StagingPath] <String>] [-OutputReferenceFile <String>] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>] [-AsJob] [-WhatIf] [-Confirm] [<CommonParameters>]
Export-BCCachePackage -Destination <String> [-Force] [-ExportDataCache] [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>] [-AsJob] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
The Export-BCCachePackage cmdlet exports a cache package.
Examples
EXAMPLE 1
PS C:\>Export-BCDataPackage -Destination D:\temp
This example exports all content that has been hashed with calls to Publish-BCFileContent and Publish-BCWebContent. The package containing this data will be exported to D:\temp.
Required Parameters
Specifies the folder location where the data package is stored.
Specifies that the contents of the local data cache are included in the package..
Runs the cmdlet without prompting for confirmation.
Specifies the folder location where the output reference file is generated.
Specifies the folder location of the cache files that are to be packaged. These files are generated from the Publish-BCFileContent | https://docs.microsoft.com/en-us/powershell/module/branchcache/export-bccachepackage?view=winserver2012r2-ps | 2018-03-17T10:12:14 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.microsoft.com |
Part 1: Startup
Open the Thunderbird application installed on your computer.
This screen will appear if you haven't configured any email accounts. If not, go to the menu bar and select Tools > Account Settings, and then click "Add Account".
To create a new email account, click on the "Email" icon to continue.
Part 2: Startup
A new window will open asking if you would like to create a new email address. To add your existing email address, click on "Skip this and use my existing email".
Part 3: Creating an account
In the New Account Setup wizard, enter the following information:
Your name: enter your chosen display name.
Password: the Exchange 2013 account password you set up in the Web Control Panel.
Tick Remember password: if you don't want to enter your password every time you launch Thunderbird.
Click on "Continue" to continue with the installation.
Part 4: Advanced configuration
If you click on "Manual configuration", you will see the adjacent screen:
Please check that the following elements are entered correctly:
"Incoming server: IMAP" For Hosted Exchange accounts: Server hostname: ex.mail.ovh.net Port: 143 SSL: STARTTLS Authentication: Normal password.
"Outgoing server: SMTP" For Hosted Exchange accounts: Server hostname: ex.mail.ovh.net Port: 587 SSL: STARTTLS Authentication: Normal password.
"Username": your full email address.
For Private Exchange accounts, enter the server you selected when you ordered the Exchange server.
If the "Normal password" authentication doesn't work, you can also enter "NTLM".
Then click on "Done" to proceed with the final stages of installation.
Part 5: Finish up
Your Exchange 2013 account is now correctly configured in IMAP.
You should see the adjacent screenshot.
Incoming server settings
See this image for a reminder of how to view the account settings "for the incoming server".
Outgoing server settings
See this image for a reminder of how to view the account settings "for the outgoing server".
| https://docs.ovh.com/gb/en/microsoft-collaborative-solutions/exchange_2013_how_to_configure_in_thunderbird/ | 2018-03-17T10:39:40 | CC-MAIN-2018-13 | 1521257644877.27 | [array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1127.jpg',
None], dtype=object)
array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1128.jpg',
None], dtype=object)
array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1129.jpg',
None], dtype=object)
array(['{attach}images/img_1130.jpg', None], dtype=object)
array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1134.jpg',
None], dtype=object)
array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1132.jpg',
None], dtype=object)
array(['https://docs.ovh.com/de/microsoft-collaborative-solutions/exchange_20132016_konfiguration_in_thunderbird/images/img_1133.jpg',
None], dtype=object) ] | docs.ovh.com |
Set up alert actions
Alert actions help you respond to triggered alerts. You can enable one or more alert actions. Learn about the available options.
Deprecated alert action:
The script alert action is deprecated. As an alternative, see About custom alert actions for information on building customized alert actions that can include scripts.
Additional resources
To review alert triggering, see Configuring alert trigger conditions.! | https://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/Setupalertactions | 2018-03-17T10:40:01 | CC-MAIN-2018-13 | 1521257644877.27 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Specify a New Central RD CAP Store]., you must use the same case-sensitive shared secret that you specified when configuring the RD Gateway server as a RADIUS client on the central server running NPS.
We also recommend that you do the following:
Generate long shared secrets (more than 22 characters) comprised of a random sequence of letters, numbers, and punctuation.
Change the shared secret often..
To specify a new central RD CAP store
On the RD Gateway server, open Remote Desktop Gateway Manager. To open Remote Desktop Gateway Manager, click Start, point to Administrative Tools, point to Remote Desktop Services, and then click Remote Desktop Gateway Manager.
In the console tree, click to expand the node that represents the local RD Gateway server, which is named for the computer on which the RD Gateway server is running.
In the console tree, expand Policies, and then click Connection Authorization Policies. (). | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc725810(v=ws.11) | 2018-03-17T11:46:36 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.microsoft.com |
Monaca Localkit is a local development environment tool for Monaca apps. It can be used with various development tools including editors, source code management system, task runner and so on. It also allows you to develop offline and provides a faster synchronization with Monaca Debugger.
Before getting started with this tutorial, you need to: | https://docs.monaca.io/en/products_guide/monaca_localkit/tutorial/ | 2018-03-17T10:44:20 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.monaca.io |
Step 1: Authorise mailbox sharing
In our example, a folder is shared from the [email protected] account to [email protected]. Firstly: authorise sharing on your account. To do this, right-click on the account name and select "permissions" . A new window will appear.
Step 2: Authorise mailbox sharing
Click on the+ icon to add the user you want to share folders with. A new interface will then appear.
Step 3: Authorise mailbox sharing
Add the user.
Step 4: Authorise mailbox sharing
You can also customise the level of access for this user by changing the permission options in the Permissions section.
For example you may want the [email protected] user to only have access to the "Drafts" folder in the [email protected]. mailbox.
Click "OK"to confirm your selection.]
Permissions will be granted only for the file in question.
Set up folder sharing permissions
You can now give the second user sharing permissions for a folder, such as the "Drafts" folder.
The process is almost the same as before: right-click on the "Drafts" folder, then
click on "Permissions"
This step can be carried out for any folder.
Carry out the same actions as before by adding a user then giving them the necessary permissions for the folder in question.
You can assign different permissions: Owner, Editer, Author, User...
Step 1: Retrieve the shared folder
The user that has been given permission must add the shared folder in OWA.
To do this, right-click on your email account, then select "add shared folder".
Step 2: Retrieve the shared folder
Enter the first user's account name.
Step 3: Retrieve the shared folder
Our shared folder "Sent items" should now appear in OWA.
| https://docs.ovh.com/gb/en/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/ | 2018-03-17T10:31:58 | CC-MAIN-2018-13 | 1521257644877.27 | [array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2976.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2982.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2983.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2985.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2986.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2988.jpg',
None], dtype=object)
array(['https://docs.ovh.com/cz/cs/microsoft-collaborative-solutions/exchange_2016_how_to_share_a_folder_via_owa/images/img_2989.jpg',
None], dtype=object) ] | docs.ovh.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Initiates the asynchronous execution of the BatchDetectDominantLanguage operation.
This is an asynchronous operation using the standard naming convention for .NET 4.5 or higher. For .NET 3.5 the operation is implemented as a pair of methods using the standard naming convention of BeginBatchDetectDominantLanguage and EndBatchDetectDominantLanguage.
Namespace: Amazon.Comprehend
Assembly: AWSSDK.Comprehend.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the BatchDetectDominantLanguage | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Comprehend/MComprehendBatchDetectDominantLanguageAsyncBatchDetectDominantLanguageRequestCancellationToken.html | 2018-03-17T11:01:00 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.aws.amazon.com |
Certificate Template Server
Applies To: Windows Server 2008 R2
High-volume certificate issuance scenarios such as Network Access Protection (NAP) deployments with Internet Protocol security (IPsec) enforcement create unique public key infrastructure (PKI) needs. To address these needs, the following options introduced in Windows Server 2008 R2 can be used to configure certificate templates for use by high-volume certification authorities (CAs). These options are available on the Server tab of a certificate template's property sheet.
Do not store certificates and requests in the CA database
Certificates issued in high-volume scenarios typically expire within hours of being issued, and the issuing CA processes a high volume of certificate requests. By default, a record of each request and issued certificate is stored in the CA database. A high volume of requests increases the CA database growth rate and administration cost.
The Do not store certificates and requests in the CA database option configures the template so that the CA processes certificate requests without adding records to the CA database.
Important
The issuing CA must be configured to support certificate requests that have this option enabled. On the issuing CA, run the following command: CertUtil.exe –SetReg DBFlags +DBFLAGS_ENABLEVOLATILEREQUESTS.
Do not include revocation information in issued certificates
Revocation of certificates by some high-volume CAs is not beneficial because the certificates typically expire within hours of being issued.
The Do not include revocation information in issued certificates option configures the template so that the CA excludes revocation information from issued certificates. This prevents checking revocation status during certificate validation and reduces validation time.
Note
This option is recommended whenever the Do not store certificates and requests in the CA database option is used. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd759149(v=ws.11) | 2018-03-17T10:14:04 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.microsoft.com |
Understand Git history
Git stores history as a graph of snapshots — called commits — of the entire repository. Each commit also contains a pointer to one or more previous commits. Commits can have multiple parents, creating a history that looks like a graph instead of a straight line. This difference in history is incredibly important and is the main reason users copies ("pulls") all commits from the
master branch of the remote repo (called
origin by default) to the
master branch of the local repo. The pull operation copied one new commit, and the
master branch in the local repo is now pointing to this new
master.! | https://docs.microsoft.com/en-us/vsts/git/concepts/history | 2018-03-17T10:57:51 | CC-MAIN-2018-13 | 1521257644877.27 | [array(['_img/history/history-abc.png', 'three commits in a line'],
dtype=object)
array(['_img/history/history-abcd.png',
'a fourth commit, D, is added to the line'], dtype=object)
array(['_img/history/history-abcd-cool-new-feature.png',
'Branch cool-new-feature is added'], dtype=object)
array(['_img/history/history-abcd-cool-new-feature-e-f.png',
'added two new commits'], dtype=object)
array(['_img/history/history-abcd-cool-new-feature-e-f-merge.png',
'after the merge'], dtype=object)
array(['_img/history/gitlogconsole.png', 'console log of git graph'],
dtype=object) ] | docs.microsoft.com |
Introduction¶
Sensors are the logic bricks that cause the logic to do anything. Sensors give an output when something happens, e.g. a trigger event such as a collision between two objects, a key pressed on the keyboard, or a timer for a timed event going off. When a sensor is triggered, a positive pulse is sent to all controllers that are linked to it.
The logic blocks for all types of sensor may be constructed and changed using the Logic Editor details of this process are given in the Sensor Editing page. | https://docs.blender.org/manual/en/dev/game_engine/logic/sensors/introduction.html | 2018-03-17T10:32:20 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.blender.org |
This is the home of the Teiid Examples space.
The goal of this space is to provide you with one place to find all relative how-to information for your data integration needs. This will include, but not limited to:
- Quick starts and examples, which can be general in nature or focus on a specific functionality, but meant to get you moving forward quickly in your development
- End-to-End (E2E) how-to's, that take you from modeling in Teiid Designer, to deploying to a Teiid instance, and executing and validating success.
- Resources and artifacts, which include pre-built designer project sets; and database DDL (create tables) and DML (to load data) scripts
Another goal we have, is when a feature is demonstrated, you will be able to see it demonstrated using modeling, as well as, in the dynamicVDB form, when possible. This is a new objective for us, so many do not meet this goal. But as we move forward, we hope to improve and help you understand the data integration options to meet your needs.
Labels:
None | https://docs.jboss.org/author/display/teiidexamples/Home?showChildren=false | 2018-03-17T10:47:56 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.jboss.org |
3. Language reference
3.1. Statements, blocks, literals etc.
Statements in Bakefile are separated by semicolon (;) and code blocks are marked up with { and }, as in C. See an example:
toolsets = gnu vs2010;

program hello
{
    sources { hello.cpp }
}
In particular, expressions may span multiple lines without the need to escape newlines or enclose the expression in parentheses:

os_files =
    foo.cpp
    bar.cpp
    ;
3.2. Values, types and literals
Similarly to the
make syntax, quotes around literals are optional –
anything not a keyword or special character or otherwise specially marked is a
literal; specifically, a string literal.
Quoting is only needed when the literal contains whitespace or special
characters such as
= or quotes. Quoted strings are enclosed between
"
(double quote) or
' (single quote) characters and may contain any
characters except for the quotes. Additionally, backslash (
\) can be used
inside quoted strings to escape any character. [1]
The two kinds of quoting differ:
- Double-quoted strings are interpolated. That is, variable references using $(...) (see below) are recognized and evaluated. If you want to use $ as a literal, you must escape it (\$).
- Single-quoted strings are literal, $ doesn't have any special meaning and is treated as any other character.
Values in Bakefile are typed: properties have types associated with them and only values that are valid for that type can be assigned to them. The language isn’t strongly-typed, though: conversions are performed whenever needed and possible, variables are untyped by default. Type checking primarily shows up when validating values assigned to properties.
The basic types are:
- Boolean properties can be assigned the result of a boolean expression or one of the true or false literals.
- Strings. Enough said.
- Lists are items delimited with whitespace. Lists are typed and the items must all be of the same type. In the reference documentation, list types are described as “list of string”, “list of path” etc.
- Paths are file or directory paths and are described in more detail in the next section.
- IDs are identifiers of targets.
- Enums are used for properties where only a few possible values exist; the property cannot be set to anything other than one of the listed strings.
- AnyType is the pseudo-type used for untyped variables or expressions with undetermined type.
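To give a rough feel for how these types show up in practice, here is a sketch using properties that appear elsewhere in this manual (the particular values are made up):

    build_tests = true;                   // boolean
    defines = BUILD LINUX;                // list of strings
    includedirs += @top_srcdir/include;   // list of paths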
3.3. Paths
File paths are a type that deserves more explanation. They are arguably the most important element in both makefiles and project files, and any incorrectness in them would cause breakage.
All paths in bakefiles must be written using a notation similar to the Unix
one, using
/ as the separator, and are always relative. By default, if you
don’t say otherwise and write the path as a normal Unix path (e.g.
src/main.cpp), it’s relative to the source directory (or srcdir for
short). Srcdir is the implicitly assumed directory for the input files
specified using relative paths. By default, it is the directory containing the
bakefile itself but it can be changed as described below. Note that this may
be – and often is – different from the location where the generated output
files are written to.
This is usually the most convenient choice, but it’s sometimes not sufficient.
For such situations, Bakefile has the ability to anchor paths under a
different root. This is done by adding a prefix of the form of
@<anchor>/
in front of the path. The following anchors are recognized:
- @srcdir, as described above.
- @top_srcdir is the top level source directory, i.e. srcdir of the top-most bakefile of the project. This is only different from @srcdir if this bakefile was included from another one as a submodule.
- @builddir is the directory where build files of the current target are placed. Note that this is not where the generated makefiles or projects go either. It's often a dedicated directory just for the build artifacts and typically depends on make-time configuration. Visual Studio, for example, puts build files into Debug/ and Release/ subdirectories depending on the configuration selected. @builddir points to these directories.
Here are some examples showing common uses for the anchors:
sources {
    hello.cpp;                    // relative to srcdir
    @builddir/generated_file.c;
}

includedirs += @top_srcdir/include;
3.3.1. Changing srcdir
As mentioned above,
@srcdir can be changed if its default value is
inconvenient, as, for example, is the case when the bakefile itself is in a
subdirectory of the source tree.
Take this for an example:
// build/bakefiles/foo.bkl

library foo
{
    includedirs += ../../include;

    sources {
        ../../src/foo.cpp
        ../../src/bar.cpp
    }
}
This can be made much nicer using srcdir:
// build/bakefiles/foo.bkl

srcdir ../..;

library foo
{
    includedirs += include;

    sources {
        src/foo.cpp
        src/bar.cpp
    }
}
The
srcdir statement takes one argument, path to the new srcdir (relative
to the location of the bakefile). It affects all
@srcdir-anchored paths,
including implicitly anchored ones, i.e. those without any explicit anchor, in
the module (but not its submodules). Notably, (default) paths for generated
files are also affected, because these too are relative to
@srcdir.
Notice that because it affects the interpretation of all path expressions in
the file, it can only be used before any assignments, target definitions etc.
The only thing that can precede it is
requires.
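Putting the ordering rules together, a module that changes its srcdir typically begins along these lines (a sketch; the version number given to requires is only a placeholder):

    requires 1.1;
    srcdir ../..;

    program hello
    {
        sources { src/hello.cpp }
    }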
3.4. Variables and properties
Bakefile allows you to set arbitrary variables on any part of the model. Additionally, there are properties, which are pre-defined variables with a set meaning. Syntactically, there’s no difference between the two. There’s semantical difference in that the properties are usually typed and only values compatible with their type can be assigned to them. For example, you cannot assign arbitrary string to a path property or overwrite a read-only property.
3.4.1. Setting variables
Variables don’t need to be declared; they are defined on first assignment. Assignment to variables is done in the usual way:
variable = value;

// Lists can be appended to, too:
main_sources = foo.cpp;
main_sources += bar.cpp third.cpp;
Occasionally, it is useful to set variables on other objects, not just in the current scope. For example, you may want to set per-file compilation flags, add custom build step for a particular source file or even modify a global variable. Bakefile uses operator :: for this purpose, with semantics reminiscent of C++: any number of scopes delimited by :: may precede the variable name, with leading :: indicating global (i.e. current module) scope. Here’s a simple example:
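A minimal sketch of what this looks like (the target, file and variable names below are only placeholders chosen to show the syntax):

    program hello
    {
        sources { hello.cpp extra.cpp }

        // set a variable on one particular source file:
        extra.cpp::my_flag = true;

        // append to a variable defined at the global (module) scope:
        ::global_list += hello;
    }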
3.4.2. Referencing variables
Because literals aren’t quoted, variables are referenced using the make-like
$(<varname>) syntax:
platform = windows; sources { os/$(platform).cpp }
A shorthand form, where the brackets are omitted, is also allowed when such use is unambiguous: [2]
if ( $toolset == gnu ) { ... }
Note that the substitution isn’t done immediately. Instead, the reference is included in the object model of the bakefiles and is dereferenced at a later stage, when generating makefile and project files. Sometimes, they are kept in the generated files too.
This has two practical consequences:
- It is possible to reference variables that are defined later in the bakefile without getting errors.
- Definitions cannot be recursive, a variable must not reference itself. You cannot write this:

      defines = $(defines) SOME_MORE

  Use operator += instead:

      defines += SOME_MORE
3.5. Targets
Target definition consists of three things: the type of the target (an executable, a library etc.), its ID (the name, which usually corresponds to built file’s name, but doesn’t have to) and detailed specification of its properties:
type id
{
    property = value;
    property = value;

    ...sources specification...
    ...more content...
}
(It’s a bit more complicated than that, the content may contain conditional statements too, but that’s the overall structure.)
3.5.1. Source files
Source files are added to the target using the
sources keyword, followed by
the list of source files inside curly brackets. Note the sources list may
contain any valid expression; in particular, references to variables are
permitted.
It’s possible to have multiple
sources statements in the same target.
Another use of
sources appends the files to the list of sources, it doesn’t
overwrite it; the effect is the same as that of operator
+=.
See an example:
program hello
{
    sources {
        hello.cpp
        utils.cpp
    }

    // add some more sources later:
    sources { $(EXTRA_SOURCES) }
}
3.5.2. Headers
Syntax for headers specification is identical to the one used for source files,
except that the
headers keyword is used instead. The difference between
sources and headers is that the latter may be used outside of the target (e.g.
a library installs headers that are then used by users of the library).
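For example, a library exposing a public header could be written like this (a sketch; the file names are arbitrary):

    library foo
    {
        sources { foo.cpp }
        headers { foo.h }
    }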
3.6. Templates
It is often useful to share common settings or even code among multiple
targets. This can be handled, to some degree, by setting properties such as
includedirs globally, but more flexibility is often needed.
Bakefile provides a convenient way of doing just that: templates. A template is a named block of code that is applied and evaluated before target’s own body. In a way, it’s similar to C++ inheritance: targets correspond to derived classes and templates would be abstract base classes in this analogy.
Templates can be derived from another template; both targets and templates can be based on more than one template. They are applied in the order they are specified in, with base templates first and derived ones after them. Each template in the inheritance chain is applied exactly once, i.e. if a target uses the same template two or more times, its successive appearances are simply ignored.
Templates may contain any code that is valid inside target definition and may reference any variables defined in the target.
The syntax is similar to C++ inheritance syntax:
template common_stuff
{
    defines += BUILDING;
}

template with_logging : common_stuff
{
    defines += "LOGGING_ID=\"$(id)\"";
    libs += logging;
}

program hello : with_logging
{
    sources { hello.cpp }
}
Or equivalently:
template common_stuff
{
    defines += BUILDING;
}

template with_logging
{
    defines += "LOGGING_ID=\"$(id)\"";
    libs += logging;
}

program hello : common_stuff, with_logging
{
    sources { hello.cpp }
}
3.7. Conditional statements
Any part of a bakefile may be enclosed in a conditional
if statement.
The syntax is similar to C/C++’s one:
defines = BUILD; if ( $(toolset) == gnu ) defines += LINUX;
In this example, the
defines list will contain two items,
[BUILD,
LINUX] when generating makefiles for the
gnu toolset and only one item,
BUILD, for other toolsets.
The condition doesn’t have to be constant, it may reference e.g. options, where
the value isn’t known until make-time; Bakefile will correctly translate them into
generated code. [3]
A long form with curly brackets is accepted as well; unlike the short form, this one can contain more than one statement:
if ( $(toolset) == gnu ) { defines += LINUX; sources { os/linux.cpp } }
Conditional statements may be nested, too:
if ( $(build_tests) )
{
    program test
    {
        sources { main.cpp }

        if ( $(toolset) == gnu )
        {
            defines += LINUX;
            sources { os/linux.cpp }
        }
    }
}
The expression that specifies the condition uses C-style boolean operators: && for and, || for or, ! for not, and == and != for equality and inequality tests respectively.
3.8. Build configurations
A feature common to many IDEs is support for different build configurations,
i.e. for building the same project using different compilation options.
Bakefile generates the two standard “Debug” and “Release” configurations by
default for the toolsets that usually use them (currently “vs*”) and also
supports the use of configurations with the makefile-based toolsets by
allowing to specify
config=NameOfConfig on make command line, e.g.
$ make config=Debug # ... files are compiled with "-g" option and without optimizations ...
Notice that configuration names shouldn’t be case-sensitive as
config=debug is handled in the same way as
config=Debug in make-based
toolsets.
In addition to these two standard configurations, it is also possible to define your own custom configurations, which is especially useful for the project files which can’t be customized as easily as the makefiles at build time.
Here is a step by step guide to doing this. First, you need to define the new configuration. This is done by using a configuration declaration in the global scope, i.e. outside of any target, e.g.:
configuration ExtraDebug : Debug { }
The syntax for configuration definition is reminiscent of C++ class definition and, as could be expected, the identifier after the colon is the name of the base configuration. The new configuration inherits the variables defined in its base configuration.
Notice that all custom configurations must derive from another existing one, which can be either a standard “Debug” or “Release” configuration or a previously defined another custom configuration.
Defining a configuration doesn’t do anything on its own, it also needs to be
used by at least some targets. To do it, the custom configuration name must be
listed in an assignment to the special
configurations variable:
configurations = Debug ExtraDebug Release;
This statement can appear either in the global scope, like above, in which case it affects all the targets, or inside one or more targets, in which case the specified configuration is only used for these targets. So if you only wanted to enable extra debugging for “hello” executable you could do
program hello { configurations = Debug ExtraDebug Release; }
However even if the configuration is present in the generated project files after doing all this, it is still not very useful as no custom options are defined for it. To change this, you will usually also want to set some project options conditionally depending on the configuration being used, e.g.:
program hello { if ( $(config) == ExtraDebug ) { defines += EXTRA_DEBUG; } }
config is a special variable automatically set by bakefile to the name of
the current configuration and may be used in conditional expressions as any
other variable.
For simple cases like the above, testing
config explicitly is usually all
you need but in more complex situations it might be preferable to define some
variables inside the configuration definition and then test these variables
instead. Here is a complete example doing the same thing as the above snippets
using this approach:
configuration ExtraDebug : Debug
{
    extra_debug = true;
}

configurations = Debug ExtraDebug Release;

program hello
{
    if ( $(extra_debug) )
    {
        defines += EXTRA_DEBUG;
    }
}
Note
As mentioned above, it is often unnecessary (although still possible) to
define configurations for the makefile-based toolsets as it’s always
possible to just write
make CPPFLAGS=-DEXTRA_DEBUG instead of using an
“ExtraDebug” configuration from the example above with them. If you want to
avoid such unnecessary configurations in your makefiles, you could define
them only conditionally, for example:
toolsets = gnu vs2010; if ( $toolset == vs2010 && $config == ExtraDebug ) defines += EXTRA_DEBUG;
would work as before in Visual Studio but would generate a simpler makefile.
3.9. Build settings
Sometimes, configurability provided by configurations is not enough and more flexible settings are required; e.g. configurable paths to 3rdparty libraries, tools and so on. Bakefile handles this with settings: variable-like constructs that are, unlike Bakefile variables, preserved in the generated output and can be modified by the user at make-time.
Settings are part of the object model and as such have a name and additional properties that affect their behavior. Defining a setting is similar to defining a target:
    setting JDK_HOME
    {
      help = "Path to the JDK";
      default = /opt/jdk;
    }
Notice that the setting object has some properties. You will almost always want to set the two shown in the above example. help is used to explain the setting to the user and default provides the default value to use if the user of the makefile doesn’t specify anything else; both are optional. See Setting properties for the full list.
When you need to reference a setting, use the same syntax as when referencing variables:
includedirs += $(JDK_HOME)/include;
In fact, settings also act as variables defined at the highest (project) level. This means that they can be assigned to as well, which makes some nice tricks easy:
    setting LIBFOO_PATH
    {
      help = "Path to the Foo library";
      default = /opt/libfoo;
    }

    // On Windows, just use our own copy:
    if ( $toolset == vs2010 )
      LIBFOO_PATH = @top_srcdir/3rdparty/libfoo;
This removes the user setting for toolsets that don’t need it. Another handy use is to import some common code or use a submodule with configurable settings and just hard-code their values when you don’t need the flexibility.
Note
Settings are currently only fully supported by makefiles; in the project files they are always replaced with their default values.
3.10. Submodules¶
A bakefile file – a module – can include other modules as its children. The submodule keyword is used for that:
    submodule samples/hello/hello.bkl;
    submodule samples/advanced/adv.bkl;
They are useful for organizing larger projects into more manageable chunks, similarly to how makefiles are used with recursive make. The submodules get their own makefiles (automatically invoked from the parent module’s makefile) and a separate Visual Studio solution file is created for them by default as well. Typical uses include putting examples or tests into their own modules.
Submodules may only be included at the top level and cannot be included conditionally (i.e. inside an if statement).
3.11. Importing other files¶
There’s one more way to organize source bakefiles in addition to submodules: direct import of another file’s content. The syntax is similar to the submodule one, using the import keyword:
    // define variables, templates etc:
    import common-defs.bkl;

    program myapp
    {
      ...
    }
Import doesn’t change the layout of output files, unlike submodule. Instead, it directly includes the content of the referenced file at the point of import. Think of it as a variation on C’s #include.
Imports help with organizing large bakefiles into more manageable files. You could, for example, put commonly used variables or templates, files lists etc. into their own reusable files.
Notice that there are some important differences to #include:
- A file is only imported once in the current scope; further imports are ignored. Specifically:
  - (a) A second import of foo.bkl from the same module is ignored.
  - (b) An import of foo.bkl from a submodule is ignored if it was already imported into its parent (or any of its ancestors).
  - (c) If two sibling submodules both import foo.bkl and none of their ancestors does, then the file is imported into both. That's because their local scopes are independent of each other, so it isn't regarded as a duplicate import.
- An imported file may contain template or configuration definitions and be included repeatedly (in case (c) above). This would normally result in errors, but Bakefile recognizes imported duplicates as identical and handles them gracefully.
The import keyword can only be used at the top level and cannot be used conditionally (i.e. inside an if statement).
3.12. Version checking¶
If a bakefile depends on features (or even syntax) not available in older versions, it is possible to declare this dependency using the requires keyword.
    // Feature XYZ was added in Bakefile 1.1:
    requires 1.1;
This statement causes a fatal error if the Bakefile version is older than the one specified.
3.13. Loading plugins¶
Standard Bakefile plugins are loaded automatically, but sometimes a custom plugin is needed only for a specific project; such plugins must be loaded explicitly, using the plugin keyword:
plugin my_compiler.py;
Its argument is a path to a valid Python file that will be loaded into the bkl.plugins module. You can also use the full name of the module to make it clear that the file is a Bakefile plugin:
See the Writing Bakefile plugins chapter for more information about plugins.
3.14. Comments¶
Bakefile uses C-style comments, in both the single-line and multi-line variants. Single-line comments look like this:
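    // single-line comments run from the two slashes to the end of the line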
Multi-line comments can span several lines:
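    /* A multi-line comment
       is delimited like this and
       may span any number of lines. */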
They can also be included in an expression:
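    defines += EXTRA_DEBUG /* the value shown here is just an illustration */ ;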
IndexDocuments tells the search domain to start indexing its documents using the latest indexing options. This operation must be invoked to activate options whose OptionStatus is RequiresIndexDocuments.
Namespace: Amazon.CloudSearch
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the IndexDocuments service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
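A minimal calling sketch (assuming the client picks up AWS credentials and region from the SDK configuration; the domain name is a placeholder):

    using Amazon.CloudSearch;
    using Amazon.CloudSearch.Model;

    // Start re-indexing a search domain after its indexing options were changed.
    var client = new AmazonCloudSearchClient();
    var request = new IndexDocumentsRequest
    {
        DomainName = "my-search-domain"   // placeholder domain name
    };
    IndexDocumentsResponse response = client.IndexDocuments(request);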
Authenticating Requests: Using the Authorization Header (AWS Signature Version 4)
Overview
Using the HTTP Authorization header is the most common method of providing authentication information. Except for POST requests and requests that are signed by using query parameters, all Amazon S3 bucket operations and object operations use the Authorization request header to provide authentication information.
The following is an example of an Authorization header value (the credential and signature values shown are illustrative). Line breaks are added to this example for readability:
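    Authorization: AWS4-HMAC-SHA256
    Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request,
    SignedHeaders=host;range;x-amz-date,
    Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024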
The following is the properly formatted version of the same Authorization header, on a single line:
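    Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request, SignedHeaders=host;range;x-amz-date, Signature=fe5f80f77d5fa3beca038a248ff027d0445342fe2855ddc963176630326f1024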
Note the following:
There is a space between the first two components, AWS4-HMAC-SHA256 and Credential.
The subsequent components, Credential, SignedHeaders, and Signature, are separated by commas.
The following table describes the various components of the Authorization header value in the preceding example:
- AWS4-HMAC-SHA256 – The algorithm that was used to calculate the signature.
- Credential – Your access key ID and the scope information, which includes the date, Region, and service that were used to calculate the signature.
- SignedHeaders – A semicolon-separated list of the request headers that were used to compute the signature.
- Signature – The 256-bit signature, expressed as 64 lowercase hexadecimal characters.
When you transfer the payload in a single chunk, you can choose between two options for the signature calculation:
- Signed payload option – Include the payload checksum in the signature calculation. We recommend you include a payload checksum for added security.
- Unsigned payload option – Do not include a payload checksum in the signature calculation.
For step-by-step instructions to calculate the signature and construct the Authorization header value, see Signature Calculations for the Authorization Header: Transferring Payload in a Single Chunk (AWS Signature Version 4). For more information, see Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4).
When you send a request, you must tell Amazon S3 which of the preceding options you have chosen in your signature calculation, by adding the x-amz-content-sha256 header with one of the following values:
If you choose chunked upload options, set the header value to STREAMING-AWS4-HMAC-SHA256-PAYLOAD.
If you choose to upload payload in a single chunk, set the header value to the payload checksum (signed payload option), or set the value to the literal string UNSIGNED-PAYLOAD (unsigned payload option).
Upon receiving the request, Amazon S3 re-creates the string to sign using information in the Authorization header and the date header. It then verifies with the authentication service that the signatures match. The request date can be specified by using either the HTTP Date or the x-amz-date header. If both headers are present, x-amz-date takes precedence.
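The derivation of the signing key and the final signature can be sketched as follows (a minimal illustration only; building the canonical request and the string to sign is not shown, and the input values are placeholders):

    import hashlib
    import hmac

    def _hmac(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def signing_key(secret_key, date_stamp, region, service="s3"):
        # The chain starts from "AWS4" + secret key and scopes the key step by step.
        k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
        k_region = _hmac(k_date, region)
        k_service = _hmac(k_region, service)
        return _hmac(k_service, "aws4_request")

    def signature(secret_key, date_stamp, region, string_to_sign):
        key = signing_key(secret_key, date_stamp, region)
        return hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()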
If the signatures match, Amazon S3 processes your request; otherwise, your request will fail.
For more information, see the following topics:
Signature Calculations for the Authorization Header: Transferring Payload in a Single Chunk (AWS Signature Version 4)
Signature Calculations for the Authorization Header: Transferring Payload in Multiple Chunks (Chunked Upload) (AWS Signature Version 4)
How to handle errors with promises (HTML)
[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]
It can sometimes be difficult to know how to handle exceptions in the different stages of a promise. In Quickstart: Using promises in JavaScript we showed how to use a basic Windows Library for JavaScript promise and handle errors in the then function.
In this topic we show several ways to catch and handle errors when you use promises. You don't need to implement all of these kinds of error handling, but you should pick the type of error handling that best suits your app.
Prerequisites
- The code in this topic is based on the app created in Quickstart: Using promises in JavaScript, so if you want to follow along you should create the app first. Note that the colors mentioned below appear as described only if you use the "ui-dark.css" stylesheet.
Instructions
Step 1: Turn on JavaScript first-chance exceptions
If you turn on first-chance exceptions, you'll be able to see exceptions without doing anything else to handle them. In Visual Studio, on the Debug menu, select Exceptions, and in the Exceptions dialog box, make sure that JavaScript Runtime Exceptions are selected to be Thrown. After doing this, run the app in debug mode. When an exception is thrown, you see a message box that displays an error. You see this kind of dialog box only when the app is running in the debugger, so users won't see these exceptions.
Step 2: Add an error handler in a then or done function
When you use a promise, you should add a then or done function to explicitly handle the completed value of the promise (that is, the value that's returned by the promise if there is no error) and the error value. Both then and done take three functions (completed, error, and progress) as optional parameters. You use the completed function to perform updates after the promise has been fulfilled, the error function to handle errors, and the progress function to display or log the progress that the promise is making.
Let's look at a couple of ways to handle errors in then and done functions. In this example, we'll set up a chain of two then functions where the first then passes an error to the second one.
In the TestPromise project, add a second DIV element and give it an ID of "div2":
<div id="div2">Second</div>
In the change handler, remove the error handler from the then function, and get the second DIV element:
    function changeHandler(e) {
        var input = e.target;
        var resDiv = document.getElementById("divResult");
        var twoDiv = document.getElementById("div2");

        WinJS.xhr({ url: e.target.value })
            .then(function fulfilled(result) {
                if (result.status === 200) {
                    resDiv.style.backgroundColor = "lightGreen";
                    resDiv.innerText = "Success";
                }
            });
    }
Add a second then function and add to it a fulfilled function that changes the color of the second DIV if the result is successful. Add the original error handler to that function.

    function changeHandler(e) {
        var input = e.target;
        var resDiv = document.getElementById("divResult");
        var twoDiv = document.getElementById("div2");

        WinJS.xhr({ url: e.target.value })
            .then(function fulfilled(result) {
                if (result.status === 200) {
                    resDiv.style.backgroundColor = "lightGreen";
                    resDiv.innerText = "Success";
                }
                // Pass the result along so the next then can inspect it.
                return result;
            })
            .then(function fulfilled(result) {
                if (result.status === 200) {
                    twoDiv.style.backgroundColor = "lightGreen";
                    twoDiv.innerText = "Success";
                }
            }, function error(e) {
                resDiv.style.backgroundColor = "red";
                resDiv.innerText = e.message || e.statusText || "Request failed";
            });
    }
When you run this code in the debugger, try inputting a URL that isn't valid. You can see that the execution enters the error function in the second then function. As a result, the first DIV should be red, and the second DIV should be black.
In the following example we'll remove the error function from the second then function and add a done function that doesn't have an error handler.
In the TestPromise project, if you have not already added a second DIV element with an ID of "div2", you should do so now.
Modify the change handler to remove the error function from the second then and add a done function that turns the second DIV blue. The change handler code should look like this:

    function changeHandler(e) {
        var input = e.target;
        var resDiv = document.getElementById("divResult");
        var twoDiv = document.getElementById("div2");

        WinJS.xhr({ url: e.target.value })
            .then(function fulfilled(result) {
                if (result.status === 200) {
                    resDiv.style.backgroundColor = "lightGreen";
                    resDiv.innerText = "Success";
                }
                return result;
            })
            .then(function fulfilled(result) {
                if (result.status === 200) {
                    twoDiv.style.backgroundColor = "lightGreen";
                    twoDiv.innerText = "Success";
                }
                return result;
            })
            .done(function (result) {
                if (result.status === 200) {
                    twoDiv.style.backgroundColor = "lightBlue";
                }
            });
    }
When you run this code in the debugger, try inputting a URL that isn't valid. You should see a message box displaying an error. Both the first and second DIVs should be black, because none of the code in the then functions or the done function was executed.
Step 3: Add a WinJS.promise.onerror handler
The onerror event occurs whenever a runtime error is caught in a promise.
Here's how to add a general error handler:
In the TestPromise project, remove the then function from the changeHandler code. The resulting changeHandler function should look like this:
    function changeHandler(e) {
        var input = e.target;
        var resDiv = document.getElementById("divResult");
        WinJS.xhr({ url: e.target.value });
    }
Create a general error handling function and subscribe to the onerror event in the app.activated event handler:
    app.activatedHandler = function (args) {
        var input = document.getElementById("inUrl");
        input.addEventListener("change", changeHandler);
        WinJS.Promise.onerror = errorHandler;
    };

    function errorHandler(event) {
        var ex = event.detail.exception;
        var promise = event.detail.promise;
    }
The error event provides general error information, such as the exception, the promise in which it occurred, and the current state of the promise (which is always error). But it probably doesn't provide all the information you need to handle the error gracefully. Still, it can provide useful information about errors that you don't explicitly handle elsewhere in your code.
Related topics
Asynchronous programming in JavaScript
API testing means that there's an easy way to send HTTP requests within your tests.
This makes it easier to extract values returned from an API call and use them in UI-related steps, and to validate that the values in the backend match what is shown in the frontend.
As always, we differentiate between actions and validations:
- API actions - Should be used when you need to get data and use it for a calculation, or to save it for later use in the test.
- API validations - Should be used to validate data returned from an API call, usually to validate data on the backend.
API Action
How to add
- Hover over the arrow menu where you want to add your API call.
- Click ‘+’ ⇒ ‘Actions’ ⇒ ‘Add API action’.
3. Fill the following data:
- Method: GET / POST / PUT / PATCH / DELETE / COPY / HEAD / OPTIONS
- URL: Needless to explain :)
- Header: Fill in the keys and values that need to be sent to your API. The 'Raw' switch will allow you to enter the values in their raw format, e.g. when copied from the browser's devtools network panel.
- Body: Select the data structure you want to send, and fill in the text box. Use the "Text" option for entering free text, e.g. sending a key and a value.
- Send via web page (aka "send cookies"): Uncheck only if you want to send the API call outside the browser context so that browser-restrictions do not apply to it. For example, if your API doesn't support CORS.
Keep this option checked if you need the API to also send browser information such as cookies (they are sent automatically).
Run additional code
Used to run code after the API call. You can run any JavaScript code, and use the data returned from the API call.
- Status code
- Response Body
- Response Headers
If the "Response Body" content type is XML/JSON, the parameter type will be an Object; otherwise, the parameter type is String.
As with any JS step, you can export the value returned from the API to be used in the following steps.
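For example, the additional code might look like the sketch below (the parameter identifiers and the shape of the JSON response are assumptions):

    // statusCode, responseBody and responseHeaders are the values provided by the step.
    if (statusCode !== 200) {
      throw new Error("Unexpected status: " + statusCode);
    }
    // For a JSON response, responseBody is already an object.
    exports.userId = responseBody.id;   // save a value for later steps
    return responseBody;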
Note: If you're running via a webpage and the page has not finished loading, this step can fail. If the previous step requires loading, add a 'wait for' step before the API step to verify that the page has finished loading.
API Validation
Use an API validation step to validate the returned value or to validate them against elements in the UI.
As in every custom (JS) validation, it is necessary to return a boolean (truthy/falsy in JS) value in the additional code section.
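A sketch of such a validation (again, the parameter identifiers and JSON fields are assumptions):

    // Return a truthy value to pass the validation, a falsy value to fail it.
    return statusCode === 200 && responseBody.items && responseBody.items.length > 0;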
Read more about validations here.
How to add API steps
- Hover over the arrow menu where you want to add your API call.
- Click ‘+’ ⇒ ‘Validations’ ⇒ ‘Validate API’.
- Fill the data as described in API action.
- Fill in the additional code section so that it returns a true/false value.
Parameters
You can use parameters in the API step as you would in any other custom (JS) step. Either in-param, as dependency injection, or out-param via the exports/exportsGlobal.
Read more about parameter options here.
Using parameters in the sent HTTP request
Parameters can also be used in the header, body, and URL.
Since those sections are cumbersome to write in JS, we made it easy for you. In these sections you will need to add double curly brackets around the parameters.
For example:
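    https://api.example.com/users/{{userId}}/orders
Here userId is a parameter defined earlier in the test, and api.example.com is a placeholder host.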
Note: You can use triple brackets if you do not want the parameters to be decoded, e.g. {{{param}}}
Using parameters in the HTTP response
Parameters added in the property panel will automatically be added to the function's signature. Plus you can also access any other variables in the test's scope.
Result Run
After running the step, you'll see the response returned from the API call on the "Response" tab, and sometimes more info, such as the response status code, call duration, and the size of the binary files.
Reusing API steps
API steps are reusable components similar to groups and custom actions.
Version 4.4 Release Notes
Contents
Release Information
Version 4.4.0
- Release Type: Beta (For Testing)
- Release Date: 1st December 2010
Version 4.4.1
- Release Type: Stable Release
- Release Date: 13th December 2010
Version 4.4.2
- Release Type: Bug Fix/Patch Release
- Release Date: 13th January 2011
Full Changelog
You can view a full list of features, changes, tweaks and enhancements in the forum announcement @
Upgrade Steps
To upgrade your WHMCS System, simply follow the instructions below.
- Begin by taking a backup of your database using a tool such as phpMyAdmin
- Now download the latest WHMCS version either from our client area () or from the provider of your license
- Next, unzip the contents of the WHMCS zip file and upload them to your existing installation directory, overwriting the files on your server
- Visit the /install/install.php script in your browser and follow the steps shown to complete the upgrade
- To finish, delete the install folder from your server
The upgrade is now complete, but remember to update your custom template with any new or changed template files, as listed below.
Template Changes
The following is a list of client area template files that have changed in this release.
- announcements.tpl (default template only - fixed twitter link)
- clientareahome.tpl (added support for addon output from hooks/modules)
- clientareacreditcard.tpl (disabled auto complete for cc fields)
- clientareadomaindns.tpl (added support for LogicBoxes remote DNS Record Interface)
- clientareadomainemailforwarding.tpl (same as above)
- creditcard.tpl (disabled auto complete for cc fields)
The following order form template files have also changed:
- login.tpl (updated login form post url)
- viewcart.tpl (disabled auto complete for cc fields)
With the implementation of multi-language support for the admin area, nearly all the admin area templates have changed so we recommend a full replacement of the /admin/templates/ folder and files.
Points Of Note
- New Addon Modules System - Firstly, don't worry - all existing "admin" modules are backwards compatible with the new addon modules system, so they aren't going to break or stop working as a result of the upgrade. However, the new addon modules system will require you to activate any existing addon modules you use before they show up. This can be done in Setup > Addon Modules by full administrator level users, and we have created a brief video tutorial to demonstrate the new addon modules system management area and walk you through how to do this @
- Admin Area Template Updates - The admin area has had a new menu system implemented and so if you experience any display issues after upgrading, check that the templates_c folder is still writeable and that you've refreshed your browser cache for the new CSS definitions to take effect (either perform a hard refresh or clear your cache). And if you still have problems after that, double check all the file uploads completed successfully.
Video Tutorials
- Invoice Splitting & Merging -
- Admin Product/Service Upgrades -
- Mass Product/Service Updates -
- New Addon Modules System -
Documentation Links
For further information and guidance on how to use some of the new features, please refer to the new documentation links below:
New Language File Lines
There are only 3 new client area language file lines in V4.4, and they can be found at the bottom of all supplied language files.
App Features¶
API responses may be modified to exclude applications a device is unable to run.
Features List¶
GET /api/v2/apps/features/¶
Returns a list of app features devices may require.
Response
The response will be an object with each key representing a feature. The following parameters will be set for each feature:
If a feature profile is passed, then each feature will also contain the following:
Example:
    {
        "apps": {
            "position": 1,
            "name": "Apps",
            "description": "The app requires the `navigator.mozApps` API."
        },
        "packaged_apps": {
            "position": 2,
            "name": "Packaged apps",
            "description": "The app requires the `navigator.mozApps.installPackage` API."
        },
        ...
    }
Writing TDMS files¶
npTDMS has rudimentary support for writing TDMS files. The full set of optimisations supported by the TDMS file format for speeding up writing and minimising file size is not implemented by npTDMS, but the basic functionality required to write TDMS files is available.
To write a TDMS file, the nptdms.TdmsWriter class is used as a context manager. The __init__ method accepts the path to the file to create, or a file that has already been opened in binary write mode:
with TdmsWriter("my_file.tdms") as tdms_writer: # write data
The nptdms.TdmsWriter.write_segment() method is used to write a segment of data to the TDMS file. Because the TDMS file format is designed for streaming data applications, it supports writing data one segment at a time as data becomes available. If you don't require this functionality you can simply call write_segment once with all of your data.
The write_segment method takes a list of objects, each of which must be an instance of one of:
- nptdms.RootObject. This is the TDMS root object, and there may only be one root object in a segment.
- nptdms.GroupObject. This is used to group the channel objects.
- nptdms.ChannelObject. An object that contains data.
- nptdms.TdmsObject. A TDMS object that was read from a TDMS file using nptdms.TdmsFile.
Each of RootObject, GroupObject and ChannelObject may optionally have properties associated with them, which are passed into the __init__ method as a dictionary.
The data types supported as property values are:
- Integers
- Floating point values
- Strings
- datetime objects
- Boolean values
For more control over the data type used to represent a property value, for example to use an unsigned integer type, you can pass an instance of one of the data types from the nptdms.types module.
A complete example of writing a TDMS file with various object types and properties is given below:
    from nptdms import TdmsWriter, RootObject, GroupObject, ChannelObject
    import numpy as np

    root_object = RootObject(properties={
        "prop1": "foo",
        "prop2": 3,
    })
    group_object = GroupObject("group_1", properties={
        "prop1": 1.2345,
        "prop2": False,
    })
    data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    channel_object = ChannelObject("group_1", "channel_1", data, properties={})

    with TdmsWriter("my_file.tdms") as tdms_writer:
        tdms_writer.write_segment([
            root_object,
            group_object,
            channel_object])
You could also read a TDMS file and then re-write it by passing nptdms.TdmsObject instances to the write_segment method. If, for example, you want to copy only certain objects, you could do something like:
    from nptdms import TdmsFile, TdmsWriter

    original_file = TdmsFile("original_file.tdms")

    with TdmsWriter("copied_file.tdms") as copied_file:
        objects_to_copy = [obj for obj in original_file.objects.values()
                           if include_object(obj)]
        copied_file.write_segment(objects_to_copy)
Brenda Longfellow is an award-winning filmmaker, writer and film theorist. Her productions include Our Marilyn (1987), an experimental documentary on Canadian swimmer Marilyn Bell; the feature-length drama Gerda (1992), on the life and times of Gerda Munsinger; and further work that won Best Arts Program at the Yorkton Film Festival, Bronze at the Columbus Film Festival, and a Golden Rose at the Montreux Television Festival.
Dr. Longfellow has published numerous articles on feminist film theory and Canadian cinema in CineTracts, Screen, CineAction and the Journal of Canadian Film Studies. She is a co-editor of the recent anthology Gendering the Nation: Canadian Women Filmmakers.
Session Title: Offshore: An Interactive Web Documentary in Process
OFFSHORE is an interactive web documentary currently in process that uses non-linear storytelling to immerse viewers in the real world consequences of offshore drilling: ‘Extreme Oil’ some call it or “Cowboy Drilling”– hundreds of miles offshore, thousands of feet beneath the ocean floor, in dangerous and risky conditions where the hazards are immense but the profits are bigger, and where the consequences of something going wrong are catastrophic. Patchily regulated, frequently invisible, offshore represents the last frontier of hydrocarbon extraction in the 21st century as ‘wildcatters’ and oil majors, Russian oligarchs and Scottish ‘minnows’ get in the game, elbowing for leases and drilling off the coasts of Barbados, China, Russia, Greenland, Vietnam, Cuba, Brazil, Angola, Ghana and newly minted petro states like the tiny islands of Sao Tome and Principe in the Gulf of Guinea.
Using an innovative mixture of fiction and documentary, OFFSHORE takes the viewer on a journey to three key sites: Louisiana, the Gulf of Guinea and the Arctic as we meet characters and commentators implicated in the shadow world of offshore. Our central ‘avatar’ is a 25 year old computer programmer named Elsbeth. Elsbeth is an MIT graduate and secret hacker. We journey with Elsbeth as she moves on and off the rig, flying the ‘hitch’ every 21 days. Moving between on and offshore locations, virtual and real spaces, our stalwart heroine encounters documentary characters and fictional composites, enters a secret archive on board which holds clues and evidence on corruption in the oil industry and begins to connect with an international network of informants and correspondents on the web who feed her information on the secret political machinations, regulatory malfeasance and ecological fallout of offshore oil exploitation.
As she delves further into this web of intrigue, she faces a stark and dramatic choice: to continue her support of a system that provides her with handsome renumeration or shift her loyalties to a growing underground movement of hackers and activists, nouveau pirates and dreamers who are taking over abandoned oil rigs and reconstituting them as utopic intentional communities.
This paper will show our prototypes and discuss some of the challenges we are facing as we attempt to integrate real world actions and virtual spaces as we design a web journey that implicates viewers in deep ethical and political issues.
Authentication¶
Not all APIs require authentication. Each API will note if it needs authentication.
Two options for authentication are available: shared-secret and OAuth.
OAuth¶
Marketplace provides OAuth 1.0a, allowing third-party apps to interact with its API. It provides it in two flavours: 2-legged OAuth, designed for command-line tools, and 3-legged OAuth, designed for web sites.
See the OAuth Guide and this authentication flow diagram for an overview of OAuth concepts.
Web sites¶
Web sites that want to use the Marketplace API on behalf of a user should use the 3-legged flow to get an access token per user.
When creating your API token, you should provide two extra fields used by the Marketplace when prompting users for authorization, allowing your application to make API requests on their behalf.
- Application Name should contain the name of your app, for Marketplace to show users when asking them for authorization.
- Redirect URI should contain the URI to redirect the user to, after the user grants access to your app (step D in the diagram linked above).
The OAuth URLs on the Marketplace are:
- The Temporary Credential Request URL path is /oauth/register/.
- The Resource Owner Authorization URL path is /oauth/authorize/.
- The Token Request URL path is /oauth/token/.
Command-line tools¶
If you would like to use the Marketplace API from a command-line tool, you don't need to set up the full 3-legged flow. In this case you just need to sign the request. Some discussion of this can be found here.
Once you’ve created an API key and secret you can use the key and secret in your command-line tools.
Production server¶
The production server is at.
- Log in using Persona:
- At the API key management page, provide the name of the app that will use the key, and the URI that Marketplace's OAuth provider will redirect to after the user grants permission to your app. You may then generate a key pair for use in your application.
- (Optional) If you are planning on submitting an app, you must accept the terms of service:
Development server¶
The development server is at.
We make no guarantees on the uptime of the development server. Data is regularly purged, causing the deletion of apps and tokens.
Using OAuth Tokens¶
Once you’ve got your token, you will need to ensure that the OAuth token is sent correctly in each request.
To correctly sign an OAuth request, you’ll need the OAuth consumer key and secret and then sign the request using your favourite OAuth library. An example of this can be found in the example marketplace client.
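For instance, a 2-legged request signed with the requests-oauthlib package might look like this (this is not the library used by the official examples; the endpoint URL and key values are placeholders):

    import requests
    from requests_oauthlib import OAuth1

    auth = OAuth1(
        client_key="your-consumer-key",        # placeholder
        client_secret="your-consumer-secret",  # placeholder
    )
    response = requests.get(
        "https://marketplace.firefox.com/api/v2/apps/features/",
        auth=auth,
        headers={"Accept": "application/json"},
    )
    print(response.status_code)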
Example headers (new lines added for clarity):
    Content-type: application/json
    Authorization: OAuth realm="",
                   oauth_body_hash="2jm...",
                   oauth_nonce="06731830",
                   oauth_timestamp="1344897064",
                   oauth_consumer_key="some-consumer-key",
                   oauth_signature_method="HMAC-SHA1",
                   oauth_version="1.0",
                   oauth_signature="Nb8..."
If requests are failing and returning a 401 response, then there will likely be a reason contained in the response. For example:
{"reason": "Terms of service not accepted."}
Example clients¶
- The Marketplace.Python library uses 2-legged OAuth to authenticate requests.
- Curling is a command library to do requests using Python.
Exam Centers
Accredited degree, diploma, and certificate programs and the IAP require that you attempt the final exam of the program in an approved exam center. This article covers commonly asked questions related to registering an exam center in your city.
3. How can I register for an existing center or how can I send a request for setting up a new center?
4. I am travelling out of the country on the date of the final exams, what should I do?
5. Is it possible for me to register at two centers as I am travelling in the final exam period?
6. Who or what is the proctor?
A proctor is a person who invigilates students during the exams and is appointed by the exam center itself.
7. Where is the exam center registration portal?
The ‘Exam Center Portal’ is located at the IOU campus ‘My Home’ page: IOU Campus > My Home > scroll > Student Applications > Exam Center Portal. Or once logged in, it may be accessed through this direct link.
8. When are the exams conducted?
For all important deadlines, including the final examination dates, please see the semester events' schedule of the current semester here, which can be downloaded through the 'Academic Block' of the 'My Home' page.
9. Does IOU send the passwords to the center and when?
Yes, IOU sends the passwords to the exam center a few days before the commencement of the final exams.
10. My proctor has not received the password. What can I do?
Kindly ask the proctor for further assistance before the late exam period.
12. I have not registered in any exam center for my final exam and the deadline for registration has ended. What can I do?
A pin object is used to control I/O pins (also known as GPIO, general-purpose input/output).
Constructors¶
- class machine.Pin(id, ...)¶
Create a new Pin object associated with the id. If additional arguments are given, they are used to initialize the pin. See pin.init().
Methods¶
Pin.__call__([value])¶
Pin objects are callable. The call method provides a (fast) shortcut to set and get the value of the pin. See pin.value for more details.
Attributes¶
Constants¶
The following constants are used to configure the pin objects. Note that not all constants are available on all ports.
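A short usage sketch (pin numbers, modes and pull resistors depend on the board and port):

    from machine import Pin

    led = Pin(2, Pin.OUT)          # construct and initialise pin 2 as an output
    led.value(1)                   # drive the pin high
    led(0)                         # the call shortcut: set the pin low

    button = Pin(3, Pin.IN, Pin.PULL_UP)
    print(button())                # read the current input level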
How to create an app in PubCoder:
The following guide will show you how to prepare a layout and export it as a native App for Android and iOS devices.
– Create a default workspace (or add a new one, if one already exists), based on an iOS or Android format.
– Multi-format App: if you want to create a design for a specific device size, add a new workspace in the desired format (iOS or Android) with the desired size. Make sure that the new workspace is based on the first workspace you created, so that all assets inherit the properties set in the first workspace.
– Multi-orientation App: you can create a design for a specific orientation. Just switch the “Orientation” selection button to the desired one and lay out your assets as desired.
– Multi-localized App: you can create as many localized versions as you prefer. Add new workspaces based on a new localization, switch to the new workspace and customize your assets. PubCoder will automatically populate a Language Menu in the App with all the localizations found; via this menu the End User can view the App in any desired language included in your App.
In General Settings you can set the User Interface options for your App. Please note that all settings are valid for the specific workspace you are working on. Main options are:
– Page Progression Direction. Here you can set the reading direction of your App; the options are left to right and right to left.
– Pixels density: define high resolution rate for your App (see Pixels density section)
– Facing Pages in Landscape: in a Portrait workspace you can activate rotation in Landscape and view on screen two pages in portrait mode.
– Pages Thumbnail Mask: the Table of Contents Menu is a Scrollview at the bottom of screen with a print screen of the page as thumbnail of the page. You can customize its aspect by inserting here your preferred image as mask.
– Swipe to Navigate: enable swipe to browse between pages. If this is not activated, we recommend using custom navigation assets on the overlay (see Custom User Interface below).
– Touch to Open App Menu: enable toggling the default toolbars open and closed by touching any part of the screen (excluding interactive assets). If this is not activated, we recommend including a custom button on the overlay (see Custom User Interface below).
– Custom User Interface. If you don’t want to use the swipe to navigate and touch to open App menu features, you can create your own custom assets to browse between pages and access specific functionalities (open index and localizations menu). We recommend using the Overlay layout in the Table of Contents. Here you can position your custom assets for navigation options: go to next page, go to previous page, open App menu, open localizations.
Send Docs Feedback
4.14.04.02 - Apigee Edge on-premises release notes?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/release-notes/content/4140402-apigee-edge-premises-release-notes | 2017-02-19T16:36:01 | CC-MAIN-2017-09 | 1487501170186.50 | [] | docs.apigee.com |
Welcome to phconvert’s documentation!¶
phconvert is a Python 2 & 3 library that helps write valid Photon-HDF5 files. This document contains the API documentation for phconvert.
The phconvert library contains two main modules: hdf5 and loader. The former contains functions to save and validate Photon-HDF5 files. The latter contains functions to load data from other formats so that it can be converted to Photon-HDF5.
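As a rough sketch, saving a file with the hdf5 module might look like the following (illustrative only: the dictionary below shows just a few fields, while a real file must contain all the mandatory metadata groups required by the Photon-HDF5 specification, and the keyword arguments are assumptions about the module's usual entry point):

    import numpy as np
    import phconvert as phc

    photon_data = dict(
        timestamps=np.array([100, 250, 700], dtype='int64'),
        detectors=np.array([0, 1, 0], dtype='uint8'),
        timestamps_specs=dict(timestamps_unit=10e-9),
    )
    data = dict(
        description='Example acquisition',  # plus the other mandatory groups
        photon_data=photon_data,
    )
    phc.hdf5.save_photon_hdf5(data, h5_fname='example.h5')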
The phconvert repository contains a set of notebooks to convert existing formats to Photon-HDF5 or to write Photon-HDF5 files from scratch:
In particular, see the notebook Writing Photon-HDF5 files (read online) as an example of writing Photon-HDF5 files from scratch.
Finally, the phconvert repository contains a JSON specification of the Photon-HDF5 format which lists all the valid field names and corresponding data types and descriptions.
Contents:
Query Java Objects
MQL allows you to easily filter, transform and join objects from within your Java code. This example will show you how to filter a List of User objects.
Let’s say your user database has a List of users like this:
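    // Illustrative data; the second user and the division names are assumptions for this example.
    import java.util.ArrayList;
    import java.util.List;

    List<User> users = new ArrayList<User>();
    users.add(new User("Dan Diephouse", "Engineering"));
    users.add(new User("Joe Schmoe", "Sales"));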
where your User object has fields along with getters and setters for the fields:
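    // A sketch of the domain object; the field names are assumptions.
    public class User {
        private String name;
        private String division;

        public User(String name, String division) {
            this.name = name;
            this.division = division;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getDivision() { return division; }
        public void setDivision(String division) { this.division = division; }
    }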
And you would like to filter these users to only include engineers.
First, populate the context:
Then, execute your query to select them:
You’ll get a single object back in the collection with the user "Dan Diephouse."
User Guide
Link a friend with a contact in the contacts application
When you link a friend with a contact in the contacts application, you might be able to perform additional tasks. For example, during a chat, you might be able to call or send an email message quickly to the friend from the menu.
Jet from Sun will take you through how to create EJB3 classes.
Here's an extremely simple example of a stateless session bean showing some of the JEE-style:
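For instance, a minimal stateless session bean might look like this (the interface and class names are illustrative; the two types would normally live in separate source files):

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    @Remote
    public interface Greeter {
        String greet(String name);
    }

    @Stateless
    public class GreeterBean implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }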
A Maven project that builds a deployable webapp for the example we've been looking at is attached to this page.
FEST's Swing module supports functional testing of Swing user interfaces. Its main features are:
- Simulation of user interaction with a GUI (e.g. mouse and keyboard input, drag 'n drop)
- Reliable GUI component lookup (by type, by name or custom search criteria)
- Support for all Swing components included in the JDK
- Compact and powerful API for creation and maintenance of functional GUI tests
- Support for regular expression matching
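A small sketch of what a test written with the module might look like (the window class and component names are placeholders):

    import org.fest.swing.fixture.FrameFixture;

    // MainWindow is the application frame under test (a placeholder here).
    FrameFixture window = new FrameFixture(new MainWindow());
    window.show();
    window.textBox("username").enterText("alex");
    window.button("login").click();
    window.label("status").requireText("Welcome, alex");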
13 drracket:language-configuration
Adds language to the languages offered by DrRacket.
The settings is a language-specific record that holds a value describing a parameterization of the language.
The show-welcome? argument determines if a “Welcome to DrRacket” message is shown in the dialog.
The first two results of the function return a language object and a settings for that language, as chosen by the user using the dialog. The final function should be called when keystrokes are typed in the enclosing frame. It is used to implement the shortcuts that choose the two radio buttons in the language dialog.
User Guide
Welcome to BlackBerry!
This is one of the many resources available to help you use your BlackBerry® device. You can look for answers in the Help application on the Home screen of your device, or by pressing the Menu key and clicking Help in most applications.