content | url | timestamp | dump | segment | image_urls | netloc
---|---|---|---|---|---|---
New Relic has a number of APIs. This document introduces the REST API, which lets you retrieve data from New Relic products via GET requests and also provides some configuration and delete capabilities.
You can also use the API Explorer to understand the data available to you via the REST API, to obtain cURL commands, and to see JSON responses.
Access to metric timeslice data depends on your subscription level. Summary data is the only data available for free New Relic accounts.
Video tutorial
For an overview of New Relic's REST API, watch this video.
[video link] For more information, check out New Relic University’s tutorial New Relic APIs.
Setup
Owner or Admins
To use the REST API you must activate API access and generate your API keys from your account settings. You can then acquire data via the command line. The command structure follows this template:
curl -X GET <URL> -H "X-api-key:${API_KEY}" -d '<PAYLOAD>'
The GET command could also be a POST or DELETE, depending on the query intent.
The examples in New Relic documentation use cURL as a common command line tool to pull metric timeslice data from the REST API. However, you can use any method to make your REST requests. The curl commands include target URLs, header information, and data, which are relevant for any request mechanism.
URL
The API calls require a URL to specify the location from which the data will be accessed. You must replace the placeholder <URL> with the appropriate URL, which changes depending on the type of data being requested. In general, the URL follows this template:
https://api.newrelic.com/v2/applications/${APPID}/metrics/data.json
The ${APPID} specifies the exact application or product for which the data is being requested. The information following this parameter will vary depending on the data request.
If you have an EU region account, the URL is:
api.eu.newrelic.com/v2/applications/${APPID}/metrics/data.json
You can retrieve XML data instead of JSON by replacing .json with .xml.
API key (${API_KEY})
New Relic API calls require an API key in the call header. The API key uniquely identifies your account and authorizes access to your account data. New Relic borrows the placeholder ${API_KEY} from Unix shell programming; be sure to replace ${API_KEY} with an API key from your New Relic account.
Query details (PAYLOAD)
The <PAYLOAD> contains the query details, which define:
- The metric name you want to query and the value you want to retrieve
- The defined time range for retrieving metrics
- (Optional) The average of the metric timeslice data over the time range, requested by using summarize
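For example, a complete request that pulls the average value of one metric over a time range might look like the following. (The metric name HttpDispatcher and the value average_response_time are shown for illustration; use the API Explorer to confirm the metric and value names that actually exist for your application.)

curl -X GET "https://api.newrelic.com/v2/applications/${APPID}/metrics/data.json" \
     -H "X-api-key:${API_KEY}" \
     -d 'names[]=HttpDispatcher&values[]=average_response_time&from=2019-06-01T00:00:00+00:00&to=2019-06-02T00:00:00+00:00&summarize=true'

The response is a JSON document containing the requested metric timeslice data for the specified time range.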
Examples
See the following documents for example API use cases:
- APM examples (how to retrieve metric timeslice data from New Relic APM)
- Browser examples (how to retrieve metric timeslice data from New Relic Browser)
- Labels examples (how to retrieve information about your labels and categories for apps)
- Plugin examples (how to retrieve information and metric timeslice data about plugins from New Relic Plugin Central)
For alerting, see the appropriate documents for your alerting system: | https://docs.newrelic.com/docs/apis/rest-api-v2/getting-started/introduction-new-relic-rest-api-v2 | 2019-06-16T03:38:08 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
New features
Added product attribute to existing datastore instrumentations.
Added db.collection to datastore span event attributes.
Improvements
trusted_account_key, account_id, and primary_application_id may now be configured via a configuration file while in serverless mode.
Optimized exclusive time duration calculator.
Previously, the agent would spend a lot of time sorting redundant arrays while calculating the exclusive time for the segments of a trace. This has been refactored into a single postorder traversal over the tree which will calculate the exclusive time for all segments in the subtree rooted at a given segment.
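Conceptually, the new approach works like the following sketch (illustrative JavaScript, not the agent's actual code; it assumes a segment's child segments do not overlap one another):

// Postorder walk: process children first, then derive the parent's exclusive time
// as its own duration minus the time covered by its direct children.
function computeExclusiveTime(segment) {
  let childDuration = 0;
  for (const child of segment.children) {
    computeExclusiveTime(child);
    childDuration += child.duration;
  }
  segment.exclusiveTime = Math.max(segment.duration - childDuration, 0);
}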
Fixes
Fixed a bug where data belonging to distributed traces starting in the Node.js agent would be prioritized over data produced from traces starting in other language agents. Previously, the agent would use the same random number for both the transaction priority (used for data sampling) and the distributed tracing trace sampling decision (whether to create DT data for a given transaction). This random number reuse resulted in a bias that caused data from distributed traces started in the Node.js agent to be prioritized above data that belongs to distributed traces started in other language agents. The agent now makes individual rolls for each of these quantities (i.e. the transaction priority and trace sampling decision), eliminating the bias.
Prevent a split on undefined location under certain conditions in Memcached. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/nodejs-release-notes/node-agent-560 | 2019-06-16T02:48:59 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
New Relic's global data hosting structure consists of two regions: the European Union (EU) region and the United States (US) region. Selecting your preferred region during the account setup process allows you to specify in which region your performance monitoring data will be hosted.
Requirements
Access to the New Relic EU region requires the latest agent version.
- For new customers: Install the most recent agent version.
- For existing customers: Update to the most recent agent version.
Minimum agent version required:
New Relic offers almost all of the same active products, features, support offerings, and performance levels in the EU region as in the US region.
The following are not supported with an EU region account:
- Synthetics currently is not available in the EU region.
- Plugins is unavailable and not supported in the EU region.
- Deprecated products and features are not available in the EU region.
EU region account hierarchy
If your data is currently being hosted in the US region, you must create a new account to store data in the EU region. You cannot view EU data from a US account, or US data from an EU account, and the data collected remains separate. The data can't be aggregated or migrated between accounts.
For standard accounts, you can only have one master account. For more information, see Manage apps or users with sub-accounts.
For partnership accounts, no changes to the partnership owner account are required. However, because data cannot be shared across regions, a partnership requires a master account for each region.
- Hierarchy example for partnership accounts
With partnership accounts, a new master account must be created for any data to be hosted in the EU region. This hierarchy illustrates how global accounts are structured with partnership owner accounts. Data is not aggregated beyond the master account.
Example hierarchy for partnership organizations. Because data cannot be shared across regions, a partnership will require a master account for each region.
Create an EU region account
To create a New Relic account in the EU region:
Go to New Relic's EU region:
Accessing New Relic One
New Relic One is our unified UI that gathers everything you monitor with New Relic in one place. If your accounts report data to the EU data center, use the following link to access: one.eu.newrelic.com
Billing and pricing
New Relic's account billing process and pricing options are the same for both the EU and US regions.
Operational access and processing
Application performance data is hosted in the region selected during account creation. All other information, including account information (such as license subscription information, billing, and internal monitoring) is hosted in the US region and replicated in the EU region.
New Relic may access and process application performance data in the United States and such other jurisdictions where New Relic has affiliates and subsidiaries, including as may be necessary to maintain, secure, or perform the services, to provide technical support, or as necessary to comply with law or a binding order of a government body.
To determine whether an account reports to the EU region:
- In New Relic APM, mouse over the application name to view the URI. If it begins with rpm.eu.newrelic.com/, it is an EU-based account.
- Check your New Relic license key. If it begins with EU, it is an EU-based account. | https://docs.newrelic.com/docs/using-new-relic/welcome-new-relic/getting-started/introduction-eu-region-data-center | 2019-06-16T03:53:00 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
SafeSquid SWG
Rapid changes in web technologies, present a variety of security challenges for enterprises.
The spectrum encompasses challenges in malware defence, facility abuse, loss of confidential data, privacy breaches, etc.
Enterprises cannot depend on security awareness of the general work force to deal with vulnerabilities hidden in modern web applications.
Ransomware, phishing attacks, and persistent targeted behavior-altering attacks are common examples of threats that exploit such vulnerabilities.
Detailed, vendor-agnostic, multi-dimensional Web Security GRC policies not only facilitate business but also improve the security awareness of the general work force.
Adopting security technologies that offer maximum customization generally ensures the best policy fit. | https://docs.safesquid.com/wiki/SafeSquid_SWG | 2019-06-16T02:46:51 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.safesquid.com |
Improvement: QRIN-25 - Update QR Invoice Core Library to meet Style Guide QR-bill (16.01.2019)
Please note:
Support for Swiss Payments Code version 1.0 has been completely dropped, as this version will never be used in production.
Although this version comes with some API incompatibilities compared to the 0.x versions, upgrading to this version is an easy task.
New Feature: QRIN-10 - Create a way to pre-initialize the library (see How to Eager Initialize the QR Invoice Library )
Improvement: QRIN-7 - Update OpenPDF to 1.1.0 (optional BouncyCastle dependency)
Improvement: QRIN-8 - Update Maven Plugin Versions
Bugfix: QRIN-9 - TTF Scanning on Windows does not look into default font directory
Starting with this version, all artifacts are published to Central Repository. Furthermore all QR Invoice Solutions are dual licensed under the AGPLv3 as well as our commercial licenses.
New Feature: Standalone application "qrinvoice-rest-standalone" implemented, which exposes a REST API with Swagger documentation
New Feature: QR Code - rendering support for PDF, JPEG and BMP added
New Feature: Payment part - rendering support for JPEG and BMP added
Bugfix: Wrong line widths when printing free text fields without printing the boundary line
New Feature: Payment part - rendering support for GIF, TIFF and PNG added (feature requested by client)
Improvement: Performance optimizations, mainly affecting SwissQrCode creation
Bugfix: Adjusted free text field corner sizes and line thickness in order to match exact dimensions from the specification / example file
New Feature: QrInvoiceCodeParser is capable of scanning QR Codes from images
New Feature: QR Code can be created as GIF, TIFF and PNG
Improvement: Validation considers max length of the Swiss Payments Code to be encoded in QR Code / Validates max supported QR version 25
Improvement: Warns if inputs are not trimmed
Improvement: Added detailed page on Specification Requirements Fulfillment
Improvement: Better layouting of the payment part depending on the payload
Improvement: Increased test coverage
Bugfix: Invalid character written to the SPC
Bugfix: Line ending (element separator) on last line of the SPC led to more than 30 lines
Bugfix: Missing reference number in the information section (payment part)
Bugfix: Exception when attempting to create huge rasterized QR Code. Now a maximum of 10'000 pixels is validated. | https://docs.qr-invoice.ch/latest/release_notes/index.html | 2019-06-16T02:51:45 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.qr-invoice.ch |
Related topics: Edit a knowledge article · Select a knowledge article category · Move a knowledge article · Import a Word document to a knowledge base · Create a knowledge article from a customer service case · Retire a knowledge article | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/product/knowledge-management/task/t_ApproveKnowledgeSubmission.html | 2019-06-16T03:15:29 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.servicenow.com |
Innovation / Measurement Error¶
Innovation Overview¶
The innovation (measurement error) is formed from the sensor measurements and the predicted states. As the measurements and the system states are often not the same, one or the other needs to be transformed so the two can be compared. In the case of this algorithm, the state consists of an attitude quaternion, NED-velocity, and NED-position. The measurements come from accelerometer readings, GPS latitude/longitude/altitude measurements, and horizontal/vertical velocities along with ground-track. In this case either the states need to be converted to match the measurements or vice-versa.
Once the measurement vectors are formed, the innovation (measurement error), \(\vec{\nu}_{k}\), is computed:
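In standard EKF notation, with \(\vec{z}_{k}\) denoting the measurement vector and \(h(\vec{x}_{k}^{-})\) the measurement predicted from the a-priori state estimate, the innovation takes the usual form:

\[ \vec{\nu}_{k} = \vec{z}_{k} - h(\vec{x}_{k}^{-}) \]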
This result is used in the update stage of the EKF to generate the state error, \({\Delta\vec{x}}_{k}\), given the Kalman gain matrix.
The available sensor information is used to form the measurement vector.
Measurement Details To Be Provided | https://openimu.readthedocs.io/en/latest/algorithms/Innovation.html | 2019-06-16T04:00:03 | CC-MAIN-2019-26 | 1560627997533.62 | [] | openimu.readthedocs.io |
django-GDPR-assist¶
Tools to help manage user data in the age of GDPR:
- Find, export and anonymise personal data to comply with GDPR requests
- Track anonymisation and deletion of personal data to replay after restoring backups
- Anonymise all models to sanitise working copies of a production database
Contents:
- Installation
- Usage
- The PrivacyMeta object
- Anonymising objects
- Commands
- The admin site
- Upgrading
- Contributing | https://django-gdpr-assist.readthedocs.io/en/stable/ | 2019-06-16T03:03:47 | CC-MAIN-2019-26 | 1560627997533.62 | [] | django-gdpr-assist.readthedocs.io |
Regression Analysis
Build Linear Regression Model
Input Data
Input data should contain at least one numeric column for "What to Predict" and more than one categorical and/or numeric columns as Variable Columns.
What to Predict - Numeric column that you want to Predict.
Variable Columns - Numeric and/or Categorical columns whose importance you want to evaluate for predicting your "What to Predict" column.
Analytics Properties
- Sample Data Size - Number of rows to sample before building linear regression model.
- Seed - Seed used to generate random numbers. Specify this value to always reproduce the same result.
- P Value Threshold to be Significant - P value must be smaller than this value for coefficients to be considered statistically significant.
- Sort Variables by Coefficients - If set to TRUE, variables displayed in Coefficients View are sorted by coefficients.
How to Use This Feature
- Click Analytics View tab.
- If necessary, click "+" button on the left of existing Analytics tabs, to create a new Analytics.
- Select "Regression Analysis".
"Coefficients" View
"Coefficients" View displays coefficient estimates for all the predictor variables, with error bars, and uses the P value as the color (i.e., if P Value < 0.05, the color is blue).
"Coefficients Table" View
"Coefficients Table" View displays more details for all the variables along with other metrics like Coefficient, Standard Error, t-Ratio, P-Value, etc. You can click on the column headers to sort the data with the help of a bar visualization.
"Model Summary" View
"Model Summary" View displays the summary of the model created for this Regression Analysis. Each column shows model information such as R Squared and Root Mean Square Error, from which you can understand model performance.
Root Mean Square Error - The Root Mean Square Error (RMSE) (also called the root mean square deviation, RMSD) is a frequently used measure of the difference between the values predicted by a model and the values actually observed from the environment that is being modeled.
F Ratio - The F Ratio gives you a measure of how much of the variation is explained by the model (per parameter) versus how much of the variation is unexplained (per remaining degree of freedom). This unexplained variation is your error sum of squares. A higher ratio means that your model explains more of the variation per parameter than there is error per remaining degree of freedom.
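As a rough point of reference, the same quantities can be computed outside the UI with plain base R (this is not the Exploratory package API; the data frame df and its column names are invented for illustration):

# Fit a linear model and inspect the quantities reported in the views on this page.
model <- lm(price ~ size + rooms + location, data = df)

summary(model)        # coefficients, standard errors, t-ratios, P values, R Squared, F ratio
coef(summary(model))  # the numbers behind the "Coefficients Table" view

predicted <- predict(model, df)
residuals <- df$price - predicted      # the residuals shown in the "Residual" view
rmse <- sqrt(mean(residuals^2))        # Root Mean Square Error
plot(predicted, df$price)              # predicted vs. actual, as in the "Prediction" view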
"Prediction" View
"Prediction" View compares the predicted values and the actual values to see how good or bad the model’s prediction capability is. If it is the perfect model, meaning it can predict with 100% accuracy, then all the dots should be lining up along with the gray line called ‘Perfect Fit’.
"Residual" View
"Residual" View shows the residual which is difference between the predicted and the actual values. And visualizing the Residuals can reveal a lot of useful information that can guide you to decide what types of data transformations are needed to improve the model. (e.g. Interpreting residual plots to improve your regression)
R Package
The Regression Analysis uses the build_lm.fast function from the Exploratory R Package under the hood. | https://docs.exploratory.io/analytics/regression.html | 2019-06-16T02:30:58 | CC-MAIN-2019-26 | 1560627997533.62 | [images/var_importance_column_select.png, images/regression_coeff.png, images/regression_coeff_table.png, images/regression_model_summary.png, images/regression_prediction.png, images/regression_residual.png] | docs.exploratory.io |
[Modified: 8.5.113.11]
Workspace supports keyboard shortcuts and hotkey combinations for certain common functions. The Workspace keyboard shortcuts and hotkeys are configured by your administrator. This is to ensure that there is no conflict between Workspace and other applications that you might use. Please ask your administrator for a list of the shortcuts and hotkeys that are configured for Workspace.
Contents
Shortcut and Hotkey Combinations
A shortcut is a combination of keys that you press to activate a certain function or behavior in a specific window or view. Your operating system might support shortcut keys for the following functions: copy, cut, paste, undo, delete, find, maximize window, minimize window, open menu and select command, switch application, cancel, change focus, and so on. Consult your operating-system documentation for a list of supported keyboard shortcuts.
Hotkeys are also combinations of keys that you press to perform certain functions; however, hotkeys are available to you no matter what window or application is active. For example, your administrator might have configured a hotkey combination for you that enables you to answer a phone call (voice interaction) or reject an email interaction that has been routed to you. When the preview is displayed on your desktop, you can use the hotkey combination to perform the action without first having to switch to the interaction preview.
Sometimes there might be a conflict between the keyboard shortcut that your administrator has configured for Workspace functionality and the keyboard shortcuts that control the Rich Text Editor that you might be using for email and other text-based interactions. You might have to navigate away from the text editor field before you can use the shortcut. If you experience a shortcut conflict, please notify your administrator to change the custom shortcut.
Access Keys
In addition, access keys are available for most Workspace menu items. Each supported menu item has an underlined letter or character. Press the Alt key to open a menu in the active window, and then press the letter or character that corresponds to the menu item that you want to select.
Workspace supports keyboard navigation for all features in the interaction windows. All features, functions, options, and menus are 100 percent navigable by keyboard.
The Workspace interface is 100 percent navigable by keyboard. This functionality enables users who cannot use a mouse, or who are using a device for accessibility that relies on keyboard navigation, to manipulate the desktop components. Keyboard navigation enhances the productivity of any user.
The appearance of the component that you select changes as you move the focus from one component to another. For example, buttons change color, and menus open with the current selection highlighted by color.
Some screen-reader applications are not compatible with these navigation shortcuts, because the screen reader uses some of these keys for other purposes. When screen reader mode is on, use Alt + N to disable the keyboard navigation function.
Note: If you are already in screen-reader mode, all keyboard shortcuts are disabled, except for the Alt + n commands. Your system administrator turns screen-reader mode on and off.
Two keyboard shortcuts enable you to navigate among components—for example, from one menu to the next or from one view of the interaction interface to the next:
- Tab—Moves the focus to the next component (menu, field, button, view, and so on)
- Shift + Tab—Moves the focus to the previous component (menu, field, button, view, and so on)
- Beginning with version 8.5.113.11, Workspace enables you to enter TABs in the email composition area of outgoing email interactions by pressing the TAB key. Now, to use the TAB key to step to the next control or field, you must first press Ctrl-TAB to step out of the text composition area. This feature might be disabled in environments configured for accessibility; if so, you will not be able to enter TABs in the email composition area, but you can use the TAB key to move to the next control in the tab order.
Movement occurs from left to right and from top to bottom, unless the ordering of components dictates otherwise. Navigation moves from component to component within a view, and from view to view within the application.
The following table contains keyboard shortcuts that enable you to manipulate controls, such as menus, lists, and buttons, in the Workspace interface.
Workspace High Contrast Theme
[Added: 8.5.100.05]
Workspace enables visually impaired agents to use a high contrast theme that complements the Windows high contrast themes that are available from the Windows Personalization control panel. The Workspace high contrast theme follows Web Content Accessibility Guidelines (WCAG) 2.0 requirements with some limitations. The Workspace high contrast theme was tested against red/green and blue/yellow color deficit vision. The Workspace high contrast theme functions whether or not one of the Windows high contrast themes is in use.
You can access the high contrast theme from the Workspace Main Menu by selecting Main Menu>Change Theme>High Contrast. You must exit and relaunch Workspace to make the high contrast theme active.
Related Resources
The Workspace Desktop Edition User's Guide (English only) provides detailed lessons for using all the features of Workspace. You might find the following lessons useful:
| https://docs.genesys.com/Documentation/IW/8.5.1/Help/Keyboard_Navigation_and_Accessibility | 2019-06-16T03:29:17 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.genesys.com |
Content with label categories+wcm in GateIn WCM (See content from all spaces)
Related Labels:
future, post, downloads, template, content, examples, page, gatein, tags, editor, demo, composition, configuration, s, installation, started, api, getting, uploads,
tasks, templates, design
more »
( - categories, - wcm )
| https://docs.jboss.org/author/label/GTNWCM/categories+wcm | 2019-06-16T03:18:32 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.jboss.org |
All content with label async+client+distribution+gridfs+infinispan+query+snapshot+state_transfer.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, transactionmanager, release, partitioning, deadlock, archetype, lock_striping, nexus, guide, schema, listener, cache,
amazon, s3, memcached, grid, test, api, xsd, ehcache, maven, documentation, 缓存, interactive, xaresource, build, hinting, searchable, demo, hot_rod
more »
( - async, - client, - distribution, - gridfs, - infinispan, - query, - snapshot, - state_transfer )
| https://docs.jboss.org/author/label/async+client+distribution+gridfs+infinispan+query+snapshot+state_transfer | 2019-06-16T03:20:17 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.jboss.org |
public static class Insert.Options

public Insert.Options and(Using using)
using - an INSERT option.
Returns: this Options object.

public Insert value(String name, Object value)
name - the name of the column to insert/update.
value - the value to insert/update for name.

public Insert values(String[] names, Object[] values)
names - a list of column names to insert/update.
values - a list of values to insert/update. The ith value in values will be inserted for the ith column in names.
Throws: IllegalArgumentException - if names.length != values.length.
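These methods are normally reached through the QueryBuilder DSL. A sketch of typical usage follows (the keyspace, table, and column names are invented, and session is assumed to be an open Session):

import static com.datastax.driver.core.querybuilder.QueryBuilder.*;
import com.datastax.driver.core.querybuilder.Insert;

// Builds: INSERT INTO ks.users (id, name) VALUES (42, 'alice') USING TTL 3600 AND TIMESTAMP 1234567;
Insert insert = insertInto("ks", "users")
        .value("id", 42)
        .values(new String[] { "name" }, new Object[] { "alice" });
Insert.Options withOptions = insert.using(ttl(3600))          // returns Insert.Options
                                   .and(timestamp(1234567L)); // chain a second USING option
session.execute(withOptions); // an Options instance is itself an executable statement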
... by the user. If the native protocol version 1 is in use, the driver will default to not generating values, since those are not supported by that version of the protocol. In practice, the driver will automatically call this method with true. | https://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/querybuilder/Insert.Options.html | 2019-06-16T03:02:13 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.datastax.com |
Writing PowerShell Commandlets for IIS 7.0
by Sergei Antonov
Introduction
With PowerShell shipped, IIS administrators get a new tool to use. The following article concentrates on administration tasks for IIS 7.0 and above; however, PowerShell can be used for existing IIS 6.0 servers.
This article focuses on the administration of the remote IIS server using PowerShell on the client machine. As of this writing, this is possible only if you use the WMI provider that the IIS team shipped with Vista and Windows Server® 2008. In this case, you do not need to have anything related to IIS on your client computer – WMI provides the connection to the actual configuration available on the remote server.
Note
You may also use Microsoft.Web.Administration within PowerShell to perform administration functions. However, this article does not focus on that technique.
The PowerShell team created a special command to use for access to WMI objects – get-wmiobject. Normally it returns an object, which is created inside of PowerShell that exposes WMI properties and methods, instead of the usual managed code object that is returned for regular classes.
This synthetic object exposes metadata defined in WMI namespace, not metadata from System.Management.ManagementBaseObject which is used as the base. This gives the user the namespace view which was exposed by the WMI provider, and which reflects the administration model of the configured entity.
Unfortunately, this command does not work with IIS namespaces. For PowerShell version 1.0, the get-wmiobject supports only the default authentication level for the remote DCOM connection. This is not enough either for IIS 6.0 or for IIS 7.0 and above. When users configure IIS, it may be necessary to send passwords and other sensitive data over the network connection to edit and store it in the configuration. To support that, IIS providers require an authentication level "Packet Privacy". There is no way to supply this requirement to the get-wmiobject cmdlet.
With this limitation we have two options:
- Use PowerShell as a generic scripting interface to System.Management namespace. Use PowerShell also to do the following: write script code that configures the connection to the remote WMI namespace; and, to retrieve and save administration data using System.Management objects, accessed through PowerShell. It is like C# programming, only in PowerShell language. Usually this works well, but for the WMI case, PowerShell applies a special adapter that automatically converts all System.Management objects into synthetic PowerShell objects that expose WMI namespace as a primary API. For this reason, we must overcome adapter changes and write additional code to access the "native" System.Management substructure, which quickly turns this programming into an unnecessarily complicated exercise.
Therefore, we explore the other option:
- Write PowerShell cmdlets in C# and access the required functionality in C# code. In this case, we choose any suitable APIs to configured entities. We use WMI to access the server. Compared to the same code in PowerShell, C# is much more effective, as it does not need to be parsed and interpreted each time.
PowerShell Cmdlet
To start writing cmdlets, you need a client computer installed with PowerShell. You must also install PowerShell SDK, or simply copy reference DLLs to the working folder using the trick posted by Jeffrey Snover in the PowerShell team blog. Be sure that you have the DCOM connection on your server. The easiest way to confirm this is to start the utility wbemtest, which is available on each Windows platform, and try the connection.
- Start wbemtest.
- Click Connect.
- Enter connection parameters:
- Replace "root\default" with \\<computer>\root\webadministration, where "<computer>" is the name of your server.
- Enter the credentials of the account that has administrator rights on the server.
- Select "Packet Privacy" in Authentication level group.
- Click Connect. WMI on your client machine connects to the WMI service on your server machine. If it is not accessible, you get an error message dialog box.
- Perform some simple action that engages the WMI provider on the server box, to confirm that it works. Do an enumeration of sites:
- Click on "Enum Instances" button and enter "site" for class name.
- When it works, the resulting dialog shows a list of all sites available on your server.
A PowerShell cmdlet is simply a managed code assembly implemented following formal rules, which are documented in the PowerShell SDK; you can find them online.
Before writing any code, it is useful to have a plan.
First, implement a cmdlet that enumerates all IIS sites on the remote server. This cmdlet returns an array of site objects, which represent the IIS configuration element with properties, defined for the site. We will add some extra properties that are useful to have in that object.
We want the cmdlet to look like the following:
get-iissite –computer somecomputer –name somesite –credential $(get-credential)
If we do not pass the credential to the cmdlet programmatically, PowerShell produces a dialog box requesting the user name and password for the remote server.
To get the site object from the remote computer, we must provide the following parameters from our cmdlet:
public string Computer; public string Name; public PSCredential Credential;
All these parameters are public properties in the cmdlet class, decorated by the Parameter attribute.
Implement the first cmdlet. Since get-iissite is not our last command, it is better to do two things: separate code that is responsible for the connection to the server into the parent class RemotingCommand; and, inherit the cmdlet from that class.
using System;
using System.Net;
using System.Management;
using System.Management.Automation;
using System.ComponentModel;
using System.Security;

namespace Microsoft.Samples.PowerShell.IISCommands
{
    public class RemotingCommand : PSCmdlet
    {
        private string computer = Environment.MachineName;

        [Parameter(
            ValueFromPipeline = true,
            ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Computer
        {
            get { return computer; }
            set { computer = value; }
        }

        private PSCredential credential = null;

        [Parameter(
            ValueFromPipeline = true,
            ValueFromPipelineByPropertyName = true)]
        [CredentialAttribute]
        public PSCredential Credential
        {
            get { return credential; }
            set { credential = value; }
        }

        protected ManagementScope GetScope(string computerName)
        {
            // The IIS WMI provider requires Packet Privacy; the user name and
            // password come from the Credential parameter.
            ConnectionOptions connection = new ConnectionOptions();
            connection.Username = Credential.UserName;
            connection.Password = Credential.GetNetworkCredential().Password;
            connection.Authentication = AuthenticationLevel.PacketPrivacy;
            ManagementScope scope = new ManagementScope(
                @"\\" + computerName + @"\root\webadministration",
                connection);
            return scope;
        }

        protected override void EndProcessing()
        {
            if (null == credential)
            {
                // Check variable first
                object varCred = GetVariableValue("IISCredential");
                if (varCred != null && varCred.GetType() == typeof(PSObject))
                {
                    credential = ((PSObject)varCred).BaseObject as PSCredential;
                }
                if (null == credential)
                {
                    // use credential of current user or process
                    SecureString ss = new SecureString();
                    foreach (char c in CredentialCache.DefaultNetworkCredentials.Password.ToCharArray())
                    {
                        ss.AppendChar(c);
                    }
                    credential = new PSCredential(
                        CredentialCache.DefaultNetworkCredentials.UserName,
                        ss);
                }
            }
        }

        protected ManagementClass CreateClassObject(
            string computerName,
            string classPath
            )
        {
            return new ManagementClass(
                GetScope(computerName),
                new ManagementPath(classPath),
                new ObjectGetOptions()
                );
        }
    }
Class RemotingCommand includes parameters and methods required for connection.
- GetScope() returns the System.Management object that has all information required for connection to the remote namespace. Look at the connection.Authentication property. It is initialized to AuthenticationLevel.PacketPrivacy. This is a mandatory requirement. Otherwise, WMI will refuse the connection.
- Method CreateClassObject() is a utility method that uses connection scope to create a specified class based on remote WMI namespace.
- EndProcessing() method is the standard method that each cmdlet class should implement. It is called from PowerShell when our cmdlet is processed. The implementation of EndProcessing() tries to fill the credential property, if it is empty. On the first attempt, it gets the credential from the external variable IISCredential (just for convenience).
When in a PowerShell session, the user may want to place the user name and password into this variable and use it multiple times in multiple commands. If this variable is not defined, or contains an unsuitable object type, the code gets the credentials of the current user or process. It works when the user is running this command locally on the server, using the administrator account. In this case, we do not need to enter any credentials at all. For each loop in this code, there is a known trick to convert the password from string into SecureString.
Now we implement get-iissite.
    [Cmdlet(VerbsCommon.Get, "IISSite")]
    public class GetSiteCommand : RemotingCommand
    {
        private string name = null;

        [Parameter(
            Position = 0,
            ValueFromPipeline = true,
            ValueFromPipelineByPropertyName = true)]
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        protected override void EndProcessing()
        {
            base.EndProcessing();
            ManagementObjectCollection sites =
                CreateClassObject(Computer, "Site").GetInstances();
            foreach (ManagementObject site in sites)
            {
                string siteName = site.GetPropertyValue("Name") as string;
                if (Name != null)
                {
                    if (siteName.Equals(Name, StringComparison.InvariantCultureIgnoreCase))
                    {
                        WriteObject(siteName);
                        break;
                    }
                }
                else
                {
                    WriteObject(siteName);
                }
            }
        }
    } //GetSiteCommand
    //
    // [RunInstaller(true)]
    // public class IISDemoCmdSnapIn : PSSnapIn {…}
    //
} // Microsoft.Samples.PowerShell.IISCommands
In the first cut, the command returns only site names. If the user wants some specific site, they must supply the name of this site to the command; otherwise, all sites on that particular computer will be returned.
To finish the command, you must add the implementation of the class, inherited from PSSnapin. This class is used to register our commands. It has nothing specific to IIS; see the complete code in the source file IISDemoCmd.cs.
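Its shape is roughly the following (a sketch only: the class name matches the snap-in registered below, but the vendor and description strings are placeholders; the real values are in IISDemoCmd.cs):

[RunInstaller(true)]
public class IISDemoCmdSnapIn : PSSnapIn
{
    // PSSnapIn requires these three overrides; they describe the snap-in to PowerShell.
    public override string Name { get { return "IISDemoCmdSnapIn"; } }
    public override string Vendor { get { return "IIS Demo"; } }
    public override string Description { get { return "Demo cmdlets for administering IIS 7.0 sites."; } }
}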
Build the cmdlet now and see how it works. You could do it from Visual Studio, but it is simple enough to build it from the command line. Suppose you placed the PowerShell reference DLLs into the folder c:\sdk. The following command line builds the cmdlet into IISDemoCmd.dll and places it into the same folder where the source file is located.
%windir%\Microsoft.NET\Framework\v2.0.50727\csc /t:library /r:c:\sdk\system.management.automation.dll IISDemoCmd.cs
Now you must register the command and add it into PowerShell. This procedure is described in PowerShell Programming Reference. Start PowerShell and execute the following commands from the same folder where you built cmdlet DLL.
>set-alias installutil $env:windir\Microsoft.NET\Framework\v2.0.50727\installutil >installutil iisdemocmd.dll >add-pssnapin IISDemoCmdSnapIn
This adds the cmdlet to the running instance of PowerShell. Save those command lines into a script file. You will use them again as you continue working on the cmdlets. You find this script in the file demo_install.ps1.
See if the command is available:
>get-command Get-IISSite CommandType Name Definition ----------- ---- ---------- Cmdlet Get-IISSite Get-IISSite [[-Name] <String...
Now try it. Suppose you connect to computer test_server, using the local administrator account.
>Get-IISSite -computer test_server -credential $(get-credential administrator) Default Web Site Foo Bar
This command line receives the credential object from the get-credential cmdlet, which interacts with the user to get the password. It is also possible to produce the credential programmatically, but you must type the password in the script, which is not at all secure.
>$global:iiscredential = new-object System.Management.Automation.PsCredential "Administrator",$(convertto-securestring "password" -asplaintext -force)
This command stores the credential in the global variable $iiscredential, and the cmdlet will automatically use it. In real situations, however, it is better to store the credential into a variable using the get-credential command: $global:iiscredential = get-credential Administrator.
Now run that command again.
>Get-IISSite -computer test_server Default Web Site Foo Bar
All infrastructure is in place. Now return to the command and add the rest of the data from the site.
Adding Configuration Data to the Site
We have to convert the object from ManagementBaseObject to PSObject and return it to PowerShell. PSObject is a freeform container that can be filled with different kinds of data. We will use PSNoteProperty type. Keep the cmdlet code clean and add a new class that is responsible for the conversion.
class ObjectConverter { public static PSObject ToPSObject( ManagementBaseObject source ) { PSObject obj = new PSObject(); foreach (PropertyData pd in source.Properties) { if (pd.Value.GetType() == typeof(System.Management.ManagementBaseObject)) { obj.Properties.Add(new PSNoteProperty( pd.Name, ObjectConverter.ToPSObject(pd.Value as ManagementBaseObject) )); } else if (pd.Value.GetType() == typeof(ManagementBaseObject[])) { ManagementBaseObject[] ar = pd.Value as ManagementBaseObject[]; PSObject[] psar = new PSObject[ar.Length]; for (int i = 0; i < ar.Length; ++i) { psar[i] = ObjectConverter.ToPSObject(ar[i]); } obj.Properties.Add(new PSNoteProperty(pd.Name, psar)); } else { obj.Properties.Add(new PSNoteProperty(pd.Name, pd.Value)); } } return obj; } }
This code recurses for complex properties and adds simple properties as PSNoteProperty to the resulting PSObject. We also add a dedicated method that will deal with the conversion into our cmdlet. This method converts all WMI data, and adds two more properties: the computer name and the credential that was used to get the connection to this computer. This helps to distinguish each site from other objects in a PowerShell session.
private PSObject ConstructPSSite( string computerName, ManagementObject site) { PSObject pssite = ObjectConverter.ToPSObject(site); pssite.Properties.Add(new PSNoteProperty("Computer", computerName)); pssite.Properties.Add(new PSNoteProperty("Credential", Credential)); return pssite; }
Replace the site name that we returned to PowerShell with a whole object. Method EndProcessing() in the cmdlet now looks like this:
protected override void EndProcessing() { base.EndProcessing(); ManagementObjectCollection sites = CreateClassObject(Computer, "Site").GetInstances(); foreach (ManagementObject site in sites) { string siteName = site.GetPropertyValue("Name") as string; if (Name != null) { if (siteName.Equals(Name, StringComparison.InvariantCultureIgnoreCase)) { WriteObject(ConstructPSSite(Computer, site)); break; } } else { WriteObject(ConstructPSSite(Computer, site)); } } }
When we repeat the build and the registration, and run the command again, we see more data about the site:
> get-iissite -computer test-server "default web site"
If you compare this with WMI schema for the site, you see that all data are now available; plus, we have additional properties that we added in cmdlet. All properties are accessible from PowerShell through "dot" notation.
> $sites = get-iissite -computer test-server >$sites[0] >$sites[0].Limits ConnectionTimeout MaxBandwidth MaxConnections ----------------- ------------ -------------- 00000000000200.000000:000 4294967295 4294967295 > $sites[0].Limits.MaxBandwidth 4294967295
We have made good progress, but not good enough. The WMI site also has methods, so try to add them as well. This is simple to do in PowerShell – you name the method and tell PowerShell where the code is located. We will add methods of the PSCodeMethod type. To keep the code for the methods, we add the class SiteMethods.
public class SiteMethods
{
    static public void Start(PSObject site)
    {
        InvokeMethod(site, "Start");
    }

    static public void Stop(PSObject site)
    {
        InvokeMethod(site, "Stop");
    }

    static public string GetStatus(PSObject site)
    {
        uint status = (uint)InvokeMethod(site, "GetState");
        string statusName = status == 0 ? "Starting" :
                            status == 1 ? "Started" :
                            status == 2 ? "Stopping" :
                            status == 3 ? "Stopped" : "Unknown";
        return statusName;
    }

    static private object InvokeMethod(PSObject site, string methodName)
    {
        string computerName = site.Properties["Computer"].Value as string;
        string siteName = site.Properties["Name"].Value as string;
        PSCredential credential = site.Properties["Credential"].Value as PSCredential;
        // Rebuild the remote connection from the properties stored on the site object,
        // using the same Packet Privacy level as the cmdlets.
        ConnectionOptions connection = new ConnectionOptions();
        connection.Username = credential.UserName;
        connection.Password = credential.GetNetworkCredential().Password;
        connection.Authentication = AuthenticationLevel.PacketPrivacy;
        ManagementScope scope = new ManagementScope(
            @"\\" + computerName + @"\root\webadministration",
            connection);
        string sitePath = "Site.Name=\"" + siteName + "\"";
        ManagementObject wmiSite = new ManagementObject(
            scope,
            new ManagementPath(sitePath),
            new ObjectGetOptions());
        return wmiSite.InvokeMethod(methodName, new object[] { });
    }
}
As you see, this code creates a WMI object for the site and calls WMI methods on this object. This code uses two additional properties that we added to the site. With this class, we can extend the method ConstructPSSite. We must also add a reference to the System.Reflection namespace.
private PSObject ConstructPSSite( string computerName, ManagementObject site) { PSObject pssite = ObjectConverter.ConvertSiteToPSObject(site); pssite.Properties.Add(new PSNoteProperty("Computer", computerName)); pssite.Properties.Add(new PSNoteProperty("Credential", Credential)); Type siteMethodsType = typeof(SiteMethods); foreach (MethodInfo mi in siteMethodsType.GetMethods()) { if (mi.Name.Equals("Start", StringComparison.InvariantCultureIgnoreCase)) { pssite.Methods.Add(new PSCodeMethod("Start", mi)); } if (mi.Name.Equals("Stop", StringComparison.InvariantCultureIgnoreCase)) { pssite.Methods.Add(new PSCodeMethod("Stop", mi)); } if (mi.Name.Equals("GetStatus", StringComparison.InvariantCultureIgnoreCase)) { pssite.Properties.Add(new PSCodeProperty("Status", mi)); } } return pssite; }
In addition to the methods added, there is one dynamic property-- "Status". It behaves the same way as properties in C# classes; it is a function that is called when PowerShell needs its value. The code is very simple, because we refer methods from the same assembly as our cmdlet. Nothing prevents loading any other assembly and getting the information about the methods of its classes. If those methods have the right signature, PowerShell uses it the same way.
The object now looks like:
>$s = get-iissite "Default Web Site" –computer test-server > $s | get-member TypeName: System.Management.Automation.PSCustomObject Name MemberType Definition ---- ---------- ---------- Start CodeMethod static System.Void Start(PSObject site) Stop CodeMethod static System.Void Stop(PSObject site) Status CodeProperty System.String Status{get=GetStatus;} Equals Method System.Boolean Equals(Object obj) GetHashCode Method System.Int32 GetHashCode() GetType Method System.Type GetType() ToString Method System.String ToString() ApplicationDefaults NoteProperty System.Management.Automation.PSObjec... Bindings NoteProperty System.Management.Automation.PSObjec... Computer NoteProperty System.String Computer=iissb-101 Credential NoteProperty System.Management.Automation.PSCrede... Id NoteProperty System.UInt32 Id=1 Limits NoteProperty System.Management.Automation.PSObjec... LogFile NoteProperty System.Management.Automation.PSObjec... Name NoteProperty System.String Name=Default Web Site ServerAutoStart NoteProperty System.Boolean ServerAutoStart=True TraceFailedRequestsLogging NoteProperty System.Management.Automation.PSObjec... VirtualDirectoryDefaults NoteProperty System.Management.Automation.PSObjec... >$s.Status Started > $s.Stop() > $s.Status Stopped > $s.Start() > $s.Status Started
With the ability to add methods and dynamic properties to the objects, we can synthesize what we need for any situation. In addition to the methods and properties added in the cmdlet, we can add more in the script, without any need to use C# code.
It is also possible to load the definition of the object from XML. A good candidate for additional properties is data exposed from IIS through perf counters related to the site -- for example, the total count of processed requests. Those data are easily accessible directly from managed code; there is no need to use WMI. An example of such a property is sketched below.
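For instance, a code property of that kind could read one of the "Web Service" performance counters directly (a sketch: the counter category and name must exist on the target IIS version, and reading them from another machine requires remote performance-counter access):

using System.Diagnostics;

public class SitePerfMethods
{
    // Reads a per-site counter from the "Web Service" category,
    // using the site name as the counter instance name.
    static public float GetTotalMethodRequests(PSObject site)
    {
        string siteName = site.Properties["Name"].Value as string;
        using (PerformanceCounter counter = new PerformanceCounter(
            "Web Service", "Total Method Requests", siteName, true))
        {
            return counter.NextValue();
        }
    }
}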
How to Call One cmdlet From Another cmdlet
Getting site objects is important, but we need more, like writing a command to add a new site. To create sites, we can use the abstract method Create() defined on the Site class in the WMI WebAdministration namespace. The cmdlet looks like this:
>add-iissite –name <siteName> -computer <serverName> -credential <credential> -bindings <array-of-bindings> –homepath <path> -autostart
We have the same parameters as defined in the Create method. In addition, the command should support –whatif and –passthru switches. The first shows the result of the command execution, but does not make any changes; the second instructs the command to output the result into the pipeline. These two switches are highly recommended to use in "destcructive" commands. To support the –whatif cmdlet, the class must be decorated by the attribute SupportsShouldProcess = true.
Here is part of the code (find the whole code in the iisdemocmd.cs).
[Cmdlet(VerbsCommon.Add, "IISSite", SupportsShouldProcess = true)] public class AddSiteCommand : RemotingCommand { //… private SwitchParameter passThru = new SwitchParameter(false); [Parameter] public SwitchParameter PassThru { get { return passThru; } set { passThru = value; } } protected override void EndProcessing() { base.EndProcessing(); if (ShouldProcess(string.Format("{0} bound to {1} on {2}", name, bindings.ToString(), rootFolder))) { object[] args = new object[4]; args[0] = Name; ManagementBaseObject[] mbarr = new ManagementBaseObject[bindings.Length]; for (int b = 0; b < bindings.Length; ++b) { mbarr[b] = ObjectConverter.ToManagementObject( GetScope(Computer), "BindingElement", bindings[b]); } args[1] = mbarr; args[2] = rootFolder; args[3] = autoStart; ManagementClass siteClass = CreateClassObject(Computer, "Site"); try { siteClass.InvokeMethod("Create", args); } catch (COMException comEx) { WriteError(new ErrorRecord(comEx, comEx.Message, ErrorCategory.InvalidArgument, Name)); } if (PassThru.IsPresent) { string getSiteScript = "get-iissite" + " -name " + Name + " -computer " + Computer + " -credential $args[0]"; this.InvokeCommand.InvokeScript( getSiteScript, false, PipelineResultTypes.Output, null, Credential); } } } }
This code uses a new method in the ObjectConverter class to produce a bindings array. The method ToManagementObject() converts input parameters, which can be PSObject or Hashtable, into an instance of the ManagementBaseObject class. Since the call to Create could fail with perfectly correct parameters if a site with those parameters is already available, we call this method in a try/catch.
Finally, the cmdlet checks whether the user specified –passthru; if it is present, it calls PowerShell to execute a piece of script that returns this new site. In this script, we call our command "get-iissite" and reuse the parameters passed to the current command. InvokeScript puts the result in the pipeline as requested, so there is no need to do anything else. This is an example of how we can do a "callback" to PowerShell, passing formatted command lines statically or dynamically. Of course, it is possible to write it as C# code, but that requires either cutting and pasting large parts of GetSiteCommand, or reorganizing and refactoring the namespace.
At the beginning of the method EndProcessing() we see the call to ShouldProcess(). This is how the –whatif switch is supported. When the user passes this switch, this method prints the text passed to it as a parameter, and returns false. All actions that can change the environment must be performed only when this call returns true. PowerShell has other switches that can interact with the user and ask for confirmation before performing any action. ShouldProcess() returns the result of this confirmation.
Test the new command with the following:
> add-iissite Foo @{Protocol="http"; BindingInformation="*:808"} e:\inetpub\demo -computer test-server -whatif What if: Performing operation "Add-IISSite" on Target "Foo bound to System.Management.Automation.PSObject[] on e:\inetpub\demo".
This is how –whatif works. We get a rather cryptic message about what happens when this command executes. To make it clearer, we must format it properly. The Bindings parameter is entered on the command line as a hash table, and gets passed into the cmdlet as a Hashtable wrapped in a PSObject. To produce meaningful text from it, we must add smarter code – the default ToString() simply returns the class name.
Insert this block of text in place of ShouldProcess() line:
StringBuilder bindingText = new StringBuilder("("); foreach (PSObject b in bindings) { Hashtable ht = b.BaseObject as Hashtable; foreach (object key in ht.Keys) { string bstr = String.Format("{0}={1}", key.ToString(), ht[key].ToString()); bindingText.Append(bstr + ","); } bindingText.Remove(bindingText.Length - 1, 1); bindingText.Append(";"); } bindingText.Remove(bindingText.Length - 1, 1); bindingText.Append(")"); if (ShouldProcess(string.Format("{0} bound to {1} on {2}", name, bindingText.ToString(), rootFolder)))
After the cmdlet is built and executed, we see the following output:
> add-iissite Foo @{Protocol="http"; BindingInformation="*:888"} e:\inetpub\demo -computer test-server -whatif What if: Performing operation "Add-IISSite" on Target "Foo bound to (BindingInformation=*:888,Protocol=http) on e:\inetpub\demo".
This is much more understandable. From this new code, it is also clear why we must process Hashtable in the method ToManagementObject() – this is a common type in PowerShell for passing structured parameters.
Now run the command.
> add-iissite Foo @{Protocol="http"; BindingInformation="*:888"} e:\inetpub\demo -computer test-server -passthru | format-table Name,Status Name Status ---- ------ Foo Stopped > get-iissite -computer sergeia-a | format-table name,status Name Status ---- ------ Default Web Site Started Foo Stopped
The first command created the site on the remote server and then retrieved it and passed it to the pipeline. To ensure it was done correctly, we got a list of the sites, and indeed, the new site is available. By default, the server will try to start the site, unless we add the parameter –AutoStart false. If there is some problem in the parameters -- for example, the server cannot find the home folder -- then the site will remain stopped.
Extending cmdlets to Work With a Server Farm
For now we have two commands: get-iissite and add-iissite. We are missing cmdlets to save a modified site and to delete a site. The deletion command should be remove-iissite, to keep it compatible with the PowerShell naming standards. The save command will be named set-iissite. For remove-iissite, we modify the get-iissite code and call the method Delete() on the ManagementObject.
[Cmdlet(VerbsCommon.Remove, "IISSite", SupportsShouldProcess = true)] public class RemoveSiteCommand : RemotingCommand { private string name = null; [Parameter( Position = 0, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)] [ValidateNotNullOrEmpty] public string Name { get { return name; } set { name = value; } } protected override void EndProcessing() { base.EndProcessing(); if (ShouldProcess(string.Format("{0} on server {1}", name, Computer))) { ManagementObject site = CreateClassInstance(Computer, "Site.Name=\"" + Name + "\""); site.Delete(); } } } //RemoveSiteCommand
We also added a simple method CreateClassInstance() to the parent cmdlet. This method produces the object instance bound to the object path. Another change is that the Name parameter now cannot be empty – otherwise the user can delete all the sites by mistake. Finally, we added ShouldProcess() call to enable the –whatif and –confirm switches.
> Remove-IISSite foo -computer test-server -confirm Confirm Are you sure you want to perform this action? Performing operation "Remove-IISSite" on Target "foo on server sergeia-a". [Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): <CR> > get-iissite -computer test-server | ft name,status Name Status ---- ------ Default Web Site Started
We can implement the last command set-iissite as an exercise, modifying the add-iissite cmdlet and calling the Put() on ManagementObject.
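One possible minimal shape for that exercise is sketched below (it assumes the site object in the pipeline still carries the extra properties we attached earlier, and it copies only a single property back before calling Put(); a full implementation would copy every writable property):

[Cmdlet(VerbsCommon.Set, "IISSite", SupportsShouldProcess = true)]
public class SetSiteCommand : RemotingCommand
{
    private PSObject site = null;

    [Parameter(Position = 0, Mandatory = true, ValueFromPipeline = true)]
    public PSObject Site
    {
        get { return site; }
        set { site = value; }
    }

    protected override void EndProcessing()
    {
        base.EndProcessing();
        string siteName = Site.Properties["Name"].Value as string;
        if (ShouldProcess(string.Format("{0} on server {1}", siteName, Computer)))
        {
            ManagementObject wmiSite = CreateClassInstance(
                Computer, "Site.Name=\"" + siteName + "\"");
            // Copy the modified value back to the WMI object and persist it.
            wmiSite.SetPropertyValue("ServerAutoStart",
                Site.Properties["ServerAutoStart"].Value);
            wmiSite.Put();
        }
    }
} //SetSiteCommand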
Now scale the commands out and adapt them to work with multiple servers. This is easy:
Change the property Computer on the parent command to represent an array of strings:
private string[] computer = { Environment.MachineName }; [Parameter( ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)] [ValidateNotNullOrEmpty] public string[] Computer { get { return computer; } set { computer = value; } }
Then add an extra loop over this array into each cmdlet to perform the same action on each computer. Here is an example from get-iissite:
foreach (string computerName in Computer) { ManagementObjectCollection sites = CreateClassObject(computerName, "Site").GetInstances(); foreach (ManagementObject site in sites) { if (Name != null) { string siteName = site.GetPropertyValue("Name") as string; if (siteName.Equals(Name, StringComparison.InvariantCultureIgnoreCase)) { WriteObject(ConstructPSSite(computerName, site)); break; } } else { WriteObject(ConstructPSSite(computerName, site)); } } }
Now we can manipulate sites on the whole server farm.
> get-iissite -computer test-server,iissb-101,iissb-102 | ft Computer,Name,Status Computer Name Status -------- ---- ------ test-server Default Web Site Started iissb-101 Default Web Site Started iissb-101 Demo Started iissb-102 Default Web Site Started
Save the server names from the farm into text file and use them as a parameter:
>$("test-server","iissb-101","iissb-102" >farm.txt >cat farm.txt test-server tissb-101 tissb-102 >get-iissite –computer $(cat farm.txt) | ft Computer,Name,Status Computer Name Status -------- ---- ------ test-server Default Web Site Started iissb-101 Default Web Site Started iissb-101 Demo Started iissb-102 Default Web Site Started >get-iissite –computer $(cat farm.txt) | where {$_.Computer –like "iissb*"} | ft Computer,Name,Status Computer Name Status -------- ---- ------ iissb-101 Default Web Site Started iissb-101 Demo Started iissb-102 Default Web Site Started
We can do more advanced things using more of the PowerShell language. The following code enumerates sites on servers with names starting with "iissb", stores the list into a variable, and then stops all sites that are started.
> $sitelist = get-iissite -computer $(cat farm.txt) | where {$_.Computer -like "iissb*"} > foreach ($site in $sitelist) { >> if ($site.Status -eq "Started") {$site.Stop()} >> } >> > get-iissite -computer $(cat farm.txt) | ft Computer,Name,Status Computer Name Status -------- ---- ------ test-server Default Web Site Started iissb-101 Default Web Site Stopped iissb-101 Demo Stopped iissb-102 Default Web Site Stopped
The variable $sitelist keeps the site list, but thanks to the dynamic nature of the property site.Status, we see the actual, not stored, status of each object.
> $sitelist | ft computer,name,status Computer Name Status -------- ---- ------ iissb-101 Default Web Site Stopped iissb-101 Demo Stopped iissb-102 Default Web Site Stopped
We can do the same without using any variables. Start any stopped site on any server on the server farm.
> get-iissite -computer (cat farm.txt) | foreach { if ($_.Status -eq "Stopped") { $_.Start() }} > get-iissite -computer $(cat farm.txt) | ft Computer,Name,Status Computer Name Status -------- ---- ------ test-server Default Web Site Started iissb-101 Default Web Site Started iissb-101 Demo Started iissb-102 Default Web Site Started
In the accompanying source file iisdemocmd.cs, you find more commands for manipulating virtual directories and some properties in the configuration sections.
Conclusion
As we can see, having only three commands allows us to cover most of the needs in the administration of IIS sites. Combined with the flexibility and richness of the shell language, each command adds a great deal of functionality. At the same time, writing a new command is not much more complicated than implementating similar script in VBScript or Jscript.
The IIS team plans to add full scale support of PowerShell into IIS 7.0 and above. This includes implementing a navigation provider, a property provider and all the other pieces of functionality required to work with all aspects of administration. Follow the progress of these upcoming improvements and look for the announcement on and on the PowerShell site. | https://docs.microsoft.com/en-us/iis/manage/powershell/writing-powershell-commandlets-for-iis | 2019-06-16T03:24:28 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.microsoft.com |
This document contains the configuration options for the New Relic APM .NET agent. Both the .NET Framework agent and the .NET Core agent use the same configuration options and have the same APM features, unless otherwise stated.
Configuration overview
New Relic APM agent configuration options allow you to control some aspects of how the agent behaves. Some of these config options are part of the basic install process (like setting your license key and app name), but most are more advanced settings, such as: setting a log level, setting up proxy host access, excluding certain attributes, and enabling distributed tracing.
The .NET agent gets its configuration from the
newrelic.config file, which is generated as part of the install process. By default, only a global
newrelic.config file is created, but you can also create app-local
newrelic.config files for finer control over a multi-app system. Other ways to set config options include: using environment variables, or setting server-side configuration from the UI. For more on the various config options and what overrides what, see Config settings precedence.
Both .NET agents (.NET Framework and .NET Core) use the same configuration options and have the same APM features, unless otherwise stated.
If you make changes to the config file and want to validate that it's in the right format, you can check it against the XSD file (located at
C:\ProgramData\New Relic\.NET Agent\newrelic.xsd) with any XSD validator.
For IIS: after you change your
newrelic.config or
app.config file, perform an
IISRESET from an administrative command prompt. Log level adjustments do not require a reset.
Configuration settings precedence
By default, the .NET agent only includes a global configuration file named
newrelic.config. The agent provides additional configuration options, all of which override the global
newrelic.config settings.
Required environment variables
New Relic's .NET Framework and .NET Core agents rely on environment variables to tell the .NET Common Language Runtime (CLR) to attach New Relic to your processes. Some .NET agent install procedures (like the MSI installer) will automatically set these variables for you; some procedures will require you to manually set them.
Security recommendation: You should give consideration to which users can set system environment variables. You should also secure the accounts under which your applications execute to prevent user environment variables overriding system environment variables
- .NET Framework environment variables
For the .NET Framework agent, the following variables are required:
COR_ENABLE_PROFILING=1 COR_PROFILER={71DA0A04-7777-4EC6-9643-7D28B46A8A41} NEWRELIC_INSTALL_PATH=path\to\agent\directory
The .NET Framework MSI installer will add these to IIS or as system-wide environment variables.
- .NET Core environment variables
For the .NET Core agent, the following variables are required:
Linux:
CORECLR_ENABLE_PROFILING=1 CORECLR_PROFILER={36032161-FFC0-4B61-B559-F6C5D41BAE5A} CORECLR_NEWRELIC_HOME=path/to/agent/directory CORECLR_PROFILER_PATH="${CORECLR_NEWRELIC_HOME}/libNewRelicProfiler.so"
Windows:
CORECLR_ENABLE_PROFILING=1 CORECLR_PROFILER={36032161-FFC0-4B61-B559-F6C5D41BAE5A} CORECLR_NEWRELIC_HOME=path\to\agent\directory CORECLR_PROFILER_PATH=path\to\agent\directory\NewRelic.Profiler.dll
If your system has previously used monitoring services (non-New Relic), you may have a "profiler conflict" when trying to install and use the New Relic agent. More details:
- Profiler conflict explanation
New Relic’s .NET agents rely on environment variables to tell the .NET Common Language Runtime (CLR) to load New Relic into your processes. The install-related environment variables are Microsoft variables, not New Relic variables. They can be used by other .NET profilers, and only one profiler can be attached to a process at a time. For this reason, if you have used previous application monitoring products, you may have profiler conflicts.
For specific install instructions, see the .NET agent install documentation.
Setup options
Use these options to setup and configure your agent. The New Relic .NET agent supports the following categories of setup options:
- Multiple applications
- Configuration element
- Service element
- Proxy element
- Log element
- Application element (configuration)
- Data transmission element
- Host name
Configuration element
The root element of the configuration document is a
configuration element.
<configuration xmlns="urn:newrelic-config" agentEnabled="true" maxStackTraceLines="50" timingPrecision="low">
The
configuration element supports the following attributes:
- agentEnabled
Enable or disable the New Relic agent.
- maxStackTraceLines
The maximum number of stack frames to trace in any stack dump.
- timingPrecision
Controls the precision of the timers. High precision will provide better data, but at a lower execution speed. Possible values are
highand
low.
Service element
The first child of the
configuration element is a
service element. The service element configures the agent's connection to the New Relic service.
<service licenseKey="YOUR_LICENSE_KEY" sendEnvironmentInfo="true" syncStartup="false" sendDataOnExit="false" sendDataOnExitThreshold="60000" autoStart="true"/>
The
service element supports the following attributes:
- licenseKey (REQUIRED)
Your New Relic license key. New Relic uses the license key to match your app's data to the correct account in the UI.
- sendEnvironmentInfo
Instructs the agent to record execution environment information. Environment information includes operating system, agent version, and which assemblies are available.
- syncStartup
Block application startup until the agent connects to New Relic. If set to
true, the first transaction may take substantially longer to complete, because it is blocked until the connection to New Relic is finished.
- sendDataOnExit
Block application shutdown until the agent sends all data from the latest harvest cycle.
- sendDataOnExitThreshold
The minimum amount of time the process must run before the agent blocks it from shutting down. This setting only applies when
sendDataOnExitis
true.
- requestTimeout
The agent's request timeout when communicating with New Relic.
- autoStart
Automatically start the .NET agent when the first instrumented method is hit.
- ssl (DEPRECATED)
The agent communicates with New Relic via HTTPS by default, and New Relic requires HTTPS for all traffic to New Relic APM and the New Relic REST API.
Proxy element
The
proxy element is an optional child of the
service element. Use the proxy element if you want the agent to communicate with New Relic service via a proxy.
<proxy host="hostname" port="PROXY_PORT" uriPath="path/to/something.aspx" domain="mydomain.com" user="PROXY_USERNAME" password="PROXY_PASSWORD"/>
The
proxy element supports the following attributes:
- host
Defines the proxy host.
- port
Defines the proxy port.
- uriPath
Optionally define a proxy URI path.
- domain
Optionally define a domain to use when authenticating with the proxy server.
- user
Optionally define a user name for authentication.
Optionally define a password for authentication.
Log element
The
log element is a child of the
configuration element. The
log element configures New Relic's logging . The agent generates its own log file to keep its logging information separate from your application's logs.
<log level="info" auditLog="false" console="false" directory="PATH\TO\LOG\DIRECTORY" fileName="FILENAME.log" />
The
log element supports the following attributes:
- level
Defines the level of detail recorded in the log file. Possible values, in increasing order of detail, are:
off
error
warn
info
debug
finest
all
Increasing the logging level will increase New Relic's performance impact.
- auditLog
Records all data sent to and received from New Relic in both an auditlog log file and the standard log file.
- console
Send log messages to the console, in addition to the log file.
- directory
The directory to hold log files generated by the agent. If this is omitted, then a directory named logs in the New Relic Agent installation area will be used by default.
- fileName
Defines a name for the log file. If you do not define a
fileName, the name is derived from the name of the monitored process.
Application element (configuration) (REQUIRED)
The
application element is a child of the
configuration element. This element defines your application name, and disables or enables sampling. This element is required.
- name
The name of your .NET application is a child of the
applicationelement. New Relic will aggregate your data according to this name. For example, if you have two running applications named
AppAand
AppB, you will see two applications in the New Relic interface:
AppAand
AppB.
You can also assign up to three names to your app. The first name is the primary name. For example:
<application> <name>MY APPLICATION PRIMARY</name> <name>SECOND APP NAME</name> <name>THIRD APP NAME</name> </application>
- disableSamplers
Samplers collect information about memory and CPU consumption. Set this to
trueto disable sampling.
Data transmission element
The
dataTransmission element is a child of the
configuration element. This element affects how data is sent to New Relic and can be used if you have specific data transmission requirements.
<dataTransmission putForDataSend="false" compressedContentEncoding="deflate"/>
The
dataTransmission element supports the following attributes:
- putForDataSend
Defines the HTTP method used when sending data to New Relic. Set this to
trueto enable using the PUT method when sending data. The POST method is used by default.
Host name
If the default host name label in the APM UI is not useful, you can decorate that name in the New Relic UI with a display name. After the application process is restarted and the .NET agent is reporting again, the display name will show in the servers list (as seen in the example below), in addition to the default host name.
To set a display name, choose one of the following options. The environment variable takes precedence over the config file value.
- Set via config file
Set displayName attribute in the
processHostelement in newrelic.config. The
processHostelement is a child of the
configurationelement.
<configuration . . . > <processHost displayName="CUSTOM_NAME" /> </configuration>
- Set via environment variable
Set the
NEW_RELIC_PROCESS_HOST_DISPLAY_NAMEenvironment variable:
NEW_RELIC_PROCESS_HOST_DISPLAY_NAME= "CUSTOM_NAME"
Restart your application to see your changes in the New Relic UI.
Instrumentation options
Use these options to configure which elements of your application and environment to instrument. New Relic for .NET supports the following categories of instrumentation options:
Instrumentation element
The
instrumentation element is a child of the
configuration element. By default, the .NET agent instruments IIS asp worker processes and Azure web and worker roles. To instrument other processes, see Instrumenting custom applications.
Applications element (instrumentation)
The
applications element is a child of the
instrumentation element. The applications element specifies which non-web apps to instrument. It contains a
name attribute.
This is not the same as the
application (configuration) element, which is a child of the
configuration element.
<instrumentation> <applications> <application name="MyService1.exe" /> <application name="MyService2.exe" /> <application name="MyService3.exe" /> </applications> </instrumentation>
Attributes element
An attribute is a key/value pair that determines the properties of an event or transaction. Each attribute is sent to APM transaction traces, APM error traces, Insights Transaction events, Insights TransactionError events, or Insights PageView events. The primary
attributes element enables or disables attribute collection for the .NET agent, and defines specific attributes to collect or exclude. You can also configure attribute settings based on their destination: Error collection, transaction traces, Browser instrumentation, and transaction events.
In this example, the agent excludes all attributes whose key begins with myApiKey (myApiKey.bar, myApiKey.value), but collects the custom attribute myApiKey.foo.
<attributes enabled="true"> <exclude>myApiKey.*</exclude> <include>myApiKey.foo</include> </attributes>
You can view the .NET APM attributes on the .NET agent attributes page. You can also define custom attributes with the agent API call
AddCustomParameter.
- enabled
Enable or disable attribute collection. When set to
falsein the primary attribute element, this setting overrides all attribute settings for individual destinations.
- include
If attributes are enabled, the agent will collect all attribute keys specified in this list. To specify multiple attribute keys, specify each individually. You can also use a
*wildcard character at the end of a key to match multiple attributes (for example,
myApiKey.*). For more information, see Attribute rules.
- exclude
If attributes are enabled, the agent will not collect attribute keys specified in this list. To specify multiple attribute keys, specify each individually. You can also use a
*wildcard character at the end of a key to match multiple attributes (for example,
myApiKey.*). For more information, see Attribute rules.
Feature options
Use these options to enable, disable, and configure New Relic features. New Relic for .NET allows you to configure the following features:
- App pools
- Cross application traces
- Error collection
- High security mode
- Strip exception messages
- Transaction events
- Custom events
- Custom parameters
- Labels
- Browser instrumentation
- Slow Queries
- Transaction traces
- Datastore tracer
- Distributed tracing
App pools
This is only applicable to a system's global config file.
The
applicationPools element is a child of the
configuration element. The
applicationPools element specifies for the profiler exactly which application pools to instrument and uses the same name as the IIS application pool name. This configuration element is useful when you may need to instrument only a small subset of your app pools. For example, a given server might have several hundred application pools, but only a few of those pools need to be instrumented by the .NET agent.
Here is an example of disabling instrumentation for specific application pools:
<applicationPools> <applicationPool name="Foo" instrument="false"/> <applicationPool name="Bar" instrument="false"/> </applicationPools>
Here is an example of disabling instrumentation for all application pools currently executing on the server and enabling instrumentation for specific application pools:
<applicationPools> <defaultBehavior instrument="false"/> <applicationPool name="Foo" instrument="true"/> <applicationPool name="Bar" instrument="true"/> </applicationPools>
The
applicationPools element supports the following elements:
- defaultBehavior
Defines how the .NET agent will behave on a "global" level for application pools served via IIS. The .NET agent instruments all application pools by default. When
true, application pools listed under applicationPool with an
instrumentattribute set to false will not be instrumented.
Essentially, when set to
false, the application pool list act as a whitelist. When set to
true, the application pool list acts as a blacklist.
- applicationPool
Defines instrumentation behavior for a specific application pool. The
nameattribute is the name of an application pool. Enable or disable profiling in the
instrumentattribute. Define this application in the
nameattribute.
Cross application traces
A distributed tracing feature is now available. Distributed tracing improves on cross application tracing; it's recommended for monitoring activity in complex distributed systems.
The
crossApplicationTracer element is a child of the
configuration element.
crossApplicationTracer links transaction traces across applications. When linked in a service-oriented architecture, all instrumented applications that communicate with each other via HTTP will now "link" transaction traces with the applications that they call and the applications they are called by. Cross application tracing makes it easier to understand the performance relationship between services and applications.
<crossApplicationTracer enabled="true"/>
The
crossApplicationTracer element supports the following attribute:
- enabled
Enable or disable cross application tracing
Error collection
The
errorCollector element is a child of the
configuration element.
errorCollector configures error collection, which captures information about uncaught exceptions and sends them to New Relic.
<errorCollector enabled="true" captureEvents="true" maxEventSamplesStored="100"> <ignoreErrors> <exception>System.IO.FileNotFoundException</exception> <exception>System.Threading.ThreadAbortException</exception> </ignoreErrors> <ignoreStatusCodes> <code>401</code> <code>404</code> </ignoreStatusCodes> <attributes enabled="true"> <exclude>myApiKey.*</exclude> <include>myApiKey.foo</include> </attributes> </errorCollector>
For an overview of error configuration in New Relic APM, see Manage errors in APM.
The
errorCollector element supports the following elements and attributes:
- enabled
Enable or disable the error collector.
- captureEvents
Enable or disable the capturing of error events.
- maxEventSamplesStored
Reservoir limit for error events.
- ignoreErrors
Lists specific exceptions to not report to New Relic. The full name of the exception should be used, such as
System.IO.FileNotFoundException.
- ignoreStatusCodes
Lists specific HTTP error codes to not report to New Relic. You can use standard integral HTTP error codes, such as just 401, or you may use Microsoft full status codes with decimal points, such as 401.4 or 403.18.
- attributes
Use this sub-element to customize your agent attribute settings for error traces. This sub-element uses the same settings as the primary
attributeselement:
enabled,
include, and
exclude.
High security mode
The
highSecurity element is a child of the
configuration element. To enable high security mode, set this property to
true and enable high security property in the New Relic user interface. Enabling high security means SSL is turned on, request parameters and custom parameters are not collected, strip exception messages is enabled, and queries cannot be sent to New Relic in their raw form.
- enabled
Enable or disable high security mode. Example:
<highSecurity enabled="true"/>
Strip exception messages
The
stripExceptionMessages element is a child of the
configuration element. To enable strip exception messages, set this property to
true. By default, this is set to false, which means that the agent sends messages from all exceptions to the New Relic collector. If you enable high security mode, this is automatically changed to true, and the agent strips the messages from exceptions.
- enabled
Enable or disable strip exception messages. Example:
<stripExceptionMessages enabled="true"/>
Transaction events
The
transactionEvents element is a child of the
configuration element. Use
transactionEvents to configure transaction events.
<transactionEvents enabled="true" maximumSamplesStored="10000"> <attributes enabled="true"> <exclude>myApiKey.*</exclude> <include>myApiKey.foo</include> </attributes> </transactionEvents>
The
transactionEvents element supports the following attributes:
- enabled
Enable or disable the event recorder.
- maximumSamplesStored
The maximum number of samples to store in memory at once.
- attributes
Use this sub-element to customize your agent attribute settings for transaction events. This sub-element uses the same settings as the primary
attributeselement:
enabled,
include, and
exclude.
Custom events
The
customEvents element is a child of the
configuration element. Use
customEvents to configure custom events.
<customEvents enabled="true" maximumSamplesStored="10000"/>
The
CustomEvents element supports the following attributes:
- enabled
Enable or disable the event recorder.
- maximumSamplesStored
The maximum number of samples to store in memory at once.
Custom parameters
The
customParameters element is a child of the
configuration element. Use
customParameters to configure custom parameters.
<customParameters enabled="true" />
The
CustomParameters element supports the following attributes:
- enabled
Enable or disable the capture of custom parameters.
Labels
The
labels element is a child of the
configuration element. This sets the label names and values to associate with the application. The list is a semicolon delimited list of colon-separated name and value pairs. You can also use with the
NEW_RELIC_LABELS environment variable. Example:
<labels>foo:bar;zip:zap</labels>
Browser instrumentation
The
browserMonitoring element is a child of the
configuration element.
browserMonitoring configures New Relic Browser in your .NET application. Browser gives you insight your end users' performance experience. This is accomplished by measuring the time it takes for your users' browsers to download and render your webpages by injecting a small amount of JavaScript code into the header and footer of each page.
// If you use both the Exclude and Attribute elements // the Exclude element must be listed first. <browserMonitoring autoInstrument="true"> <requestPathsExcluded> <path regex="url-regex-1"/> <path regex="url-regex-2"/> ... <path regex="url-regex-n"/> </requestPathsExcluded> <attributes enabled="true"> <exclude>myApiKey.*</exclude> <include>myApiKey.foo</include> </attributes> </browserMonitoring>
The
browserMonitoring element supports the following attributes:
- autoInstrument
By default the agent automatically injects the Browser agent JavaScript. To turn off automatic injection, set this attribute to
false.
- attributes
Use this sub-element to customize your agent attribute settings for Browser. This sub-element uses the same settings as the primary
attributeselement:
enabled,
include, and
exclude.
- requestPathsExcluded
Use this sub-element to prevent the Browser agent from being injected in specific pages. The element is used as follows:
<requestPathsExcluded> <path regex="url-regex-1"/> <path regex="url-regex-2"/> ... <path regex="url-regex-n"/> </requestPathsExcluded>
The agent will not inject the Browser agent into pages whose URL matches one of the specified regular expressions. The regular expression should follow Microsoft guidelines for the Regex class.
It is a reference to the virtual directory of the path in your application and not the full URL of the path you wish to exclude. For example, to exclude the pages in would simply insert
/mywebpages/as the path regex value.
The
requestPathsExcludedelement should be used in cases where it is impossible or undesirable to use the
DisableBrowserMonitoring()call. To minimize a possible performance impact try to use as few regular expressions as possible and keep them as simple as possible.
Slow queries
The
slowSql element is a child of the
configuration element.
slowSql configures capturing information about slow query executions, and captures and obfuscates explain plans for these queries.
<slowSql enabled="true"/>
The
slowSql element supports the following attribute:
- enabled
Enable or disable slow query tracing.
Transaction traces
The
transactionTracer element is a child of the
configuration element.
transactionTracer configures transaction traces. Included in the trace is the exact call sequence of the transactions, including any query statements issued.
<transactionTracer enabled="true" transactionThreshold="apdex_f" stackTraceThreshold="500" recordSql="obfuscated" explainEnabled="true" explainThreshold="500" maxSegments="3000" maxStackTrace="30" maxExplainPlans="20"> <attributes enabled="true"> <exclude>myApiKey.*</exclude> <include>myApiKey.foo</include> </attributes> </transactionTracer>
The
transactionTracer element supports the following attributes:
- enabled
Enable or disable transaction traces.
- transactionThreshold
Defines the threshold for transaction traces. If a transaction takes longer than the threshold, it is eligible for being traced. See transaction trace basics for more about the rules governing traces.
The default value is
apdex_f, which sets the threshold to four times the application's apdex_t value. For more information about apdex_t, see Apdex.
You can also set the threshold to be a specific time value in milliseconds.
- recordSql
Select a query tracing policy. Options are
off, which records nothing;
obfuscated, which records an obfuscated version of the query; or
raw, which records the query exactly as it is issued to the database.
Recording raw queries may capture sensitive information.
- stackTraceThreshold
Defines the stack trace threshold: this is the threshold at which segments will be given a stack trace in the transaction trace.
- explainEnabled
When
true, the agent captures
EXPLAINstatements for slow queries.
- explainThreshold
The agent collects slow query data for queries that exceed this threshold, along with any available explain plans, as part of transaction traces.
- maxSegments
The maximum number of segments to collect in a transaction trace.
- maxStackTrace
The maximum number of stack traces to collect during a harvest cycle.
- maxExplainPlans
The maximum number of explain plans to collect during a harvest cycle.
- attributes
Use this sub-element to customize your agent attribute settings for transaction traces. This sub-element uses the same settings as the primary
attributeselement:
enabled,
include, and
exclude.
Datastore tracer
The
datastoreTracer element is a child of the
configuration element.
<datastoreTracer> <instanceReporting enabled="true" /> <databaseNameReporting enabled="true" /> <queryParameters enabled="false" /> </datastoreTracer>
The
datastoreTracer element supports the following sub-elements:
- instanceReporting
Use this sub-element to enable collection of datastore instance metrics (such as the host and port) for some database drivers. These are reported on slow query traces and transaction traces. The default value of attribute
enabledis
true.
- databaseNameReporting
Use this sub-element to enable collection of the database name on slow query traces and transaction traces for some database drivers. The default value of attribute
enabledis
true.
- queryParameters
Use this sub-element to enable collection of the SQL query parameters on slow query traces. The default value of attribute
enabledis
false.
- Recording query parameters may capture sensitive information.
- The
transactionTracer.recordSqlconfiguration option must be set to
rawor this option is ignored.
Distributed tracing
Enabling distributed tracing disables cross application tracing, and has other effects on New Relic APM features. Before enabling, read the transition guide.
Requires .NET agent version 8.6.45.0 or higher.
Distributed tracing lets you see the path that a request takes as it travels through a distributed system.
To enable distributed tracing, choose one of the following options:
- Enable via config file
Set the
<distributedTracing>element to
trueto enable via the newrelic.config file. This element is a child of the
<configuration>element.
<configuration . . . > <distributedTracing enabled="true" /> </configuration>
- Enable via environment variable
Set the
NEW_RELIC_DISTRIBUTED_TRACING_ENABLEDenvironment variable in the application's environment.
NEW_RELIC_DISTRIBUTED_TRACING_ENABLED=true
Span events
Span events are enabled by default. They are automatically disabled if distributed tracing is also disabled.
Requires .NET agent version 8.6.45.0 or higher.
To disable span events, choose one of the following options:
- Disable via config file
Set the
<spanEvents>element to
falseto disable via the newrelic.config file. This element is a child of the
<configuration>element.
<configuration . . . > <spanEvents enabled="false" /> </configuration>
- Disable via environment variable
Set the
NEW_RELIC_SPAN_EVENTS_ENABLEDenvironment variable in the application's environment.
NEW_RELIC_SPAN_EVENTS_ENABLED=false
Settings in app.config or web.config
You can also configure the following settings in your app's
app.config or
web.config, within the outermost element,
<configuration>:
- Enable and disable the agent
<appSettings> <add key = "NewRelic.AgentEnabled" value="false" /> </appSettings>
This setting does not work with ASP.NET Core apps when using the
web.configfile.
- Application name
For more information, see Name your .NET application.
<appSettings> <add key = "NewRelic.AppName" value ="Descriptive Name" /> </appSettings>
- License key
<appSettings> <add key = "NewRelic.LicenseKey" value ="XXXXXXXX" /> </appSettings>
- Change newrelic.config location
Designates an alternative location for the config file outside of the local root of the app or global config location. The location entered must be an absolute path.
<appSettings> <add key = "NewRelic.ConfigFile" value="C:\Path-to-alternate-config-dir\newrelic.config" /> </appSettings> | https://docs.newrelic.com/docs/agents/net-agent/configuration/net-agent-configuration | 2019-06-16T03:28:13 | CC-MAIN-2019-26 | 1560627997533.62 | [array(['https://docs.newrelic.com/sites/default/files/styles/inline_660px/public/thumbnails/image/net-config-precedence-core.png?itok=YF2IGywq',
'.NET configuration precedence .NET configuration precedence'],
dtype=object)
array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/crop-display-host-name_0.png',
'crop-cosmetic-label-hostname.png crop-cosmetic-label-hostname.png'],
dtype=object) ] | docs.newrelic.com |
OneFlow Documentation
Use the guides, tutorials and reference documentation to better understand OneFlow's products and services.
Products
Product specific documentation to help you get started with our products, or to provide you with more in-depth tutorials and knowledge.Read the Docs
API Reference
Looking for endpoints? Each section lists details of what API endpoints are available for our products.Read the Docs
Release Notes
Release notes are included for each product to highlight the latest changes.Read the Docs | https://docs.oneflowcloud.com/ | 2019-06-16T03:35:40 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.oneflowcloud.com |
Elevated Privacy
Contents
Overview
Use Elevated Privacy to protect privacy activity across different websites.
Otherwise third-party cookies will be tracking your activities.
Example : When you are surfing internet by logging into any of your accounts like Hotmail, Yahoo, Gmail, Online Banking…etc. your activities will be tracked by third party and referral domains.
Global
Enabled
Enable or Disable this section
- TRUE : Enable strict privacy and third party cookies blocking.
- FALSE : Disable strict privacy and third party cookies blocking.
Elevated policies
Create the Policies for Elevated Privacy.
ALL The Following Entries will be tested from top to bottom.
Click on Add below, to add a new entry.
Example: After enabling this section and creating a policy, you are unable to logging into websites with third-party account details like, you are unable to login into flipkart or amazon with Facebook or Gmail accounts.).
Privacy Levels
Apply 'Privacy Level' as per your requirement.
Caution: If you select 'Paranoid' level privacy, it may cause problems for web servers which give response based on User-Agent.
- NOT_REQUIRED : Select this if you want to disable ‘Elevated Privacy’.
- LOW : Select this, if you want to block Third-Party Cookies only.
- STANDARD : Select this, if you want block Third-Party Cookies and hide the HTTP & HTTPS referer.
- PARANOID : Select this, if you want block Third-Party Cookies and hide the HTTP & HTTPS referer and also hide different User Agents. | https://docs.safesquid.com/wiki/Elevated_Privacy | 2019-06-16T02:46:40 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.safesquid.com |
Contents Now Platform Administration Previous Topic Next Topic LDAP data transformation Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share. mapson | https://docs.servicenow.com/bundle/kingston-platform-administration/page/integrate/ldap/concept/c_LDAPDataTransformation_1.html | 2019-06-16T03:15:56 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.servicenow.com |
DisassociateConnectPeer
Disassociates a core network Connect peer from a device and a link.
Request Syntax
DELETE /global-networks/
globalNetworkId/connect-peer-associations/
connectPeerIdHTTP/1.1
URI Request Parameters
The request uses the following URI parameters.
- connectPeerId
The ID of the Connect peer to disassociate from a device.
Length Constraints: Minimum length of 0. Maximum length of 50.
Pattern:
^connect-peer-([0-9a-f]{8,17})$
Required: Yes
- globalNetworkId
The ID of the global network.
Length Constraints: Minimum length of 0. Maximum length of 50.
Pattern:
[\s\S]*
Required: Yes
Request Body
The request does not have a request body.
Response Syntax
HTTP/1.1 200 Content-type: application/json { "ConnectPeerAssociation": { "ConnectPeerId": "string", "DeviceId": "string", "GlobalNetworkId": "string", "LinkId": "string", "State": "string" } }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- ConnectPeerAssociation
Describes the Connect peer association.
Type: ConnectPeerAssociation object
Errors
For information about the errors that are common to all actions, see Common Errors.
- AccessDeniedException
You do not have sufficient access to perform this action.
HTTP Status Code: 403
- ConflictException
There was a conflict processing the request. Updating or deleting the resource can cause an inconsistent state.
HTTP Status Code: 409
-: | https://docs.aws.amazon.com/networkmanager/latest/APIReference/API_DisassociateConnectPeer.html | 2022-01-16T22:45:41 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.aws.amazon.com |
A time series is an ordered sequence of measurements of a variable that are arranged according to the time of occurrence. Time series are typically measured at some constant frequency and the data points are generally, but not necessarily, spaced at uniform time intervals.
- The data points are not independent of one another.
- The dispersion of data points varies as a function of time.
- The data frequently indicates trends.
- The data tends to be cyclic.
- Budgetary analysis
- Economic forecasting
- Inventory analysis
- Process control
- Quality control
- Sales forecasting
- Stock market analysis
- Workload projections
- Yield projections
The EXPAND ON clause enables various forms of time series expansion on a PERIOD column value of an input row by producing a set of value-equivalent rows, one for each granule in the specified time period. The number of granules is defined by the anchor name you specify for the clause.
You can expand sparse PERIOD representations of relational data into a dense representation of the same data. Data converted to a dense form can be more easily manipulated by complex analyses such as moving average calculations without having to write complex SQL requests to respond to business questions made against sparse relational data.
- Interval expansion, where rows are expanded by user-specified intervals.
- Anchor point expansion, where rows are expanded by user-specified anchored points.
- Anchor PERIOD expansion, where rows are expanded by user-specified anchored periods. | https://docs.teradata.com/r/FaWs8mY5hzBqFVoCapztZg/PmWyMYkhtMyInPDrtDZK5g | 2022-01-16T22:02:54 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.teradata.com |
Version compatibility matrix
Before you install or upgrade, verify the compatibility of your Automation Anywhere Control Room and Enterprise Client versions.
Note:
- TaskBots created in earlier releases are compatible with this release.
- TaskBots and MetaBots created/saved in this release do not work if the product is downgraded to earlier versions. For example: TaskBots created in the Automation Anywhere 11.3 do not work in Automation Anywhere 10 LTS. because the 11.x obfuscation algorithm is enhanced from 10.x.
- MetaBots created in earlier versions are compatible with this version.
- For IQ Bot Version 6.0 or later to Control Room Version 11.3 or later compatibility, see the IQ Bot documentation.
Control Room – Enterprise Client compatibility matrix, versions 11.x
Note: The Control Room version must be equal or higher than the Enterprise Client version.
Y* indicates that MetaBot with VCS is not working.
Control Room – Enterprise Client compatibility matrix, versions 10.x
Control Room – IQ Bot Compatibility Matrix
Create or update cluster.properties file
Based on the compatibility information in the table, update your cluster.properties file for the listed parameters.
- Locate the file in your Control Room directory (for example, C:\Program Files\Automation Anywhere\Enterprise\config\).
If the file does not exist in your Control Room directory:
- Create a file with the filename cluster.properties.
- Add the property options to the file as mentioned in the Notes column of the table.
- Save the cluster.properties file.
- Restart the following services:
- Automation Anywhere Control Room Caching
- Automation Anywhere Control Room Messaging
- Automation Anywhere Control Room Service
Products for Upgrade Compatibility Matrix
Following matrix shows products compatibility of Control Room, Bot Insight, IQ Bot, and BotFarm versions for upgrading to Control Room.
Note: "-" indicates not available, "N" indicates not supported, and "Y" indicates supported version. | https://docs.automationanywhere.com/de-DE/bundle/enterprise-v11.3/page/enterprise/topics/release-notes/cr-client-compatibity-matrix.html | 2022-01-16T21:15:18 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.automationanywhere.com |
Beaker installation
This is a playbook to install an all-in-one beaker server & lab controller. It has been designed to be used with Distributed-CI but it can also be used independently of it.
Download
You can download the role from Github with the following command:
git clone cd ansible-playbook-dci-beaker
Download the dependencies
ansible-galaxy install -r requirements.yml -p roles/
Initial configuration
Before applying the role, the following files need some configuration:
inventory: adjust the hostname and user of the machine to use as a beaker server
group_vars/all: adjust the values for admin user and lab setup (DHCP range, systems access)
You can skip Redhat subscription role by setting
role_redhat_subscription to false.
Deployment
Call Ansible with the following command:
ansible-playbook -i inventory playbook.yml
For Ansible ≤ 2.5.6 users, you will face a problem with the
service_facts module. The solution is to skip the firewall tag:
ansible-playbook -i inventory playbook.yml --skip-tags=firewall
Post-Deployment
After running the playbook the first time, the credentials for services on the beaker server will be stored in the
credentials/ folder, unless the variables ared hardcoded in
group_vars/all
Note about Virtual machines
Virtual machines can use the "virsh" power type. As the virtualization support in Beaker is based on libvirt tools, you might need to install additional packages (like libvirt-client). For more information, please refer to Beaker Project official documentation.
It is also possible to configure a virtual BMC to manage virtual machines using the IPMI protocol. For more information, please refer to Virtual BMC official documentation. | https://docs.distributed-ci.io/ansible-playbook-dci-beaker/ | 2022-01-16T21:29:51 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.distributed-ci.io |
Note
This SDK version is intended for use with Services belonging to a Directory or Organization created in or migrated to the Admin Center.
Use this SDK to interact with the TruValidate Multifactor Authentication Platform API in your .NET application. This documentation explains how to use the SDK in the most common scenarios.
Before you can begin using the Platform API, you need a Service. If you have not created a Service yet, you can use our Help Center to create one.
An example app is included in the source repository as a project:
The TruValidate Multifactor Authentication Service SDK for .NET is available via nuget.
Package: iovation.LaunchKey.Sdk
The package is compatible with both .NET Framework and .NET Core deployments and is tested to run properly on Windows, Linux, and Mac.
Instructions for using Service SDKs can be found here: TruValidate Multifactor Authentication Service SDK Documentation.
We use the Trace API for deprecation warnings. You can find more information on how to set up TraceListeners | https://docs.launchkey.com/service-sdk/dotnet.html | 2022-01-16T22:18:28 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.launchkey.com |
[−][src]Crate wrapcenum_derive
Internal macro used in nvml-wrapper.
This macro is tied to the crate and is not meant for use by the general public.
Its purpose is to auto-generate both a
TryFrom implementation converting an
i32
into a Rust enum (specifically for converting a C enum represented as an integer that
has come over FFI) and an
as_c method for converting the Rust enum back into an
i32.
It wouldn't take much effort to turn this into something usable by others; if you're interested feel free to contribute or file an issue asking me to put some work into it. | https://docs.rs/wrapcenum-derive/latest/wrapcenum_derive/ | 2022-01-16T22:06:22 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.rs |
Kibana¶
From:
Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.
Authentication¶
Starting in Security Onion 2.3.60, we support Elastic authentication via so-elastic-auth.
Dashboards¶
We’ve included the old 16.04 dashboards in case you performed an in-place upgrade and have any old 16.04 data. These dashboards are named with the
z16.04 prefix and will only show old 16.04 data. The new Security Onion 2 dashboards are all named with the
Security Onion prefix and they should be used for any new data going forward.
If you ever need to reload dashboards, you can run the following command on your manager:
so-kibana-config-load
If you try to modify a default dashboard, your change will get overwritten. Instead of modifying, copy the desired dashboard and edit the copy.
Pivoting¶
Kibana uses multiple hyperlinked fields to accelerate investigations and decision-making:
Transcript¶
When present, clicking the hyperlinked
_id field allows an analyst to pivot to full packet capture via our PCAP interface. You can usually find the
_id field as the rightmost column in the log panels at the bottom of the dashboards:
You can also find the
_id field by drilling into a row in the log panel.
Search Results¶
Search results in the dashboards and through Discover are limited to the first
100 results for a particular query. If you don’t feel like this is adequate after narrowing your search, you can adjust the value for
discover:sampleSize in Kibana by navigating to
Stack Management ->
Advanced Settings and changing the value. It may be best to change this value incrementally to see how it affects performance for your deployment.
Timestamps¶
By default, Kibana will display timestamps in the timezone of your local browser. If you would prefer timestamps in UTC, you can go to
Management –>
Advanced Settings and set
dateFormat:tz to
UTC.
Configuration¶
Kibana’s configuration can be found in
/opt/so/conf/kibana/. However, please keep in mind that most configuration is managed with Salt, so if you manually make any modifications in
/opt/so/conf/kibana/, they may be overwritten at the next salt update.
Starting in 2.3.90,
/opt/so/conf/kibana/etc/kibana.yml can be managed using the
kibana pillar placed in the manager pillar file located under
/opt/so/saltstack/local/pillar/minions/. The manager pillar file will end with either
*_manager.sls,
*_managersearch.sls,
*_standalone.sls, or
*_eval.sls depending on the manager type that was chosen during install.
- An example of a Kibana pillar may look as follows:
kibana: config: elasticsearch: requestTimeout: 120000 data: autocomplete: valueSuggestions: timeout: 2000 terminateAfter: 200000 logging: root: level: warn
Diagnostic Logging¶
Kibana logs to
/opt/so/log/kibana/kibana.log.
If you try to access Kibana and it says
Kibana server is not ready yet even after waiting a few minutes for it to fully initialize, then check
/opt/so/log/kibana/kibana.log. You may see something like:
Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_6 and restarting Kibana
If that’s the case, then you can do the following (replacing
.kibana_6 with the actual index name that was mentioned in the log):
curl -k -XDELETE sudo so-kibana-restart
If you then are able to login to Kibana but your dashboards don’t look right, you can reload them as follows:
so-kibana-config-load
Features¶
Starting in Security Onion 2.3.40, Elastic Features are enabled by default..
More Information¶
See also
For more information about Kibana, please see. | https://docs.securityonion.net/en/2.3/kibana.html | 2022-01-16T21:53:07 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['https://user-images.githubusercontent.com/1659467/95376132-9c077c00-08ae-11eb-9675-8bddb3d20719.png',
'https://user-images.githubusercontent.com/1659467/95376132-9c077c00-08ae-11eb-9675-8bddb3d20719.png'],
dtype=object)
array(['https://user-images.githubusercontent.com/1659467/95376213-c22d1c00-08ae-11eb-8ac0-73d7766d2d39.png',
'https://user-images.githubusercontent.com/1659467/95376213-c22d1c00-08ae-11eb-8ac0-73d7766d2d39.png'],
dtype=object) ] | docs.securityonion.net |
This is the NewsPaper Lite theme documentation page. We have tried our best to provide proper, simple and clear instruction. If you think it can be improved, please let us know. We will update as per your feedback too.
Please click on the left sidebar’s title to navigate the section. | https://docs.themecentury.com/getting-started/ | 2022-01-16T22:43:02 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.themecentury.com |
You can generate a self-signed certificate for a vRealize Log Insight Windows or Linux agent by using the OpenSSL tool.
Prerequisites
Procedure
- Create a certificate folder in the path mentioned for ssl_ca_path in the liagent.ini file.
- Open the Command Prompt and run the following command.
/etc/pki/tls/certs/ > openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out ca.pem
OpenSSL prompts you to supply certificate properties, including country, organization, and so on.
Results
Two files are created, key.pem and ca.pem.
- key.pem is the private key.
- ca.pem is a certificate signed by
key.pem. | https://docs.vmware.com/en/VMware-vRealize-Log-Insight-Cloud/services/User-Guide/GUID-E4085404-F74A-49CF-BAE6-D64018582634.html | 2022-01-16T23:14:30 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.vmware.com |
Scale and limits
This document tracks current scale and limitations known in OSM.
Considerations
The scale limits documented here have to be put in light of current architecture. Current architecture relies on a global broadcast mechanism that does not account for proxy configuration deltas, therefore all proxy configurations are computed and pushed upon any change.
Testing and measures
We currently hold a single test which attempts to scale infinitely a topology subset, test proper traffic configuration between the new pods/services being deployed in the iteration, and stop if any failure is seen. It is also acknowledged that some of the scale constraints need to be addressed before it even makes sense to proceed with any additional scale testing, hence the lack of additional test scenarios.
Test
The test was run in different OSM form factors, factoring in different amounts of RAM/CPU, to better qualify potential limits in case any of those were to be a constraint upon deployment.
Test details:
- Commit-id: 4381544908261e135974bb3ea9ff6d46be8dbd56 (5/13/2021)
- 10 Node (Kubernetes v1.20, nodes: 4vcpu 16Gb)
- Envoy proxy log level Error
- 2048 bitsize RSA keys
- OSM controller
- Log level Error
- Using default max 1.5 CPU
- Using Max Memory 1GB
- OSM Injector
- Log level Error
- Using default max 0.5 CPU
- Using default max 64MB Memory
- HTTP debug server disabled on OSM
- Test topology deploys each iteration:
- 2 clients
- 5 replicaset per client
- 5 server services
- 2 replicaset per server service
- 1 TrafficSplit between the 5 server services (10 pods backed)
- Total of 20 pods each iteration. 10 client pods will REST GET the Traffic split.
- Correctness is ensured. It is checked that all TrafficSplit server members are eventually reached.
- Test timeout for network correctness: 150 seconds
Assessment and Limits
Note: Assuming proxy per pod, so pod/proxies can be used interchangeably.
Test failed at around 1200 pods, with kubernetes unable to bring up in time a pod in the mesh.
CPU
OSM Controller
- 1vcpu per 700 proxies, giving more cpu does not scale linearly (m<1) with current architecture; horizontal scaling should be considered to increase supported mesh size.
- Network settlment times vary from <10s with no pods to +2min at 1000 pods.
ADS Performance
- With the recent ADS pipelining changes in OSM, it is ensured not too many ADS updates are scheduled for busy work at the same time, ensuring low times as granted by the available CPU. This yields more deterministic results, with all updates always under sub 0.10s window, and serialization of number of events as opposed to arbitrary scheduling from Golang.
- The number of XDS updates over the test grows additively with any current number of onboarded proxies, each iteration occupying more time until basically iterations overlap, given the rate at which OSM can compute ADS updates.
OSM Injector
- Injector can handle onboarding 20 pods concurrently per 0.5cpu, with rather stable times to create the 2048-bit certificates and webhook handling staying regularly below 5s, with some outliers in the 5-10s and in very limited occasions in the 10-20s (and probably closer to 10).
- Since 99% of the webhook handling time happens in the RSA certificate creation context, injector should scale rather linearly with added vcpu.
Prometheus
- Our control plane qualification testing has disabled envoy scraping for the time being.
- Scraping the control plane alone, requires around 0.25vcpu per 1000 proxies (given number of metrics scraped and scrape interval used), see in orange Fig 2.
Memory
OSM Controller
Memory per pod/envoy onboarded in the network is calculated after the initial snapshot with nothing onboarded on the mesh is seen to take into account standalone memory used by OSM.
- Memory (RSS) in controller: 600~800KB per proxy
OSM Injector
OSM injector doesn’t store any intermediate state per pod, so it has no immediate memory scalability constraints at this scale order of magnitude.
Prometheus
- Our control plane qualification testing has disabled envoy scraping for the time being.
- Prometheus shows a memory increase per proxy of about ~0.7MB per proxy to handle the metric listed by OSM metrics.
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve. | https://release-v0-9.docs.openservicemesh.io/docs/concepts_features/scale/ | 2022-01-16T21:50:06 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['../images/scale/prox.png', None], dtype=object)
array(['../images/scale/cpu.png', None], dtype=object)
array(['../images/scale/histogram.png', None], dtype=object)
array(['../images/scale/sidecar-inj.png', None], dtype=object)
array(['../images/scale/mem.png', None], dtype=object)] | release-v0-9.docs.openservicemesh.io |
Configuration files¶
Roddy currently supports two different types of configuration files: - XML based which allows to use all configuration features - Bash based which only allows a reduced set of configuration features
Normally, Roddy workflows and projects are configured with XML files. This document will give you all the details you need to know about those special files. Don’t be afraid of messing up things in configuration files. Roddy checks at least a part (not everything) of the files, when they get loaded and will inform you about structural errors as good as possible.
Types of files¶
Roddy configuration files exist in three flavours:
- Project configuration files
- Workflow or analysis configuration files
- Generic configuration files.
All file types may contain the same content type though analysis configuration files will normally look different than e.g. project configuration files. The main difference between the different types is their position in the configuration inheritance tree, their filename and their header.
Filenames¶
Roddy imposes some filename conventions to identify XML files when they are loaded from disk:
- Project configuration files look like projects*[yourfilename]*.xml
- Workflow configuration files use the pattern analysis*[yourfilename]*.xml
Common configuration files do not use any pattern. You can name them like you want, except for the above patterns.
Inheritance structure¶
Configurations and configuration files can be linked in several ways:
- Subconfigurations extend their parent configuration(s)
- Configuration files can import other configuration, this is only possible on the top-level of a configuration file, a subconfiguration cannot do this
- Analysis configuration files can be imported as an analysis import by a project configuration or subconfiguration
- An analysis can be imported by a project but not vice-versa | https://roddy-documentation.readthedocs.io/en/stable/config/configurationFiles.html | 2022-01-16T21:19:02 | CC-MAIN-2022-05 | 1642320300244.42 | [] | roddy-documentation.readthedocs.io |
Introduction
Soveren discovers personally identifiable information, also known as PII or personal data, in structured API flows. Throughout this documentation, we will be using PII and personal data interchangeably.
Soveren monitors and parses traffic between the services, identifying personal information along with its sensitivity, with sensitivity graded in accordance with the consequences that might arise if that information was leaked or used inappropriately. Preconfigured dashboards provide a view into privacy incidents and risks related to PII so that engineering and security leaders can make informed security and privacy decisions.
How Soveren works
Soveren has a hybrid architecture:
- Soveren gateway is a pre-packaged container installed in your perimeter. It parses structured HTTP JSON traffic, gathers metadata about PII, and sends the metadata to the cloud.
- Soveren сloud is a SaaS managed by Soveren. It provides dashboards to gain visibility into different PII-related statistical data and metrics.
Soveren gateway
Soveren gateway is a pre-packaged container deployed on premise and configured to analyze the relevant part of inter-service HTTP API requests and responses that have the
application/json content type.
The gateway processes them asynchronously and gathers metadata about PII from the payloads.
The collected metadata is sent to Soveren сloud. It contains information about how the payload was structured (what fields), which PII types were detected, and which services were involved in the communication. No part of the actual payload contents is included in the metadata.
Technically, the gateway consists of a standard proxy (a Traefik fork), messaging system (Apache Kafka), and detection component, which discovers PII based on custom machine learning algorithms.
As shown in the diagram below, the gateway can be deployed at different places in your perimeter and can receive traffic from services that are deployed on any platform.
Soveren сloud
Soveren сloud is a SaaS managed by Soveren. It offers a set of dashboards that provide various views into the metadata collected by Soveren gateway. That includes analytics and stats on which PIIs have been observed and how sensitive they are, what services are involved, and what the potential limitations in the API structure are from the privacy standpoint. | https://docs.soveren.io/en/stable/ | 2022-01-16T21:14:14 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['img/dashboards/pii-types-overview-cropped.png',
'PII dashboard PII dashboard'], dtype=object)
array(['img/architecture/architecture-concept.jpg',
'Soveren architecture simplified Soveren architecture simplified'],
dtype=object)
array(['img/architecture/integration-4.png',
'Integration options Integration options'], dtype=object)] | docs.soveren.io |
It is very easy to use FCKeditor in your ASP web pages. All the integration files are available in the official distributed package. Just follow these steps.
Integration step by step
Step 1
Suppose that the editor is installed in the /fckeditor/ path of your web site. The first thing to do is to include the "ASP Integration Module" file in the top of your page, just like this:
<!-- #INCLUDE file="/fckeditor/fckeditor.asp" -->
Step 2
Now FCKeditor is available and ready to use. Just insert the following code in your page to create an instance of the editor (usually inside a <form> tag):
<%" %>
In the above example, BasePath is set to the URL path to the FCKeditor installation folder. The Create method receives the "FCKeditor1" parameter, which is the name used to post the editor data on forms.
Step 3
The editor is now ready to be used. Just open the page in your browser to see it at work.
Sample Code
The complete sample - find the full sample in your Samples directory.
<!-- #INCLUDE </head> <body> <form action="sampleposteddata.asp" method="post" target="_blank"> <%" %> <br /> <input type="submit" value="Submit" /> </form> </body> </html>
Handling the posted data
The editor instance just created will behave like a normal <textarea> field in a form. To retrieve its value you can do something like this:
<% Dim sValue sValue = Request.Form( "FCKeditor1" ) %>
In the above example, "FCKeditor1" is the parameter passed to the Create method when creating the editor instance.
Additional information
- You can find some samples on how to use the editor in the "_samples/asp" directory of the distributed package. | http://docs.cksource.com/FCKeditor_2.x/Developers_Guide/Integration/ASP | 2017-07-20T14:42:38 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.cksource.com |
Drawings and Layers
A very important concept to understand in Harmony are drawings, drawing elements, and layers. Layers are also referred to as columns in traditional animation. A drawing element is a directory containing multiple drawings and is linked to a column in the Xsheet view and a layer in the Timeline view. The layer and column are generally named the same way as the drawing element (folder). Note that there is a slight variation between a drawing element and a layer.. A layer will be represented as a module (node) in the Network view.
In traditional and paperless animation, a drawing element or layer can be a character, for example, level B. In cut-out animation, a drawing element can be the hand layer.
When you add a column to your scene, a module and a folder (element folder) are also added. By default, the element folder and layer (module) are named the same way as the column. As explained above, the element folder's purpose is to contain all the drawings related to this column. For example, in cut-out animation, a character can have many mouths available. All these mouth drawings will be contained in this folder, even if they are not currently exposed in the scene. In other words, there is always a drawing container hooked to a layer or column, unless that layer is linked to another drawing element (clone).
You can find the drawing element folders in your scene's subdirectory called
elements.
You can add drawing elements from the Timeline view, Xsheet view and the top menu .
In order to understand what happens when you duplicate a drawing, extend an exposure, create cycles or delete a drawing, it is important to know how a layer works.
Each layer is linked to a column and that column.
. | http://docs.toonboom.com/help/harmony-11/paint/Content/_CORE/_Workflow/006_Organization_File_Structure/004_H1_Drawing_and_Layer.html | 2017-07-20T14:24:41 | CC-MAIN-2017-30 | 1500549423222.65 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stage.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/draw.png',
'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/sketch.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Harmony_Basic_Concepts/HAR11_module_column_element_folder_concept.png',
'Toon Boom Harmony Drawing, Column and Drawing Element Concept Illustration Toon Boom Harmony Drawing, Column and Drawing Element Concept Illustration'],
dtype=object)
array(['../../../Resources/Images/HAR/Stage/Harmony_Basic_Concepts/HAR11_Drawing_Concept.png',
'Toon Boom Harmony Drawing and Drawing Elements Concept Toon Boom Harmony Drawing and Drawing Elements Concept'],
dtype=object)
array(['../../../Resources/Images/HAR/Stage/Layers/an_cycle_drawings.png',
None], dtype=object) ] | docs.toonboom.com |
This resource is freely shared by the author whose only request is that she received acknowledgment during all quotations from and/or use of this resource
Скачать
224.09 Kb.
Название
This resource is freely shared by the author whose only request is that she received acknowledgment during all quotations from and/or use of this resource
страница
1/6
Дата конвертации
21.04.2013
Размер
224.09 Kb.
Тип
Документы
1
2
3
4
5
6
IIK by Stacie Roblin
Exploring the Night Sky
Indigenous Inquiry Kit
Created and Written by:
Stacie Roblin
[email protected]
Brandon University
Fall 2010
This resource is freely shared by the author whose only request is that she received acknowledgment during all quotations from and/or use of this resource.
Table of Contents
Section I: Overview
Page 3
Rationale Page 4
Outcomes Page 5
Annotated Bibliography
Literature Resources Page 7
Non-literature Resources Page 12
Websites Page 13
Educational Documents and Curricula Page 14
Section II: Book Critiques
Page 15
Books Used (reviewed)
Review #1 Page 16
Her Seven Brothers
Review #2 Page 19
The Missing Sun
Review #3 Page 22
Keepers of the Night: Nocturnal Stories and Nocturnal
Activities for Children
Review #4 Page 25
Thirteen Moons on Turtles Back: A Native American Year of Moons
Review #5 Page 29
Coyote and the Sky: How the Sun, Moon, and Stars Began
Review #6 Page 32
Star Tales: North American Indian Stories About the Stars
Review #7 Page 36
Star Boy
Review #8 Page 39
Living the Sky: The Cosmos of the American Indian
Additional Books Page 43
Section III: Lesson Plans
Page 48
Lesson #1: The Sun (Science) Page 49
Lesson #2: Thirteen Moons (ELA) Page 52
Lesson #3: Aboriginal Myths and Legends (ELA) Page 55
Lesson #4: The Moon and Eclipses (Science) Page 58
Section IV: Resources
Page 61
Section I:
Overview
Rationale
A large part of Manitoba’s history involves the Native North American people who lived on this land before we did; many children of these people still live here today. It is important for teachers to integrate aspects of Aboriginal culture into the curricula we teach; not only in social studies but in all subject areas of the curriculum.
When teachers integrate aspects of Aboriginal culture into the different subject areas, it not only teaches students about the Aboriginal culture but it also helps to engage students, especially those who are Aboriginal. The middle years are an important time for children. At this age, students are searching for their own identities and trying to find out who they are and where they belong in life. It is important for teachers to use topics and resources that are relevant to students and that will help them discover who they are as a person.
I have have created this inquiry kit as one way of integrating Aboriginal culture into my future classroom. This kit revolves around two broad aspects of Aboriginal culture:
The importance of the night sky and other celestial objects in the everyday lives of Native North American people.
The importance of the oral tradition (storytelling and listening) to Aboriginal culture.
Many different aspects of the night sky have played a very important role in the daily lives
of many different Aboriginal groups. The night sky is relatively predictable from year to year and for this reason, different celestial objects have been used during travel, to predict weather, and to describe events that occur each year. Much of this has been passed along through stories from generation to generation.
This kit has been designed to be used in a grade 6 thematic inquiry unit about our solar
system (Grade 6, Cluster 4: Exploring the Solar System), however, it may be used at many different grade levels. This thematic unit can be used to meet outcomes in science, social studies, English Language Arts, and art as well as introduce and teach a variety of different Aboriginal perspectives.
Throughout this unit, students will explore a variety of different Native North American myths and legends about different aspects of the night sky, as well as learn about the solar system that we live in. In this kit, I have included a variety of children’s stories. Many of these books tell wonderful stories and are beautifully illustrated. Even though children at the grade 6 level do not normally read children’s books, these books work very well to introduce different topics and are also great for students at a variety of different reading levels.
This inquiry kit uses a variety of different resources (both literature and non-literature) and I hope that it will help students explore the night sky around them and learn the importance of the oral tradition to Aboriginal people.
Exploring the Night Sky
Thematic Inquiry Unit Outcomes
This kit has been designed as a thematic unit called Exploring the Night Sky designed for grade 6 students. This unit incorporates outcomes from Cluster 4: The Solar System from the Manitoba grade 6 science curriculum, as well as outcomes from the grade 6 English Language Arts curriculum and the grade 6 social studies curriculum. I have also included some of the Aboriginal perspectives that students may gain through this inquiry unit. These perspectives are from the
Integrating Aboriginal Perspectives into Curricula
document.
Many of the books and items in this Indigenous Inquiry Kit can be used in many different areas across the curriculum and at many different grade levels. The following outcomes are some ways this kit may be used.
Aboriginal Perspectives:
Students will....
Demonstrate an understanding of the importance of oral tradition in Aboriginal cultures.
Demonstrate awareness of traditional Aboriginal practices associated with the seasonal cycles.
Demonstrate awareness of the special significance of celestial objects for the Aboriginal peoples of North America.
Demonstrate understanding of the importance of listening in Aboriginal cultures.
Demonstrate awareness that Aboriginal stories often have specific teachings or purposes.
Demonstrate willingness to retell Aboriginal stories
Demonstrate awareness that traditional Aboriginal stories express the uniqueness of each Aboriginal culture.
Describe three purposes of Aboriginal stories.
Curriculum Outcomes
Annotated Bibliography
(Literature Resources)
Ahenakew, Freda. (Illus. Sherry Farrell Racette). (1999).
Wisahkecahk flies to the moon.
Winnipeg, MB: Pemmican Publishing Inc.
A story about a young boy who decides he would like to fly to the moon. He grabs on to the tail of a crane who flies him to the moon where he sits and admires the beautiful scenery surrounding him (eg. stars, the Earth). As he sits there, the moon starts to change and get smaller. The moon eventually disappears and Wisahkecahk falls back down to Earth and lands in a muskeg. This is a story about the creation of muskegs and it also explains how the crane gets its long legs. This story is told in both English and Cree.
Bourdeau Waboose, Jan. (Illus. Brian Deines). (2001).
Sky sisters.
Toronto, ON: Kids Can Press
Ltd.
This is a story about two sisters who journey through the snow to reach Coyote Hill where they dance in the snow and wait for the SkySpirits to come. While they wait, the girls look at Grandmother Moon and the stars surrounding her. Soon the SkySpirits (northern lights) come and the girls dance beneath them as they watch the lights dance across the sky.
Bruchac, James & Bruchac, Joseph. (Illus. Stefano Vitale). (2008).
The girl who helped Thunder
and other Native American folktales.
New York, NY: Sterling Publishing Co. Inc.
This is a collection of Native American folktales from different regions and tribes of Native North Americans. This collection of stories tells many tales about different aspects of the natural world. There are three stories in this book that relate to the night sky: The Sister and her Seven Brothers, Why Moon Has One Eye, and How Raven Brought Back the Sun.
Bruchac, Joseph, & London, Jonathan. (Illus. Thomas Locker). (1992).
Thirteen moons on
Turtles back: A Native American year of moons.
New York, NY: Paperstar.
In this book, Joseph Bruchac and Jonathan Landon have told the stories of one moon from each of thirteen different groups of Native Americans. These moons include Moon of Popping Trees (Northern Cheyenne), Baby Bear Moon (Potawatomi), Maple Sugar Moon (Anishinabe), Frog Moon (Cree), Budding Moon (Huron), Strawberry Moon (Seneca), Moon When Acorns Appear (Pomo), Moon of Wild Rice (Menominee), Moose-Calling Moon (Micmac), Moon of Falling Leaves (Cherokee), Moon When Deer Drop Their Horns (Winnebago), Moon When Wolves Run Together (Lakota Sioux), and Big Moon (Abenaki). Each different moon has it’s own poetic story to go along with it that describes why they have named that specific moon.
Bruchac, Joseph & Locker, Thomas. (1998).
The Earth under Sky Bear’s feet: Native American
poems of the land.
Toronto, ON: Paperstar Publishing.
This is a collection of Native American poems about the land. These poems are about everything that Sky Bear can see from the sky. The first poem in this book is called Sky Bear, which is a poem about the Big Dipper. Each poem is accompanied by a beautiful illustration.
Bruchac, Joseph & Ross, Gayle. (1995),
The story of the Milky Way: A Cherokee tale.
New York, Ny:
Dial Books.
This retelling of a Cherokee folktale presesnts an explanation for the origin of the Milky Way. When a great spirit dog begins to rob cornmeal belonging to an old couple, the wise Beloved Woman devises a plan for the whole village to frighten the dog away for good. Running away across the sky, the dog leaves a trail of dropped cornmeal, each grain of which becomes a star. Only in the final passage does the reader learn that the Cherokee name for the Milky Way means "the place where the dog ran."
Bushey, Jeanne. (Illus. Vladyana Krykorka). (2004).
Orphans in the sky.
Calgary, AB: Red Deer
This is a story about a brother and sister who were forgotten when their people move to a new camp. They wait for them to come back, but they never do. The children do not know what they are going to do; they can’t survive all on their own. This story tells of the journey the children make to go and live amongst the stars. They dance and play in the sky and are known as Sister Lightning and Brother Thunder.
Caduto, Michael. J. & Bruchac, Joseph. (Illus. John Kahionhes Fadden and Carol Wood).
(1997).
Keepers of the Earth: Native American stories and environmental activities for children.
Golden, Colorado: Fulcrum Publishing.
This is a book filled with different Native American stories about the environment as well as activities to accompany each story. The book is divided into 9 different sections: Creation, Earth, Wind and Weather, Water, Sky, Seasons, Plants and Animals, Life, Death and Spirit, and Unity of Earth.
Caduto, Michael. J. & Bruchac, Joseph. (Illus. David Kanietakeron Fadden). (2001).
Keepers of
the night: Native stories and nocturnal activities for children.
Calgary, AB: Fifth House Publishers.
This book is a collection of aboriginal stories about the things that happen at night as well as activities to accompany these stories. In this book, there is a chapter called Oot-Kwah-Tah, The Seven Star Dancers, which contains this story as well as one called
The Creation of the Moon
. The discussion ideas and activities relate the night sky to the Native American culture. This book also provides a lot of information about the constellations and the moon including a list of the constellations and when they are visible in specific areas of the world.
Eyvindson, Peter. (Illus. Rhian Brynjolson). (1993).
The missing sun
. Winnipeg, MB: Pemmican
Publications.
This is a children’s story about a young girl who has just moved from Regina, Saskatchewan to Inuvik, Northwest Territories. The girl is told that the sun disappears in the winter but she does not believe it is true until it actually happens for the first time while she is living there. Her Aboriginal friends tell her the story of how Raven stole the sun, but Emily is reluctant to believe this because her mother had explained to her that the reason that the sun disappears every year is because of the Earth’s rotation and tilt.
At first, Emily does not mind having no sun but soon she starts to miss it and asks the raven to bring it back to them. Eventually, the sun does come back, but according to to her friend, it is a brand new (and much brighter sun). It is not the sun that Raven stole from them.
Garcia, Emmett, “Shkeme.” (Illus. Victoria Pringle). (2006).
Coyote and the sky: How the sun,
moon, and stars began.
Albuquerque, NM: University of New Mexico Press.
This is a story about the Animal People’s journey from the Third World (their world) into the fourth world (our world). The only animal who was not allowed to come was Coyote because he is known for being a trickster. When they arrived in the Fourth World, there was no light so they returned to the Third World for help. The Animal People brought back burning hot coals which they flung into the sky and these became the sun. But then night came and it was completely dark again. Again, they returned for help.
This time, they returned with many more coals which they, again, flung into the sky. This became our moon but this still was not bright enough for them, so they returned for more coals again. This time, Coyote snuck into the Fourth World with them. Instead of immediately throwing these coals into the sky, they drew pictures with them for awhile. Before they had the coals bundled again, Coyote snuck up behind them and flung the coals into the sky. These became our stars and our constellations.
Goble, Paul. (1988).
Her Seven Brothers
. New York: Aladdin Paperbacks.
This is a story about a young Cheyenne girl who did not have any brothers or sisters. This girl has a vision of seven brothers living in the north country with no sisters. She decides to make a beautiful shirt and pair of moccasins for each of the seven brothers as an offering for them to accept her as their sister when she finds them. The brothers were very proud to have this girl as their sister and became very protective of her as time went on.
One day, the chief of the Buffalo Nation comes to the tipi of the seven brothers and demands that they give him their sister or else he will kill them all. They refuse and the chief returns with all of the Buffalo People to kill the brothers and take the sister. The brothers are not sure what to do but the littlest brother shoots an arrow into the ground and a pine tree appears. They all climb up it and continue to shoot arrows up and climb until they are up amongst the stars, where they will remain forever.
1
2
3
4
5
6
Добавить в свой блог или на сайт
Похожие:
Professional interests eLearning in Human Resource Development International Human Resource Development education
As he freely received instruction atld inspiration from many others, he now passes it on to others. If you can use any material in this book, preach it brother! To the glory of God and salva- tion of souls! 5
A resource Guide for Neuroscience
A resource Management Bulletin
A resource Management Bulletin
A resource Management Bulletin
A resource Management Bulletin
A resource Management Bulletin
A resource • An Inspiration • a network
A resource Management Bulletin | http://lib.convdocs.org/docs/index-233763.html | 2017-07-20T14:40:59 | CC-MAIN-2017-30 | 1500549423222.65 | [] | lib.convdocs.org |
UDN
Search public documentation:
SpeedTree
Speed TreeDocument Summary: How to use SpeedTree Actors in the editor, This document has been updated with Speedtree5 information. Document Changelog: Created by Daniel Wright.
Broad ConceptsThere are 3 components to Speedtree 5.0: the SpeedTree Modeler, SpeedTree Compiler and the integration into the UE3 game/editor. In the modeler you iterate on the tree's shape, wind and collision. The modeler renders using an external application and may not match up with what you get in UE3. The modeler exports .SPM files, which can only be read by the compiler. The compiler takes the .SPM file and generates texture atlases, vertex data and other information about the tree. It saves everything but the textures into a .SRT file, which is what you import into UE3. For artists: Be sure to read the documentation from IDV at:
%speedtree apps folder%/Content/Tools/5.0/SpeedTree_Applications_v5.0_Full/Documentation For programmers: Be sure to read the Speedtree SDK documentation from IDV.
AtlasesSpeedtree likes to combine all textures onto an atlas. This works best for our games if we only combine the billboard images onto an atlas. This allows us to share leaf and trunk textures across multiple trees to save memory. If you want you can create your own atlas in Photoshop of bark and leaves that can be shared across multiple trees, just skip the Atlas generation step in the compiler.
LeavesSpeedtrees support using billboards as leaves but this isn't very effective, it looks much better to define a mesh as a group of leaves or a branch.
BranchesBranches look better and run better if they are meshes rather than the frond type mesh that uses masked materials. Using the meshes reduces overdraw and can be optimized out at distances.
CompilingCompiling is fairly straightforward. Load your speedtree. Don't create a texture atlas but do create a billboard atlas. Select finish. In the left window unselect Texture Copy otherwise it'll always copy textures used on the tree to the export directory. Select a valid file type for the atlas file types (usually .tga). You can then select Session->Compile Now.
LODsSpeedtree 5.0 has two or more LOD's of geometry and then a final LOD which is just a billboard. The high detail geometry LOD morphs into the low detail geometry LOD based on distance and the parameters you setup on the SpeedtreeActor in the editor. Then the billboard screendoor fades in, and the low detail geometry LOD fades out.
LOD setup in the ModelerUnder 'Object Properties - Tree', check 'Enabled' under the 'Level of Detail' category. Change the 'Count' to the value you need. Switch 'Preview style' to 'Use near and far', and either use mouse wheel or click left and middle mouse buttons and drag to zoom in and out. You can change the LOD distances under this category to preview geometry LOD transitions, but keep in mind that you can't view billboard transitions, and the distances you set here will not be used in UE3.
LOD setup in the CompilerBefore you open your tree in the compiler make sure there are no errors when loading it in the modeler. The compiler will not tell you if there are any errors, but they will mess up your export in difficult to detect ways. For example if your leaf meshes are missing when you compile, billboards will be exported off-center and partly clipped. It will notify you in the Output window that a mesh failed to load. When you create a new session and select your tree, use these settings on the 'Compilation Settings' dialog:
LOD setup in UE3Import the .SRT file from the compiler. Next, import BillboardAtlas_Diffuse.tga and when you get to the Import dialog, check 'Dither Mip-maps alpha'. This will create noise in the alpha channel which will be used to screendoor fade the billboards, and do the same thing for TextureAtlas_Diffuse.tga. If you want the branches to dither fade, do the same thing when importing the branches texture. For bushes where the branches aren't noticeable from a distance you can skip the dithering and use an opaque material which will be more efficient. Remember to pick TC_Normalmap for compression when you import BillboardAtlas_Normal. Create a material for the leaves. Here's a minimal setup for dither fading during LOD to work:
| https://docs.unrealengine.com/udk/Three/SpeedTree.html | 2017-07-20T14:43:30 | CC-MAIN-2017-30 | 1500549423222.65 | [array(['rsrc/Three/SpeedTree/CompilationSettings.jpg',
'CompilationSettings.jpg'], dtype=object)
array(['rsrc/Three/SpeedTree/BillboardAtlasArtifact.jpg',
'BillboardAtlasArtifact.jpg'], dtype=object)
array(['rsrc/Three/SpeedTree/BranchMaterial.jpg', 'BranchMaterial.jpg'],
dtype=object) ] | docs.unrealengine.com |
.
Date: July 26, 2016
Version Number: 1.0.3.76
ORION-892 - Futures should have up/down/child methods to navigate component or element hierarchy
ORION-905 - ST.future.Item should be able to return component futures (via asButton, etc) for lists of components in Modern toolkit
ORION-906 - Grid and dataView futures should provide select/deselect/selected/deselected methods
Total: 4
ORION-1013 - Can't add Generic Web Driver pool
ORION-226 - Browser pool editor does not produce a proper BrowserStack configuration file
ORION-932 - Properly display browser names, versions and platforms for Sauce Labs and BrowserStack capabilities files
ORION-916 - Go to Declaration fails to open editor tab
ORION-927 - Code completion failing because of incorrect cursor position
ORION-555 - Special keys are recorded but not played back
ORION-813 - Click events are not properly translated on touch device browsers
ORION-843 - Link to proxy server settings is missing in Getting Started sequence
ORION-910 - License activation does not properly use system-defined proxy (must manually enter)
ORION-690 - Connections fail using self-signed certifcates or custom AD CA
ORION-937 - "Cannot read property 'isVersion' of undefined" error in Studio when app launched from UIWebView (PhoneGap / Cordova)
ORION-1008 - Infinity loading mark on description into test runner, when xit method is used.
ORION-717 - Sencha Studio stuck on Loading Tests
ORION-970 - New test suites are not loaded by the browser if added after the Runner is launched
ORION-978 - Test runner should launch local browsers using loopback address (127.0.0.1) not external address
ORION-1020 - stc throwing "Error: Couldn't connect to selenium server"
ORION-833 - STC throws error after launching on OS X.
ORION-955 - Wrong stc version
Total: 21
Total: 3
Date: June 14, 2016
Version Number: 1.0.2.151
ORION-740 - ST.future.DataView should support modern toolkit
ORION-767 - ST.future.CheckBox should support modern toolkit
ORION-776 - ST.future.Grid/Row/Cell should support modern toolkit
ORION-780 - ST.future.Button should support modern toolkit
ORION-782 - Should provide ST.future.Select class for modern toolkit
ORION-783 - ST.future.Component should support modern toolkit
ORION-784 - ST.future.Field should support modern toolkit
ORION-785 - ST.future.Item should support modern toolkit
ORION-788 - ST.future.TextField should support modern toolkit
ORION-632 - Context menu for files should be common to all file nodes
ORION-864 - Studio should allow user to provide alternate proxy server settings vs those defined in the OS
Total: 15
ORION-651 - ST.textfield() should be able to type() empty string, but throws error.
ORION-810 - ST.Version getRelease() method throw exception
ORION-816 - ST.Element getComponent() fails for Ext JS 4/Sencha Touch
ORION-817 - ST.item.blurred() is not working on modern
ORION-834 - Tests are infinity running after call ST.cell/cellAt/cellBy/cellWith.
ORION-631 - Copies of shellwrapper.sh installed by EA/Beta breaks Sencha Test builds
ORION-659 - Can execute App watch when Cmd Integration is disabled
ORION-793 - App watch is not checked if service is running
ORION-716 - Shortcut name incorrect on Windows - "Sencha Sencha Test"
ORION-857 - Installer doesn't install ST to folder Applications on OS X
ORION-861 - Unable to install Sencha Studio into 'Program Files' directory.
ORION-894 - Destination not writable error at the end of installation
ORION-237 - Inconsistent results of demo workspace
ORION-340 - Studio ignores custom message in matchers.
ORION-341 - Custom matchers without result message crashes test if result is false
ORION-703 - Failed expectation message not meaningful when using Jasmine Spies
ORION-728 - Futures and event player do not work from beforeAll or afterAll
ORION-736 - Calling Jasmine's it() with no test fn does not treat test as disabled
ORION-737 - Expectations in beforeAll functions are not reported
ORION-597 - Prompts for "Sencha Account" should read "Sencha Forum Account" to avoid confusion
ORION-847 - Impossible to activate Trial via provided Activation code.
ORION-875 - Activation error "Cannot activate license: License is corrupted"
ORION-878 - Activation trial is possible with any email
ORION-718 - Sencha Studio stuck on development build
ORION-752 - Page with a "head" tag containing attributes will cause tests and event recorder to fail
ORION-792 - Missing files for unittest in 4.x frameworks
ORION-770 - Default scenario path for all new scenarios created
ORION-802 - Renaming scenario directory to an existing directory results in ENOTEMPTY
ORION-848 - 'cacheBuster' error is observed after entering Location(URL).
ORION-229 - Error if last opened workspace is missing
ORION-230 - Creating a workspace / application from Studio displays red alert badges
ORION-424 - Unable to navigate if too many tabs are open (no overflow handling)
ORION-478 - Tabs cannot be closed until the content is edited in some way
ORION-511 - Studio task has undefined name
ORION-585 - Manual file system updates only work for one new file at a time
ORION-590 - Test summary incorrectly indicates a passed result when only one test case fails
ORION-602 - Socket hang up at createHangUpError when opening applications over HTTPS
ORION-672 - Studio becomes unusable when entering location URL without two slashes or scheme
ORION-674 - Swiping between 2 screenshots isn't working.
ORION-696 - Cant create workspace or open existing one on Windows
ORION-700 - Event recorder doesn't work after creating workspace
ORION-701 - Hidden files and folders should not be displayed
ORION-719 - Studio displays misleading message when connecting to Archive Server with no test runs
ORION-743 - Test page gets wrongly cached on Safari leading to JSON errors and perpetual reload
ORION-757 - Unclear error message if parking lot port is in use
ORION-760 - User is not navigated from test runner to project settings when no location URL is configured
ORION-839 - Sencha Studio is not properly closed on Linux and Windows
ORION-215 - Test describe blocks are run too early - cannot use Ext or app code in them
ORION-490 - Global variables leaks are sometimes wrongly reported due to race condition
ORION-535 - Error when connecting to self-signed cert endpoint
ORION-598 - Cache disable setting in test runner does not affect corresponding Ext JS loader setting
ORION-626 - Sencha Test throws error running tests when Ripple plugin in Chrome is installed
ORION-638 - Package test configuration requires user to specify Location (URL)
ORION-771 - TestRunner does not show parked remote browsers with exactly matching userAgent
ORION-772 - Player timeouts from one spec are sometimes reported under subsequent specs
ORION-620 - Test Runner displays non-JS files and displays wrong results for them in some cases
ORION-641 - Changing scenario's location value has no effect in certain cases
ORION-643 - When selecting an individual test or a group of tests, the master checkbox for the scenario should be unchecked
ORION-653 - Newly added specs are labelled as passed even though they have never run
ORION-738 - Test runner does not always reuse locally parked agents
ORION-791 - STC exits with error "Cannot find module 'process'"
ORION-891 - Unable to run STC, when path to the STC contains 'space'.
Total: 68
Total: 3
Date: March 2, 2016
Version Number: 1.0.1.38
ORION-571 - Should provide ST.future.Element#content() method to wait for exact markup
ORION-572 - Should provide ST.future.Element#text() method to wait for exact textContent
Total: 5
ORION-575 - ST.future.Element text state methods do not work with input element values
ORION-578 - Calling ST future methods inside and() callback does not properly delay the callback's completion
ORION-405 - Studio repeatedly creates parking pages on Linux, Win
ORION-539 - Recorder stuck on Launching
ORION-45 - Test runner shows different result counts for folders for some browsers
ORION-509 - Top-level disabled suites do not appear in the test runner UI.
ORION-97 - Update can be completed when instance of Orion is running
ORION-217 - Many tree node icons do not propagate to their associated tabs
ORION-459 - Two indexing processes can run at the same time
ORION-502 - Uncaught TypeError: j.record.drop appears in ST
ORION-544 - Settings window does not layout correctly after clearing logs
ORION-554 - New added file has folder icon instead of file icon
ORION-557 - Using context menu to create Jasmine suite often instead creates a Scenario node (only on Windows)
ORION-559 - Allowed global variables are not allowed for second and next test run until restart Studio
ORION-565 - Sencha Studio does not properly report failed archive downloads and leaves progress bar
ORION-574 - Navigation tree gains Scenario nodes for files created outside of Studio (found by file system change notification)
ORION-355 - Test selection is reset when run is stopped
ORION-358 - Test results are additive
ORION-392 - Test result summary is not properly reset
ORION-457 - Studio gets stuck in "Loading Tests" state
ORION-521 - Re-running tests will cause opening several browser tabs/windows
ORION-538 - No visual indication when filters are enabled in the test runner
ORION-551 - "Start App Watch?" popup is shown every time a new browser is selected
Total: 29
Date: February 16, 2016
Version Number: 1.0.0
The following bugs were reported by the Sencha Community - a big thanks to everyone who tried out the Beta!
ORION-247 - Unable to configure archive server
ORION-255 - Unable to create/open workspace using Cmd 5.x
ORION-320 - Cannot generate Sencha Touch App
ORION-321 - SenchaTestDemo needs some more examples with older frameworks
ORION-351 - Testing ExtJS 4.2 app - never loads in any browser (just keeps loading)
ORION-426 - Creating test project deletes all comments from app.json | http://docs.sencha.com/sencha_test/1.0.3/guides/release_notes.html | 2017-07-20T14:33:23 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.sencha.com |
JTable::bind
The "API17" namespace is an archived namespace. This page contains information for a Joomla! version which is no longer supported. It exists only as a historical reference, it will not be improved and its content may be incomplete and/or contain broken links.
JTable::bind
Description
Method to bind an associative array or object to the instance.This method only binds properties that are publicly accessible and optionally takes an array of properties to ignore when binding.
public function bind ( $src $ignore=array )
See also
JTable::bind source code on BitBucket
Class JTable
Subpackage Database
- Other versions of JTable::bind
User contributed notes
Code Examples
Advertisement | https://docs.joomla.org/API17:JTable::bind | 2017-07-20T14:51:38 | CC-MAIN-2017-30 | 1500549423222.65 | [] | docs.joomla.org |
UDN
Search public documentation:
MobileProfiling > Profiling for Mobile Devices
UE3 Home > Mobile Home > Profiling for Mobile Devices
UE3 Home > Mobile Home > Profiling for Mobile Devices
Profiling for Mobile Devices
Overview
In general, profiling for mobile devices involves the same techniques and tools that profiling for PC games with Unreal Engine 3 involves. However, there are some minor differences and considerations that must be taken into account as there is no in-game console on mobile devices, certain files are saved on the device instead of in the game's directory, etc. In addition, there are tools specifically for profiling on mobile devices, such as Apple's Instruments tool. This document serves to detail the process of using Unreal Engine 3's profiling tools along with other external tools to get the most out of the engine on mobile devices. For general profiling and optimization information when developing with Unreal Engine 3, see the Performance, Profiling, and Optimization page.
STAT Commands
STAT commands are one of the most useful and common methods of profiling. Each command displays a different group of statistics on the screen giving a realtime snapshot of what is going on under the hood at any given time. This makes it extremely easy to go to a specific trouble spot in the game and see immediately what might be the issue.
Executing CommandsThere is no console on mobile games so there is no means to arbritrarily execute commands through keyboard entry. Some methods of executing commands are:
- Kismet - Sequences can be set up in Kismet to execute STAT commands using the Console Command action. These sequences can be triggered at the beginning of the level or by specific events.
- UnrealScript - UnrealScript can be used to execute STAT commands by calling the
ConsoleCommand()function on the
PlayerControllerand passing it the command to execute. This gives great flexibility, but obviously requires changing code and recompiling to call different commands.
- Menu Buttons - A debug menu can be created using the Mobile Menu System, where each button in the menu executes a different command through UnrealScript using the same method described above.
Limited ScreenspaceKeep in mind that the STAT commands display the statistical information directly on the screen. This means it may be possible that only a portion of the stats for any one command may be visible. It also makes displaying multiple groups of stats simultaneously virtually impossible. Of course, you can always use these commands when running the game in the Mobile Previewer which will allow you to see the full set of stats. Just be aware that certain aspects may perform differently in the Mobile Previewer than on the actual device.
Game Thread Profiling
The tools used to profile gameplay on PC in Unreal Engine 3 can also be used with mobile devices. This includes the Gameplay Profiler and Stats Viewer. These are both extremely useful tools that can be used to dump information to files that can then be opened in their respective tools and analyzed to see what might be causing any issues.
Retrieving Profiling FilesWhen running on a mobile device, the profiling files are created on the device itself. In order to use those files, they need to be recovered from the device. The process for doing so is detailed below. How to Get Files from iPhone via the Unreal iPhone Packager tool:
- Open IPP.exe in
/binaries/iPhone/
- In the Deployment Tools tab, select the device and click Backup Documents
- Navigate to the IPA that you used on device. For example, if you cooked Release MobileGame, the IPA would be:
\Binaries\IPhone\Release-iphoneos\MobileGam\MobileGame.ipa.
- The files will be saved to
\UnrealEngine3\MobileGame\iOS_Backups\
- You can then open up any profiling files via the associated application, such as GameplayProfiler.exe.
Instruments
Instruments is an appllication provided by Apple for profiling applications on iOS devices (and OS X as well). It allows you to track processes and collect data on both the app and the operating system. This gives you the ability to do detailed performance analysis of your game running on the device.
- Select Memory Monitor and Activity Monitor from the iPhone section of the LIbrary.
- Select the iOS device running the game and All Processes from the dropdown by the Record button.
- Click the Record button to begin profiling.
Memory Profiler
Unreal Memory Profiler now supports advanced memory tracking for iOS. This can be used to help you investigate any bottle necks you may be facing.
Common Performance Issues
- Using gamma correction on mobile devices can cause a serious impact on performance. It is only meant for use on powerful and future mobile devices (iPad 2 and better). If you have enabled gamma correction on mobile devices for your maps and are noticing performance issues, it may be necessary to disable it and address the lack of gamma correction through content. See Gamma for information on designing content for non-gamma corrected mobile devices. | https://docs.unrealengine.com/udk/Three/MobileProfilingHome.html | 2017-07-20T14:42:31 | CC-MAIN-2017-30 | 1500549423222.65 | [array(['rsrc/Three/MobileProfilingHome/stats.jpg', 'stats.jpg'],
dtype=object)
array(['rsrc/Three/MobileProfilingHome/gameplay.jpg', 'gameplay.jpg'],
dtype=object)
array(['rsrc/Three/MobileProfilingHome/Instruments.jpg',
'Instruments.jpg'], dtype=object)
array(['rsrc/TWiki/TWikiDocGraphics/warning.gif', 'ALERT! ALERT!'],
dtype=object)
array(['rsrc/TWiki/TWikiDocGraphics/warning.gif', 'ALERT! ALERT!'],
dtype=object) ] | docs.unrealengine.com |
All Services applications need to use the same configuration file format. This document specifies it.
The configuration file is a ini-based file. (See for more details.) Variable names.
Here are a set of rules for converting values:
Examples:
[section1] # comment a_flag = True a_number = 1 a_string = "other=value" another_string = other value a_list = one two three user = ${USERNAME}
An INI file can extend another file. For this, a “DEFAULT” section must contain an “extends” variable that can point to one or several INI files which will be merged into the current file by adding new sections and values.
If the file pointed to to.
There’s one implementation in the core package of the Python server, but it could be moved to a standalone distribution if another project wants to use it. | https://mozilla-services.readthedocs.io/en/latest/server-devguide/confspec.html | 2017-07-20T14:35:30 | CC-MAIN-2017-30 | 1500549423222.65 | [] | mozilla-services.readthedocs.io |
In this lesson, you test the screen that you created in Lesson 4. You can test this screen either within the editor or in the browser. You also learn how the transaction manager controls an application's behavior and appearance.
In this lesson you learn how to:
In your web browser, you can test any screen that is stored in the web application library.
Test the screen in the browser
In your web browser, you can test any screen that is stored in the web application library.
Type the following from the command line of the
WebInstallDir/
util directory:
UNIX
monitor -start
WebAppname
Windows
monitor -start
WebAppname
The screen appears in the browser.
To continue testing the screen in the web browser, skip to Step 5.
You can also test and debug any screen within the editor. The editor's graphical environment shows how the screen appears and behaves in the host GUI platform. Any changes that you make to a screen can be tested immediately without saving edits, so you can experiment without committing to the changes. For more information about test mode, refer to Chapter 38, "Testing Application Components," in Application Development Guide.
dstord.scr.
With the screen that you just built, you can access real data from the
vidsales database. The commands associated with the screen's buttons let you access and maintain data in the database.
If necessary, open the
vidsales database (Database
Connect).
The first record in the
distributors table displays.
After you press View, the Save and Delete buttons become inactive. The active or inactive state of buttons depends on the last command to execute. In this case, the request to view records prevents you from saving or deleting records.Panther's transaction manager protects widgets from data entry by the style that it applies to each one. Styles can set a widget's color and protections. In this case, they activate and deactivate (gray out) push buttons without requiring you to write any code. You can change the behavior of the default styles with the styles editor, or change the widget's style assignment in the editor.
Also, because the View command only allows read access, the record data display as literal text and not inside input fields.
All values are cleared from the fields and the Reset command closes the current transaction, so that you can execute another transaction command.
More About the Transaction Manager
You can create complex database query/update screens without having to write any code. That's because the transaction manager "knows" about the interaction between database tables and columns (via information retrieved from the database during the importation process). Given this information, the transaction manager generates the appropriate SQL statements for fetching or updating the database, and keeps track of any data changes. When your application issues the SAVE command, the transaction manager automatically generates SQL commands to update the database to match the data on the screen.
In order to update database records, you must select them. When you select records for update, by default Panther protects primary key fields from data entry. This is a result of Panther's application of a style to each widget.
The Select command selects a record for update. The first record in the
distributors table displays.
The value in
distrib_id is shown as read-only because it is a primary key in the
distributors table; in general, primary key fields in database tables cannot be changed.
P.O. Box 133. Here you can enter data and edit existing data on the screen.
To save your changes to the database, you must issue a Save command.
Panther calls the update procedure to update the database.
Note: The following is only true for Windows. In Motif, the system menu is available, and you can choose close.
More About Wizard-generated Buttons
The transaction-specific buttons generated by the screen wizard let users update, insert, select, and delete database records. IN general, the buttons operate on the master table and any other updatable tables on the screen. However, on some screens, the default behavior might have unwanted results. For instance, the Delete button on the
dstord.scrclient screen deletes the master and the associated details. Because the order items associated with the detail are not present on the screen, these would be orphaned. Therefore, you might want to remove the Delete button from this type of screen.
The transaction menu options in the test mode offer functionally that is equivalent to the buttons.
In this lesson, you tested the Distributor Order screen by performing these tasks:
vidsales database.
You learned:
What did you learn?
You learned: | http://docs.prolifics.com/panther/html/gt_html/tutor2_5.htm | 2018-10-15T13:22:32 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.prolifics.com |
Setup
To create such an AI Action, open the Flow Graph editor in Sandbox and follow these steps:
- Open the Flow Graph editor in Sandbox
- From the Flow Graph editor's menu select File -> New AI Action
- You will be prompted to save that AI Action under a new XML file. Name the file however you want (for example "sample1.xml") and make sure it's in the
GameSDK\Libs\ActionGraphs\directory (you may need to create the directory if it doesn't exist already)
- Your new AI Action sample1 should now show up in the tree view under "AI Actions"
- Create a new arbitrary flow graph for en entity - this flow graph will then house your AI Action
- In that flow graph, add the node AI:Execute, double-click the Action property and select your new sample1 action from the list of available AI Actions
AI Action flowgraphs use the following two entities as parameters:
- User: Usually the AI who executes the action
- Object: Can be any entity
Example
| https://docs.cryengine.com/pages/viewpage.action?pageId=29798761 | 2018-10-15T13:29:03 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['/download/attachments/1933342/idle_human_drink.png?version=1&modificationDate=1327587229000&api=v2',
None], dtype=object) ] | docs.cryengine.com |
Starts a middleware transaction
xa_begin [ EXCEPTION_HANDLER
handler, UNLOAD_HANDLER
handler,
TIMEOUT
timeout]
- EXCEPTION_HANDLER
handler
- Specifies an exception handler to be installed for the duration of the transaction; use
NULLif none is to be specified. For further information on exception events and handlers, refer to "Exception Events" in JetNet/Oracle Tuxedo Guide.
- UNLOAD_HANDLER
handler
- Specifies an unload handler to be installed for the duration of the transaction. The handler should control all unloading of transaction data to Panther target variables; use
@NULL if none is to be specified.
For example, this command specifies the unload handler
myhandler:xa_begin UNLOAD_HANDLER "myhandler"
For more information on unload events and handlers, refer to "Unload Events" in JetNet/Oracle Tuxedo Guide.
- TIMEOUT
timeout
- Resume processing if the transaction is not complete before
timeoutelapses. If you omit this option, transaction processing continues without a time limit. Specify
timeoutwith this format:"[ +
days
hours::
minutes::]
seconds"
Seconds are required; minutes, hours, or days (space delimiter between days and hours) can also be specified. If more than seconds is specified, the + symbol and the quotation marks are mandatory. If only seconds are specified, both are optional.
Note: JPL's colon preprocessor expands colon-prefixed variables. To prevent expansion of variables that contain colons, you must prefix literal colons with another colon (
::) or a backslash (
\:).
For example, this command specifies a time interval of
30seconds:xa_begin TIMEOUT 30
The following command specifies a time interval of 3 hours:xa_begin TIMEOUT "+3::00::00"
Oracle Tuxedo
Client, Server
The
xa_begincommand initiates a transaction to be performed on XA-compliant resource managers. Once initiated, a transaction must be completed by a call to either xa_commit, xa_rollback or xa_end. When a transaction is in progress, any service requests made to XA-compliant resources can be processed on behalf of the current transaction.
Use the
EXCEPTION_HANDLERoption to specify an exception handler to be installed for the lifetime of this transaction. All exceptions generated within the scope of this transaction are passed to the associated handler, unless a more specific scope has specified its own handler, for example, by an individual request.
For example, this command starts a transaction with the exception handlermy_exc_handler:
xa_begin EXCEPTION_HANDLER "my_exc_handler"
Exceptions related to the parsing or execution of the
xa_begincommand do not cause the associated exception handler to be invoked, since the exception occurs before the transaction has begun.
For information about event scopes and handler properties, refer to "Handler Scope and Installation"in JetNet/Oracle Tuxedo Guide.
The following application properties are affected by execution of
xa_begin:
xa_begincan generate the following exceptions:
// Process a bank account withdrawal.
// FML buffers are used in a call to service WITHDRAWAL
proc withd ()
vars message
//******** Perform ATM Withdrawal ********
if (account_id == "")
{
msg quiet "Account id is required"
return 0
}
if (amount > 0)
{
xa_begin
service_call "WITHDRAWAL" ({account_id, amount}, \
{message, balance = account_balance})
xa_end
if (@app()->tp_svc_outcome == TP_FAILURE)
{
msg quiet message
}
}else
{
msg quiet "Invalid withdrawal amount"
}
return 0
xa_commit, xa_end, xa_rollback | http://docs.prolifics.com/panther/html/prg_html/jplref46.htm | 2018-10-15T13:40:07 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.prolifics.com |
Did you find this page useful? Do you have a suggestion? Give us feedback or send us a pull request on GitHub.
First time using the AWS CLI? See the User Guide for help getting started. .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
add-tags-to-resource --resource-name <value> --tags <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--resource-name (string)
The Amazon RDS resource that the tags are added to. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) .
--tags (list)
The tags to be assigned to the Amazon RDS resource.. | https://docs.aws.amazon.com/cli/latest/reference/rds/add-tags-to-resource.html | 2018-10-15T13:22:56 | CC-MAIN-2018-43 | 1539583509196.33 | [] | docs.aws.amazon.com |
:
Scaled consolidated edge, DNS load balancing with private IP addresses using NAT in Lync Server 2013
Scaled consolidated edge, DNS load balancing with public IP addresses in Lync Server 2013
Scaled consolidated edge with hardware load balancers topology
Important
If you are using Call Admission Control (CAC), you still must assign IPv4 addresses to the Edge Server internal interface. CAC uses IPv4 addresses and must have them available to operate.
In This Section | https://docs.microsoft.com/en-us/lyncserver/lync-server-2013-single-consolidated-edge-with-private-ip-addresses-and-nat | 2018-10-15T13:32:56 | CC-MAIN-2018-43 | 1539583509196.33 | [array(['lyncserver/images/gg399001.d9b889c1-587c-4732-9b68-841186ccff78%28ocs.15%29.jpg',
'd9b889c1-587c-4732-9b68-841186ccff78 d9b889c1-587c-4732-9b68-841186ccff78'],
dtype=object) ] | docs.microsoft.com |
Where you are
Add breakpoints in application scripts
Set a watch and a conditional breakpoint
Now you run the application in debug mode. You step through the code line by line.
About the Step buttons
You can use either Step In or Step Over to step through an application one statement at a time. They have the same result except when the next statement contains a call to a function.
Use Step Over to execute the function as a single statement. Use Step In if you want to step into a function and examine the effects of each statement in the function.
If you have stepped into a function, you can use Step Out to execute the rest of the function as a single step and return to the next statement in the script that called the function.
Click the Start button (
) in PainterBar1
or
Select Debug>Start pbtutor from the menu bar.
The application starts and runs until it hits a breakpoint (in this case, the call to the assignment statement for the toolbar title for sheet windows).
You return to the Debug window, with the line containing the breakpoint displayed. The yellow arrow cursor means that this line contains the next statement to be executed.
Click the Global tab in the lower-left stack.
The Global Variables view displays.
Double-click transaction sqlca.
Find the DBMS property, which has a String datatype.
Notice that this property does not yet have a value associated with it because the Debugger interrupted execution before the ProfileString function executed.
To execute the next statement, click the Step In button (
) in PainterBar1
or
Select Debug>Step In from the menu bar.
The application starts execution of the Open event for the MDI frame window.
Use Step In or Step Over to step through the code until you reach this statement in the script for the frame window Open event:
open(w_welcome)
After PowerBuilder finishes executing this statement, the login window displays and the Debug window is minimized.
The Open event for the frame window also has a posted call to the ue_postopen function (that you stepped through without examining). This function in turn includes code that starts the processing of a chain of sheet manager functions. These functions are processed at the end of the script for the Open event, after the login window displays.
Click Step Over (
) until the login window displays and the Debugger is minimized.
Type dba in the User ID box of the login window.
Type sql in the Password box and click OK.
You return to the Debug window. The yellow arrow in the Source view points to the next executable statement, the CREATE statement for the connection service object. This is the first executable line in the script for the Clicked event of the cb_ok command button.
Select the Call Stack tab in the lower-right stack.
The yellow arrow in the Call Stack view indicates the current location in the call stack. If you double-click another line in the stack, the Source and Variables views change to display the context of that line, and a green arrow indicates the line in the Source view. If you then single-click another line in the stack, a green arrow displays in the Call Stack view to indicate the line for which context is displayed. When you continue to step through the code, the Source and Variables views return to the current context.
Click the Step In button.
The Debugger takes you to the script for the Constructor event of the connection service object.
Click the Step Out button (
).
Click the Global tab in the lower-left stack.
Look again at the Transaction object properties.
You step out of the Constructor event in a single step and return to the script for the OK button Clicked event. Now the value of sqlcode has changed, and the sqlerrortext and DBMS property have values, but the UserID, DBPass, and DBParm properties do not.
The values were assigned during execution of the Constructor event of the connection service object after the of_GetConnectionInfo function returned information from the INI file, but because you commented out the lines in the code for the UserID, DBPass, and DBParm properties, these values were not retrieved.
Click on the Local tab in the lower-left stack.
The local variables for the Clicked script have not yet been assigned values.
Use the Step In button to step through the three assignment statements for the local variables.
As you step through each statement, you can check that the values assigned to the local variables are what you expected.
Click again on the Global tab in the lower-left stack and expand the Transaction object.
Use the Step In button to step through the three lines that instantiate the Transaction object (SQLCA) with user-entry values for UserID, DBPass, and DBParm.
As you step through each statement, you can check that the values you entered in the login window are being assigned to the Transaction object. You are still not connected to the database until the connection service object of_Connect function is executed.
Click the Continue button (
) in PainterBar1.
The Continue button resumes execution until the next breakpoint. The database connection is established, the login window closes, and the MDI frame for your application displays. The application is waiting for user input.
Select File>Report>Maintain Customers from the menu bar.
The application continues until it reaches the line in the RowFocusChanged event that contains the next breakpoint you added.
The RowFocusChanged event for a DataWindow occurs before the DataWindow is displayed. For this reason, execution stops before the Customer window is opened. | https://docs.appeon.com/appeon_online_help/pb2019/getting_started/ch11s02.html | 2019-12-06T02:44:39 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.appeon.com |
deeppavlov.models.go_bot¶
- class
deeppavlov.models.go_bot.network.
GoalOrientedBot(tokenizer: deeppavlov.core.models.component.Component, tracker: deeppavlov.models.go_bot.tracker.Tracker, template_path: str, save_path: str, hidden_size: int = 128, obs_size: int = None, action_size: int = None, dropout_rate: float = 0.0, l2_reg_coef: float = 0.0, dense_size: int = None, attention_mechanism: dict = None, network_parameters: Dict[str, Any] = {}, load_path: str = None, template_type: str = 'DefaultTemplate', word_vocab: deeppavlov.core.models.component.Component = None, bow_embedder: deeppavlov.core.models.component.Component = None, embedder: deeppavlov.core.models.component.Component = None, slot_filler: deeppavlov.core.models.component.Component = None, intent_classifier: deeppavlov.core.models.component.Component = None, database: deeppavlov.core.models.component.Component = None, api_call_action: str = None, use_action_mask: bool = False, debug: bool = False, **kwargs)[source]¶
The dialogue bot is based on, which introduces Hybrid Code Networks that combine an RNN with domain-specific knowledge and system action templates.
The network handles dialogue policy management. Inputs features of an utterance and predicts label of a bot action (classification task).
An LSTM with a dense layer for input features and a dense layer for it’s output. Softmax is used as an output activation function.
process_event(event_name, data)[source]¶
Update learning rate and momentum variables after event (given by event_name)
- class
deeppavlov.models.go_bot.tracker.
Tracker[source]¶
An abstract class for trackers: a model that holds a dialogue state and generates state features.
- class
deeppavlov.models.go_bot.tracker.
DefaultTracker(slot_names: List[str])[source]¶
Tracker that overwrites slots with new values. Features are binary indicators: slot is present/absent.
- class
deeppavlov.models.go_bot.tracker.
FeaturizedTracker(slot_names: List[str])[source]¶
Tracker that overwrites slots with new values. Features are binary features (slot is present/absent) plus difference features (slot value is (the same)/(not the same) as before last update) and count features (sum of present slots and sum of changed during last update slots). | https://docs.deeppavlov.ai/en/0.2.0/apiref/models/go_bot.html | 2019-12-06T03:41:06 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.deeppavlov.ai |
All content with label api+cache+concurrency+hot_rod+infinispan+infinispan_user_guide+jboss_cache+locking+read_committed+release+scala+setup+user_guide+xml.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, intro, archetype, pojo_cache, lock_striping, jbossas, nexus,
guide, schema, amazon, s3, grid, test, jcache, xsd, maven, documentation, wcm, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, clustering, eviction, gridfs, out_of_memory, fine_grained, import, index, events, batch, configuration, hash_function, buddy_replication, loader, xa, pojo, write_through, cloud, mvcc, notification, tutorial, presentation, jbosscache3x, distribution, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, gatein, searchable, cache_server, installation, ispn, client, migration, non-blocking, jpa, filesystem, tx, article, gui_demo, eventing, client_server, testng, standalone, snapshot, webdav, repeatable_read, hotrod, docs, batching, consistent_hash, store, whitepaper, jta, faq, as5, spring, 2lcache, jsr-107, lucene, jgroups, rest
more »
( - api, - cache, - concurrency, - hot_rod, - infinispan, - infinispan_user_guide, - jboss_cache, - locking, - read_committed, - release, - scala, - setup, - user_guide, - xml )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/api+cache+concurrency+hot_rod+infinispan+infinispan_user_guide+jboss_cache+locking+read_committed+release+scala+setup+user_guide+xml | 2019-12-06T03:52:19 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jboss.org |
All content with label async+cloud+deadlock+ehcache+hot_rod+hotrod+infinispan+jboss_cache+jgroups+jta+listener+publish+release+user_guide.
Related Labels:
podcast, expiration, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, partitioning, query, intro, archetype, jbossas, lock_striping, nexus, guide, schema, cache,
amazon, s3, memcached, grid, jcache, test, api, xsd, maven, documentation, youtube, userguide, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, clustering, setup, eviction, gridfs, concurrency, out_of_memory, import, index, events, hash_function, configuration, batch, buddy_replication, loader, xa, write_through, remoting, mvcc, tutorial, notification, murmurhash2, presentation, xml, read_committed, jbosscache3x, distribution, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, transaction, interactive, xaresource, build, demo, cache_server, installation, client, migration, non-blocking, jpa, tx, eventing, client_server, testng, murmurhash, infinispan_user_guide, standalone, snapshot, webdav, docs, batching, consistent_hash, store, faq, 2lcache, as5, jsr-107, lucene, locking, rest
more »
( - async, - cloud, - deadlock, - ehcache, - hot_rod, - hotrod, - infinispan, - jboss_cache, - jgroups, - jta, - listener, - publish, - release, - user_guide )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/async+cloud+deadlock+ehcache+hot_rod+hotrod+infinispan+jboss_cache+jgroups+jta+listener+publish+release+user_guide | 2019-12-06T03:27:32 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jboss.org |
All content with label client+dist+eventing+gridfs+hotrod+infinispan+migration+publish+query+snapshot.
Related Labels:
expiration, datagrid, coherence, interceptor, server, replication, transactionmanager, release, deadlock, archetype, jbossas, nexus, guide, schema, listener, cache, s3, amazon, memcached,
grid, test, jcache, api, xsd, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, xa, cloud, remoting, mvcc, tutorial, notification, murmurhash2, read_committed, xml, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, cluster, development, websocket, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, scala, command-line, non-blocking, filesystem, jpa, tx, gui_demo, shell, client_server, murmurhash, infinispan_user_guide, standalone, webdav, repeatable_read, docs, consistent_hash, batching, store, jta, faq, as5, 2lcache, jgroups, lucene, locking, rest, hot_rod
more »
( - client, - dist, - eventing, - gridfs, - hotrod, - infinispan, - migration, - publish, - query, - snapshot )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/client+dist+eventing+gridfs+hotrod+infinispan+migration+publish+query+snapshot | 2019-12-06T03:58:23 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jboss.org |
Convert leads into opportunities¶
Opportunity is a qualified lead, specific deal has met certain criteria which indicate a high value to the business, or a high probability of closing but when you have details of your visitors, it is just a lead. You have to get the enough details form you visitors, if matches with your business interest you can convert them into opportunity.
You can collect the leads instead of creating an opportunity and setup the process to qualify those leads before you convert them into opportunity.
Business case¶
Assumed that My company is collecting contacts of all the visitors, through contact us page or visitor tracking system. Create an leads from the contact information and qualify them before converting them into an opportunities.
Configuration¶
By default you have an opportunity created in the sales channel, you can have a leads when someone contact you on the website contact us page or send an email to [email protected], to activate leads goto CRM / Configuration / Settings and activate the Leads feature.
You will now have a new submenu Leads under Pipeline where they will aggregate.
Qualify Leads¶
Send the mass mail on new leads received everyday, prepare a good description of your product service details in an email and try to get more information from leads and their expectations from your products or service.
Tip
Define a good subject, include your product / service name into the [ ] square bracket, that make sure that your email will not go to the spam.
You can convert those leads into the opportunity, when your visitor reply to your email which was sent in mass mail.
Convert lead into an opportunity¶
The leads can be converted to an opportunity either manually or automatic depending on the volume of leads you have.
Manual conversion¶
Every day review your leads having reply from the prospects and convert all those leads into an opportunities. You can apply filter Unread Messages
Open the wizard Convert to Opportunity wizard form the Action menu and you are ready to convert selected leads into opportunities.
Apply duplication option will be selected automatically when system detect the duplicate leads in the system based on the email or phone number, duplicated leads will be displayed below form.
You can change the Sales channel if you would like to transfer the opportunity in other channel. You can choose either you would like to link the opportunity with customer by selecting existing or create a new or leave empty. You can create a customer later at the time of create a proposal for them.
The Salesman has to be assigned manually while converting leads into opportunity.
Automatic conversion¶
The automatic conversion and assignation of the opportunity can be done with the help of Lead Score application. You have to install and configure the scoring rules and assignation rules in order to convert leads into opportunity and assign to the correct member in the team.
You can define domain on the sales channel which will fetch leads accordingly and convert it into the opportunity. The domain may include lead scores, page visited by visitor, and other information such as country, city, availability of the email or phone.
Please go through Automatic leads assignation to team members topic in Customer Relationship Management section. | https://odoobooks.readthedocs.io/en/12.0/crm/acquire_leads/convert_lead.html | 2019-12-06T02:55:05 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['../../_images/image71.png', 'image0'], dtype=object)
array(['../../_images/image51.png', 'image1'], dtype=object)
array(['../../_images/image81.png', 'image2'], dtype=object)
array(['../../_images/image61.png', 'image3'], dtype=object)] | odoobooks.readthedocs.io |
Measure which marketing campaign creates more opportunities opportunities to me?, how many new opportunities? so that they can focus more on the platform which brings more business.
Configuration¶
Assumed that the Website Builder and eCommerce applications are already installed. What we need is an Link tracker application which is supporting application to the website.
Install the Website Link Tracker application if it is not installed automatically.
Install the contact us form on the website, so that when user fill the contact detail to get more information about the product, you can have an opportunity created in the CRM application.
Generate leads/opportunities from your website contact page -
Add contact us form on product page¶
You can add the contact us page on the frequently sold product page, which help us to generate the leads. Drag and drop the Form builder widget, select the option Create a lead.
The default fields will be added to the screen, Opportunity, you can change the label to Subject and add additional fields from the widget customization option, such as Name, Email and Mobile.
Creating an opportunities¶
Visitor visit the page through the link you shared on the Google Searching, the visitor will be tracked and same information will be attached to the Campaign, Medium and Source, when opportunities created.
Campaign Analysis¶
The number of opportunities can be grouped by the Source and Medium to check which platform bring how many opportunities coming from which marketing platform.
Can be analysed in detail by applying group by Source and then Medium. It will give us more clear view on from where the opportunities coming from.
| https://odoobooks.readthedocs.io/en/12.0/crm/marketing_activity/visitor_to_opportunitie_conversion.html | 2019-12-06T02:39:56 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['../../_images/image122.png', 'image0'], dtype=object)
array(['../../_images/image151.png', 'image2'], dtype=object)
array(['../../_images/image161.png', 'image3'], dtype=object)
array(['../../_images/image102.png', 'image5'], dtype=object)
array(['../../_images/image112.png', 'image6'], dtype=object)
array(['../../_images/image75.png', 'image7'], dtype=object)] | odoobooks.readthedocs.io |
You can register app agent node properties to customize app agent behavior at the application, tier, or node level.
About Node Properties
App agent node properties control the features and preferences for the Java Agent and .NET Agent. Such agent-specific settings include limits on the number of business transactions, minimum number of requests to evaluate before triggering a diagnostic session, and so on. Node properties follow an inheritance model similar to instrumentation detection, so you can set an individual property globally for an application or at the the tier or node levels.
App agent node properties are not supported for the following agents: PHP, Node.js, Python, Web Server Agent, or the C/C++ SDK.
Even though it is possible to configure node properties in the app-agent-config.xml file in the agent home directory, AppDynamics recommends that you use the Controller UI to configure node properties. The Controller UI displays only those node properties that are registered to the agent.
The App Agent Node Properties reference includes additional properties that do not appear in the UI by default. You can register these properties yourself, but unregistered properties are intended for specific application or troubleshooting scenarios and can impact the performance of your deployment. You should register properties or configure properties directly in app-agent-config.xml only under the guidance of AppDynamics Support or as specifically instructed by the documentation.
Edit a Registered Node Property
In the Controller UI, you can access node properties for a particular node or for all nodes from the dashboard of any node in the application, as follows:
- Access the node dashboard by going to the App Servers page (see Tiers and Nodes).
- Expand the tier that contains the node on which you want to configure a node property and double-click the node.
- In the node dashboard, click Actions > Configure App Server Agent.
- Select the Use Custom Configuration button, and then find and double click on the property you want to modify. See Hierarchical Configuration Inheritance for more information on how node settings work.
After customizing a configuration, you can copy the configuration to other nodes, to the tier, or apply it to the entire application.
Add a Registered Node Property
You can register and configure unregistered App Agent node properties as instructed by AppDynamics Support or as documented.
To register a node property, create a custom configuration for the node, as described in Edit Registered Node Property. Add properties by clicking the + plus icon at the top of the list of current node properties.
In the Create Agent Property window, use the values from App Agent Node Properties Reference to provide values for the name, description, type, and value of the property.
App Agent Node Properties by Type
App Agent Node Properties Reference describes the agent properties in detail. It lists the properties in alphabetical order. The following table provides an alternate view of the properties. It lists the properties by type, enabling you to browse the properties by functionality and feature area. | https://docs.appdynamics.com/display/PRO43/App+Agent+Node+Properties | 2019-12-06T03:16:36 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.appdynamics.com |
Snapshot Replication
SQL Server
Azure SQL Database (Managed Instance only)
Azure Synapse Analytics (SQL DW)
Parallel Data Warehouse
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.
Note
Snapshot replication can be used by itself, but the snapshot process (which creates a copy of all of the objects and data specified by a publication) is also commonly used to provide the initial set of data and database objects for transactional and merge publications..
How.
In addition to the standard snapshot process described in this topic, a two-part snapshot process is used for merge publications with parameterized filters.
The following illustration shows the principal components of snapshot replication.
Snapshot Agent.
Releases any locks on published tables.
During snapshot generation, you cannot make schema changes on published tables. After the snapshot files are generated, you can view them in the snapshot folder using Windows Explorer.
Distribution Agent and Merge Agent.
Feedback | https://docs.microsoft.com/en-us/sql/relational-databases/replication/snapshot-replication?view=sql-server-2017 | 2019-12-06T03:28:55 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['media/snapshot.gif?view=sql-server-2017',
'Snapshot replication components and data flow Snapshot replication components and data flow'],
dtype=object) ] | docs.microsoft.com |
math
FormIt math Hook
The math hook will allow you to have a math-based question on your form to prevent spam. It will render a math question that must be answered correctly, as follows:
12 + 23?
Available Properties
Usage
Include it as a hook in your FormIt call:
[[!FormIt? &hooks=`math`]]
To make the math question required, use the call as follows:
[[!FormIt? &hooks=`math` &validate=`math:required`]]
Paste this sample HTML in the part of the form you want to include the math question:
<label>[[!+fi.op1]] [[!+fi.operator]] [[!+fi.op2]]?</label> [[!+fi.error.math]] <input type="text" name="math" value="[[!+fi.math]]" /> <input type="hidden" name="op1" value="[[!+fi.op1]]" /> <input type="hidden" name="op2" value="[[!+fi.op2]]" /> <input type="hidden" name="operator" value="[[!+fi.operator]]" />
The math question in the place of the input named "math".
NOTE: The form fields 'op1', 'op2' and 'operator' are not used anymore from FormIt version 2.2.11 and up.
Customizing the Operator Text
If you don't want just "-" or "+" as the operator, and want to hide it even more from spam bots, you can use output filters to further add ambiguity to the math equation. Change the line with the equation text in it to:
<label>[[!+fi.op1]] [[!+fi.operator:is=`-`:then=`minus`:else=`plus`]] [[!+fi.op2]]?</label>
This will render the equation like "23 plus 41?" or "50 minus 12?" instead of a -/+ symbol, making it harder for spam bots. | https://docs.modx.org/current/en/extras/formit/formit.hooks/math | 2019-12-06T04:27:57 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.modx.org |
Production environment install
This topic is a step-by-step guide on how to set up XL Deploy in a production-ready environment. It describes how to configure the product, environment, and server resources to get the most out of the product. This topic is structured in three sections:
- Preparation: Describes the prerequisites for the recommended XL XL Deploy. This guide provides best practices and sizing recommendations.
XL Deploy is active/active capable, and external workers are fully supported. Both are recommended features. The load balancer fronting each of the XL XL Deploy setup is a clustered, multi-node active/active setup with multiple external workers. As such, you will need multiple machines co-located in the same network segment.
Alternatives for the recommended setup are to use local workers, or try out a Kubernetes-based setup.
Obtaining XL Deploy servers
The Requirements for installing XL Deploy topic describes the minimum system requirements. Here are the recommended requirements for each XL Deploy production machine (both masters and workers):
- 3+ Ghz 2 CPU quad-core machine (amounting to 8 cores) or better
- 4 GB RAM or more
- 100 GB hard disk space
Note: All of the XL Deploy cluster nodes must reside in the same network segment. This is required for the clustering protocol to optimally function. For best performance with minimize network latency, it is also recommended that your database server be located in the same network segment.
Obtaining the XL Deploy distribution
Download the XL Deploy ZIP package from the XebiaLabs Software Distribution site (requires customer log-in).
For information about the supported versions of XL Deploy, see Supported XebiaLabs product versions.
Choosing a database server
A production setup requires an external clustered database to store the XL Deploy data. The supported external databases are described in Configure the XL Deploy SQL repository.
For more information about hardware requirements, see the database server provider documentation.
Artifacts storage location
You can configure XL.
XL Deploy can only use one local artifact repository at any time. The configuration option
xl.repository.artifacts.type can be set to either
file or
db to select the storage repository.
Choosing a load balancer
The recommended XL
XL XL Deploy or Connect XL Deploy to your LDAP or Active Directory
Choosing a monitoring and alerting solution
For a production installation, make sure you set up a monitoring system to monitor the system and product performance for the components comprising your installation. XL:
These tools allow log files to be read and indexed while they are being written, so you can monitor for errant behavior during operation and perform analysis after outages.
Database server configuration
The basic database setup procedure, including schemas and privileges, is described in Configure the XL Deploy SQL repository. For various databases, additional configuration options are required to use them with XL Deploy or for a better performance.
MySQL or MariaDB
Important: MariaDB is not an officially supported database for XL Deploy,. See the PostgreSQL documentation to locate this file on your operating system.
Security settings
It is important to harden the XL Deploy environment from abuse. There are many industry-standard practices to ensure that an application runs in a sandboxed environment. You should minimally take the following actions:
- Run XL Deploy in a VM or a container. There are officially supported docker images for XL Deploy.
- Run XL Deploy on a read-only file system. XL Deploy needs to write to several directories during operation, specifically its
conf/,
export/,
log/,
repository/, and
work/subdirectories. The rest of the file system can be made read-only. (The
export/directory needs to be shared between XL Deploy master instances)
- Enable SSL on JMX on XL Deploy server and satellites. See Using JMX counters for XL Satellite and
conf/xl-deploy.conf.example.
- Configure secure communications between XL Deploy and satellites.
- Change the default Derby database to a production-ready database as described above.
- Do not enable SSL since the load balancer will offload SSL as described in Finalize the node configuration and start the server.
Operating system
XL Deploy supports running on any commercially supported Microsoft Windows Server version (under Mainstream Support), or any Linux/Unix operating systems. Ensure that you maintain these systems with the latest security updates.
Java version
Important: XL Deploy requires Java SE 8 or Java SE 11. Running XL Deploy on non-LTS Java versions is not supported. See the Java SE support roadmap for details and dates.
XL Deploy can run on the Oracle JDK or JRE, as well as OpenJDK. Always run the latest patch level of the JDK or JRE, unless otherwise instructed. For more information on Java requirements see Requirements for installing XL Deploy.
Installation and execution
The XL Deploy installation procedure for both master and worker nodes is the same. To install XL XL XebiaLabs XL Deploy into the
C:\xebialabs\xl-deployit-<version>-serverdirectory.
- Grant the
xl-deployuser
Read,
Read & execute,
List folder contents, and
Writepermissions to this installation directory so XL Deploy can add and modify necessary files and create subdirectories. Alternatively, grant
Writepermission to the
conf\and
log XebiaLabs Software Distribution site (requires customer log-in).
Configure the SQL repository
For a clustered production setup, XL Deploy requires an external database, as described in Configure the XL Deploy SQL repository.
Configure XL Deploy master-worker connectivity
The suggested production setup uses external workers to execute deployments and control tasks. On each node, edit
conf/xl-deploy.conf to include a setting
xl.task.in-process-worker = false. On each of the nodes running XL Deploy as a worker instance, the
-master <address>:<port> flags’ addresses will be resolved against DNS, so you need to make sure that your DNS server resolves each as an
A record or an
SRV record listing each of the XL Deploy master instances. This address will be resolved periodically (as determined by the
xl.worker.connect.interval setting in the
conf/xl-deploy.conf file), and the worker will adjust the addresses it connects to. XL Deploy
- Connect XL Deploy to your LDAP or Active Directory
Configure XL Deploy Java virtual machine (JVM) options
To optimize XL Deploy performance, you can adjust JVM options to modify the runtime configuration of XL Deploy. To increase performance, add or change the following settings in the
conf/xld-wrapper-linux.conf or the
conf\xld-wrapper-windows.conf file.
Configure the task execution engine
Deployment tasks are executed by the XL Deploy workers’ task execution engines. Based on your deployment task, one of the XL Deploy master instances generates a deployment plan that contains steps that one of the XL Deploy workers will carry out to deploy the application. You can tune the XL XL Deploy server.
Because this is the initial installation, XL Deploy prompts a series of questions. See the table below for the questions, recommended responses, and considerations.
After you answer
yes to the final question, the XL Deploy server will boot up. During the initialization sequence, it will initialize the database schemas and display the following message:
You can now point your browser to https://<IP_OF_LOADBALANCER>/
Stop the XL XL Deploy master instances run without issue, you can install XL XL Deploy nodes.
<7>Every XL XL Deploy
To prevent inadvertent loss of data, ensure you regularly back up your production database as described in Back up XL Deploy.
Set up monitoring
Set up the desired metrics
Ensure that you monitor the following statistics for the systems that comprise your XL Deploy environment including the load balancer, your XL Deploy nodes, and database servers:
- Network I/O
- Disk I/O
- RAM usage
- CPU usage
Add monitoring to XL XL Deploy system. You should also make sure to not expose insecure or unauthenticated JMX over the network, as it can be used to execute remote procedure calls on the JVM.
The optimal solution is to set up
collectd to aggregate the statistics on the XL Deploy server and push them to a central collecting server that can graph them. To do this, you must install the following tools on the XL Deploy server:
After these tools are installed, you can use this
collect.conf sample, which is preconfigured to monitor relevant XL XL Deploy communicates
Connectivity to middleware
This section reviews how XL Deploy will traverse your network to communicate with middleware application servers to perform deployment operations. Since XL Deploy is agentless, communication is done using standard SSH or WinRM protocols.
Standard XL Deploy connectivity
In this example, XL Deploy, using the Overthere plugin, connects to the target server using either SSH or WinRM.
For more information, review the following:
Standard XL Deploy connectivity using Jumpstation
- XL Deploy, using the Overthere plugin, connects to the jumpstation target server using SSH. Nothing is installed on the jumpstation server.
- Connection is made from jumpstation, using SSH or WinRM, to the target server.
For more information, see Jumpstation details and Connect XL Deploy through an SSH jumpstation or HTTP proxy
Standard XL Deploy connectivity using Satellite
How it works:
- XL Deploy communicates to XL Satellite application using TCP.
- Deployment workload is moved from XL Deploy JVM to XL Satellite.
- XL Satellite, using the Overthere plugin, connects to the Target server using SSH or WinRM.
For more information, see getting started with the satellite module.
Communication protocols and capabilities. | https://docs.xebialabs.com/v.9.0/xl-deploy/how-to/set-up-xl-deploy-in-production/ | 2019-12-06T04:24:35 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['/static/Production-XLD-9.0-setup-eebcabaee500d29eadeaa49a8683ae6c.png',
'XL Deploy Production Configuration'], dtype=object)
array(['/static/xld-overthere-1126c3f945b9aa373d9e11bc35fff5d8.png',
'XL Deploy connects using SSH or WinRM'], dtype=object)
array(['/static/xld-overthere-jumpstation-9fe2cccad064fcaaea0cbab273ae0eab.png',
'XL Deploy connects using jumpstation to target server'],
dtype=object)
array(['/static/xld-overthere-satellite-f64ca61b4db6a99002bc1d3c6b4b0071.png',
'XL Deploy, first moves workload to XL Satellite through TCP, then SSH or WinRM to target server'],
dtype=object)
array(['/static/standard-xld-jumpstation-8805ef8efed72c673d117d1004e044a9.png',
'XL Deploy, connecting to DMZ using SSH, connects to Jumpstation'],
dtype=object) ] | docs.xebialabs.com |
Two-factor providers¶
Two-factor auth providers apps are used to plug custom second factors into the Nextcloud core.
Implementing a simple two-factor auth provider¶
Two-factor auth providers must implement the OCP\Authentication\TwoFactorAuth\IProvider interface. The example below shows a minimalistic example of such a provider.
<?php namespace OCA\TwoFactor_Test\Provider; use OCP\Authentication\TwoFactorAuth\IProvider; use OCP\IUser; use OCP\Template; class TwoFactorTestProvider implements IProvider { /** * Get unique identifier of this 2FA provider * * @return string */ public function getId() { return 'test'; } /** * Get the display name for selecting the 2FA provider * * @return string */ public function getDisplayName() { return 'Test'; } /** * Get the description for selecting the 2FA provider * * @return string */ public function getDescription() { return 'Use a test provider'; } /** * Get the template for rending the 2FA provider view * * @param IUser $user * @return Template */ public function getTemplate(IUser $user) { // If necessary, this is also the place where you might want // to send out a code via e-mail or SMS. // 'challenge' is the name of the template return new Template('twofactor_test', 'challenge'); } /** * Verify the given challenge * * @param IUser $user * @param string $challenge */ public function verifyChallenge(IUser $user, $challenge) { if ($challenge === 'passme') { return true; } return false; } /** * Decides whether 2FA is enabled for the given user * * @param IUser $user * @return boolean */ public function isTwoFactorAuthEnabledForUser(IUser $user) { // 2FA is enforced for all users return true; } }
Register the provider state¶
To always know if a provider is enabled for a user, the server persists the enabled/disabled state of each provider-user tuple. Hence a provider app has to propagate these state changes. This is handled by the provider registry.
You can have the registry injected via constructor dependency injection. Whenever the provider state
is changed (user enables/disables the provider), the
enableProviderFor or
disableProviderFor
method must be called.
Note
This provider registry was added in Nextcloud 14. For backwards compatibility, the server
still occasionally uses the
IProvider::isTwoFactorAuthEnabledForUser method if the provider state
has not been set yet. This method will be removed in future releases.
Registering a two-factor auth provider¶
You need to inform the Nextcloud core that the app provides two-factor auth functionality. Two-factor
providers are registered via
info.xml.
<two-factor-providers> <provider>OCA\TwoFactor_Test\Provider\TwoFactorTestProvider</provider> </two-factor-providers>
Providing an icon (optional)¶
To enhance how a provider is shown in the list of selectable providers on the login page, an icon can be specified. For that the provider class must implement the IProvidesIcons interface. The light icon will be used on the login page, whereas the dark one will be placed next to the heading of the optional personal settings (see below).
Provide personal settings (optional)¶
Like other Nextcloud apps, two-factor providers often require user configuration to work. In Nextcloud 15 a new, consolidated two-factor settings section was added. To add personal provider settings there, a provider must implement the IProvidesPersonalSettings interface.
Make a provider activatable by the admin (optional)¶
In order to make it possible for an admin to enable the provider for a given user via the occ command line tool, it’s necessary to implement the OCP\Authentication\TwoFactorAuth\IActivatableByAdmin interface. As described in the linked interface documentation, this should only be implemented for providers that need no user interaction when activated.
Make a provider deactivatable by the admin (optional)¶
In order to make it possible for an admin to disable the provider for a given user via the occ command line tool, it’s necessary to implement the OCP\Authentication\TwoFactorAuth\IDeactivatableByAdmin interface. As described in the linked interface documentation, this should only be implemented for providers that need no user interaction when deactivated. | https://docs.nextcloud.com/server/17/developer_manual/app/two-factor-provider.html | 2019-12-06T03:17:49 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.nextcloud.com |
.
WARNING
AJAX currently will not work with multi-page forms at this time. We hope to be able to resolve this issue in an upcoming release.
- Sessions for incomplete submissions are stored for 3 hours, and then are removed after that.
WARNING
Multi-page forms cannot be started, stopped and returned to again at a later time to finish. The process has to be a continuous one, but the user has 3 hours before the form will timeout the submission.
- Users can go backward in forms (if enabled).
- Any data entered into the current page that has NOT yet been submitted "forward" will not be saved when clicking Previous submit button. As in, clicking the Previous button will not save any changes you made to that form page.
WARNING
Multi-page forms cannot go backward at this time when using in conjunction with reCAPTCHA v2 Invisible and reCAPTCHA v3 (it works fine with reCAPTCHA v2 Checkbox). We hope to be able to resolve this issue in an upcoming release.
- If an earlier page contains file upload field(s), files will actually be uploaded before the form is officially submitted.
- If the form is never completed, the submission clearing (described above) will remove the file after 3 hours.
When editing forms in the Composer interface inside the control panel,. | https://docs.solspace.com/craft/freeform/v3/overview/multi-page-forms.html | 2019-12-06T03:55:12 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['/assets/img/cp_forms-composer-multipage.5440bf29.png',
'Composer - Multi-page'], dtype=object)
array(['/assets/img/templates_form-multipage.4123d91e.png', 'Form'],
dtype=object) ] | docs.solspace.com |
Linux Install¶
Installation will require Python 2.7 and pip.
Ubuntu¶
You can install the required packages on Ubuntu by running the following command:
sudo apt-get install -y python python-pip python-dev build-essential git curl -sL | sudo -E bash - sudo apt-get install -y nodejs sudo:
cd ~/ sudo apt-get install git git clone# . Fedora Server: dnf install python dnf install redhat-rpm-config // fix for error: command 'gcc' failed with exit status 1
All set, head back to the basic install guide. | http://rocketmap.readthedocs.io/en/latest/basic-install/linux.html | 2017-11-17T21:15:03 | CC-MAIN-2017-47 | 1510934803944.17 | [] | rocketmap.readthedocs.io |
A DEM Worker that manages virtual machines through SCVMM must be installed on a host where the SCVMM console is already installed.
A best practice is to install the SCVMM console on a separate DEM Worker machine. In addition, verify that the following requirements have been met.
The DEM worker must have access to the SCVMM PowerShell module installed with the console.
The PowerShell Execution Policy must be set to RemoteSigned or Unrestricted.
To verify the PowerShell Execution Policy, enter one of the following commands at the PowerShell command prompt.
help about_signing help Set-ExecutionPolicy
If all DEM Workers within the instance are not on machines that meet these requirements, use Skill commands to direct SCVMM-related workflows to DEM Workers that are.
The following additional requirements apply to SCVMM.
This release supports SCVMM 2012 R2, which requires PowerShell 3 or later.
Install the SCVMM console before you install vRealize Automation DEM Workers that consume SCVMM work items.
If you install the DEM Worker before the SCVMM console, you see log errors similar to the following example.
Workflow 'ScvmmEndpointDataCollection' failed with the following exception: The term 'Get-VMMServer' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
To correct the problem, verify that the SCVMM console is installed, and restart the DEM Worker service.
Each SCVMM instance must be joined to the domain containing the server.
The credentials used to manage the endpoint representing an SCVMM instance must have administrator privileges on the SCVMM server.
The credentials must also have administrator privileges on the Hyper-V servers within the instance.
Hyper-V servers within an SCVMM instance to be managed must be Windows 2008 R2 SP1 Servers with Hyper-V installed. The processor must be equipped with the necessary virtualization extensions .NET Framework 4.5.2 or later must be installed and Windows Management Instrumentation (WMI) must be enabled.
To provision machines on an SCVMM resource, you must add a user in at least one security role within the SCVMM instance.
To provision a Generation-2 machine on an SCVMM 2012 R2 resource, you must add the following properties in the blueprint.
Scvmm.Generation2 = true Hyperv.Network.Type = synthetic
Generation-2 blueprints should have an existing data-collected virtualHardDisk (vHDX) in the blueprint build information page. Having it blank causes Generation-2 provisioning to fail.
For more information, see Configure the DEM to Connect to SCVMM at a Different Installation Path.
For additional information about preparing for machine provisioning, see Preparing Your SCVMM Environment. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vrealize.automation.doc/GUID-5887CFDF-E3F1-4056-9AFF-FE925B26FF23.html | 2017-11-17T21:36:12 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
If an Admin user forgets the password to the Web user interface, the account becomes unreachable.
Problem
If vRealize Log Insight has only one Admin user and the Admin user forgets the password, the application cannot be administered. If an Admin user is the only user of vRealize Log Insight, the whole Web user interface becomes inaccessible.
Cause
vRealize Log Insight does not provide a user interface for Admin users to reset their own passwords, if the user does not remember their current password.
Admin users who are able to log in can reset the password of other Admin users. Reset the Admin user password only when all Admin user accounts' passwords are unknown.
Prerequisites
Verify that you have the root user credentials to log in to the vRealize Log Insight virtual appliance. See Configure the Root SSH Password for the vRealize Log Insight Virtual Appliance
To enable SSH connections, verify that TCP port 22 is open.
Procedure
- Establish an SSH connection to the vRealize Log Insight virtual appliance and log in as the root user.
- Type li-reset-admin-passwd.sh and press Enter.
The script resets the Admin user password, generates a new password and displays it on the screen.
What to do next
Log in to the vRealize Log Insight Web user interface with the new password and change the Admin user password. | https://docs.vmware.com/en/vRealize-Log-Insight/4.5/com.vmware.log-insight.administration.doc/GUID-48C871F8-6289-406C-9C9A-59E4EA1AF2E5.html | 2017-11-17T21:42:37 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
Transaction¶
This class represents a Tryton transaction that contains thread-local parameters of a database connection. The Transaction instances are context manager that will commit or rollback the database transaction. In the event of an exception the transaction is rolled back, otherwise it is commited.
Transaction.
start(database_name, user[, readonly[, context[, close[, autocommit]]]])¶
Start a new transaction and return a context manager. The non-readonly transaction will be committed when exiting the with statement without exception. The other cases will be rollbacked.
Transaction.
set_context(context, **kwargs)¶
Update the transaction context and return a context manager. The context will be restored when exiting the with statement.
Transaction.
reset_context()¶
Clear the transaction context and return a context manager. The context will be restored when exiting the with statement.
Transaction.
set_user(user[, set_context])¶
Modify the user of the transaction and return a context manager. set_context will put the previous user id in the context to simulate the record rules. The user will be restored when exiting the with statement.
Transaction.
set_current_transaction(transaction)¶
Add a specific
transactionon the top of the transaction stack. A transaction is commited or rollbacked only when its last reference is popped from the stack.
Transaction.
new_transaction([autocommit[, readonly]])¶
Create a new transaction with the same database, user and context as the original transaction and adds it to the stack of transactions.
Transaction.
join(datamanager)¶
Register in the transaction a data manager conforming to the Two-Phase Commit protocol. More information on how to implement such data manager is available at the Zope documentation.
This method returns the registered datamanager. It could be a different yet equivalent (in term of python equality) datamanager than the one passed to the method. | http://trytond.readthedocs.io/en/latest/ref/transaction.html | 2017-11-17T20:55:33 | CC-MAIN-2017-47 | 1510934803944.17 | [] | trytond.readthedocs.io |
Deploying desktops on virtual machines that are managed by vCenter Server provides all the storage efficiencies that were previously available only for virtualized servers. Using View Composer increases the storage savings because all virtual machines in a pool share a virtual disk with a base image.
Reducing and Managing Storage Requirements
| | https://docs.vmware.com/en/VMware-Horizon-6/6.1.1/com.vmware.horizon-view.desktops.doc/GUID-5E1CED3D-3E99-4511-B735-958F4057C8AF.html | 2017-11-17T21:42:01 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
Bases: astropy.units.core.UnitBase
Create a composite unit using expressions of previously defined units.
Direct use of this class is not recommended. Instead use the factory function Unit(...) and arithmetic operators to compose units.
Attributes Summary
Methods Summary
Attributes Documentation
Return the bases of the composite unit.
Return the powers of the composite unit.
Return the scale of the composite unit.
Methods Documentation
Return a unit object composed of only irreducible units. | https://astropy.readthedocs.io/en/v0.2.5/_generated/astropy.units.core.CompositeUnit.html | 2017-11-17T21:14:53 | CC-MAIN-2017-47 | 1510934803944.17 | [] | astropy.readthedocs.io |
© 2011-2016 The original authors.
Preface
1. New Features
New and noteworthy in the latest releases.
1.1. New in Spring Data Redis 2.0
Improved RedisConnectionFactory configuration via JedisClientConfiguration and LettuceClientConfiguration.
Revised RedisCache implementation.
1.2. New in Spring Data Redis 1.8
Add SPOP with count command for Redis 3.2.
For an introduction to key value stores, Spring, or Spring Data examples, please refer to Getting Started - this documentation refers only to Spring Data Redis support and assumes the user is familiar with key value storages and Spring concepts.
2. Why Spring Data Redis?
The Spring Data Redis (SDR) framework makes it easy to write Spring applications that use the Redis key value store by eliminating the redundant tasks and boilerplate code required for interacting with the store through Spring's excellent infrastructure support.
3. Requirements
Spring Data Redis 2.x binaries require JDK level 8.0 and above, and Spring Framework 5.0.1.RELEASE and above.
4. Getting Started
This section provides an easy-to-follow guide for getting started with the Spring Data Redis module.
4.1. First Steps
As explained in Why Spring Data Redis?, Spring Data Redis (SDR) provides integration between the Spring Framework and the Redis key value store, so it is best to become acquainted with both of these before reading this guide.
4.1.1. Knowing Spring
Spring Data uses Spring Framework's core functionality, such as the IoC container, resource abstraction, and the AOP infrastructure. While it is not important to know the Spring APIs, understanding the concepts behind them is.
4.1.2. Knowing NoSQL and Key Value Stores
The recommended way to become familiar with Redis itself is to read the Redis documentation and follow its examples before using it through Spring Data Redis.
4.1.3. Trying Out The Samples
4.2. Need Help?
If you encounter issues or you are just looking for advice, feel free to use one of the links below:
4.2.1. Community Support
The Spring Data tag on Stack Overflow is a message board for all Spring Data (not just Redis) users to share information and help each other. You can also follow the project team (@SpringData) on Twitter.
Reference Documentation
Document structure
This part of the reference documentation explains the core functionality offered by Spring Data Redis.
Redis support introduces the Redis module feature set.
5. Redis support
One of the key value stores supported by Spring Data is Redis.
5.1. Redis Requirements
Spring Data Redis requires Redis 2.6 or above and Java SE 8.0 or above. In terms of language bindings (or connectors), Spring Data Redis integrates with Lettuce and Jedis, two popular open-source Java libraries for Redis.
5.3. Connecting to Redis
One of the first tasks when using Redis and Spring is to connect to the store through the IoC container. To do that, a Java connector (or binding) is required. No matter the library chosen, only a single set of Spring Data Redis APIs needs to be used.
5.3.1. RedisConnection and RedisConnectionFactory
RedisConnection provides the core building block for Redis communication, as it handles the communication with the Redis back end. It also automatically translates the underlying connecting library exceptions to Spring's consistent DAO exception hierarchy, so one can switch the connectors without any code changes, as the operation semantics remain the same.
Active RedisConnections are created through RedisConnectionFactory. In addition, the factories act as PersistenceExceptionTranslators, meaning that, once declared, they allow transparent exception translation. The recommended way to work with a RedisConnectionFactory is to configure the appropriate connector through the IoC container and inject it into the using class.
5.3.2. Configuring Jedis connector
Jedis is one of the connectors supported by the Spring Data Redis module through the org.springframework.data.redis.connection.jedis package.
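A minimal Java-configuration sketch registering a JedisConnectionFactory might look like the following (host and port are placeholders):

@Configuration
class JedisConfiguration {

  @Bean
  public RedisConnectionFactory redisConnectionFactory() {
    // standalone connection to a locally running Redis server (placeholder host/port)
    return new JedisConnectionFactory(new RedisStandaloneConfiguration("localhost", 6379));
  }
}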
5.3.3. Configuring Lettuce connector
Lettuce is a Netty-based open-source connector supported by Spring Data Redis through the org.springframework.data.redis.connection.lettuce package.
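The setup mirrors the Jedis one; a minimal sketch (host and port are placeholders) is shown below:

@Configuration
class LettuceConfiguration {

  @Bean
  public RedisConnectionFactory redisConnectionFactory() {
    // LettuceConnectionFactory can serve both imperative and reactive usage
    return new LettuceConnectionFactory(new RedisStandaloneConfiguration("localhost", 6379));
  }
}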
5.5. Working with Objects through RedisTemplate
Most users are likely to use RedisTemplate and its corresponding package org.springframework.data.redis.core - the template is in fact the central class of the Redis module due to its rich feature set. The template offers a high-level abstraction for Redis interactions. While RedisConnection offers low-level methods that accept and return binary values (byte arrays), the template takes care of serialization and connection management, freeing the user from dealing with such details.
Out of the box, RedisTemplate uses a Java-based serializer for most of its operations. This means that any object written or read by the template will be serialized/deserialized through Java. The serialization mechanism can be easily changed on the template, and the Redis module offers several implementations available in the org.springframework.data.redis.serializer package - see Serializers for more information. Once configured, the template is thread-safe and can be reused across multiple instances.
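A sketch of configuring and using the template, analogous to the reactive example later in this document (bean and key names are illustrative):

@Configuration
class TemplateConfiguration {

  @Bean
  RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    return template;
  }
}

public class Example {

  // inject the template as usual
  @Autowired
  private RedisTemplate<String, Object> template;

  public void addLink(String userId, URL url) {
    // opsForList() is one of the operation views grouping list-specific commands
    template.opsForList().leftPush(userId, url.toExternalForm());
  }
}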
5.6. String-focused convenience classes
Since it is quite common for the keys and values stored in Redis to be java.lang.String, the Redis module provides two extensions to RedisConnection and RedisTemplate, respectively the StringRedisConnection (and its DefaultStringRedisConnection implementation) and StringRedisTemplate, as a convenient one-stop solution for intensive String operations. In addition to being bound to String keys, the template and the connection use the StringRedisSerializer underneath, which means the stored keys and values are human-readable.
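For example (field and key names are illustrative):

public class StringExample {

  @Autowired
  private StringRedisTemplate redisTemplate;

  public void addLink(String userId, URL url) {
    // keys and values are plain, human-readable Strings in Redis
    redisTemplate.opsForList().leftPush(userId, url.toExternalForm());
  }
}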
5.7. Serializers
From the framework perspective, the data stored in Redis is just bytes. While Redis itself supports various types, for the most part these refer to the way the data is stored rather than what it represents. It is up to the user to decide whether the information gets translated into Strings or any other objects.
The conversion between the user (custom) types and raw data (and vice-versa) is handled in Spring Data Redis in the
org.springframework.data.redis.serializer package.
This package contains two types of serializers which, as the name implies, take care of the serialization process:
Two-way serializers based on RedisSerializer.
Element readers and writers using
RedisElementReader and
RedisElementWriter.
The main difference between these variants is that
RedisSerializer primarily serializes to
byte[] while readers and writers use
ByteBuffer.
Multiple implementations are available out of the box, two of which have been already mentioned before in this documentation:
the
StringRedisSerializer
JdkSerializationRedisSerializer
However one can use
OxmSerializer for Object/XML mapping through Spring OXM support, or Jackson2JsonRedisSerializer / GenericJackson2JsonRedisSerializer for storing data in JSON format. Do note that the storage format is not limited only to values - it can be used for keys, values, or hashes without any restrictions.
5.8. Hash mapping
Data can be stored in Redis Hashes using various mapping strategies; HashMapper implementations convert between domain objects and the field/value maps that make up a Redis Hash.
5.8.2. Jackson2HashMapper
Jackson2HashMapper provides Redis Hash mapping for domain objects using FasterXML Jackson. It can map top-level properties as Hash field names and, optionally, flatten nested structures.
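A sketch of reading and writing a hash through the mapper (Person is a hypothetical domain type, and the boolean flag is assumed to control flattening):

public class HashMappingExample {

  private final HashMapper<Object, String, Object> mapper = new Jackson2HashMapper(true);

  public void writeHash(RedisOperations<String, Object> operations, String key, Person person) {
    Map<String, Object> mappedHash = mapper.toHash(person);
    operations.opsForHash().putAll(key, mappedHash);
  }

  public Person loadHash(RedisOperations<String, Object> operations, String key) {
    Map<String, Object> loadedHash = operations.<String, Object> opsForHash().entries(key);
    return (Person) mapper.fromHash(loadedHash);
  }
}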
5.9. Redis Messaging/PubSub
Spring Data provides dedicated messaging integration for Redis, similar in functionality and naming to the JMS integration in Spring Framework. Redis messaging can be roughly divided into two areas of functionality: production (publication) and consumption (subscription) of messages.
5.9.1. Sending/Publishing messages
To publish a message, one can use, as with the other operations, either the low-level
RedisConnection or the high-level
RedisTemplate. Both entities offer the
publish method that accepts as arguments the message that needs to be sent as well as the destination channel. While the connection requires the raw data (array of bytes), the template lets arbitrary objects be passed in as messages.
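For example, publishing a String payload through the template could look like this (channel name and payload are placeholders):

// serializes the payload with the template's value serializer and PUBLISHes it to the "chat" channel
template.convertAndSend("chat", "Hello, Redis!");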
5.9.2. Receiving/Subscribing for messages
Message Listener Containers
Due to its blocking nature, low-level subscription is not attractive, as it requires connection and thread management for every single listener. To alleviate this problem, Spring Data offers RedisMessageListenerContainer, which acts as a message listener container: it is used to receive messages from a Redis channel and drive the MessageListener instances injected into it, taking care of all the threading around message reception and dispatch. In addition, the MessageListenerAdapter allows a plain handler object that doesn't extend or implement any framework interface to be used as a listener.
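A minimal sketch of wiring a container with an adapter-based listener (the delegate class, handler method, and channel name are hypothetical):

@Configuration
class MessagingConfiguration {

  @Bean
  MessageListenerAdapter messageListener() {
    // ChatMessageDelegate is a plain POJO; its handleMessage(String) method is invoked per message
    return new MessageListenerAdapter(new ChatMessageDelegate(), "handleMessage");
  }

  @Bean
  RedisMessageListenerContainer redisContainer(RedisConnectionFactory connectionFactory,
      MessageListenerAdapter listener) {

    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.addMessageListener(listener, new ChannelTopic("chat"));
    return container;
  }
}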
5.10. Redis Transactions
Redis provides support for transactions through the multi, exec, and discard commands. These operations are available on RedisTemplate; however, RedisTemplate is not guaranteed to execute all operations in the transaction using the same connection unless transaction support is enabled or a SessionCallback is used.
5.10.1. @Transactional Support
Transaction support is disabled by default and has to be explicitly enabled for each RedisTemplate in use by setting setEnableTransactionSupport(true). Doing so forces binding the RedisConnection in use to the current thread, triggering MULTI; if the transaction finishes without errors, EXEC is called, otherwise DISCARD.
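A sketch of a configuration that opts in to transaction support (note that a PlatformTransactionManager, such as a JDBC DataSourceTransactionManager, still has to be present in the context to drive the transactions):

@Configuration
@EnableTransactionManagement
public class RedisTxConfiguration {

  @Bean
  public StringRedisTemplate redisTemplate(RedisConnectionFactory connectionFactory) {
    StringRedisTemplate template = new StringRedisTemplate(connectionFactory);
    // makes template operations participate in ongoing (Spring-managed) transactions
    template.setEnableTransactionSupport(true);
    return template;
  }
}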
5.11. Pipelining
Redis provides support for pipelining, which involves sending multiple commands to the server without waiting for the replies and then reading the replies in a single step. Pipelining can improve performance when you need to send several commands in a row, such as adding many elements to the same List. RedisTemplate offers several executePipelined methods that run commands in a pipeline and return the collected results.
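For example, a batch of right-pops from a queue can be issued in a single round trip (queue name and batch size are placeholders):

public List<Object> popBatch(StringRedisTemplate stringRedisTemplate, int batchSize) {
  // the returned list contains the reply of every rPop issued inside the callback, in order
  return stringRedisTemplate.executePipelined(new RedisCallback<Object>() {

    public Object doInRedis(RedisConnection connection) throws DataAccessException {
      StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
      for (int i = 0; i < batchSize; i++) {
        stringRedisConn.rPop("myqueue");
      }
      // the callback itself must return null; pipelined replies are collected by executePipelined
      return null;
    }
  });
}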
5.12. Redis Scripting
Redis versions 2.6 and higher provide support for execution of Lua scripts through the eval and evalsha commands. Spring Data Redis provides a high-level abstraction for script execution that handles serialization and automatically makes use of the Redis script cache.
Scripts can be run through the execute methods of RedisTemplate and ReactiveRedisTemplate. Both use a configurable ScriptExecutor / ReactiveScriptExecutor to run the provided script. By default, the ScriptExecutor takes care of serializing the provided keys and arguments and deserializing the script result. This is done via the key and value serializers of the template. There is an additional overload that allows passing custom serializers for the script arguments and the result. The default executor also optimizes performance by retrieving the SHA1 of the script and attempting first to run evalsha, falling back to eval if the script is not yet present in the Redis script cache.
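A sketch of declaring a script and executing it against the template for a check-and-set style routine (the script location, key, and values are placeholders):

@Bean
public RedisScript<Boolean> script() {
  DefaultRedisScript<Boolean> redisScript = new DefaultRedisScript<>();
  // checkandset.lua is a placeholder Lua resource returning a boolean
  redisScript.setLocation(new ClassPathResource("META-INF/scripts/checkandset.lua"));
  redisScript.setResultType(Boolean.class);
  return redisScript;
}

public class Example {

  @Autowired
  private RedisScript<Boolean> script;

  public boolean checkAndSet(RedisOperations<String, String> operations, String expectedValue, String newValue) {
    // keys are passed as a List; remaining args become ARGV inside the script
    return operations.execute(script, Collections.singletonList("key"), expectedValue, newValue);
  }
}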
6. Reactive Redis support
This section covers reactive Redis support and how to get started. You will find certain overlaps with the imperative Redis support.
6.1. Redis Requirements
Spring Data Redis requires Redis 2.6 or above and Java SE 8.0 or above. In terms of language bindings (or connectors), Spring Data Redis currently integrates with Lettuce as the only reactive Java connector. Project Reactor is used as reactive composition library.
6.2. Connecting to Redis using a reactive driver
The org.springframework.data.redis.connection package contains the ReactiveRedisConnection and ReactiveRedisConnectionFactory interfaces for working with and retrieving active connections to Redis.
6.2.1. Redis Operation Modes
Redis can be run as a standalone server, with Redis Sentinel, or in Redis Cluster mode. Lettuce supports all of the above-mentioned connection types.
6.2.2. ReactiveRedisConnection and ReactiveRedisConnectionFactory
ReactiveRedisConnection provides the building block for Redis communication as it handles the communication with the Redis back-end. It also automatically translates the underlying driver exceptions to Spring’s consistent DAO exception hierarchy so one can switch the connectors without any code changes as the operation semantics remain the same.
Active ReactiveRedisConnections are created through ReactiveRedisConnectionFactory. The recommended way to work with a ReactiveRedisConnectionFactory is to configure the appropriate connector through the IoC container and inject it into the using class.
6.2.3. Configuring Lettuce connector
Lettuce is supported by Spring Data Redis through the
org.springframework.data.redis.connection.lettuce package.
Setting up
ReactiveRedisConnectionFactory for Lettuce can be done as follows:
@Bean public ReactiveRedisConnectionFactory connectionFactory() { return new LettuceConnectionFactory("localhost", 6379); }
A more sophisticated configuration, including SSL and timeouts, using
LettuceClientConfigurationBuilder might look like below:
have a look at
LettuceClientConfiguration.
6.3. Working with Objects through ReactiveRedisTemplate
Most users are likely to use
ReactiveRedisTemplate and its corresponding package
org.springframework.data.redis.core - the template is in fact the central class of the Redis module due to its rich feature set. The template offers a high-level abstraction for Redis interactions. While
ReactiveRedisConnection offers low level methods that accept and return binary values (
ByteBuffer), the template takes care of serialization and connection management, freeing the user from dealing with such details.
Moreover, the template provides operation views (following the grouping from Redis command reference) that offer rich, generified interfaces for working against a certain type as described below:
Once configured, the template is thread-safe and can be reused across multiple instances.
Out of the box,
ReactiveRedisTemplate uses a Java-based serializer for most of its operations. This means that any object written or read by the template will be serialized/deserialized through
RedisElementWriter respective
RedisElementReader. The serialization context is passed to the template upon construction, and the Redis module offers several implementations available in the
org.springframework.data.redis.serializer package - see Serializers for more information.
@Configuration class RedisConfiguration { @Bean ReactiveRedisTemplate<String, String> reactiveRedisTemplate(ReactiveRedisConnectionFactory factory) { return new ReactiveRedisTemplate<>(connectionFactory, RedisSerializationContext.string()); } }
public class Example { @Autowired private ReactiveRedisTemplate<String, String> template; public Mono<Long> addLink(String userId, URL url) { return template.opsForList().leftPush(userId, url.toExternalForm()); } }
6.4. Reactive Scripting
Executing Redis scripts via the reactive infrastructure can be done using the
ReactiveScriptExecutor accessed best via); } }
Please refer to the scripting section for more details on scripting commands.
7. Redis Cluster
Working with Redis Cluster requires a Redis Server version 3.0+ and provides a very own set of features and capabilities. Please refer to the Cluster Tutorial for more information..
())); } }
7.2. Working With Redis Cluster Connection
As mentioned))
8. Redis Repositories
Working with Redis Repositories allows to seamlessly convert and store domain objects in Redis Hashes, apply custom mapping strategies and make use of secondary indexes.
8.
) }
8.2. Object to Hash Mapping
The Redis Repository support persists Objects"
8, "persons")); } } }
8.4. Secondary Indexes
Secondary indexes are used to enable lookup operations based on native Redis structures. Values are written to the according indexes on every save and are removed when objects are deleted or expire.
8")); } } }
8.
8.
8)
8.);
8.8. Queries and Query Methods
Query methods allow automatic derivation of simple finder queries from the method name. added ones. All it takes is providing a
RedisCallback that returns a single or
Iterable set of id values.);
Here’s an overview of the keywords supported for Redis and what a method containing that keyword essentially translates to.
8.
8 available as CDI beans and create a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container. Thus obtaining an instance of a Spring Data repository is a matter of declaring an
@Injected property:.
Appendixes
Appendix A: Schema
Core schema
<="topic" type="xsd:string"> <xsd:annotation> <xsd:documentation><![CDATA[ The topics(s) to which the listener is subscribed. Can be (in Redis terminology) a channel or/and a pattern. Multiple values can be specified by separating them with spaces. Patterns can be specified by using the '*' character. ]]><.redis.serializer.RedisSerializer"/> <> | https://docs.spring.io/spring-data-redis/docs/current/reference/html/ | 2017-11-17T21:23:23 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.spring.io |
A logical port, logical switch, or NSGroup can be excluded from a firewall rule.
About this task
After you've created a section with firewall rules you may want to exclude an NSX-T appliance port from the firewall rules.
Procedure
- Select Firewall in the navigation panel.
- Click the Exclusion List tab.
The exclusion list screen appears.
- To add an object, click Add on the menu bar.
A dialog box appears.
- Select a type and an object.
The available types are Logical Ports, Logical Switch, and NSGroup.
- Click Save.
- To remove an object from the exclusion list, select the object and click Delete on the menu bar.
- Confirm the delete. | https://docs.vmware.com/en/VMware-NSX-T/2.0/com.vmware.nsxt.admin.doc/GUID-12617811-D760-48EF-A58E-8CB4AA227B21.html | 2017-11-17T21:28:17 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
You can update one or more vRealize Hyperic agents by pushing the new agent bundle to it from the vRealize Hyperic server, using the vRealize Hyperic user interface.
About this task
When you update an agent bundle, the configuration settings in the agent's AgentHome/conf/agent.properties file are not changed. However, the first time you start an agent that you have updated from version 4.5 or earlier, passwords specified in the file are encrypted.
Prerequisites
The bundle must reside in the ServerHome/hq-engine/hq-server/webapps/ROOT/WEB-INF/hq-agent-bundles directory.
Procedure
- On the Resources tab, select the server on which the agent bundle resides.
- On the Views tab, click Agent Commands.
- Select Upgrade from the Select an agent operation to run menu.
- Select the appropriate bundle from the Select upgradeable agent bundle menu.
The bundle includes an update to the JRE. If you do not want to update the JRE, select the bundle that does not include a platform in the file name, for example agent-version.number.tar.gz.
- Click Execute.
Results
The bundle is copied to the bundles directory and self-extracts. On completion of the extraction process, you can see the version information for the upgraded agent on the tab. | https://docs.vmware.com/en/vRealize-Hyperic/5.8.4/com.vmware.hyperic.install.config.doc/GUID-56303AE3-22ED-4152-BD0E-B3B57DFA2C62.html | 2017-11-17T21:28:49 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.vmware.com |
The Application Billing Page
Important
Amazon DevPay is not accepting new seller accounts at this time. Please see AWS Marketplace for information on selling your applications on Amazon Web Services.
After customers have used the product, they can view their usage and corresponding costs on the Application Billing page (at). This page is available at any time and shows usage and billing information for all the DevPay products they use. You should provide a link to this page on your web site.
The following image shows an example Application Billing page for a customer who has purchased paid AMIs.
The next image shows an example Application Billing page for a customer who has purchased products based on Amazon S3.
| http://docs.aws.amazon.com/AmazonDevPay/latest/DevPayDeveloperGuide/ExampleApplicationBilling.html | 2017-11-17T21:36:36 | CC-MAIN-2017-47 | 1510934803944.17 | [array(['images/ApplicationBillingExample_EC2.gif',
'Application Billing page example for paid AMIs'], dtype=object)
array(['images/ApplicationBillingExample_S3.gif',
'Application Billing page example for Amazon S3 products'],
dtype=object) ] | docs.aws.amazon.com |
Bases: astropy.time.core.TimeFormat
Base class for times that represent the interval from a particular epoch as a floating point multiple of a unit time interval (e.g. seconds or days).
Attributes Summary
Methods Summary
Attributes Documentation
Methods Documentation
Initialize the internal jd1 and jd2 attributes given val1 and val2. For an TimeFromEpoch subclass like TimeUnix these will be floats giving the effective seconds since an epoch time (e.g. 1970-01-01 00:00:00). | https://astropy.readthedocs.io/en/v0.2.5/_generated/astropy.time.core.TimeFromEpoch.html | 2017-11-17T21:14:15 | CC-MAIN-2017-47 | 1510934803944.17 | [] | astropy.readthedocs.io |
Notifying a single user with OpsGenie is straight forward; just specify the user as the recipient of the alert.
If you are using OpsGenie Alert API or tools (lamp) to create alerts, you can put the username of the user into the recipients field. If you're forwarding alerts to OpsGenie via email, you can define an email rule and specify the recipient there.
Note that if you're creating the alerts via the web UI, OpsGenie web UI allows entering user's full name and auto-completes as you type in the name. It actually sets the recipient to the username in the background.
Yes. Recipients field support multiple entries, hence if you're using the API, simply put the usernames of all the users you'd like to notify in the recipients field in comma separated format.
With this configuration, each of the users in the recipients field would be notified as soon as the alert is created (according to each user's own notification preferences)
Yes. You can simply create an on-call schedule for each team and specify each of the on-call schedules as a recipient. This would allow each of the team to manage their own on-call schedule and rotations, and one member of each team would be notified immediately for the alerts.
Yes. When usernames and/or groups specified as the recipients, all specified users are notified immediately. To notify users in order with a time delay OpsGenie provides "escalations". In an escalation, OpsGenie tries each user and/or group in order until the alert is acknowledged by someone (or closed).
To notify first Fili, and then Kili, first create an escalation, and specify the names of the users and/or groups and the time delay between each in the escalation rules.
Then when creating the alert, simply put the name of the escalation into the recipients field.
Can I notify multiple users without specifying their names while creating the alert?Can I notify members of the webops team immediately and notify to their manager if the alert is not acknowledged in 10 minutes?
Yes. OpsGenie web API allows specifying the name of a "group" in the recipients field. Just create a "group", add users to the group, and put the group name into the recipients field
With this configuration, each member of the group would be notified as soon as the alert is created, according to the user's notification preferences. Using groups as the recipient instead of individual users may be preferred since modifying group membership is typically easier than modifying various integration scripts, email rules, etc. and can be done in a single place.
Can I notify members of the webops team immediately and notify to their manager if the alert is not acknowledged in 10 minutes?
Yes. Create an escalation that notifies webops_team group immediately, and notifies Thorin after 10 minutes (if the alert is still not acknowledged or closed)
Then when creating the alert, specify the escalation as the recipient of the alert.
Yes. You can create an on-call schedule with daily, weekly or custom rotation, and specify the on-call schedule as the recipient of the alert.
Can I notify the on-call engineer first, and escalate it to the team members if the alert is not acknowledge in 10 minutes?
Yes. You can create an on-call schedule and an escalation and specify both as the recipients of the alert.
Yes, you can obtain this behavior as follows:
Create an escalation that notifies all members of your team as in the following example:
- Create an on-call schedule.
- Add one rotation for Work Hours that notifies the escalation above within work hours like in the following setup:
Add another rotation for Off Hours that notifies your team within the off-hours, like in the following example. Please note that this rotation will notify the members of the team according to the order and time specifications of its escalation policy.
Your final schedule will look like the below:
You can refer to our Teams, Escalations and On-call Schedules and Rotations documents for further information.
Yes. You need to follow these steps:
- The owner of the account can assign any user as owner. So, log in with the current owner and set a user as Owner from "Users" page, by selecting the user's role as "Owner" .
- New owner can log in and change the role of old owner or delete old owner.
You can copy a user's current Notification Settings to multiple other users; by making a single POST request to OpsGenie Web API. This is particulary useful if, for example, you've configured a good set of notification rules on your profile; and now want to apply these same rules to your 50 other users.
Upon a successful POST request, OpsGenie responds with the text "process started" and starts updating the notification rules of the targeted users; asynchronously. The whole process may be completed in ranging from 5 seconds to 10 minutes, depending on the size of the operation.
After all the updates are made, OpsGenie adds to your log stream a report of the operation, with the message "Copying of notification rules completed". In this report you can see how many users are updated and details of partial updates, in case your users haven't configured all their contact methods for the notification rules.
Either use the API key of your "Default API" integration in Integrations page or create an API Integration and obtain its API key to make the request below. Please make sure that the integration is not restricted to access configurations.
Please note that all the existing notification rules of the target users for the specified types will be deleted before copy.
The POST request takes the following parameters:
apiKey
API key is used for authenticating API requests
fromUser
Username of the template user. This user's notification rules will be used for copying.
toUsers
Specify a list of the users which you want to copy the rules to. You can use the username of a user, the name of a group, or to copy to all users, "all".
ruleTypes
Specify a list of the action types you want to copy the rules of. It can contain "New Alert", "Acknowledged Alert" or for all types of notification rules, "all". The total list of valid types are:
- all
- New Alert
- Acknowledged Alert
- Closed Alert
- Schedule Start
- Renotified Alert
- Assigned Alert
- Add Note
Sample Request
curl -XPOST '' -d ' { "apiKey": "eb243592-faa2-4ba2-a551q-1afdf565c889", "fromUser" : "[email protected]", "toUsers" : [ "group1", "[email protected]", "[email protected]" ], "ruleTypes" : [ "New Alert", "Alert Closed" ] } '
Response:
You can export your alerts by submitting a form in Reports Page. You will receive an email contains a link to download the export result. By clicking, you can download a zip file containing two csv files. One of them contains alert data, and the other one contains alert count data day by day.
Alert data contains alert creation and update times in milliseconds along with human-readable timestamps in your account's timezone. Timestamp information also contains the timezone offset.
You can export maximum 10k alerts per export request. In that case, result email contains a link to continue exporting the rest of your alerts.You can export all alerts or just your alerts by enabling the Only My Alerts field in the form. | https://docs.opsgenie.com/docs/frequently-asked-questions | 2017-11-17T21:29:05 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.opsgenie.com |
Upgrading django-SHOP¶
Upgrading to 0.10.0¶
This version requires django-CMS version 3.4.2 or higher and djangocms-cascade version 0.12.0 or higher. It is well tested with Django-1.10 but should work as well with Django-1.9.
There has been a lot of effort in getting a cleaner and more consistent API. If you upgrade from version 0.9 please note the following changes:
The REST serializers have been moved into their own submodule
shop.serializers. They now are
separated into
bases and
defaults following the same naming convention as beeing used
in
shop.models and
shop.admin. Please ensure that you change your import statements.
Serializers
ProductCommonSerializer,
ProductSummarySerializer and
ProductDetailSerializer
have been unified into a single
ProductSerializer, which acts as default for the
ProductListView and the
ProductRetrieveView. The
ProductSummarySerializer (which is used
to serialize attributes available across all products of the site) now must be configured using the
settings directive
SHOP_PRODUCT_SUMMARY_SERIALIZER.
All Angular directives have been checked for HTML5 mode compatibility. It is strongly recommended over hashbang mode.
Billing and shipping address have been unified into one single address form which makes them easier
to interchange. The
salutation field has been removed from the address model and can now
optionally be added to the merchant representation.
All AngularJS directives for the catalog list and catalog search view support infinite scroll, as well as manual pagination.
After upgrading to angular-ui-bootstrap version 0.14, all corresponding directives have to be
prefixed with
uib-....
There is no more need for a special URL pattern to handle auto-completion search. Instead use the
wrapping view
shop.search.views.CMSPageCatalogWrapper.
The model
CartItem has a new CharField
product_code. This replaces the
product_code,
which optionally is kept inside its
extra dict. This requires to simplify some templates
implementing
{{ somevar.extra.product_code }} into
{{ somevar.product_code }}; it applies to
the cart, the add-to-cart and the order templates. Also check for
ProductSerializer-s
implemented for products with variations.
Look for methods implementing
get_product_variant since its signature changed.
requires a database migration by the merchant implementation. Such a migration file must contain a datamigration, for instance:
from __future__ import unicode_literals from django.db import migrations, models def forwards(apps, schema_editor): CartItem = apps.get_model('myshop', 'CartItem') for item in CartItem.objects.all(): item.product_code = item.extra.get('product_code', '') item.save() def backwards(apps, schema_editor): CartItem = apps.get_model('myshop', 'CartItem') for item in CartItem.objects.all(): item.extra['product_code'] = item.product_code item.save() class Migration(migrations.Migration): dependencies = [ ('myshop', '0001_initial'), ] operations = [ migrations.AddField( model_name='cartitem', name='product_code', field=models.CharField(blank=True, help_text='Product code of added item.', max_length=255, null=True, verbose_name='Product code'), ), migrations.RunPython(forwards, reverse_code=backwards), ]
0.9.3¶
This version requires djangocms-cascade 0.11.0 or higher. Please ensure to run the migrations which convert the Cascade elements:
./manage.py migrate shop
0.9.2¶
The default address models have changed in 0.9.2. If you are upgrading from 0.9.0 or 0.9.1 and your project is using the default address models, you need to add a migration to make the necessary changes to your models:
./manage.py makemigrations --empty yourapp
Next, edit the migration file to look like this:
# -*- coding: utf-8 -*- from __future__ import unicode_literals from django.db import models, migrations class Migration(migrations.Migration): dependencies = [ # makemgirations will generate the dependencies for you. ] operations = [ migrations.RenameField("ShippingAddress", "addressee", "name"), migrations.RenameField("ShippingAddress", "street", "address1"), migrations.RenameField("ShippingAddress", "supplement", "address2"), migrations.RenameField("ShippingAddress", "location", "city"), migrations.AlterField("ShippingAddress", "name", models.CharField( verbose_name="Full name", max_length=1024 )), migrations.AlterField("ShippingAddress", "address1", models.CharField( verbose_name="Address line 1", max_length=1024 )), migrations.AlterField("ShippingAddress", "address2", models.CharField( verbose_name="Address line 2", max_length=1024 )), migrations.AlterField("ShippingAddress", "city", models.CharField( verbose_name="City", max_length=1024 )), migrations.RenameField("BillingAddress", "addressee", "name"), migrations.RenameField("BillingAddress", "street", "address1"), migrations.RenameField("BillingAddress", "supplement", "address2"), migrations.RenameField("BillingAddress", "location", "city"), migrations.AlterField("BillingAddress", "name", models.CharField( verbose_name="Full name", max_length=1024 )), migrations.AlterField("BillingAddress", "address1", models.CharField( verbose_name="Address line 1", max_length=1024 )), migrations.AlterField("BillingAddress", "address2", models.CharField( verbose_name="Address line 2", max_length=1024 )), migrations.AlterField("BillingAddress", "city", models.CharField( verbose_name="City", max_length=1024 )), ]
Finally, apply the migration:
./manage.py migrate yourapp | http://django-shop.readthedocs.io/en/latest/upgrading.html | 2017-11-17T20:52:34 | CC-MAIN-2017-47 | 1510934803944.17 | [] | django-shop.readthedocs.io |
How to model an app user in ApiOmat
Most of the time when creating a new app, we are confronted with the central question: How to model a user of my app? This article shows the best practices in ApiOmat and gives some examples of the usage of the user model.
In ApiOmat, existing data models or classes are encapsulated in modules. One module is automatically contained in every app: The Basics module. If you are searching for a representation of a user, you will find it right in this module: The User class. A user represents one installation of your app; they therefore have to be unique to each device. You find examples below Creation and loading of a new user. This unique relation is necessary to ensure a proper authentication of requests against ApiOmat. If the same user would be used on more than one device, no clear separation of data would be possible. If no user would be used at all, only free accessible data could be retrieved. Additionally, some other modules like Push or Facebook are based on the concept of unique users per device and would not work when sharing users.
User insights
A user contains some basic attributes, which are primarily used for authentication:
userName - Unique user name or email address (obligatory)
password - Password of the user (obligatory)
firstName - First name of the user
lastName - Last name of the user
dateOfBirth - User's date of birth
It can be seen that every user has to have at least a userName and password.
A deeper look into the class shows an attribute named dynamicAttributes; this map is not writable and only for internal use. It is used to store additional user attributes in a dynamic way, which are injected by other modules. For example, if you add the Push Module to your app, your user will automatically be enriched with some more attributes seen in the SDK to store the Push tokens.
Creation and loading of a new user
If you select the user class in the left menu and switch to the SDK afterwards, you will see example code for user creation in all SDK languages (Javascript, iOS,...). The following code will automatically create a new user if it did not exist before:
// Create a new member/user of your app
User myUser=
new
User();
myUser.setUserName(
"johnDoe"
);
myUser.setPassword(
"1,618"
);
// configure datastore with user credentials
Datastore.configure( myUser );
// Try to load user or create new one if not exists
myUser.loadMeAsync(
new
AOMEmptyCallback() {
@Override
public
void
isDone(ApiomatRequestException exception) {
if
(exception!=
null
) {
myUser.saveAsync(
null
);
}
}
} );
The workflow in creating a user is always the same:
Create a user
Call Datastore.configure(), which will set the user's credentials for the following requests to ApiOmat
Load an existing user (loadMe())
Or: save the user(save())
It is very important to call Datastore.configure() before the first request. Otherwise the communication layer will not have the necessary information to communicate with ApiOmat.
Reset password
While all other attributes of a user can be set using the setter methods, special methods are used to reset password:
resetPassword() – Sends a generated password to the user via email. This is useful if they forget the old password. It is necessary to use a valid email address as userName to make this method work!
changePassword – Changes the password to a new value. The Datastore gets automatically reconfigured with the new credentials afterwards.
Extending user class
In some cases, the existing attributes in the user class will not suffice the needs of you app. If this happens, you can simply build you own class in Class Editor (e.g. JiraUser in the image), and inherit from the user class. Your own class will inherit all attributes and behavior from the standard user class and can be used for authentication, too. The example code in SDK will show you how:
// Create a new member/user of your app
JiraUser myUser=
new
JiraUser();
myUser.setUserName(
"johnDoe"
);
myUser.setPassword(
"1,618"
);
// configure datastore with user credentials
Datastore.configure( myUser );
...
Roles and Security
An authentication with a user permits access to all data with the User and Guest rights. An extension to this rights management is also possible: You can find additional documentation for security for complex cases here.
The default access rules of the basics.User class allow all users to read the data of other users. You can either create a subclass and modify the access rules, or change the configuration of the module "Basics" accordingly (see Basics Module).
Billing
Based on the usage of the User class the number of active users is calculated, which is one part used for billing purposes. An active user is a user who made requests against ApiOmat or received push messages in the last 30 days. To ensure a proper calculation, the implementation of the user in your app must follow the above advises (one app installation = one unique user). Misuse or sharing of users is detected. | http://docs.apiomat.com/31/How-to-model-an-app-user-in-ApiOmat.html | 2019-01-16T07:37:52 | CC-MAIN-2019-04 | 1547583657097.39 | [array(['images/download/attachments/27723939/userOverview.png',
'images/download/attachments/27723939/userOverview.png'],
dtype=object)
array(['images/download/attachments/27723939/user_parentClass.png',
'images/download/attachments/27723939/user_parentClass.png'],
dtype=object) ] | docs.apiomat.com |
An Act to amend 978.06 (5) (a); and to create 978.001 (1k) of the statutes; Relating to: allowing district attorneys, deputy district attorneys, and assistant district attorneys to engage in the private practice of law for certain civil purposes.
Bill Text (PDF: )
AB276 ROCP for Committee on Judiciary (PDF: )
Wisconsin Ethics Commission information | http://docs.legis.wisconsin.gov/2015/proposals/ab276 | 2019-01-16T07:35:59 | CC-MAIN-2019-04 | 1547583657097.39 | [] | docs.legis.wisconsin.gov |
Geodetic calculations¶
Module
openquake.hazardlib.geo.geodetic contains functions for geodetic
transformations, optimized for massive calculations.
- class
openquake.hazardlib.geo.geodetic.
GeographicObjects(objects, getlon=<operator.attrgetter object>, getlat=<operator.attrgetter object>)[source]¶
Store a collection of geographic objects, i.e. objects with longitudes and latitudes. By default extracts the coordinates from the attributes .lon and .lat, but you can provide your own getters. It is possible to extract the closest object to a given location by calling the method .get_closest(lon, lat)._to_arc(alon, alat, aazimuth, plons, plats)[source]¶
Calculate a closest distance between a great circle arc and a point (or a collection of points).)[source]¶
Calculate the geodetic distance between two points or two collections of points.
Parameters are coordinates in decimal degrees. They could be scalar float numbers or numpy arrays, in which case they should “broadcast together”.
Implements.
Rounds the distance between two reference points with respect to
lengthand calls
npoints_towards().
openquake.hazardlib.geo.geodetic.
min_distance(mlons, mlats, mdepths, slons, slats, sdepths, indices=False)[source]¶
Calculate the minimum distance between a collection of points and a point.
This function allows to calculate a closest distance to a collection of points for each point in another collection. Both collection can be of any shape, although it doesn’t make sense to use scalars for the first one.
Implements the same formula as in
geodetic_distance()for distance along great circle arc and the same approach as in
distance()for combining it with depth distance.
openquake.hazardlib.geo.geodetic.
min_distance_to_segment(seglons, seglats, lons, lats)[source]¶
This function computes the shortest distance to a segment in a 2D reference system.
openquake.hazardlib.geo.geodetic.
min_geodetic_distance(mlons, mlats, slons, slats)[source]¶
Same as
min_distance(), but calculates only minimum geodetic distance (doesn’t accept depth values) and doesn’t support
indices=Truemode.
This is an optimized version of
min_distance()that is suitable for calculating the minimum distance between first mesh and each point of the second mesh when both are defined on the earth surface..
Implements the same approach as
npoints_towards(). | https://docs.openquake.org/oq-hazardlib/0.21/geo/geodetic.html | 2019-01-16T07:35:24 | CC-MAIN-2019-04 | 1547583657097.39 | [] | docs.openquake.org |
Week of April 24th, 2017 (10.2.0) Search Docs See Service for APM system status and upgrade schedule information. Enhancements Network Path Charts—Another improvement to the Path Performance charts! The time and value of the most recent data point is clearly displayed at the top of the chart for each metric. Resolved issues ID Keyword Description AV-1079 Experience Fixed: The URL in Web Path Detail Reports (PDF) that links back to the web path in the AppNeta Performance Manager web app is broken. Going forward, newly generated reports will have working links. AV-2341 Experience Fixed: Both the current web path status and the latest transaction status were not always an aggregate roll-up of the latest transaction status of each milestone. It may show as error when in fact each milestone was successful. AV-2433 Experience Fixed: When new APM licenses are assigned or re-assigned, web paths may show inconsistent configuration and monitoring state. AV-2442 Experience When converting Enterprise Monitoring Points from legacy to the APM licensing, a warning message is presented with specific web paths flagged to have a lower license selected, before any changes can be applied to the Web App Group. For example, change enhanced -> standard. AV-2460 Experience Fixed: In cases where an account has expired licenses, the Web Paths page only shows a license warning and does not list the web paths. Now the warning is displayed along with the web paths. AV-2472 Experience Fixed: When “declare” statements or blank lines are present in web path workflows, inaccurate line numbers are reported for script issues found during preview. AV-2482 Experience Fixed: When a web path is disabled and re-enabled, custom monitoring interval settings are lost and reset to default values. | https://docs.appneta.com/release-notes/2017-04-24-application.html | 2019-01-16T07:57:18 | CC-MAIN-2019-04 | 1547583657097.39 | [] | docs.appneta.com |
6 – Evaluating Cloud Hosting Costs
This chapter presents a basic cost model for running the aExpense application in the cloud. It makes some assumptions about the usage of the application and uses the current pricing information for Microsoft Azure services to estimate annual operational costs.
The Premise
The aExpense application is a typical business application. Adatum selected this as a pilot cloud migration project because the application has features that are common to many of Adatum's other business applications, and Adatum hopes that any lessons learned from the project can be applied elsewhere.
The original on-premises version of the aExpense application is deployed in Adatum’s data center, with components installed across several different servers. The web application is hosted on a Windows Server computer that it shares with another application. aExpense also shares a SQL Server database installation with several other applications, but has its own dedicated drive array for storing scanned expense receipts.
The current cloud-based deployment of aExpense, using Cloud Services web and worker roles, is sized for average use, not peak use, so the application can be slow and unresponsive during the busy two days at month-end when the majority of users submit their business expense claims.
Goals and Requirements
It is difficult for Adatum to determine accurately how much it costs to run the original on-premises version of aExpense. The application uses several different servers, shares an installation of SQL Server with several other business applications, and is backed up as part of the overall backup strategy in the data center.
It is very difficult to estimate the operational costs of an existing on-premises application.
Although Adatum cannot determine the existing running costs of the application, Adatum wants to estimate how much it will cost to run in the cloud now that the developers have completed the migration steps described in the previous chapters of this guide. One of the specific goals of the pilot project is to discover how accurately it can predict running costs for cloud based applications.
A second goal is to estimate what cost savings might be possible by configuring the application in different ways, or by taking advantage of other Azure services. Adatum will then be able to assign a cost to a particular configuration and level of service, which will make it much easier to perform a cost-benefit analysis on the application. A specific example of this in the aExpense application is to estimate how much it will cost to deploy additional instances to meet peak demand during the busy month-end period.
Overall, Adatum would like to see greater transparency in managing the costs of its suite of business applications.
Detailed Costing Estimates
The first step Adatum took was to analyze what it will be billed every month for the cloud-based version of aExpense. Figure 1 shows the services that Microsoft will bill Adatum for each month for the aExpense application.
Figure 1
Billable services
The following table summarizes the current rates in U.S. dollars for these services. The prices listed here are accurate for the U.S. market as of July 2012. However, for up-to-date pricing information see the Azure Pricing Details. You can find the pricing for other regions at the same address.
Bandwidth Cost Estimate for aExpense
The aExpense application is not bandwidth intensive. Assuming that all scanned receipt images will be transferred back and forth to the application twice, and taking into account the web traffic for the application, Adatum estimated that 9.5 GB of data would move each way every month.
The Hands-on Labs that are available for this guide include an exercise that demonstrates how Adatum estimated the bandwidth usage and other runtime parameters for the aExpense application.
Compute Estimate for aExpense
Adatum's assumption here is that the application will run 24 hours a day, 365 days a year. The current version of the application uses a single instance of the Cloud Services web role and worker role.
Receipt Image Storage Estimate for aExpense
The aExpense application stores uploaded receipt images in Azure blob storage. Based on an analysis of existing usage, on average 65 percent of 15,000 Adatum employees submit ten business expense items per month. Each scanned receipt averages 15 KB in size, and to meet regulatory requirements, the application must store seven years of history. This gives an estimated storage requirement for the application of 120 GB.
Azure SQL Database Storage Requirements Estimate
The aExpense application stores expense data (other than the receipt images) in a Azure SQL Database. Adatum estimates that each business expense record in the database will require 2 KB of storage. So based on the analysis of existing usage (on average 65 percent of 15,000 Adatum employees submit ten business expense items per month) and the requirement to store data for seven years, this gives an estimated storage requirement of 16 GB. However, the actual measured database usage is likely to be greater than this due to the nature of the way that a database stores the data until it is compacted, and so Adatum will base the estimate on a 20 GB database.
Total Cost Approximation
This means that the costs as an approximate proportion of the total cost of running the application (a total of $3,089.76 per year) will be as follows:
- Compute (web and worker roles): $2,102.40 (~ 68 %)
- Azure SQL Database: $791.16 (~ 26 %)
- Azure storage: $182.52 (~ 6 %)
- Bandwidth: $13.68 (~ 0.4 %)
Variations
Having established the approximate costs of running the aExpense application in its current form, Adatum wants to confirm that its choice of a PaaS approach was justified, and also consider some variations to discover the cost for better meeting peak demand and to see if additional cost savings were possible.
Costing the IaaS Hosting Approach
In the first step of the migration, Adatum hosted both the application and the SQL Server database in Azure Virtual Machines. To accommodate the requirements of SQL Server, with a view to using it with other applications in the future, Adatum chose to use a medium sized virtual machine for the database, and a small sized virtual machine for the application.
Adatum also chose to use the Standard edition of SQL Server rather than the Web edition in order to accommodate future requirements. The virtual machine that hosts SQL Server also needs a data disk to store the SQL Server database. The estimated costs of this configuration are shown in the following table.
From this it’s clear that the PaaS approach using Cloud Services and Azure SQL Database is considerably less expensive than the original migration step that used the IaaS approach with virtual machines. However, Adatum must consider that the IaaS approach required only very minimal changes to the application code, and that the use of a hosted SQL Server is not directly equivalent to using Azure SQL Database. For example, if Adatum deploys additional applications in the future they can share the hosted SQL Server without incurring additional cost, whereas additional costs will be incurred when other applications that use Azure SQL Database are deployed.
It’s also possible for Adatum to install SQL Server on the virtual machine using a licensed copy they own instead of paying to rent SQL Server, which could considerably reduce the overall costs; but Adatum must also consider the cost of maintaining and upgrading the operating systems and database software for the IaaS approach.
However, overall, the saving of almost $5,000.00 per year justifies the decision Adatum made to move from IaaS to PaaS for the aExpense application, even when considering the development effort required to refactor the application and adapt it to run in Cloud Services web and worker roles. Adatum will review the situation when it decides to move other applications to the cloud.
Combined IaaS and PaaS Approach
If Adatum actually requires a virtual machine because the application demands some special operating system configuration, access to non-standard installed services, or cannot be refactored into web and worker roles, the data could be still be stored in Azure SQL Database to remove the requirement for a hosted SQL Server. In this case the running costs per instance would be similar to that for the PaaS approach using Cloud Services.
This configuration is based on a single virtual machine, which would run the background tasks asynchronously within the application instead of using a separate worker role. Adatum could implement a virtual network in the cloud and load balance two virtual machine instances to provide additional capacity, in which case the overall cost would be almost the same as using a Cloud Services web and worker role.
Adatum could also use a virtual machine with a separate Cloud Services worker role to perform the background processing tasks, and communicate with the worker role from the virtual machine using Azure storage queues. This configuration will also cost almost the same as when using Cloud Services web and worker roles.
Costing for Peak and Reduced Demand
One of the issues raised by users of the existing aExpense application is poor performance of the application during the two days at the end of the month when the application is most heavily used. To address this issue, Adatum then looked at the cost of doubling the compute capacity of the application for two days a month by adding an extra two web roles to handle the UI load.
This indicates that the additional cost to better meet peak demand is low, and yet it will provide a huge benefit for users. Adatum can use scripts executed by on-premises administrators to change the number of running instances, perhaps through a scheduled task, or implement an auto scaling solution such as the Enterprise Library Autoscaling Application Block.
Note
The Autoscaling Application Block is part of Enterprise Library, developed by the p&p team at Microsoft. For more information see “The Autoscaling Application Block” and Chapter 6, “Maximizing Scalability, Availability, and Performance in the Orders Application,” in the p&p guide “Building Hybrid Applications in the Cloud on Microsoft Azure.
Adatum also examined the cost implications of running the application for only twelve hours each day for only six days each week, except at the month end when the majority of users access it. The following table shows the compute costs for the web and worker roles.
This is less than half of the compute cost of running the application 24 hours a day for 365 days per year, giving a saving of around $1,100 per year. Adatum could use the same auto scaling approach described earlier to achieve this pattern of availability.
Costing for Azure Table Storage
Adatum is also interested in comparing the cost of storing the business expense data in Azure table storage instead of in Azure SQL Database. The previous calculations in this chapter show that the storage requirement for seven years of data (excluding receipt images, which are stored in Azure blob storage) is around 16 GB. The following table also assumes that each new business expense item is accessed five times during the month.
As you can see, this is a fraction of the cost of using Azure SQL Database ($791.16 per year). The estimated total running cost for the year would be $2,322.90 using table storage, offering the possibility of reducing the overall running costs by almost a quarter.
Adapting the application to use Azure table storage instead of a relational database will require development and testing effort. However, as long as table storage can provide the performance and scalability required, the saving makes this worthwhile for Adatum’s scenario. In the following chapter you will see how Adatum explored adapting the aExpense application to use table storage, and then implemented the change.
More Information
Use the Azure Pricing calculator to estimate runtime costs.
You can find information that will help you to understand your Azure bill at
Pricing Details.
For information about auto scaling Azure application roles, see “The Autoscaling Application Block” and Chapter 6, “Maximizing Scalability, Availability, and Performance in the Orders Application,” in the p&p guide “Building Hybrid Applications in the Cloud on Microsoft Azure.”
Next Topic | Previous Topic | Home | Community | https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff803372(v=pandp.10) | 2019-01-16T07:41:47 | CC-MAIN-2019-04 | 1547583657097.39 | [array(['images/gg663533.pnp-logo%28en-us%2cpandp.10%29.png',
'patterns & practices Developer Center patterns & practices Developer Center'],
dtype=object)
array(['images/ff803372.1619c5df6a9debac65bc70ef27b7c93f%28en-us%2cpandp.10%29.png',
'Figure 1 - Billable services Figure 1 - Billable services'],
dtype=object) ] | docs.microsoft.com |
StaticTextViewItem Class
Namespace: DevExpress.ExpressApp.Mobile.Editors
Assembly: DevExpress.ExpressApp.Mobile.v18.1.dll
Syntax
public class StaticTextViewItem : StaticText, IDisposable, IAppearanceFormat, IAppearanceBase
The StaticTextViewItem uses the Label wrapper to display captions in a UI.
The following image illustrates the StaticTextViewItem used to display the "Welcome! Please enter your user name and password below." message:
For general information on Static Text View Items, refer to the StaticText class description.
To learn how to add a Static Text View Item to a Detail View, refer to the View Items topic.
Inheritance
StaticTextViewItem | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Mobile.Editors.StaticTextViewItem | 2018-07-15T23:07:31 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['/eXpressAppFramework/images/statictextviewitem_mobile132731.png',
'StaticTextViewItem_Mobile'], dtype=object) ] | docs.devexpress.com |
Ticket #2038 (in_testing enhancement)
Qtopia USSD requests support
Description.
Attachments
Change History
Changed 10 years ago by Treviño
- Attachment qtopia-add-support-for-USSD-requests.patch added
comment:2 Changed 10 years ago?
Changed 10 years ago by zecke
- Attachment 0001-Move-detecting-of-supplementary-service-detection-to.patch added
Move the detection to qmodemcall
Changed 10 years ago by zecke
- Attachment 0002-gsm-Switch-the-encoding-before-sending-a-supplemen.patch added
Force GSM charset before dialing supplementary numbers
comment:3 follow-up: ↓ 4 Changed 10 years ago.
comment:4 in reply to: ↑ 3 Changed 10 years ago by Treviño
Okay first thoughts:
- Moving the matching to QModemCall is good and I will take that
Ok...
- Removing public API (specially only the implementation) even if it is called is not good.
Ah, I thought it was redoundant...
- The CSCS query and the resulting duplication is not good. From what I assume is that with AT+CUSD we could send the right data?
Well, I added that since I didn't know that I could define how to perform an action just for a class of devices (for a specific modem), so the best way I found was that of reading the pre-set codec, setting the right one and then going back to the default one.
Btw I had to make CUSD incoming data work in two situations:
- Parsing it correctly when I requested it and so when I've set a specified codec
- Parsing it correctly when I don't requested it directly (i.e. the network send me a "flash message" or simply informs me of how much I've spent in the just endend call) and so when I don't know what codec I'm using.
While in the first case I know what codec I must use, if I've set it; in the second case I just have to check CSCS to get the codec that the modem is using and then calling the right decoding method.
However, yes. We must use AT+CUSD to send the data... The mentioned 3GPP documents should give you more informations about this.
What is left from my point of view:
- Understand why ATD needs to have the other encoding
Well... ATD imho should be removed at all from this part, since (according to what I've read in the 3GPP docs) it isn't used at all for service calls.
It must be used only for standard call...
- Fix sending of AT+CUSD, this should be done by encoding the data properly
I've tried to do that, but I always got modem errors... I figure that I was using a bad encoding the network would have informed me about using the right syntax sending me a CUSD error, but this never happens.
I only collected modem errors (I've not saved the error codes I got, but some were like "memory error"), nothing goes to the network and this seems strange to me.
Also, according to the 3GPP docs I found that the request should define the codec used; so maybe using something like AT+CUSD="<number-coded-in-ucs2>",<code-for-ucs2> should work anyway. While the number could be coded using the functions available (I guess, since they're not so far from SMS hadling) the <code-for-ucs2> according to GSM 03.38 should be 01001000 that is 72 in decimal... However I'm only wondering this...
E.g. in your crash report I think it is crashing because you removed a method from the library that is called by the gsm*.cpp code.
Thanks, I'll check also if I thought I had recompiled/relinked everything...
Can you be happy with the code? I put the code here and not in the git repo as review works both ways.
If I find time in this weekend I'll give a try, but it seems ok in the codec management. I don't know how you'd implement the request...
comment:5 Changed 10 years ago by tick
- Owner changed from zecke to tick
- Status changed from new to accepted
comment:6 Changed 10 years ago by tick
- Status changed from accepted to in_testing
Switch to in_testing
comment:7 Changed 10 years ago by tick
- Status changed from in_testing to assigned
Oh ~ Sorry It's my fault. It's not there yet.
comment:8 Changed 10 years ago by Treviño
I've seen the git commit 91d64283e87eb3223bfcdd16d68a7a20b765fd7c that with the others recently added should add support for USSD, btw there are things that I don't understand (also if I've not tested these changes in my hardware yet):
- Why the CUSD command is used (in order of the ATD one) only in the calypso modem? I know that we're using only that in Om, but all the GSM modems use the CUSD command to ask USSD requests...
- Why in dialServiceCommand() you encode the string number using the current codec but you don't update the relative data coding scheme (dcs)? I mean, actually in any case is sent something like AT+CUSD=1,"<coded-dialed-string>",15 while the "15" value should change according to the GSM 03.38, isn't it? 15 stands for 1111 and so for "Language unspecified" according to that reference.
An I wrong?
comment:9 Changed 10 years ago by zecke
Why:
- Issueing a charset change is dangerous as it can creates a window where saving something on the phonebook might fail as a command gets inserted in between. Even while the window is very small and unlikely to be hit I have fixed many of this kind of issues in the Qtopia code already. I try to avoid it.
- Doing it from the vendor plugin fits best with the current structure of the code. It also solves the issue of a default charset nicely. It is set by the plugin, it is handled by the plugin.
- From reading the specs we are not sure if setting AT+CSCS should have a influence on the USSD encoding. The spec says default is GSM 7 Bit and later something else. So we have no idea how other modems handle it, this is another reason to put it into the fic code path.
encoding:
comment:10 Changed 10 years ago by roh
- HasPatchForReview set
BatchModify?: set HasPatchForReview? on 'keyword' contains 'patch'
comment:11 Changed 10 years ago by zecke
Okay, I still need to test this in Europe, maybe during the next weekend.
comment:12 Changed 10 years ago by tick
- Status changed from assigned to in_testing
[PATCH] Add support for USSD requests to the network | http://docs.openmoko.org/trac/ticket/2038 | 2018-07-15T23:19:15 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.openmoko.org |
Out of date: This is not the most recent version of this page. Please see the most recent version
PortIn
Use the PortIn class to read an underlying GPIO port as one value. This is much faster than BusIn because you can read!
// Switch on an LED if any of mbed pins 21-26 is high #include "mbed.h" PortIn p(Port2, 0x0000003F); // p21-p26 DigitalOut ind(LED4); int main() { while(1) { int pins = p.read(); if(pins) { ind = 1; } else { ind = 0; } } } | https://docs.mbed.com/docs/mbed-os-api-reference/en/latest/APIs/io/PortIn/ | 2018-07-15T23:16:53 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.mbed.com |
Using the Products Per Page App
Table of Contents
Overview
The Products Per Page app adds the ability for you and your customers to change the number of products listed per page in archives.
When your customers are shopping online they want the best experience possible. For some this means to have a small amount of products per page, while others like to have a long list of many (or all) products available on a single page. Using the Products Per Page app your customers can choose how many products they want to see per page.
The Products Per Page dropdown is easy to use, works out-of-the-box, and also has several other product page settings available to let you customise the dropdown position, number of columns, and default number of products to display per page. | https://docs.thatwebsiteguy.net/using-the-products-per-page-app/ | 2018-07-15T23:18:17 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.thatwebsiteguy.net |
You can view the cost details of vCloud Director constructs that are categorized according to organization, organization virtual data center, virtual machines, and vApps.
Procedure
- Log in to vRealize Business for Cloud as an administrator. (for the vRealize Automation integrated setup)(for the vRealize Business for Cloud standalone setup)
- Click Business Management.
- Under Reports, click vCenter Server.
- Under Reports, click vCloud Director.
- Select the construct for which you want to view the report. | https://docs.vmware.com/en/vRealize-Business/7.2/com.vmware.vRBforCloud.user.doc/GUID-68B1B520-3419-4A03-9B1A-F47ACEB227FF.html | 2018-07-15T23:32:15 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.vmware.com |
The Affiliates Coupons extension provides automatic and bulk coupon generation features.
Installation
Upload the plugin zip file through Plugins > Add New > Upload on your WordPress dashboard. Activate the Affiliates Coupons plugin.
Setup
In the Affiliates Coupons menu you will find several subsections that are used to control the coupons that are generated.
Affiliates Coupons
This section controls the settings used to create coupons automatically when a new affiliates is created.
The coupons that are created are percentage-based. The coupons grant a percentage discount applied to the cart total or on products.
The coupon codes that are generated consist of a prefix and the affiliate ID. The prefix can be changed.
Default Coupon Settings
- Enable – this must be checked to enable automatic coupon generation for new affiliates.
- Discount Type – Choose Cart % Discount or Product % Discount for coupons that grant a percentage on the cart total or products. Choose Product Discount for fixed discounts granted on products. Choose Cart Discount for fixed amount discounts on the cart. The coupon validity and discount amount is affected by the product limitation set under Affiliates Coupons > WooCommerce if any products are chosen there.
- Coupon Amount – Specify the amount for the percentage or fixed discount to be applied.
- Usage Limit – If a usage limit is set, the coupon can only be used up to the number of times.
- Apply Coupon before Tax – If checked, the discount is applied before tax.
- Individual Use – If checked, the coupon cannot be used together with other coupons.
- Coupon Prefix – The coupon codes that are generated based on these setting consist of the prefix with the affiliate ID appended. The prefix can be a single character, a word, or one of the following tokens. Tokens supported are: {affiliate_id} which appends the affiliate_id to the coupon code , {random,n} which renders a random alphanumeric string of length n and {count} which inserts the number of coupons created for the current affiliate in bulk creation.
Examples: If the Coupon Prefix is aff, a new affiliate who’s ID is 123 will have a coupon code aff123 assigned. If the prefix used is ref{random,4}-{affiliate_id}, a new affiliate with ID 100 will have a coupon code ref785f-100 assigned.
Bulk Options
Here you can create new coupons in bulk for existing affiliates.
To create new coupons for affiliates in bulk, first review the settings on this page to make sure that the coupons that will be generated match the desired discounts. Apart from the settings that apply for the automatic process, you can also choose the number of coupons to create per affiliate, each affiliate can be assigned more than one coupons.
Example: If the number of coupons per affiliate is set to 2 and we choose a prefix like discount{random,4}-{affiliate_id}-{count}, the generated coupons for an affiliate with ID 7 would be discount6508-7-1 and discount38f2-7-2, both of them assigned to affiliate with ID 7.
After reviewing the settings for your new coupons, press the button to generate the new set for all affiliates. Note that the number of coupons that can be generated in one go is limited. This is to avoid problems with timeouts when the number of affiliates is large. To cover large sets of affiliates, the same settings can be used several times – as long as the prefix is not changed, no duplicate coupons will be created.
WooCommerce
The coupons that are issued can be limited to products using the settings in this section. Note that cart discounts and product discounts that are restricted by products indicated here have different consequences and that this setting affects all new coupons that are automatically created for new affiliates as well as those that are created in bulk.
To limit the coupons to specific products, choose the desired products and click Save. New coupons created will have the product limitation set from now on.
Example View on Generated Coupons
| http://docs.itthinx.com/document/affiliates-coupons/ | 2018-07-15T22:57:58 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['http://docs.itthinx.com/wp-content/uploads/2015/03/affiliates-coupons-coupons.jpg',
'Affiliates Coupons WooCommerce Product Settings'], dtype=object) ] | docs.itthinx.com |
Use the
NoEmitOnErrorsPlugin to skip the emitting phase whenever there are errors while compiling. This ensures that no assets are emitted that include errors. The
emitted flag in the stats is
false for all assets.
new webpack.NoEmitOnErrorsPlugin()
This supersedes the (now deprecated) webpack 1 plugin
NoErrorsPlugin.
If you are using the CLI, the webpack process will not exit with an error code by enabling this plugin. If you want webpack to "fail" when using the CLI, please check out the
bailoption.
© JS Foundation and other contributors
Licensed under the Creative Commons Attribution License 4.0. | http://docs.w3cub.com/webpack/plugins/no-emit-on-errors-plugin/ | 2018-07-15T22:54:38 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.w3cub.com |
Ticket #1546 (closed enhancement: fixed)
package details are slow to get (always)
Description
Kernel : 20080705-asu.stable-uImage.bin
Root file system :20080708-asu.stable-rootfs.jffs2
Steps:
1) Launch assassin
2) select a category (speed: immediate)
3) select a package (speed: 3~4 sec)
Expected:
Wondering why it takes so long to go into the package details page.
It should immediately show package details after selection.
Change History
comment:3 Changed 10 years ago by will
- Keywords must have added
adding 'must have' for query references
Note: See TracTickets for help on using tickets. | http://docs.openmoko.org/trac/ticket/1546 | 2018-07-15T23:08:35 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.openmoko.org |
The exception that is thrown to communicate errors to the client when the client connects to non-.NET Framework applications that cannot throw exceptions.
See Also: ServerException Members
System.Runtime.Remoting.ServerException uses the HRESULT COR_E_SERVER, which has the value 0x8013150E.
System.Runtime.Remoting.ServerException uses the default object.Equals(object) implementation, which supports reference equality.
For a list of initial property values for an instance of System.Runtime.Remoting.ServerException, see the System.Runtime.Remoting.ServerException constructors. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Runtime.Remoting.ServerException | 2018-07-15T23:13:11 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.go-mono.com |
TitleExistsForThisUserException Class
The TitleExistsForThisUserException class represents the exception that is thrown when an alert with the specified title already exists for the current user.
Microsoft.SharePoint.Portal.Alerts.AlertException
Microsoft.SharePoint.Portal.Alerts.TitleExistsForThisUserException
Public Constructors
The following table shows the constructors of the TitleExistsForThisUserException | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint2003/dd583278(v=office.11) | 2018-07-15T23:47:48 | CC-MAIN-2018-30 | 1531676589022.38 | [] | docs.microsoft.com |
The driver library copies drivers from the Mirage system to the endpoint. When Windows scans for hardware changes, these copied drivers are used by the Windows Plug and Play (PnP) mechanism, and the appropriate drivers are installed as required.
This diagram illustrates the driver library architecture and how rules associate drivers to endpoints.
Profile A contains drivers from driver folder 1 and 2. When the profile is analyzed, the drivers from those folders are applied to two endpoints.
Profile B contains drivers only from driver folder 2, which is also used by profile A. When the profile is analyzed, the drivers from that folder are applied to only one endpoint.
The Mirage system can have multiple driver folders, multiple driver profiles, and many endpoints.
A driver profile can contain drivers from multiple driver folders and multiple driver profiles can use a driver folder.
You can apply a driver profile to one, many, or no endpoints.
The driver library is used during the following operations:
Centralization
Migration
Hardware migration and restore
Machine cleanup
Base layer update
Set driver library
Endpoint provisioning | https://docs.vmware.com/en/VMware-Mirage/5.8.1/com.vmware.mirage.webmanagement/GUID-A3E797F6-24B9-45FD-A122-75029D4A8AA6.html | 2018-07-15T23:25:50 | CC-MAIN-2018-30 | 1531676589022.38 | [array(['images/GUID-A8250D32-47F8-47DE-9880-7D106E872664-high.png',
'How driver library rules associate drivers to endpoints.'],
dtype=object) ] | docs.vmware.com |
Open Source Gitify Commands modx:install
Gitify modx:install
Renamed from
Gitify install:modx in v0.8. Installs the latest version of MODX, or the one you specified, by downloading the zip and running a command line install. Database details and the likes will be asked for interactively.
Usage: modx:install [modx_version] Arguments: modx_version The version of MODX to install, in the format 2.3.2-pl. Leave empty or specify "latest" to install the last stable release. Options: --download (-d) Force download the MODX package even if it already exists in the cache folder. --help (-h) Display this help message. --verbose (-v|vv|vvv) Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug. --version (-V) Display the Gitify version. | https://docs.modmore.com/en/Open_Source/Gitify/Commands/MODX_Install.html | 2018-11-12T19:51:23 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.modmore.com |
TCP Sample Service
The TCP sample service is a simple echo service. Feel free to use it when first trying out Testable to better understand how the platform works.
The service is accessible at sample.testable.io:8091.
Example Scala client:
import java.io.PrintStream import java.net.Socket object GatewayEchoTestClient extends App { val socket = new Socket("sample.testable.io", 8091) var in = socket.getInputStream val out = new PrintStream(socket.getOutputStream) out.print("This is some test data!") out.flush val data = new Array[Byte](100) in.read(data) println("Client received: " + new String(data)) socket.close }
This prints the following result:
This is some test data! | https://docs.testable.io/samples/tcp.html | 2018-11-12T20:47:49 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.testable.io |
The exception that is thrown when a System.Threading.Thread is in an invalid Thread.ThreadState for the method call.
See Also: ThreadStateException Members
Once a thread is created, it is in at least one of the System.Threading.ThreadState states until it terminates. ThreadStateException is thrown by methods that cannot perform the requested operation due to the current state of a thread. For example, trying to restart an aborted thread by calling Thread.Start on a thread that has terminated throws a System.Threading.ThreadStateException.
System.Threading.ThreadStateException uses the HRESULT COR_E_THREADSTATE, which has the value 0x80131520.
For a list of initial property values for an instance of System.Threading.ThreadStateException, see the ThreadStateException.#ctor constructors.
The following example demonstrates an error that causes a System.Threading.ThreadStateException exception to be thrown.
C# Example); } } }
The output isWorking thread... | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Threading.ThreadStateException | 2018-11-12T20:08:47 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.go-mono.com |
Release notes for Gluster 3.11.2
This is a bugfix release. The release notes for 3.11.1, 3.11.0, contains a listing of all the new features that were added and bugs fixed, in the GlusterFS 3.11 stable release.
Major changes, features and limitations addressed in this release
There are no major features or changes made in this.
- #1463512: USS: stale snap entries are seen when activation/deactivation performed during one of the glusterd's unavailability
- #1463513: [geo-rep]: extended attributes are not synced if the entry and extended attributes are done within changelog roleover/or entry sync
- #1463517: Brick Multiplexing:dmesg shows request_sock_TCP: Possible SYN flooding on port 49152 and memory related backtraces
- #1463528: [Perf] 35% drop in small file creates on smbv3 on *2
- #1463626: [Ganesha]Bricks got crashed while running posix compliance test suit on V4 mount
- #1464316: DHT: Pass errno as an argument to gf_msg
- #1465123: Fd based fops fail with EBADF on file migration
- #1465854: Regression: Heal info takes longer time when a brick is down
- #1466801: assorted typos and spelling mistakes from Debian lintian
- #1466859: dht_rename_lock_cbk crashes in upstream regression test
- #1467268: Heal info shows incorrect status
- #1468118: disperse seek does not correctly handle the end of file
- #1468200: [Geo-rep]: entry failed to sync to slave with ENOENT errror
- #1468457: selfheal deamon cpu consumption not reducing when IOs are going on and all redundant bricks are brought down one after another
- #1469459: Rebalance hangs on remove-brick if the target volume changes
- #1470938: Regression: non-disruptive(in-service) upgrade on EC volume fails
- #1471025: glusterfs process leaking memory when error occurs
- #1471611: metadata heal not happening despite having an active sink
- #1471869: cthon04 can cause segfault in gNFS/NLM
- #1472794: Test script failing with brick multiplexing enabled | https://gluster.readthedocs.io/en/latest/release-notes/3.11.2/ | 2018-11-12T20:22:46 | CC-MAIN-2018-47 | 1542039741087.23 | [] | gluster.readthedocs.io |
Flatcar Linux Documentation¶
Welcome to Flatcar Linux documentation
Getting Started¶
Flatcar Linux runs on most cloud providers, virtualization platforms and bare metal servers. Running a local VM on your laptop is a great dev environment. Following the Quick Start guide is the fastest way to get set up.
Working with Clusters¶
Follow these guides to connect your machines together as a cluster. Configure machine paramaters, create users, inject multiple SSH keys, and more with Container Linux Config.
Container Runtimes¶
Flatcar Linux supports all of the popular methods for running containers, and you can choose to interact with the containers at a low-level, or use a higher level orchestration framework. Listed below are your options from the highest level abstraction down to the lowest level, the container runtime.
Reference¶
APIs and troubleshooting guides for working with Flatcar Linux.
Migrating from cloud-config to Container Linux Config | https://docs.flatcar-linux.org/ | 2018-11-12T19:40:12 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.flatcar-linux.org |
This feature allows patrons to receive checkout receipts through email at the circulation desk and in the Evergreen self-checkout interface. Patrons need to opt in to receive email receipts by default, and must have an email address in their account. Opt in can be staff mediated at the time of account creation or in existing accounts. Patrons can also opt in directly in their OPAC account. This feature does not affect the behavior of checkouts from SIP2 devices.
Allow Others to Use My Account information is displayed in patron summary area.
User Buckets allow staff to make batch modifications to user accounts in Evergreen.
Patron’s email address is a mailto link in the profile, so it can be clicked to send an email to the patron.
Two new columns indicating the number of circulation notifications generated for a given loan and the date of the most recent notification are added to Items Out screen in patron record. These columns will allow circulation staff to better respond to patron questions about whether they were sent a notification about an overdue item.
A new library setting, "Exclude Courtesy Notices from Patrons Itemsout Notices Count", is added to allow libraries to choose whether to exclude courtesy notices in these fields.
Display day-granular due dates in the circulating library’s time zone, which means libraries in non-Pacific time zone will see due date ending at 23:59PM.
A whole day (or days) of the client time is (are) marked closed when marking a single or multiple dates closed on Closed Dates Editor. Libraries in non-Pacific time zone no longer need to adjust computer time to server time zone. Time portion is no longer displayed for such closed dates.
Staff can choose to view holds picked up at all branches of a library system, or all libraries in a federation. Previously staff could only choose to view holds picked up at a selected library/branch or all Sitka libraries..
There is a Patron Search link for staff to retrieve patrons via names and other information, rather than relying on barcode alone, on the Place Hold screen in the catalogue.
Staff can place multiple title/metarecord holds at once. This feature is especially beneficial for book clubs and reading groups, which need to place holds on multiple copies of a title.
In order to use the feature, libraries need to set up a new library setting: Maximum number of duplicate holds allowed to a number higher than 1.
When placing a title or a metarecord hold, a Number of copies field will display. This field is not available when placing volume or copy holds.
This feature does not change the way in which the system fills holds. The multiple holds will fill in the same way that they would if the user had placed multiple holds separately.
When an on-hold item is being checked out to another patron, not the requester, there is a new checkbox in the prompt allowing staff to cancel the hold during the check out. If the borrower is picking up the item on behalf of the requester, you can select the checkbox to cancel the hold. However, such cancelled holds are not counted as fulfilled. If you track fulfilled holds statistics, Co-op Support suggests you check out the item to the requester only.
There is a new library setting, Clear Hold When Other Patron Checks Out Item, to allow libraries to choose whether the cancel hold checkbox is selected by default.
Libraries may choose to allow staff to retrieve a few recently accessed patron accounts. To do so, libraries need to set up a new library setting, Number of Retrievable Recent Patrons. Once done, an entry called "Retrieve Recent Patrons" will show up on the Circulation menu.
Now you can include the patron birth year and/or birth month and/or birth day when searching for patrons.
Day and month values are exact matches. E.g. month "1" (or "01") matches January, "12" matches December. Year searches are "contains" searches, i.e. year "15" matches 2015, 1915, 1599, etc. For exact matches use the full 4-digit year.
Copy alerts can be added via the volume/copy creator and the check in, check out, and renew pages. Copy alerts can also be managed at the item status page.
Copy Alerts types are added to allow library staff to add alerts appearing when a specific event takes place, such as when the copy is checked in, checked out, or renewed.
Libraries may choose to suppress certain types of copy alerts via the Copy Alert Suppression page under Local Administration.
A patron billing statement, which summarizes a patron’s bills, credits and payments, is added to Full Details screen. There are two tabs on the screen: Statement and Details. | http://docs.libraries.coop/sitka/_circulation.html | 2018-11-12T20:18:04 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.libraries.coop |
GLM: Poisson Regression¶
In [1]:
## Interactive magics %matplotlib inline import sys import re import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.style.use('seaborn-darkgrid') import seaborn as sns import patsy as pt import pymc3 as pm plt.rcParams['figure.figsize'] = 14, 6 np.random.seed(0) print('Running on PyMC3 v{}'.format(pm.__version__))
Running on PyMC3 v3.4.1
This is a minimal reproducible example of Poisson regression to predict counts using dummy data.
This Notebook is basically an excuse to demo Poisson regression using
PyMC3, both manually and using the
glm library to demo interactions
using the
patsy library. We will create some dummy data, Poisson
distributed according to a linear model, and try to recover the
coefficients of that linear model through inference.
For more statistical detail see:
- Basic info on Wikipedia
- GLMs: Poisson regression, exposure, and overdispersion in Chapter 6.2 of ARM, Gelmann & Hill 2006
- This worked example from ARM 6.2 by Clay Ford
This very basic model is inspired by a project by Ian Osvald, which is concerned with understanding the various effects of external environmental factors upon the allergic sneezing of a test subject.
Local Functions¶
In [2]:
def strip_derived_rvs(rvs): '''Convenience fn: remove PyMC3-generated RVs from a list''' ret_rvs = [] for rv in rvs: if not (re.search('_log',rv.name) or re.search('_interval',rv.name)): ret_rvs.append(rv) return ret_rvs def plot_traces_pymc(trcs, varnames=None): ''' Convenience fn: plot traces with overlaid means and values ''' nrows = len(trcs.varnames) if varnames is not None: nrows = len(varnames) ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4), lines={k: v['mean'] for k, v in pm.summary(trcs,varnames=varnames).iterrows()}) for i, mn in enumerate(pm.summary(trcs, varnames=varnames)['mean']): ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data', xytext=(5,10), textcoords='offset points', rotation=90, va='bottom', fontsize='large', color='#AA0022')
Generate Data¶
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you’d like to know more
Assumptions:¶
- The subject sneezes N times per day, recorded as
nsneeze (int)
- The subject may or may not drink alcohol during that day, recorded as
alcohol (boolean)
- The subject may or may not take an antihistamine medication during that day, recorded as the negative action
nomeds (boolean)
- I postulate (probably incorrectly) that sneezing occurs at some baseline rate, which increases if an antihistamine is not taken, and further increased after alcohol is consumed.
- The data is aggregated per day, to yield a total count of sneezes on that day, with a boolean flag for alcohol and antihistamine usage, with the big assumption that nsneezes have a direct causal relationship.
Create 4000 days of data: daily counts of sneezes which are Poisson distributed w.r.t alcohol consumption and antihistamine usage
In [3]:
# decide poisson theta values theta_noalcohol_meds = 1 # no alcohol, took an antihist theta_alcohol_meds = 3 # alcohol, took an antihist theta_noalcohol_nomeds = 6 # no alcohol, no antihist theta_alcohol_nomeds = 36 # alcohol, no antihist # create samples q = 1000 df = pd.DataFrame({ 'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q), np.random.poisson(theta_alcohol_meds, q), np.random.poisson(theta_noalcohol_nomeds, q), np.random.poisson(theta_alcohol_nomeds, q))), 'alcohol': np.concatenate((np.repeat(False, q), np.repeat(True, q), np.repeat(False, q), np.repeat(True, q))), 'nomeds': np.concatenate((np.repeat(False, q), np.repeat(False, q), np.repeat(True, q), np.repeat(True, q)))})
In [4]:
df.tail()
Out[4]:
Briefly Describe Dataset¶
In [6]:
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df, kind='count', size=4, aspect=1.5)
Observe:
- This looks a lot like poisson-distributed count data (because it is)
- With
nomeds == Falseand
alcohol == False(top-left, akak antihistamines WERE used, alcohol was NOT drunk) the mean of the poisson distribution of sneeze counts is low.
- Changing
alcohol == True(top-right) increases the sneeze count
nsneezeslightly
- Changing
nomeds == True(lower-left) increases the sneeze count
nsneezefurther
- Changing both
alcohol == True and nomeds == True(lower-right) increases the sneeze count
nsneezea lot, increasing both the mean and variance.
Poisson Regression¶
Our model here is a very simple Poisson regression, allowing for interaction of terms:
Create linear model for interaction of terms
In [7]:
fml = 'nsneeze ~ alcohol + antihist + alcohol:antihist' # full patsy formulation
In [8]:
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
1. Manual method, create design matrices and manually specify model¶
Create Design Matrices
In [9]:
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
In [10]:
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
Out[10]:
Create Model
In [11]:
with pm.Model() as mdl_fish: # define priors, weakly informative Normal b0 = pm.Normal('b0_intercept', mu=0, sd=10) b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10) b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10) b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10) # define linear model and exp link function theta = (b0 + b1 * mx_ex['alcohol[T.True]'] + b2 * mx_ex['nomeds[T.True]'] + b3 * mx_ex['alcohol[T.True]:nomeds[T.True]']) ## Define Poisson likelihood y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
Sample Model
In [12]:
with mdl_fish: trc_fish = pm.sample(1000, tune=1000, cores=4)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [b3_alcohol[T.True]:nomeds[T.True], b2_nomeds[T.True], b1_alcohol[T.True], b0_intercept] Sampling 4 chains: 100%|██████████| 8000/8000 [01:25<00:00, 93.34draws/s] The number of effective samples is smaller than 25% for some parameters.
View Diagnostics
In [13]:
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)] plot_traces_pymc(trc_fish, varnames=rvs_fish)
Observe:
- The model converges quickly and traceplots looks pretty well mixed
Transform coeffs and recover theta values¶
In [14]:
np.exp(pm.summary(trc_fish, varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
Out[14]:
Observe:
The contributions from each feature as a multiplier of the baseline sneezecount appear to be as per the data generation:
exp(b0_intercept): mean=1.02 cr=[0.96, 1.08]
Roughly linear baseline count when no alcohol and meds, as per the generated data:
theta_noalcohol_meds = 1 (as set above) theta_noalcohol_meds = exp(b0_intercept) = 1
exp(b1_alcohol): mean=2.88 cr=[2.69, 3.09]
non-zero positive effect of adding alcohol, a ~3x multiplier of baseline sneeze count, as per the generated data:
theta_alcohol_meds = 3 (as set above) theta_alcohol_meds = exp(b0_intercept + b1_alcohol) = exp(b0_intercept) * exp(b1_alcohol) = 1 * 3 = 3
exp(b2_nomeds[T.True]): mean=5.76 cr=[5.40, 6.17]
larger, non-zero positive effect of adding nomeds, a ~6x multiplier of baseline sneeze count, as per the generated data:
theta_noalcohol_nomeds = 6 (as set above) theta_noalcohol_nomeds = exp(b0_intercept + b2_nomeds) = exp(b0_intercept) * exp(b2_nomeds) = 1 * 6 = 6
exp(b3_alcohol[T.True]:nomeds[T.True]): mean=2.12 cr=[1.98, 2.30]
small, positive interaction effect of alcohol and meds, a ~2x multiplier of baseline sneeze count, as per the generated data:
theta_alcohol_nomeds = 36 (as set above) theta_alcohol_nomeds = exp(b0_intercept + b1_alcohol + b2_nomeds + b3_alcohol:nomeds) = exp(b0_intercept) * exp(b1_alcohol) * exp(b2_nomeds * b3_alcohol:nomeds) = 1 * 3 * 6 * 2 = 36
2. Alternative method, using
pymc.glm¶
Create Model
Alternative automatic formulation using ``pmyc.glm``
In [15]:
with pm.Model() as mdl_fish_alt: pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
Sample Model
In [16]:
with mdl_fish_alt: trc_fish_alt = pm.sample(2000, tune=2000)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Sequential sampling (2 chains in 1 job) NUTS: [mu, alcohol[T.True]:nomeds[T.True], nomeds[T.True], alcohol[T.True], Intercept] 100%|██████████| 4000/4000 [02:08<00:00, 31.19it/s] 100%|██████████| 4000/4000 [01:11<00:00, 55.68it/s] The number of effective samples is smaller than 25% for some parameters.
View Traces
In [17]:
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)] plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
Transform coeffs¶
In [18]:
np.exp(pm.summary(trc_fish_alt, varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
Out[18]:
Observe:
- The traceplots look well mixed
- The transformed model coeffs look moreorless the same as those generated by the manual model
- Note also that the
mucoeff is for the overall mean of the dataset and has an extreme skew, if we look at the median value …
In [19]:
np.percentile(trc_fish_alt['mu'], [25,50,75])
Out[19]:
array([ 4.06581711, 9.79920004, 24.21303451])
… of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of:
In [20]:
df['nsneeze'].mean()
Out[20]:
11.42775
Example originally contributed by Jonathan Sedar 2016-05-15 github.com/jonsedar | https://docs.pymc.io/notebooks/GLM-poisson-regression.html | 2018-11-12T20:20:07 | CC-MAIN-2018-47 | 1542039741087.23 | [array(['../_images/notebooks_GLM-poisson-regression_12_0.png',
'../_images/notebooks_GLM-poisson-regression_12_0.png'],
dtype=object)
array(['../_images/notebooks_GLM-poisson-regression_29_0.png',
'../_images/notebooks_GLM-poisson-regression_29_0.png'],
dtype=object)
array(['../_images/notebooks_GLM-poisson-regression_41_0.png',
'../_images/notebooks_GLM-poisson-regression_41_0.png'],
dtype=object) ] | docs.pymc.io |
Testing
🕓 15 minutes
#What you’ll learn
How to create, run and analyse tests for the application components in the CodeNow environment.
- Writing tests for the application is an essential part of the application development process.
- Here are some benefits that you gain by testing your application:
- You don’t have to spend the time manually testing the code yourself.
- You can make changes to the code without worrying if something is broken.
- If something went wrong, you are able to detect the problematic part by looking into the tests.
- Tests provide documentation for what your code is actually meant to do.
#Prerequisites
#Overview
CodeNow uses the Karate tool for testing purposes.
- If you want to read more about the Karate tool, see:
Find the "Testing&Quality Management" section in the sidebar menu, click on the "Testing" and go to the "Test Use Cases" option.
- Here you will see two sections:
- Select an Application
- You can select an existing application to create, run or analyse test use cases.
- Custom Applications
- You can create test cases for an external application.
#Select an Existing Application
- Select an existing application and click on the "Add Test Repository" button on the right.
- After a little while, the "Show Test Use Case" button will appear.
- Click on the "Show Test Use Case".
- Here you will see some information regarding the existing tests for the selected application.
- How to clone your test repository
- Test components' builds.
- Test folders for each of the application components.
- The "Clone your Test Repository" section describes how to clone a test repository using SSH or via HTTP(S) into the local environment.
- The "Component Builds" section shows all the existing components inside the chosen application.
- For each test component, you can see the latest test package build version.
- If the test package of the particular component hasn't been built yet, you can click on the "Build" button on the right.
- You need to compile the sources in order to build the components which could be used by the end consumer.
- Down below you see all the test packages for every application component.
- It shows the test name and count.
- If you click on the "Detail" button, you can see the use case detail, tests and test results.
- If you click on the "Detail" button in the "Use Case Tests" section, in the "Feature Content" section you can see the test code for the selected test case.
- If you want to run the selected test package, you can click on the "Run" button.
- You will be redirected to a page, where you should select an environment and a use case version.
- Then click on the "Deploy" button.
#Use Case Test Results
CodeNow uses the Cucumber tool to generate test reports.
- If you want to know more about the Cucumber, see :
#How to Find
To see the test results analysis:
- Select the application component.
- Click on the "Details" button on the right and you will be redirected to a Use Case Detail page.
- In the section "Use Case Test Results" select one of the test results and click the "Details" button.
#Overview
Here, you are able to see an analysis of the features, tags, steps and failures on the selected use case run report.
- Features
- The graphs in this page show the passing and failing statistics for features.
- The table shows detailed information for the tested feature including:
- Passed, failed, skipped, pending and undefined steps.
- Passed and failed scenarios.
- Duration and status for the features.
- The graph shows passing and failing statistics for tags.
- The detailed information on each tag is collected into the table below the graph.
- Steps
- The following table shows the step statistics for this build.
- The list below is based on the results. If a step does not provide information about the result then it is not listed below.
- Additionally, @Before and @After are not counted because they are part of the scenarios, not steps.
- Failures
- The following summary displays the scenarios that failed.
#Create a Custom Application
If you want to test an external application that doesn't run in the CodeNow environment, you can add a custom test application for this purpose.
Fill in the application name and add the required number of components. Then click on the "Confirm" button.
Now you can clone the created test repository to the local environment, write tests and run them in the CodeNow environment.
- Don't forget to add correct values, so the tests can reach your application.
- The process continues with STEP 2 of the "Select an Existing Application" section.
#Simple Karate test example
#Basic Keywords
- Feature: List of scenarios.
- Scenario: Business rule through a list of steps with arguments.
- Given: Some precondition step
- When: Some key actions
- Then: To observe outcomes or validation
- And,But: To enumerate more Given,When,Then steps
- Examples: Container for a table
- Background: List of steps run before each of the scenarios
#Example
- This simple test example consists of two parts:
- The first test creates a new cat entity with the name of "Fluffy". It says: after making a POST request, you should get status 201 and the response must contain a non-null id and the name "Fluffy".
- The second one is a simple GET for a cat that was created in a previous part of the test.
- If you are creating a test for a CodeNow component, then you don’t need to specify the given url or path. The CodeNow will automatically add the right values during the building process.
- On the other hand, if you create a test package and write a test for a custom application, then for the test to be able to reach its testing target, you must specify the "given url" value.
#What's next?
See our other manuals: | https://docs.codenow.com/docs/advanced-features/testing | 2021-04-10T18:46:59 | CC-MAIN-2021-17 | 1618038057476.6 | [array(['/img/admMan/testing/tes_1.png', 'go_to_testing'], dtype=object)
array(['/img/admMan/testing/tes_2.png', 'testin_overview'], dtype=object)
array(['/img/admMan/testing/tes_3.png', 'add_test_repo'], dtype=object)
array(['/img/admMan/testing/tes_5.png', 'show_test_use_case'],
dtype=object)
array(['/img/admMan/testing/tes_6.png', 'test_use_case_overview'],
dtype=object)
array(['/img/admMan/testing/tes_7.png', 'clone_repo'], dtype=object)
array(['/img/admMan/testing/tes_8.png', 'component_build'], dtype=object)
array(['/img/admMan/testing/tes_9.png', 'build'], dtype=object)
array(['/img/admMan/testing/tes_10.png', 'detail'], dtype=object)
array(['/img/admMan/testing/detail_tes_13.png', 'use_case_detail'],
dtype=object)
array(['/img/admMan/testing/detail_tes_14.png', 'use_case_test_detail'],
dtype=object)
array(['/img/admMan/testing/tes_13.png', 'feature_content'], dtype=object)
array(['/img/admMan/testing/tes_11.png', 'run'], dtype=object)
array(['/img/admMan/testing/tes_run_12.png', 'exec_config'], dtype=object)
array(['/img/admMan/testing/detail_tes_15.png', 'test_results_detail'],
dtype=object)
array(['/img/admMan/testing/tes_15.png', 'report'], dtype=object)
array(['/img/admMan/testing/tes_14.png', 'feature'], dtype=object)
array(['/img/admMan/testing/tes_16.png', 'tags'], dtype=object)
array(['/img/admMan/testing/tes_17.png', 'steps'], dtype=object)
array(['/img/admMan/testing/tes_18.png', 'failures'], dtype=object)
array(['/img/admMan/testing/tes_4.png', 'go_to_lib'], dtype=object)
array(['/img/admMan/testing/tes_19.png', 'go_to_lib'], dtype=object)] | docs.codenow.com |
Forum
Arcade
Search
Calendar
Downloads
April 10, 2021, 03:25:36 PM
1 Hour
1 Day
1 Week
1 Month
Forever
Docskillz
»
Security Mods Mods
Here are some mods that you can use to help fight again spammers and hackers. Be warned that some of these mods use a database of spammers, hackers, etc. Sometimes, people wrongly accuse (or just mistake) honest people for spammers/hackers. Be careful and sure of who you ban or add to these databases!
If you have idiots who are just naughty and not spamming/hacking and who you think should be banned, DO NOT add them to these databases!!! They are idiots and not spammers or hackers...there is a difference. Just because someone was not very nice at your site doesn't mean that they will play nice at other sites. Just ban them in the normal SMF fashion.
You should have a topic or way for banned users who are legit to read/email you so they can state their case. You can always unban and apologize later.
You should use these mods in conjunction with your SMF security settings. A tutorial for what general SMF security settings you should use to fight against hackers and spammers is in the works.
SMF Mods:
Please use these at your own risk and make sure that you read all of the instructions!!!
http:BL
- This mod uses
Project Honey Pot.org
's database of email/info harvesters, comment spammers, spam servers and dictionary attackers to flag people/bots from registering on your site. It will actually prevent those found in it's db from registering. It is always wise to check anyone registering (even if they pass the check) to be sure.
You should register for an account at Project Honey Pot's site to add certain files to your site to help in the install process. Also, to add bots, spammers, etc. and to add people/bots to the database. This will not only help you but will help anyone else who uses this mod. There is also a lot of great info there about protecting your site.
Stop Forum Spam
- This mod uses
Stop Forum Spam.com
's database to flag registering people/bots. It will put those found in its db into the "Awaiting Approval" list for your approval. You must add new bots/spammers into the db at the stopforumspam.com website.
You should register for an account at Stop Forum Spam's site to add new bots, etc. to help in the process and to add people/bots to the database. This will not only help you but will help anyone else who uses this mod.
Stop Spammer
- This mod also uses
Stop Forum Spam.com
's database to flag registering people/bots. It will put those found in its db into the "Awaiting Approval" list for your approval. BUT you can also add them to the db from within SMF!
You should register for an account at Stop Forum Spam's site as noted above.
Forum Firewall
- A very powerful mod that should be used with the utmost caution!!! You can ban regular users and yourself VERY easily!!! Also, you must use the "DOS ATTACK" settings very sparingly! It will ban/block normal users who just might have forgotten their password. You need to adjust the setting to be sure. Ask someone to help you test this until they are no longer banned.
EDIT:
I HIGHLY suggest that you set your SMF settings so that new users cannot view the email addresses of your users! Also, notify your current users to not allow others to view their email (this can be done in their profile) and that they should also never post their email address in any topic. If the bots/harvesters cannot see your email address, it will render the dictionary attack useless and prevent future attacks using your email address. At the very least, you will not receive more spam in your email accounts.
This list will be updated as needed.
Updated - Feb. 07, 2016
Page created in 0.046 seconds with 18 queries.
SimplePortal 2.3.8 © 2008-2021, SimplePortal
SMF 2.0.18
|
SMF © 2021
,
Simple Machines
Simple Audio Video Embedder
XHTML
WAP2
| Doc Skillz Theme by
simply sibyl | https://www.docskillz.com/docs/index.php?page=page300 | 2021-04-10T19:25:36 | CC-MAIN-2021-17 | 1618038057476.6 | [] | www.docskillz.com |
Volumes¶
Reference
- Step Size
- Distance between volume shader samples when rendering the volume. Lower values give more accurate and detailed results but also increased render time.
- Max Steps
- Maximum number of steps through the volume before giving up, to protect from extremely long render times with big objects or small step sizes. | https://docs.blender.org/manual/en/dev/render/cycles/render_settings/volumes.html | 2019-10-14T06:24:07 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.blender.org |
Returns the integer index of an element within an array, or
nil if the element is not in the array. The search goes through the length of the array as determined by
#t whose value is undefined if there are holes.
This function can not be used to locate a child table/array within the array being searched.
table.indexOf( t, element )
local t = { 1, 3, 5, 7, 9, "a", "b" } print( table.indexOf( t, 9 ) ) --> 5 print( table.indexOf( t, 3 ) ) --> 2 print( table.indexOf( t, "b" ) ) --> 7 | https://docs.coronalabs.com/api/library/table/indexOf.html | 2019-10-14T06:25:57 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.coronalabs.com |
Apart from installing Exivity in any on premise environment, Exivity can also be deployed from the Azure Market Place (AMP). Deploying Exivity on AMP is straight forward, and can be finised within a few minutes via your Azure Portal.
Login to your Azure Portal at and then go to the Marketplace to search for the Exivity offer:
Once you've selected the Exivity offering, you should be presented with the following screen:
After clicking the Create button, you will be redirected to the VM deployment wizard
Fill in a Windows user/pass and pick your deployment Resource Group:
Make sure to write down this username and password, as you will need these when connecting to the Exivity Windows server using the Remote Desktop Protocol.
You may select any additional options, but none are required for running Exivity succesfully, so you may skip this page simply by clicking the OK button:
Review the summary and click Create to deploy your Exivity VM:
This may take a few minutes. You may review the status of the Virtual Machine in your VM list:
Write down the Public IP address once it is available. Optionally you may configure a custom DNS name to have an easy way to connect.
You can logon to your Exivity instance with RDP, but after deployment you should be able to connect to your instance using the public IP address or DNS name of your Exivity instance considering the following default URL:
https://<Your_Public_IP>:8001
The default admin username is admin with password exivity.. | https://docs.exivity.com/getting-started/installation/azure-market-place | 2019-10-14T06:41:37 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.exivity.com |
It's common for Box2D to report multiple contact points during a single iteration of a simulation. This function is use to determine if averaging of all the contact points is enabled.
This function returns
false (default) if the point of contact reported is the first one reported by Box2D (the order is arbitrary). This function returns
true if the point of contact reported is the average of all the contact points.
physics.getAverageCollisionPositions()
-- Toggle the averaging if ( physics.getAverageCollisionPositions() ) then physics.setAverageCollisionPositions( false ) else physics.setAverageCollisionPositions( true ) end | https://docs.coronalabs.com/api/library/physics/getAverageCollisionPositions.html | 2019-10-14T05:50:36 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.coronalabs.com |
Logging));
If you simply want to log text messages and don't need all of the HTTP context information, consider using one of our integrations for popular logging frameworks like log4net, NLog or Serilog. Also, the Elmah.Io.Client package contains a logging API documented | https://docs.elmah.io/logging-errors-programmatically/ | 2019-10-14T06:29:23 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.elmah.io |
Track an action a lead has taken specific to your business with custom activities.
What are Activities
There are several ways a your leads.
Custom activities function just like standard activities. Setting them up however is a two-part process.
Step 1: Create a custom activity in your Marketo account
Step 2: The person in your organization who works with our API can then begin the implementation. More information can be found here: Custom Activity API
Have fun! | https://docs.marketo.com/plugins/viewsource/viewpagesrc.action?pageId=10617304 | 2019-10-14T06:30:25 | CC-MAIN-2019-43 | 1570986649232.14 | [] | docs.marketo.com |