Who does GDPR impact?
GDPR affects every company that uses personal data from EU citizens. If you’re collecting data from users in the EU, you need to comply with GDPR regardless of where you’re based.
What does it mean for me?
Our understanding of GDPR compliance within the context of Freeform, is that you need explicit consent from the person filling out the form to agree to you having their data, as well as allowing the user to request access to the information you have stored on them, and the ability to request to have that data removed promptly. We often receive requests from customers wondering if there's a way to have Freeform not store the data but still send an email notification to the admin(s). This is not sufficient, as email notifications containing personal information about customers is still you or your organization storing their sensitive data somewhere on a server, etc.
It's also worth noting that not all types of forms require consent. If you're conducting an anonymous survey, quiz or poll, that does not collect any personally identifying information, you likely shouldn't need to worry about GDPR.
That said, updating your forms to be compliant shouldn't be too much work. Just follow this guide below:
Definition of Stored Data
We define stored data as any data that is collected from a Freeform form and stored on your website, email notifications and API integrations such as CRM and Mailing List services. It's important to understand that asking for consent essentially means you require the user to consent to any/all of the above or nothing at all (can't submit the form).
Ask for Consent with a Checkbox
In your form, simply create an additional field of the Checkbox fieldtype, and set it to be required. It's important however, that the following is adhered to:
- The checkbox needs to be clearly labelled and easy to understand. A good example would be something like:
I consent to Company Name collecting and storing my data from this form.
- The consent needs to be separate and cannot be bundled with other consent, terms and conditions, notices, etc.
- The checkbox must be a positive opt-in, and cannot be pre-checked by default.
- Set the checkbox to be required so that the form cannot submit without consent being given.
If you'd like to add additional disclaimer information above or below the checkbox, you can do this by using an HTML Block that is a special fieldtype available inside Composer.
Proof of Consent
While we can probably all agree this part might be somewhat meaningless since the data could be easily manipulated, it's required that you have the ability to "prove" that the user consented to the data being stored. The checkbox field you created will store a y value (or whatever you set for it) in the database, so you're covered here. No other action is necessary.
Withdrawal of Consent
Overview
You must make it easy for users to withdraw consent (and have you remove all of their data). You must also tell them how they can do this. To cover all angles here, you might consider some or all of the following:
- Include instructions and/or Consent Withdrawal form on the success page after they've submitted your form.
- Include an option/instructions to remove consent in any email notifications generated.
- Include an option/instructions to remove consent in any future promotional marketing email communications.
Withdrawal Form
The withdrawal of consent does not need to be automated (but that might help if your site deals with a high volume of users). To set up a form to handle this, it's required that the process:
- Only requires the user to submit their email address.
- Does not require the user to log into your site.
- Does not require the user to visit more than 1 page to submit their request (needs to be very simple and fast).
A form is not required however. You could also include instructions for the user to send an email to you asking to have their data removed, etc.
Removing a User's Data
Removing a user's data is simple. If you're requested to remove data about a user, simply delete the Freeform submission(s) associated with them from the Freeform control panel.
- You have 1 month to comply with removal of the user's data.
Review
So, in review, here's a summary of the steps required:
- Create an additional field of the Checkbox fieldtype, and set it to be required. Leave it unchecked by default and label it something like I consent to Company Name collecting and storing my data from this form.
- Place instructions in email notifications or form success pages explaining how a customer can go about having their data removed from your site.
- When requested to remove a user's data, be sure to remove all associated submission(s) within 30 days or less.
Again, be sure to review this official guide to GDPR.
Types: Float
The floating-point type,
float, allows the use of real numbers. It supports at least the range and precision of IEEE 754 64-bit double-precision
representation, and includes the special values minus infinity, plus infinity, and Not-a-Number (NaN). Using predefined constant names, those
values are written as
-INF,
INF, and
NAN, respectively.
The library functions
is_finite,
is_infinite, and
is_nan indicate if a given floating-point value is finite, infinite, or a NaN, respectively.
Consider the following example:
<?hh // strict
namespace Hack\UserDocumentation\Types\Float\Examples\Average;

function average_float(float $p1, float $p2): float {
  return ($p1 + $p2)/2.0;
}

<<__EntryPoint>>
function main(): void {
  $val = 3e6;
  $result = average_float($val, 5.2E-2);
  echo "\$result is " . $result . "\n";
}
$result is 1500000.026
When called, function
average_float takes two arguments, of type
float, and returns a value of type
float.
The literals
2.0,
3e6, and
5.2E-2 have type
float, so the local variable
$val is inferred as having type
float. (Unlike function parameters
such as
$p1 and
$p2, or a function return, a local variable cannot have an explicit type.) Then when
$val and
5.2E-2 are passed to
average_float, the compiler sees that two
floats were passed and two
floats were expected, so the call is well-formed.
Multiple Coupons¶
Overview¶
The Multiple Coupons extension allows customers to apply more than one coupon code to an order. This helps entice more new customers and can increase store revenue significantly.
This extension is fully compatible with Mageplaza One Step Checkout
Download and Install¶
How to use¶
How to Configure¶
1. Configuration¶
From the Admin panel, go to
Stores > Configuration > Mageplaza > Multiple Coupons
- Select Enable = Yes to enable the module
- Apply for: Select the page(s) on which Multiple Coupons can be applied
- Coupon Limit Qty:
- The maximum number of coupons that can be used in one cart
- Please note that when different coupon types are used at the same time, the coupons are applied in order from first to last, or until the subtotal reaches 0 (see the sketch after this list)
- If this field is left blank or set to 0, the number of coupons that can be used is unrestricted
Unique Coupon Code:
- This helps avoid multiple discount coupons being used in the same cart, which could cause the store owner a loss
- When the specific coupon set in this field is applied, all other coupons will be canceled (including those applied before or after the unique coupon)
- To apply multiple coupons again, the customer needs to remove the unique coupon
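The following is a minimal, hypothetical Python sketch of the behavior described above (sequential application until the subtotal reaches 0, a configurable coupon limit, and a unique coupon that cancels all others). The function and data names are illustrative only and are not part of the extension.

def apply_coupons(subtotal, coupons, limit_qty=0, unique_code=None):
    """Illustrative model of the Multiple Coupons rules described above."""
    # A unique coupon cancels every other coupon in the cart.
    if unique_code is not None and unique_code in [c["code"] for c in coupons]:
        coupons = [c for c in coupons if c["code"] == unique_code]

    # A limit of 0 means the number of coupons is unrestricted.
    if limit_qty:
        coupons = coupons[:limit_qty]

    total = subtotal
    applied = []
    # Coupons are applied in order, first to last, until the subtotal reaches 0.
    for coupon in coupons:
        if total <= 0:
            break
        discount = min(coupon["amount"], total)
        total -= discount
        applied.append((coupon["code"], discount))
    return total, applied

# Example: a $100 cart with three fixed-amount coupons and a limit of 2.
total, applied = apply_coupons(
    subtotal=100.0,
    coupons=[{"code": "SAVE60", "amount": 60.0},
             {"code": "SAVE50", "amount": 50.0},
             {"code": "SAVE10", "amount": 10.0}],
    limit_qty=2,
)
print(total, applied)  # 0.0 [('SAVE60', 60.0), ('SAVE50', 40.0)]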
API¶
Multiple Coupons extension supports API integration with the Rest API commands of Magento. By using the available command structures to check order, invoice, and credit memo information, admins can quickly capture the details of an order. See the details about the Magento Rest API here
Instructions for using Postman to check API¶
Step 1: Get Access Token¶
- Log in to Postman. In the Headers section, select Key = Content-Type, Value = application/json
- In the Body tab, insert {"username": "demo", "password": "demo123"}, where demo/demo123 are the username/password used to log in to your backend
- Use the POST method and send the following command:
- Access Key will be displayed in the Body section
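For readers who prefer a script to Postman, here is a minimal Python sketch of the same token request using the requests library. The base URL https://example.com and the demo/demo123 credentials are placeholders; /rest/V1/integration/admin/token is Magento's standard admin-token endpoint.

import requests

# Placeholder store URL and demo credentials -- replace with your own.
BASE_URL = "https://example.com"

response = requests.post(
    f"{BASE_URL}/rest/V1/integration/admin/token",
    headers={"Content-Type": "application/json"},
    json={"username": "demo", "password": "demo123"},
)
response.raise_for_status()

# The response body is the bearer token (a JSON-encoded string).
token = response.json()
print("Access token:", token)

# The token is then sent on subsequent REST calls:
# headers={"Authorization": f"Bearer {token}"}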
Proposed Roadmap
Technical deliverables are documented here. Each technical deliverable (TD) has a unique number. These numbers do not imply sequence of execution or importance. See the last section for dependencies.
TD1 - Core
Core components required for the data service to function.
- Authentication / Authorization
- Mongo CRUD Controller
- Basic Data Application
- Basic Metadata Application
TD2 - Documentation
Auto-generated documentation for the REST API's for use with Swagger UI.
TD3 - Aggregation
Composite entity support.
TD4 - Async Processing
Support for processing requests from clients asynchronously.
TD5 - RDBMS Support
Implementation of RDBMS CRUD Controller to support aggregation across SQL and NoSQL technology stacks.
TD6 - Updated Applications
Update data and metadata applications to be usable by less technical folks. We intend on having basic functionality initially.
Adoption Roadmap
For cPanel & WHM version 11.50
(Home >> cPanel >> x3 Branding)
Overview
This feature allows you to modify the x3 and x3mail themes so that the appearance of your users' interface changes.
Important:
This interface only allows you to modify the x3 and x3mail themes, as well as any deprecated themes. You cannot brand the Paper Lantern theme from this interface.
- For more information about branding for Paper Lantern, read our x3 Branding documentation.
- We strongly recommend that you do not use deprecated themes.
Enable the Live Editor link
Important:
If you disallow
root or reseller logins to cPanel user accounts, the Live Editor link will not function properly.
To enable the Live Editor link, select one of the following options for the Accounts that can access a cPanel user account setting in the System section of the x3 Branding interface (Home >> Server Configuration >> Tweak Settings):
- Root, Account-Owner, and cPanel User — Allows the
rootuser and resellers to access the cPanel account.
- Account-Owner and cPanel User Only — Allows only the reseller who owns the account to access the cPanel account.
Branding
Note:
For more information about how to brand cPanel interfaces, read our documentation. This guide is for users who need more features than are available in cPanel's Branding Editor interface (Home >> Preferences >> Branding Editor).
The Branding table lists all of the themes on your server and the directory in which you can find each theme's files.
To modify a theme, click Live Editor in that theme's Editor(s) column. The system will automatically open a new browser tab and log you in to cPanel's Branding Editor interface (Home >> Preferences >> Branding Editor). Use the Branding Editor interface to make all of the desired changes to your themes.
Source Code Fundamentals: Literals
Boolean Literals
The literals
true and
false represent the Boolean values True and False, respectively. The type of a Boolean
literal is
bool. For example:
$val = true; if ($val === false) ...
Integer Literals
Integer literals can be written as decimal; hexadecimal (with prefix
0x or
0X, and including letters A-F or a-f);
octal (with prefix
0); or binary (with prefix
0b or
0B). The type of an integer literal is
int. For example:
$count = 10        // decimal 10
0b101010 >> 4      // binary 101010 and decimal 4
0XAf << 012        // hexadecimal Af and octal 12
Floating-Point Literals
Floating-point literals typically have an integer part, a decimal point, and a fractional part. They may also have an
exponent part. They are written using decimal digits. The type of a floating-point literal is
float. For example:
123.456 + 0.6E27 + 2.34e-3
The predefined constants
INF and
NAN provide access to the floating- point values for infinity and Not-a-Number, respectively.
String Literals
A string literal can have one of the following forms: single-quoted, double-quoted, heredoc, or nowdoc.
A string literal is a sequence of zero or more characters delimited in some fashion. The delimiters are not part of
the literal's content. The type of a string literal is
string.
Single-Quoted String Literals
A single-quoted string literal is a string literal delimited by single-quotes ('). The literal can contain any source character except single-quote (') and backslash (\), which can only be represented by their corresponding escape sequence, \' and \\. For example:
'Welcome to Hack!' 'Can embed a single quote (\') and a backslash (\\) like this'
Double-Quoted String Literals
A double-quoted string literal is a string literal delimited by double-quotes ("). The literal can contain any source character except double-quote (") and backslash (\), which can only be represented by their corresponding escape sequence, \" and \\. For example:
"Welcome to Hack!" "Can embed a double quote (\") and a backslash (\\) like this"
Certain other (and sometimes non-printable) characters can also be expressed as escape sequences. An escape sequence represents a single-character encoding. For example:
"First line 1\nSecond line 2\n\nFourth line\n" "Can embed a double quote (\") and a backslash (\\) like this"
Here are the supported escape sequences:
Within a double-quoted string literal a dollar ($) character not escaped by a backslash (\) is handled using variable substitution rules, which follow.
When a variable name is seen inside a double-quoted string, after that variable is evaluated, its value is converted to
string
and is substituted into the string in place of the variable-substitution expression. Subscript or property accesses are resolved
according to the rules of the subscript operator and
member selection operator, respectively. If the character sequence following
the
$ does not parse as a recognized name, then the
$ character is instead interpreted verbatim and no variable substitution
is performed.
Consider the following example:
<?hh // strict
namespace Hack\UserDocumentation\Fundamentals\Literals\Examples\DQVariableSubstitution;

class C {
  public int $p1 = 2;
}

<<__EntryPoint>>
function main(): void {
  $x = 123;
  echo ">\$x.$x"."<\n";
  $myC = new C();
  echo "\$myC->p1 = >$myC->p1<\n";
}
>$x.123< $myC->p1 = >2<
Heredoc String Literals
A heredoc string literal is a string literal delimited by "
<<< id" and "
id". The literal can contain any source character.
Certain other (and sometimes non-printable) characters can also be expressed as escape sequences.
A heredoc literal supports variable substitution as defined for double-quoted string literals.
For example:
<?hh // strict
namespace Hack\UserDocumentation\Fundamentals\Literals\Examples\HeredocLiterals;

<<__EntryPoint>>
function main(): void {
  $v = 123;
  $s = <<< ID
S'o'me "\"t e\txt; \$v = $v"
Some more text
ID;
  echo ">$s<\n";
}
>S'o'me "\"t e xt; $v = 123" Some more text<
The start and end id must be the same. Only horizontal white space is permitted between
<<< and the start id. No
white space is permitted between the start id and the new-line that follows. No white space is permitted between the
new-line and the end id that follows. Except for an optional semicolon (
;), no characters—not even comments or white
space—are permitted between the end id and the new-line that terminates that source line.
Nowdoc String Literals
A nowdoc string literal looks like a heredoc string literal except that in the former the start id is enclosed in single quotes ('). The two forms of string literal have the same semantics and constraints except that a nowdoc string literal is not subject to variable substitution. For example:
<?hh // strict
namespace Hack\UserDocumentation\Fundamentals\Literals\Examples\NowdocLiterals;

<<__EntryPoint>>
function main(): void {
  $v = 123;
  $s = <<< 'ID'
S'o'me "\"t e\txt; \$v = $v"
Some more text
ID;
  echo ">$s<\n\n";
}
>S'o'me "\"t e\txt; \$v = $v" Some more text<
No white space is permitted between the start id and its enclosing single quotes (').
The Null Literal
There is one null-literal value,
null, which has type
null. For example:
function log(num $arg, ?num $base = null): float { ... }
Here,
null is used as a default argument value in the library function
log.
In the following example:
<?hh // strict
namespace Hack\UserDocumentation\Fundamentals\Literals\Examples\NullLiteral;

type IdSet = shape('id' => ?string, 'url' => ?string, 'count' => int);

function get_IdSet(): IdSet {
  return shape('id' => null, 'url' => null, 'count' => 0);
}
null is used to initialize two data fields in a shape.
Next, the peer certificate's chain, or "pedigree" of issuer
certificates, is established. The chain consists of the certificate's
issuer certificate (if any), followed by that issuer's issuer
certificate (if any), etc. up through a root or self-signed
certificate. Such issuer certificates are generally CA (Certificate
Authority) certificates, as opposed to the leaf (server or
client) certificate itself, which is generally not a CA. A
certificate's chain may be provided by the peer itself (e.g. via the
sslcertificatechain or SSL Certificate Chain File
settings on the peer), and/or it may be automatically completed
locally from trusted certificates. In any event, the chain is
constructed and verified locally by looking for each chain
certificate's issuer certificate.
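As a rough illustration of the chain-construction step described above (and not this product's actual implementation), the following Python sketch builds a chain by repeatedly looking up each certificate's issuer in a pool of locally trusted CA certificates. It uses the third-party cryptography package, the file names are placeholders, and it only matches subject/issuer names; real verification also checks signatures, validity dates, and CA constraints.

from cryptography import x509

def load_cert(path):
    # Load one PEM-encoded certificate from disk.
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

def build_chain(leaf, trusted_pool):
    """Walk from the leaf (server/client) cert up through issuer certs."""
    chain = [leaf]
    current = leaf
    while True:
        # A self-signed (root) certificate is its own issuer; stop there.
        if current.issuer == current.subject:
            break
        # Look for a certificate whose subject matches the current issuer.
        issuer_cert = next(
            (c for c in trusted_pool if c.subject == current.issuer), None)
        if issuer_cert is None:
            break  # chain is incomplete; verification would fail here
        chain.append(issuer_cert)
        current = issuer_cert
    return chain

# Placeholder file names for a peer certificate and locally trusted CAs.
leaf = load_cert("peer-cert.pem")
pool = [load_cert(p) for p in ("intermediate-ca.pem", "root-ca.pem")]
for cert in build_chain(leaf, pool):
    print(cert.subject.rfc4514_string())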
Rails makes doing common tasks relatively easy, however it adds a level of complexity. Rails adds relationships and dependencies that are not obvious through the use of conventions, meta-programming and naming patterns. The Dependencies View helps reduce the complexity by keeping a dynamic view of class and logical relationships.
While you navigate source code or nodes in the Rails Explorer, you can use the Dependencies View to track focus and show the dependencies (class, method, controller, action, view, attributes). The Dependencies View shows references to and from a selected class or method.
For a demonstration of the Dependencies View, see the video Using the Dependencies View.
The Dependencies View is a very useful tool for navigating and introspecting software. Relationships are determined by type inference and semantic analysis. Dependencies are updated in real-time during editing and navigation. Relationships, between controllers, actions, views, models, helpers, routes, migrations, schema and tests are related and determined by convention and method call analysis.
This screen example shows the Dependencies View:
In this example, the Model class LineItem is open in the editor. It shows these Outbound dependencies:
It shows these Inbound dependencies:
Use InterBase to benefit from its multi-generational architecture, log-based journaling and disaster recovery features. A small footprint, automatic crash recovery, self-tuning, Unicode, SMP support, SQL 92 compliance, and near zero maintenance makes InterBase 2007 the ideal database for embedded and business-critical small-to-medium enterprise server applications.
Connect to and use InterBase to serve a project using the ibrails_plugin gem.
gem install ibruby
Create a Rails Project using InterBase:
Configuration Details
C:\Users\{username}>isql -u sysdba -p masterkey Use CONNECT or CREATE DATABASE to specify a database SQL> create database "c:\home\foo.ib";
Using InterBase on an Existing Project:
gem install ibruby
script/plugin install rails_plugins/trunk/ibrails_plugin
Special entries and opening sheets
Legal entities in Spain can post special entries as opening entries for the current period, while adapting accounts to changes in accounting rules.
By using opening sheets, you can indicate the following:
- Increase the value of specific financial fixed assets.
- Change the value of specific raw materials when the value has changed significantly during the year and when the material meets specific criteria.
When you close the entries for the previous fiscal year, you can create several lines by setting the Type field to Opening. This allows special opening entries for the current fiscal year to be posted. You can make adjustments to the opening sheet for the fiscal year on the Opening sheets page.
Create a new opening sheet
To create a new Opening sheet, click New on the Opening sheets page, and specify the following.
After you enter the general information about the opening sheet, you'll need to specify the main accounts to include in the opening sheet. To do this, click Opening accounts > Load balances on the Opening sheets page. To post all ledger account balances and adjustments to the opening sheet, click Post.
Run Menu
- See the Debugging voice applications and Debugging routing applications topics for supported functionality.
The Run Menu contains all of the actions required to run, debug, step through code and work with breakpoints. Different parts of the menu are visible at different times, as each perspective can be customized to show only specific capabilities. The Run menu contains the following items:
Note: You can’t disable or deactivate the managed package triggers. These triggers are included in the Celigo package.
Be sure to enable debug logs to understand the trigger functionality. In your Salesforce account, navigate to Setup > in Quick Find, type ‘Debug Log’ > create a new debug log for the currently logged-in user.
Triggers logic
Triggers functionality is available for all sObjects and every sObject has a managed package trigger. Let’s take an example of Contact sObject here:
- Create a “Contact” object in Salesforce
- CeligoContactUpdate trigger will be invoked as a real time event.
Note: All Celigo triggers are executed After submit only.
- Once the trigger enters the ‘After Submit’ event, it will invoke the Integrator Distributed Adaptor package.
- A call to RealTimeSync Sobject is made to check whether flow is enabled. (This is the first SOQL query)
- SOQL Query: SELECT Id, GUID__C, Connector_Id__c, export_when_field_is_set__c, package_version__c, Qualifier__c, Connection__r.User_Name__c, Connection__r.Password__c, Connection__r.Consumer_Key__c, Connection__r.Consumer_Secret__c, Connection__r.Token_Id__c, Connection__r.Token_Secret__c, Connection__r.Role_Id__c, Connection__r.Account_Id__c, Connection__r.Endpoint__c, Connection__r.Access_Token__c, NetSuite_Sync_Error_Field__c, NetSuite_Internal_Id_Field__c, Real_Time_Data_Flow_Id__c, SObject_Type__c, Referenced_Fields__c, User_Defined_Referenced_Fields__c, Batch_Size__c, Skip_Export_Field_Id__c FROM Real_Time_Sync__c WHERE (Disabled__c = FALSE AND SObject_Type__c = :tmpVar1 AND Origin__c = :tmpVar2 AND Connector_Id__c = :tmpVar3)
- The query returns a result (count greater than 0) if the flow is enabled. Otherwise, the result is 0 and the trigger is terminated.
Note: Only one SOQL is executed when flow is disabled.
- If the flow is enabled, another SOQL query is executed to fetch the related lists on the flow. (This is the second SOQL query)
- SOQL: SELECT Real_Time_Sync__c, Related_Parent_Field__c, Related_SObject_Type__c, Referenced_Fields__c, Filter__c, Order_by__c FROM Related_SObject_Sync__c WHERE Real_Time_Sync__c IN :tmpVar1
- If you set any qualification criteria in the flow settings, another SOQL query is executed to check whether the record satisfies the criteria to sync to the Integration App. (This is the third SOQL query)
- SOQL: Select Id from Contact where (IO Configured filter will be added here) and Id IN : Ids
- If it is not satisfied, the trigger is terminated
- You can check the “Skip Export To NetSuite” checkbox in Salesforce on any record to skip syncing that record to the Integration App.
If the “Skip Export to NetSuite” checkbox is checked, the flow isn’t invoked and a SOQL query is executed. (This is the fourth SOQL query and is optional)
- SOQL: select id,celigo_sfnsio__skip_export_to_netsuite__c from account where id in :ids FOR UPDATE.
- The record is locked while unchecking the SkipExport Field to prevent race conditions and thread safety problems.
- The “Skip Export To NetSuite” checkbox is unchecked and the trigger is terminated.
- If a field is updated in the after-submit logic, the trigger is executed again, but the SOQL queries are not run again and execution exits the Celigo package.
KBase apps can be accessed in several places:
External App Catalog (no user account or sign-in required for browsing).
Within KBase by clicking on the Catalog icon in the Menu
In a Narrative by clicking the right arrow at the top of the Apps Panel.
The majority of KBase Apps fall into the following categories:
Show me!
Go straight to the App Catalog
Note: you will need a KBase user account to use our tools.
Below is an example outline of the major workflows and datatypes in KBase. The unboxed labels represent datatypes, while each colored box represents a single KBase App. The box colors signify the category of functionality, and the numbers in parentheses indicate the number of alternative apps that implement each function. Apps that require a genome as input are marked with a green “G” icon.
Each app links to a reference page (which includes technical details about the inputs and outputs) called an App Details Page.
To run apps, you will need to sign in to the Narrative Interface.
You can access the App Catalog from inside the Narrative Interface by clicking the small arrow in the upper right corner of the Apps Panel.
You can click the star at the lower left of any app to add it to your “favorites.” The gray star will turn yellow to indicate that you have favorited the app. The number to the right of the star shows how many people have favorited that app.
By default, the apps are sorted by category. Try the options in the “Organize by” menu to sort the apps by My Favorites, Run Count and other options.
More info
For more information about using the App Catalog, see the Narrative Interface User Guide.
A guide on searching¶
There is no difference between RealTime or plain indexes in terms of how you run queries.
The recommended and simplest way to query Manticore is to use the SphinxQL interface. You can access it with any MySQL client or library, just do
$ mysql -P9306 -h0.
Running queries¶
In the guide of the indexes we already saw an example of a search. In addition to the fulltext match, you can also have attribute filtering, grouping and sorting by attributes or expressions.
mysql> SELECT *,weight() FROM myrtindex WHERE MATCH('text') AND gid>20 ORDER BY gid ASC,WEIGHT() DESC; SHOW META;
+------+------+----------+
| id   | gid  | weight() |
+------+------+----------+
|    3 |   22 |     2230 |
|    2 |   22 |     1304 |
|    4 |   33 |     2192 |
+------+------+----------+
3 rows in set (0.00 sec)

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| total         | 3     |
| total_found   | 3     |
| time          | 0.000 |
| keyword[0]    | text  |
| docs[0]       | 4     |
| hits[0]       | 7     |
+---------------+-------+
6 rows in set (0.00 sec)
Here we also added a SHOW META command (you can run it in another call, but must be on same session to give information from the query you’ve just executed). For general usage, total_found and time are most useful.
Manticore supports LIMIT clause like traditional databases in the format LIMIT [offset,] row_count. If no LIMIT is set, the first 20 rows of the result set are returned.
Another non-standard clause is OPTION, which can be used to set various settings for the query.
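Since any MySQL client library works, here is a minimal Python sketch (using the third-party pymysql package) that runs the same kind of query programmatically. The index and column names are the ones used in the examples above, and the host/port assume a local Manticore instance listening on 9306.

import pymysql

# Manticore speaks the MySQL protocol, so a regular MySQL driver works.
conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT *, WEIGHT() FROM myrtindex "
            "WHERE MATCH('text') AND gid > 20 "
            "ORDER BY gid ASC, WEIGHT() DESC"
        )
        for row in cur.fetchall():
            print(row)

        # SHOW META must run on the same session as the query it describes.
        cur.execute("SHOW META")
        meta = dict(cur.fetchall())
        print(meta.get("total_found"), meta.get("time"))
finally:
    conn.close()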
Fulltext Matching¶
By default, operator AND is used if multiple keywords are specified. The keywords are searched over all fulltext fields and unless there are other rules, a match is valid when the keywords are found in any of the fulltext fields.
So for example ‘search for something’ will give you a match on a document where ‘search’ and ‘for’ are find in ‘title’ field and ‘something’ in ‘content’ field.
Restricting the search to certain field(s) can be done with @ operator followed by the name of the field(s), for example @title search for something.
Most operators use keyword position relative to document and will give a positive match only if the keywords are found in same field, like proximity, phrase, fied-start/end,NEAR, strict order etc.
There are operators for which the keyword position has no influence, like boost operator, exact form modifier or qourum.
Ranking fulltext matches¶
Manticore offers a powerful way to construct scoring formulas for the fulltext match.
There are several built-in rankers (predefined scoring formulas), with the default being proximity_bm25, and custom expressions can be made using the 20+ ranking factors and attribute values if needed.
The most important factors are
- BM25 - an industry retrieval function that ranks the document based on the query terms' appearances; it's a per-document factor.
- IDF - inverse document frequency, a numeric statistic that reflects how important a word is to a document in the collection; it is used per field. The IDF values can be used in several ways (as sum, max, etc.).
- LCS - longest common subsequence; in broad terms it gives the proximity (based on keyword positions). Beside the classic 'lcs', several derivatives are available too.
In addition to those, you can use counters on hits or words, boolean factors like exact hit or exact order, and document attributes can also be used inside expressions.
Several pre-built ranker expressions are available: proximity_bm25, bm25, none, wordcount, proximity, matchany, sph04, expr (custom rankers) and export (same as expr, but stores for output the factor values). They can be changed using the OPTION statement, for example OPTION ranker=bm25.
The default proximity_bm25 can be written as custom ranker as
OPTION ranker=expr('sum(lcs*user_weight)+bm25').
The user_weight relates to the boost per field, by default all fields are treated equal. For example if you have fields ‘title’ and ‘content’ you might want to give a boost to ‘title’ matching so you would set
OPTION field_weights=(title=10, content=1).
The ranking score is relative to the query itself as long as it includes metrics that calculate distances between keywords or keywords/document frequencies. In these cases, the values of the score can differ a lot from query to query, so doing any kind of comparison between scores of different queries does not make sense.
MySQL [(none)]> SELECT *,weight() FROM myrtindex WHERE MATCH('"more this text"/2') OPTION ranker=proximity_bm25;
+------+------+----------+
| id   | gid  | weight() |
+------+------+----------+
|    3 |   22 |     4403 |
|    4 |   33 |     3378 |
|    2 |   22 |     2453 |
|    1 |   11 |     2415 |
+------+------+----------+
4 rows in set (0.00 sec)

MySQL [(none)]> SELECT *,weight() FROM myrtindex WHERE MATCH('"more this text"/2') OPTION ranker=none;
+------+------+----------+
| id   | gid  | weight() |
+------+------+----------+
|    1 |   11 |        1 |
|    2 |   22 |        1 |
|    3 |   22 |        1 |
|    4 |   33 |        1 |
+------+------+----------+
4 rows in set (0.00 sec)

MySQL [(none)]> SELECT *,weight() FROM myrtindex WHERE MATCH('"more this text"/2') OPTION ranker=expr('sum(1)+gid');
+------+------+----------+
| id   | gid  | weight() |
+------+------+----------+
|    4 |   33 |       35 |
|    2 |   22 |       24 |
|    3 |   22 |       24 |
|    1 |   11 |       13 |
+------+------+----------+
4 rows in set (0.00 sec)
Data tokenization¶
Search engines don't store text as it is for performing searches on it. Instead they extract words and create several structures that allow fast full-text searching. From the found words, a dictionary is built, which allows a quick lookup to discover whether a word is present or not in the index. In addition, other structures record where (in which documents and fields) each word was found. Beside these, another common feature is blacklisting very frequent words that add little value to searches (stopwords). This helps not only on speeding up queries, but also on decreasing index size. A more advanced blacklisting is bigrams, which allows creating a special token between a 'bigram' (common) word and an uncommon word. This can speed up phrase searches several times when common words are used. In case of indexing HTML content, it's desirable not to also index the HTML tags, as they can introduce a lot of 'noise' in the index. HTML stripping can be used and can be configured to strip but still index certain tag attributes, or to completely ignore the content of certain HTML elements.
Another common text search type is wildcard searching. Wildcard searching is performed at dictionary level. By default, both plain and RT indexes use a dictionary type called keywords; wildcards are resolved by expanding them into matching dictionary words, and queries can slow down when a wildcard requires a lot of expansions or expansions that have huge hitlists. The penalties are higher in case of infixes, where a wildcard is added at the start and end of the words. Moreover, the expand_keywords setting, which can automatically apply the stars to the input search terms, should be used with care.
The plain index also supports a crc dictionary type. With this type, words are not stored as they are, instead a control sum value of words is used. Indexing is much faster in this case compared to keywords mode. Since it would not be possible to do substring search on the CRCs, instead all possible substrings of the words (defined by min_prefix_len or min_infix_len) are also stored. This increase the index size several times when prefix/infix are enabled, but wildcard querying doesn’t suffer performance penalties as it doesn’t need to perform expansions like keywords dictionary. On indexes with crc dictionary it’s not possible to use QSUGGEST feature (since control sums are stored in index instead of actual words) and it’s not possible to convert to RealTime indexes (which only work with keywords dictionary).
Multi-threaded searching¶
One index may not be enough. When searching, only one search thread (that uses a cpu core) is used for a query.
Because of the size of the data or heavy computing queries, we would want to use more than a CPU core per query.
To do that, we need to split the index into several smaller indexes. One common way to split the data is to perform a modulo filtering on the document id
(like
sql_query = SELECT * FROM mytable where id % 4 = 0 [1,2,3]).
Having several indexes instead of one means now we can run multiple indexing operations in parallel.
Faster indexing comes with a cost: several CPU cores will be used instead of one, there is more pressure on the source (especially if you rebuild all the indexes at once) and multiple threads writing to disk can overload your storage ( you can limit the impact of IO on storage with max_iops and max_iosize directives).
Searching over these shards can be done in 2 ways:
- one is to simply enumerate them in the query, like SELECT * FROM index0,index1,index2,index3. dist_threads >1 can be used for multi-core processing.
- using a local distributed index and dist_threads > 1 (for multi-core processing).
Grouping and faceting¶
Manticore Search supports grouping by multiple columns or computed expressions. Results can be sorted inside a group with WITHIN GROUP ORDER BY. A particular feature is returning more than one row per group, by using GROUP n BY. Grouping also supports HAVING clause, GROUP_CONCAT and aggregation functions. Manticore Search also supports faceting, which in essence is a set of group by applied on the same result set.
mysql> SELECT * FROM myindex WHERE MATCH('some thing') and afilter=1 FACET attr_1 FACET_2 attr_2;
+------+---------+----------+----------+
| id   | attr_1  | attr_2   | afilter  |
+------+---------+----------+----------+
|    4 |      33 |       35 |        1 |
........
+------+------------+
| attr_1 count(*)   |
+------+------------+
|    4 |         33 |
........
+------+------------+
| attr_2 count(*)   |
+------+------------+
|   10 |          1 |
In return you get multiple result sets, where the first is the result set of the query and the rest are the facet results.
Functions¶
GEODIST function can be used to calculate distance between 2 geo coordinates. The result can be used for sorting.
mysql> SELECT *, GEODIST(0.65929812494086, -2.1366023996942, latitude, longitude, {in=rad, out=mi,method=adaptive}) AS distance FROM geodemo WHERE distance < 10000 ORDER BY distance ASC LIMIT 0,100;
In addition, polygon calculation can be made, including geo polygon that takes into account Earth’s curvature.
mysql> SELECT *, CONTAINS(GEOPOLY2D(40.95164274496,-76.88583678218,41.188446201688,-73.203723511772,39.900666261352,-74.171833538046,40.059260979044,-76.301076056469),latitude_deg,longitude_deg) AS inside FROM geodemo WHERE inside=1;
Manticore Search also supports math, date and aggregation functions which are documented at Expressions, functions, and operators. Special functions ALL() and ANY() can be used to test elements in an array from a JSON attribute or MVA.
Highlighting¶
Highlighting allows to get a list of fragments from documents (called snippets) which contain the matches. The snippets are used to improve the readability of search results to end users. Snippeting can be made with the CALL SNIPPETS statement. The function needs the texts that will be highlighted, the index used (for it’s tokenization settings), the query used and optionally a number of settings can be applied to tweak the operation.
mysql> CALL SNIPPETS('this is my hello world document text I am snippeting now', 'myindex', 'hello world', 1 as query_mode, 5 as limit_words);
+------------------------------------------------+
| snippet                                        |
+------------------------------------------------+
| ... my <b>hello world</b> document text ...    |
+------------------------------------------------+
1 row in set (0.00 sec)
Tokenizer tester¶
CALL KEYWORDS provides a way to check how keywords are tokenized or to retrieve the tokenized forms of particular keywords.
Beside debug/testing, CALL KEYWORDS can be used for transliteration. For example we can have a template index which maps characters from cyrillic to latin. We can use CALL KEYWORDS to get the latin form of a word written in cyrillic.
mysql> call keywords ('ran','myindex');
+------+-----------+------------+
| qpos | tokenized | normalized |
+------+-----------+------------+
| 1    | ran       | run        |
+------+-----------+------------+
1 row in set (0.00 sec)
Suggested words¶
CALL SUGGEST enables getting suggestions or corrections of given words. This is useful to implement ‘did you mean …’ functionality.
CALL SUGGEST requires an index with full wildcarding (infixing) enabled. Suggestions are based on the index dictionary and use Levenshtein distance. Several options are available for tweaking, and the output provides, beside the distance, a document count for each word. In case the input contains more than one word, CALL SUGGEST will only process the first word, while CALL QSUGGEST will only process the last word and ignore the rest.
mysql> call suggest('sarch','myindex');
+---------+----------+------+
| suggest | distance | docs |
+---------+----------+------+
| search  | 1        | 6071 |
| arch    | 1        | 20   |
| march   | 1        | 10   |
| sarah   | 1        | 4    |
+---------+----------+------+
4 rows in set (0.00 sec)
Percolate queries¶
The regular workflow is to index documents and then match queries against them. However, sometimes it's desired to check if new content matches an existing set of queries. Running the queries over the index each time a document is added can be inefficient. Instead, it is faster if the queries are stored in an index and the new documents are tested against the stored queries. Also called inverse search, this is used for signaling in monitoring systems or news aggregation.
For this, a special index is used called percolate, which is similar with a RealTime index. The queries are stored in a percolate index and CALL PQ can test one or more documents if they match against the stored queries.
mysql> INSERT INTO index_name VALUES ( 'this is a query');
mysql> INSERT INTO index_name VALUES ( 'this way');
mysql> CALL PQ ('index_name', ('multiple documents', 'go this way'), 0 as docs_json );
Search performance¶
To debug and understand why a search is slow, information is provided by commands SHOW PROFILE, SHOW PLAN and SHOW META.
Tokenization and the search expression can have a big impact on the search speed. They can require reading a lot of data from index components and/or heavy computation (like merging big lists of hits). An example is using wildcards on very short words, like 1-2 characters.
An index is not fully loaded by default into memory. Only several components are, such as dictionary or attributes (which can be set to not be loaded). The rest will be loaded when queries are made.
Operating systems will cache files read from the storage. If there is plenty of RAM, an index can be cached entirely as searches are made. If the index is not cached, a slow storage will impact searches. Also, the load time of an index is influenced by how fast components can be loaded into RAM. For small indexes this is not a problem, but in case of huge indexes it can take minutes until an index is ready for searches.
Queries can also be CPU-bound. This happens when the index is too big, or when its settings or the search itself require heavy computation. If an index grows big, it should be split to allow multi-core searching, as explained in the previous guide.
If we talk about big data, one server may not be enough and we need to spread our indexes over more than one server. Servers should be as close as possible (at least same data center), as the network latencies between master and nodes will affect the query performance.
Does anyone know how azure migrate handles on premise disk constraints when performing an assessment? E.g will it read disk queues when advising on the disk type to use within azure?
Hello @JamieChilds-0424
Sorry for the delay in response to your query.
We are checking on this internally with the concerned team and will get back to you with an update. In the meanwhile, you can refer to this link -
Hope it helps!
@SadiqhAhmed-MSFT kindly answer this one .
@AsiyaAliAdfolksLLC-8545 Appreciate your patience in this matter!
Doc link: Questions about discovery, assessment, and dependency analysis in Azure Migrate - Azure Migrate | Microsoft Docs
If the issue still persists, please create a support case so that we can help debug the issue.
If the response helped, do "Accept Answer" and up-vote it
From the XML that you shared, it looks like all the performance counters are missing for these VMs. Can you please ensure that:
• If the VMs are powered on for the duration for which you are creating the assessment
• If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
• Connection status of the assessment agent is connected and you can see a latest datetime on the last heartbeat (To check this, click on the number of assessment from the Server Assessment tile > Under Manage, click on Appliances > and you can see agents under Agent health)
If the response helped, do "Accept Answer" and up-vote it
@JamieChilds-0424 Thanks for your patience in this matter!
For storage sizing in an Azure VM assessment, Azure Migrate tries to map each disk that is attached to the machine to an Azure disk. The sizing logic changes with the Sizing criteria chosen in the assessment properties as below:
• As on-premises: The assessment logic does not consider the performance history of disks and only looks at the on-premises disk size and storage type specified in assessment properties to recommend appropriate disk type. Possible storage types are Standard HDD, Standard SSD, and Premium.
• Performance-based: Along with the disk size and storage type, the logic also considers IOPS and throughput of individual disks to find the appropriate disk in Azure. More details here
If an Answer is helpful, please “Accept Answer” and Up-Vote for the same which might be beneficial to other community members reading this thread.
Azure Migrate Assessment Report- CPU usage % and memory usage % missing for Hyper-V, Please help.
I checked with the port it's enabled and also VMs are up and running.
@SadiqhAhmed-MSFT please help with the above query
@AsiyaAliAdfolksLLC-8545 Could you please share the screenshot of assessment report or the XLS if any?
Diary Schema Format¶
A diary schema describes a diary entry.
The schema from a diary entry is exported via the API using JSON.
The schema JSON object has the following attributes:
The
@id value of a schema will be used as unique resource
identifier
@type of a diary entry using this schema in the scope
of the Minddistrict API.
When retrieving one of multiple entries an extra meta
information
@schemas will be included. It will
be the list of the schema used by entries contained in the
response. You can identify which schema is related to which entry by
mapping the
@id value of a schema to a
@type value of an
entry.
Fields JSON object¶
All field JSON objects have at least the following attributes:
Some field, depending on their type, will have extra attributes which are described below.
In addition to the fields listed in the schema, the
dt field is
always present inside an entry and contains the date and
time information at which the entry was posted.
It is possible that none of the fields in a schema are required. In that case, at least one of the fields should be filled in, as an empty entry doesn’t make much sense. This rule is applied both when creating and editing an entry.
text field¶
A
text field accepts text input. Newlines are accepted in that
field. None of the extra attributes are applicable to this field.
{ "name": "location", "title": "Locatie", "type": "text" }
textline field¶
A
textline field accepts text input without newlines. None of the extra
attributes are applicable to this field.
{ "name": "situation", "title": "Situatie", "type": "textline" }
boolean field¶
A
boolean field accepts a true or false value as input and can have a
default
attribute.
{ "name": "hungry", "title": "Hunger", "type": "boolean", "default": true }
integer field¶
An
integer field accepts whole numbers as input. If the
min
and
max attributes have been set, only entry field values inside
the range are valid. This field (optionally) may have a
slider
widget. This field can have a
default value, which is the
default value for the field.
{ "name": "intensity", "title": "Intensiteit", "type": "integer", "min": 1, "max": 10, "widget": "slider" }
float field¶
A
float field accepts floating point numbers as input. If the
min and
max attributes have been set, only entry field values inside the range are
valid. This field (optionally) may have
slider or
hours
widgets. This field can have a
default value, which is the
default value for the field.
{ "name": "spam", "title": "Spam", "type": "float", "min": 5.1, "max": 10.4, "default": 9.8 }
image field¶
An
image field accepts an image as input. There are no additional
attributes. The image is transferred as a regular image.
{ "name": "picture", "title": "Afbeelding", "type": "image" }
choice field¶
A
choice field presents a list of items for the user to choose
from. The list of options is a list of
value,
name
attributes. The values are strings.
Warning
Diary fields use
name for the machine-readable field name, and
title for the human-readable version. Choices, regrettably, use
name for the human-readable choice name and
value for the
machine-readable variant.
The field has an optional
default attribute, which is the default
value for the field. It is safe to assume that the default value is
found as a value in the vocabulary.
{ "name": "breakfast", "title": "Ontbijt", "type": "choice", "choices": [ { "value": "bacon", "name": "Bacon" }, { "value": "eggs", "name": "Eieren" }, { "value": "spam", "name": "Spam" } ] }
The items in the choice vocabulary have optional
icon, referring a
resource inside the resources JSON
object. These icons can be shown to the
end-user when presenting the field. An icon must be a 600x600 PNG file.
{ "name": "breakfast", "title": "Ontbijt", "type": "choice", "choices": [ { "value": "bacon", "name": "Bacon", "icon": "resources/pork.png" }, { "value": "eggs", "name": "Eieren", "icon": "resources/egg.png" }, { "value": "spam", "name": "Spam", "icon": "resources/spam.png" } ] }
Widgets¶
Some fields can (optionally) be given a
widget key. In the minddistrict
platform this changes the way the field is displayed and entered. It’s not
absolutely required that API integrations understand the widget settings, but
it is strongly recommended: these settings typically describe how to interpret
a number which is otherwise not meaningful to the end user.
Currently the following widget types are supported:
slider(for
integerand
floatfields): Displays a slider tool ranging between the
minand
maxsettings.
hours(for
floatfields): The floating point number is interpreted as a time duration measured in hours, and displayed in hours and minutes. E.g. the float value
3.5displays 3 hours and 30 minutes, the float value
3.75displays 3 hours and 45 minutes, etc.
Computations and computed fields¶
A field marked
readonly is a computed field: instead of having its value
specified when creating an entry, it is computed by the server based on the
values of the other fields in the entry.
Computed fields let a diary schema specify field values that depend on the values of other fields. For instance, one could use a computed field to calculate a general “happiness index” from a number of fields rating different aspects of the user’s mood, or a “sleep efficiency” calculated from fields recording the number of hours spent in bed and the number of wakings during the night.
The calculation is performed with Javascript code. (The code is not visible in the diary schema, but is part of the content written in the Minddistrict platform content management system.) The calculation is performed by the server whenever an entry is added or edited.
A field cannot be both
readonly and
required.
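For illustration only, here is a Python sketch of the kind of calculation such a computed field could perform. On the Minddistrict platform the computation is written in Javascript and runs on the server; the field names and the sleep-efficiency formula below are assumptions, not part of any actual schema.

def sleep_efficiency(entry):
    """Hypothetical computed field: hours asleep as a share of hours in bed."""
    hours_in_bed = entry["hours_in_bed"]    # assumed float field
    hours_asleep = entry["hours_asleep"]    # assumed float field
    if hours_in_bed <= 0:
        return None
    # Express efficiency as a percentage, capped at 100.
    return min(100.0, round(100.0 * hours_asleep / hours_in_bed, 1))

entry = {"hours_in_bed": 8.0, "hours_asleep": 6.0}
print(sleep_efficiency(entry))  # 75.0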
Timeline¶
The optional
timeline attribute defines an ordered list of field
names that should appears on the timeline of the diary.
Resources¶
An external resource is describe with a JSON object with the following attributes:
For example:
{ "name": "happy", "url": "" }
Examples¶
The moment entry¶
For example, the moment entry from the version 2 for the API is described like this:
{ "@id": "", "fields": [ { "name": "text", "title": "Tekst", "type": "text" }, { "name": "picture", "title": "Afbeelding", "type": "image" } ], "resources": [], "title": "Moment" }
With this schema, the following fields will be usable on the entry as a json object:
The
@type meta information for this object is:.
The emotion entry¶
For example, the emotion entry from the version 2 for the API is described like this:
{ "@id": "", "fields": [ { "choices": [ { "icon": "resources/happy.png", "name": "Blij", "value": "happy" }, { "icon": "resources/angry.png", "name": "Boos", "value": "angry" }, { "icon": "resources/sad.png", "name": "Verdrietig", "value": "sad" }, { "icon": "resources/scared.png", "name": "Angstig", "value": "scared" } ], "name": "emotion", "required": true, "title": "Hoe voel jij je?", "type": "choice" }, { "default": 1, "max": 5, "min": 1, "name": "intensity", "required": true, "title": "Hoe sterk is deze emotie?", "type": "integer", "widget": "slider" }, { "description": "In welke situatie bevind jij je?", "name": "situation", "title": "Situatie", "type": "text" }, { "name": "picture", "title": "Afbeelding", "type": "image" } ], "resources": [ { "name": "resources/angry.png", "url": "" }, { "name": "resources/happy.png", "url": "" }, { "name": "resources/sad.png", "url": "" }, { "name": "resources/scared.png", "url": "" } ], "title": "Emotie" }
With this schema, the following fields will be usable on the entry as a json object:
The
@type meta information for this object is:.
Charts¶
The optional
charts attribute defines an ordered list of chart definitions.
Each one is a JSON object defining a single chart. A chart can only be defined
for fields of type
integer,
float, and
choice. Here is an example:
{ "charts": [ { "type": "line", "title": "Mood indicators", "series": [ { "field": "self_reported_mood" }, { "field": "sleep_quality" }, { "field": "appetite" } ] }, { "type": "bar", "title": "Sleep hours", "buckets": "day", "series": [ { "field": "sleep_hours" } ] } ] }
A chart definition JSON object has the following attributes:
Chart types¶
A line chart joins the points in the data series with straight lines. This chart type is a good choice if the value being plotted is a measurement of some quantity that varies smoothly over time (e.g., weight, blood pressure): the line suggests that the values moved smoothly from one data point to the other, and makes long-term trends more visible.
If the field is counting how often something happened (e.g., how much alcohol
the client drank or cigarettes they smoked), a bar chart is typically a
better visualisation. This makes explicit the fact that these measurements
are made over a period of time (the
buckets attribute), and that counting
over a longer period will typically give a larger result. (Rule of thumb: a
bar chart implies that adding together the values from two entries “makes
sense”.)
A scatter chart plots individual data points just as a line chart does, but does not draw a line between them. This is useful for numeric data when measuring a quantity at a moment in time (rather than counting events over some period) but when the quantity cannot be expected to vary smoothly over time: examples include the intensity of an emotional response to a situation, or quality of sleep.
Series¶
Each JSON object in the
series list defines a single data series in the chart.
A series object must specify a
field to take the data from (the field type
must be
integer,
float, or
choice), and may optionally specify a
bucket-function.
There are some restrictions on series combinations:
- A bar chart can only show a single series (a line or scatter chart may show more).
- If a scatter chart shows more than one series, they must be on numeric fields; it can also show a choice field, but not more than one.
- If a line or scatter chart includes more than one series, all series fields must be numeric and they must either all have the same min/max values configured, or have no min/max values configured.
- To use a numeric field (integer or float) in a bar chart, the field must have a min of at least 0. This prevents the bar chart from going below the x axis, which is currently not supported.
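To illustrate the last restriction, here is a sketch of an integer field declared with a min of 0 together with a bar chart that uses it. The field name and titles are hypothetical, and other field attributes are omitted for brevity:

    {
      "fields": [
        {
          "name": "units_of_alcohol",
          "title": "Units of alcohol",
          "type": "integer",
          "min": 0,
          "max": 20
        }
      ],
      "charts": [
        {
          "type": "bar",
          "title": "Alcohol per day",
          "buckets": "day",
          "series": [
            { "field": "units_of_alcohol" }
          ]
        }
      ]
    }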
Buckets and bucketing¶
The buckets attribute on a chart lets you collect multiple entries into a single data value in the chart; for example "day" (which collects all values on the same day into a single data point) or "night-morning-afternoon-evening" (which shows four data points, collecting entries from multiple days for each one).
The default buckets for a line chart or scatter chart on numeric fields is to not do bucketing (i.e., each entry is a separate data point). For a bar chart, the default is the same as specifying "day". A scatter chart on a choice field does not support bucketing at all.
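For example, a bar chart that collects a hypothetical count into night/morning/afternoon/evening buckets could be configured like this (a sketch; the field name is invented):

    {
      "charts": [
        {
          "type": "bar",
          "title": "Panic attacks",
          "buckets": "night-morning-afternoon-evening",
          "series": [
            { "field": "panic_attacks" }
          ]
        }
      ]
    }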
Not all combinations of chart types, field types, buckets, and bucket functions make sense; of those that make sense, not all are supported at present. The following combinations are currently supported:
Rendering tweaks based on chart config details¶
Getting charts that look good regardless of the data they contain is harder than you might expect. Here we collect the adjustments we make to details of chart rendering on the Minddistrict platform, based on the details of the chart config. If you’re implementing your own charting engine using configuration and data from our API, you may want to make the same adjustments so that your charts render consistently with ours.
- In a line chart, if a series defines a maximum and minimum value, make sure that the Y axis range includes these values.
- In a line chart, if the bucket function is sum and no series defines a maximum or minimum value, make sure that the Y axis range includes 0. (It is likely to be the minimum Y value, but need not be if the data includes negative numbers.)
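For example, the second rule applies to a configuration like the following sketch (the field name is invented), where the single series uses the sum bucket function and defines no min or max. A renderer following the rule would extend the Y axis so that 0 is visible, even if every bucketed value happens to be well above it:

    {
      "charts": [
        {
          "type": "line",
          "title": "Minutes of exercise",
          "buckets": "day",
          "series": [
            { "field": "exercise_minutes", "bucket-function": "sum" }
          ]
        }
      ]
    }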
Before undertaking any maintenance of the TSM middleware server that might result in it becoming unavailable to the Archive Node, take the Target component offline to limit the number of alarms that are triggered if the TSM middleware server becomes unavailable.
You must be signed in to the Grid Manager using a supported browser. | https://docs.netapp.com/sgws-112/topic/com.netapp.doc.sg-maint/GUID-05AD480F-0F66-4A82-8F22-0E7D166BA614.html?lang=en | 2021-04-10T14:50:27 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.netapp.com |
Set host values based on event data
You can configure the Splunk platform to assign host names to your events based on the data in those events. You can use event data to override the default assignment that the Splunk platform makes by supplying a regular expression for the event data and configuring two configuration files to determine when the platform is to override the host name for an event.
On Splunk Cloud, you must configure a heavy forwarder to perform host name assignment, as a stanza within the file which specifies the default fields where the Splunk platform can potentially modify the host name field for incoming events.
You can apply host name overrides to the following default fields:
- The source, using the
source::<source>keyword
- The source type, using the
sourcetype=<sourcetype>keyword
- The host name, using the
host::<host>keyword
Host name overrides occur when you specify one of these default fields in the props.conf file. The following events must occur before the Splunk platform can override the host name:
- The host, source, or sourcetype in the incoming event data must match what you specify in the props.conf file to activate the host name override transform configuration in the transforms.conf file.
- The event data must match the regular expression you set for the host name override transform to trigger. for editing.
- Add a stanza to this file that represents the default fields for which the host name override is to apply.
- Save the props.conf file and close it.
- Restart the heavy forwarder.
On Splunk Enterprise, you can perform this procedure on either the instance that ingests the data or on a heavy forwarder that sends data to the instance.
For more information about configuration files in general, see About configuration files in the Splunk Enterprise Admin Manual.
Configure a transforms.conf stanza with a host name override transform
The transforms.conf file controls where and how the Splunk platform transforms the incoming event data.
The host name override transformation stanza in transforms.conf uses the following syntax:
[<unique_stanza_name>] REGEX = <your_regex> FORMAT = host::$1 DEST_KEY = MetaData:Host
There are a few things to note in this stanza:
- Use the
<unique_stanza_name>part of the syntax to refer to the transform from the props.conf configuration file. A best practice is for it to reflect that it involves a host value.
<your_regex>is the regular expression that identifies where in the event you want to extract the host value and assign that value as the default field for that event.
FORMAT = host::$1writes the
REGEXvalue into the
host::field.
Configure a props.conf stanza to reference the host name override transform
The props.conf file references the stanza in the transforms.conf file that performs the transformation:
[<spec>] TRANSFORMS-<class> = <unique_stanza_name>
There are a few things to note in this stanza:
<spec>can be any of these values:
of host name default field overriding
Given the following set of events from the houseness.log! | https://docs.splunk.com/Documentation/Splunk/8.1.2/Data/Overridedefaulthostassignments | 2021-04-10T14:13:36 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
There are two types of runners in Bottles:
Wine
Proton
The Wine runner is our favorite runner, it is used for all Environments and is therefore in all bottles created, but also for wineprefixes imported into Bottles. We offer 2 different builds:
chardonnay (our runner, available by default in Bottles v3)
lutris
Includes patches from wine-staging and lutris, which increases support for compatible Windows applications.
The Proton runner (developed by Valve and improved/offered by GloriousEggroll in the GE custom version) is a much more complex version of Wine and is suitable for the most modern games.
It contains several patches for specific gaming titles support, implements OpenVR support and integrates dxvk (installable on wine from the bottle preferences page).
The Proton runner can be installed from the Bottles Preferences page and choosen on bottle creation selecting the Custom Environment. You can also switch from Wine to Proton at any time by changing your bottle preferences.
We personally recommend using the Proton runner only in special cases where there is a patch for a specific video game. However, Valve collaborates in the development of Wine and many of the features integrated into Proton are also available in the latest versions of Wine.
You can install new runners in one click from Bottles Preferences.
If you're feeling fearless, you can enable Release Candidates to download and then test premature versions of Wine, which may include greater software compatibility at the cost of bugs and possible regressions. | https://docs.usebottles.com/components/runners | 2021-04-10T14:56:29 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.usebottles.com |
I haven't programmed in .Net, last used VB 6 several years ago. I'm trying to make a Windows Form application in VB.Net, and it won't let me use the "Add New Data Source" option to connect my database to the project. When I click "Add New Data Source" nothing at all is happening, that I can see. It's not launching a wizard. Data Sources tab still says "There are no data sources to show for the selected project." I've tried several tutorials online and none of them seem to be having this issue. | https://docs.microsoft.com/en-us/answers/questions/336210/net-problemquestion.html | 2021-04-10T16:17:38 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.microsoft.com |
Crate tp_state_machine
Version 0.8.2
See all tp_state_machine's items
Tetcore state machine implementation.
pub use crate::backend::Backend;
State machine backends. These manage the code and storage of contracts.
Logs a message at the debug level.
Logs a message at the error level.
Logs a message at the trace level.
Logs a message at the warn level.
Simple Map-based Externalities impl.
Block identifier that could be used to determine fork of this block.
Changes trie build cache.
Blocks range where configuration has been constant.
Changes tries state at some block.
Wraps a read-only backend, call executor, and current overlayed changes.
In-memory implementation of changes trie storage.
In-memory storage for offchain workers recoding changes for the actual offchain storage implementation.
The set of changes that are overlaid onto the backend.
Patricia trie-based backend which also tracks all touched storage trie values.
These can be sent to remote node and used as a proof of execution.
Patricia trie-based backend specialized in get value proofs.
Simple read-only externalities for any backend.
The tetcore state machine.
Accumulated usage statistics specific to state machine
crate.
A storage changes structure that can be generated by the data collected in OverlayedChanges.
OverlayedChanges
A proof that some set of key-value pairs are included in the storage trie. The proof contains
the storage values so that the partial storage backend can be reconstructed by a verifier that
does not already have access to the key-value pairs.
The storage transaction are calculated as part of the storage_root and
changes_trie_storage_root. These transactions can be reused for importing the block into the
storage. So, we cache them to not require a recomputation of those transactions.
storage_root
changes_trie_storage_root
Simple HashMap-based Externalities impl.
Patricia trie-based backend. Transaction type is an overlay of changes to commit.
Usage statistics for state backend.
Measured count of operations and total bytes.
Storage backend trust level.
The action to perform when block-with-changes-trie is imported.
Externalities Error.
Like ExecutionStrategy only it also stores a handler in case of consensus failure.
ExecutionStrategy
Strategy for executing a call into the runtime.
Requirements for block number that can be used with changes tries.
Changes trie storage. Provides access to trie roots and trie nodes.
State Machine Error bound.
Trait for inspecting state in any backend.
Patricia trie-based storage trait.
Key-value pairs storage that is used by trie backend essence.
A key-value datastore implemented as a database-backed modified Merkle tree.
Create proof check backend.
Create state where changes tries are disabled.
Check execution proof, generated by prove_execution call.
prove_execution
Check execution proof on proving backend, generated by prove_execution call.
Return changes of given key at given blocks range.
max is the number of best known block.
Changes are returned in descending order (i.e. last block comes first).
max
Returns proof of changes of given key at given blocks range.
max is the number of best known block.
Check key changes proof and return changes of the key at given blocks range.
max is the number of best known block.
Changes are returned in descending order (i.e. last block comes first).
Similar to the key_changes_proof_check function, but works with prepared proof storage.
key_changes_proof_check storage read proof.
Prune obsolete changes tries. Pruning happens at the same block, where highest
level digest is created. Pruning guarantees to save changes tries for last
min_blocks_to_keep blocks. We only prune changes tries at max_digest_interval
ranges.
min_blocks_to_keep
max_digest_interval
Check child storage read proof, generated by prove_child_read call.
prove_child_read
Check child storage read proof on pre-created proving backend.
Check storage read proof, generated by prove_read call.
prove_read
Check storage read proof on pre-created proving backend.
Type of changes trie transaction.
In memory arrays of storage values for multiple child tries.
Database value).
hash_db
Hasher
KeyFunction
In memory array of storage values.
Global proof recorder, act as a layer over a hash db for recording queried
data.
Storage key.
Storage value.
Persistent trie database write-access interface for the a given hasher. | https://docs.rs/tp-state-machine/0.8.2/tp_state_machine/ | 2021-04-10T14:31:16 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.rs |
Setting up Skribble Business¶
Note
You’ll need a Skribble account to set up Skribble Business for your company. You can create one at skribble.com if you don’t have one yet.
Skribble offers a free trial month to new customers. You won’t be charged during the free trial, and you can cancel at any time.
To set up Skribble Business and start your free trial:
Log in to your Skribble account at my.skribble.com
Go to Billing
On the next page, you’ll see the details of the price plans available at Skribble.
Navigate to Skribble Business and click Test for free
Enter your full and exact company name and read our General Terms & Conditions
If you accept the terms, click the box next to “I agree to the General Terms and Conditions of Skribble on behalf of my company”, and then click Next
Enter your company’s billing address and click Next
Choose your preferred billing period and click Next
Enter your credit card details and click Start your free 30-day trial
Note
Your credit card won’t be charged until the trial period expires.
Note
Are you an Enterprise customer with an access code?
Click Enterprise customer? below the credit card input field
In the next step, you’ll be able to enter the access code if you have one.
Enter your access code and click Start your trial
Congratulations, you can now test Skribble Business for free. Click Add new members to add your team members and enable them to sign electronically.
| https://docs.skribble.com/business-admin/quickstart/upgrade | 2021-04-10T14:45:08 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../_images/step1_setup_biz.png', 'pointer to menu button'],
dtype=object)
array(['../_images/step2_setup_biz_trial2.png',
'../_images/step2_setup_biz_trial2.png'], dtype=object)
array(['../_images/step_1_comp_name.png',
'../_images/step_1_comp_name.png'], dtype=object)
array(['../_images/step_2_billing_address.png',
'../_images/step_2_billing_address.png'], dtype=object)
array(['../_images/step_3_start_trial2.png',
'../_images/step_3_start_trial2.png'], dtype=object)
array(['../_images/step_4_card_details.png',
'../_images/step_4_card_details.png'], dtype=object)
array(['../_images/set_up_biz_enterprise.png',
'../_images/set_up_biz_enterprise.png'], dtype=object)
array(['../_images/set_biz_enterprise_code.png',
'../_images/set_biz_enterprise_code.png'], dtype=object)
array(['../_images/step_5_confirmation.png',
'../_images/step_5_confirmation.png'], dtype=object)] | docs.skribble.com |
In this README#
About#
This repository contains the components - such as
DatasetReader,
Model, and
Predictor classes - for applying AllenNLP to a wide variety of NLP tasks.
It also provides an easy way to download and use pre-trained models that were trained with these components.
Tasks and components#
This is an overview of the tasks supported by the AllenNLP Models library along with the corresponding components provided, organized by category. For a more comprehensive overview, see the AllenNLP Models documentation or the Paperswithcode page.
Classification tasks involve predicting one or more labels from a predefined set to assign to each input. Examples include Sentiment Analysis, where the labels might be
{"positive", "negative", "neutral"}, and Binary Question Answering, where the labels are
{True, False}.
🛠 Components provided: Dataset readers for various datasets, including BoolQ and SST, as well as a Biattentive Classification Network model.
Coreference resolution tasks require finding all of the expressions in a text that refer to common entities.
See nlp.stanford.edu/projects/coref for more details.
🛠 Components provided: A general Coref model and several dataset readers.
This is a broad category for tasks such as Summarization that involve generating unstructered and often variable-length text.
🛠 Components provided: Several Seq2Seq models such a Bart, CopyNet, and a general Composed Seq2Seq, along with corresponding dataset readers.
Language modeling tasks involve learning a probability distribution over sequences of tokens.
🛠 Components provided: Several language model implementations, such as a Masked LM and a Next Token LM.
Multiple choice tasks require selecting a correct choice among alternatives, where the set of choices may be different for each input. This differs from classification where the set of choices is predefined and fixed across all inputs.
🛠 Components provided: A transformer-based multiple choice model and a handful of dataset readers for specific datasets.
Pair classification is another broad category that contains tasks such as Textual Entailment, which is to determine whether, for a pair of sentences, the facts in the first sentence imply the facts in the second.
🛠 Components provided: Dataset readers for several datasets, including SNLI and Quora Paraphrase.
Reading comprehension tasks involve answering questions about a passage of text to show that the system understands the passage.
🛠 Components provided: Models such as BiDAF and a transformer-based QA model, as well as readers for datasets such as DROP, QuAC, and SQuAD.
Structured prediction includes tasks such as Semantic Role Labeling (SRL), which is for determining the latent predicate argument structure of a sentence and providing representations that can answer basic questions about sentence meaning, including who did what to whom, etc.
🛠 Components provided: Dataset readers for Penn Tree Bank, OntoNotes, etc., and several models including one for SRL and a very general graph parser.
Sequence tagging tasks include Named Entity Recognition (NER) and Fine-grained NER.
🛠 Components provided: A Conditional Random Field model and dataset readers for datasets such as CoNLL-2000, CoNLL-2003, CCGbank, and OntoNotes.
This is a catch-all category for any text + vision multi-modal tasks such Visual Question Answering (VQA), the task of generating a answer in response to a natural language question about the contents of an image.
🛠 Components provided: Several models such as a ViLBERT model for VQA and one for Visual Entailment, along with corresponding dataset readers.
Pre-trained models#
Every pretrained model in AllenNLP Models has a corresponding
ModelCard in the
allennlp_models/modelcards/ folder.
Many of these models are also hosted on the AllenNLP Demo and the AllenNLP Project Gallery.
To programmatically list the available models, you can run the following from a Python session:
>>> from allennlp_models import pretrained >>> print(pretrained.get_pretrained_models())
The output is a dictionary that maps the model IDs to their
ModelCard:
{'structured-prediction-srl-bert': <allennlp.common.model_card.ModelCard object at 0x14a705a30>, ...}
You can load a
Predictor for any of these models with the
pretrained.load_predictor() helper.
For example:
>>> pretrained.load_predictor("mc-roberta-swag")
Here is a list of pre-trained models currently available.
coref-spanbert- Higher-order coref with coarse-to-fine inference (with SpanBERT embeddings).
evaluate_rc-lerc- A BERT model that scores candidate answers from 0 to 1.
generation-bart- BART with a language model head for generation.
glove-sst- LSTM binary classifier with GloVe embeddings.
lm-masked-language-model- BERT-based masked language model
lm-next-token-lm-gpt2- OpenAI's GPT-2 language model that generates the next token.
mc-roberta-commonsenseqa- RoBERTa-based multiple choice model for CommonSenseQA.
mc-roberta-piqa- RoBERTa-based multiple choice model for PIQA.
mc-roberta-swag- RoBERTa-based multiple choice model for SWAG.
pair-classification-decomposable-attention-elmo- The decomposable attention model (Parikh et al, 2017) combined with ELMo embeddings trained on SNLI.
pair-classification-esim- Enhanced LSTM trained on SNLI.
pair-classification-roberta-mnli- RoBERTa finetuned on MNLI.
pair-classification-roberta-snli- RoBERTa finetuned on SNLI.
rc-bidaf-elmo- BiDAF model with ELMo embeddings instead of GloVe.
rc-bidaf- BiDAF model with GloVe embeddings.
rc-naqanet- An augmented version of QANet that adds rudimentary numerical reasoning ability, trained on DROP (Dua et al., 2019), as published in the original DROP paper.
rc-nmn- A neural module network trained on DROP.
rc-transformer-qa- A reading comprehension model patterned after the proposed model in Devlin et al, with improvements borrowed from the SQuAD model in the transformers project
roberta-sst- RoBERTa-based binary classifier for Stanford Sentiment Treebank
semparse-nlvr- The model is a semantic parser trained on Cornell NLVR.
semparse-text-to-sql- This model is an implementation of an encoder-decoder architecture with LSTMs and constrained type decoding trained on the ATIS dataset.
semparse-wikitables- The model is a semantic parser trained on WikiTableQuestions.
structured-prediction-biaffine-parser- A neural model for dependency parsing using biaffine classifiers on top of a bidirectional LSTM.
structured-prediction-constituency-parser- Constituency parser with character-based ELMo embeddings
structured-prediction-srl-bert- A BERT based model (Shi et al, 2019) with some modifications (no additional parameters apart from a linear classification layer)
structured-prediction-srl- A reimplementation of a deep BiLSTM sequence prediction model (Stanovsky et al., 2018)
tagging-elmo-crf-tagger- NER tagger using a Gated Recurrent Unit (GRU) character encoder as well as a GRU phrase encoder, with GloVe embeddings.
tagging-fine-grained-crf-tagger- This model identifies a broad range of 16 semantic types in the input text. It is a reimplementation of Lample (2016) and uses a biLSTM with a CRF layer, character embeddings and ELMo embeddings.
tagging-fine-grained-transformer-crf-tagger- Fine-grained NER model
ve-vilbert- ViLBERT-based model for Visual Entailment.
vqa-vilbert- ViLBERT (short for Vision-and-Language BERT), is a model for learning task-agnostic joint representations of image content and natural language.
Installing#
From PyPI#
allennlp-models is available on PyPI. To install with
pip, just run
pip install allennlp-models
Note that the
allennlp-models package is tied to the
allennlp core package. Therefore when you install the models package you will get the corresponding version of
allennlp (if you haven't already installed
allennlp). For example,
pip install allennlp-models==2.2.0 pip freeze | grep allennlp # > allennlp==2.2.0 # > allennlp-models==2.2.0
From source#
If you intend to install the models package from source, then you probably also want to install
allennlp from source.
Once you have
allennlp installed, run the following within the same Python environment:
git clone cd allennlp-models ALLENNLP_VERSION_OVERRIDE='allennlp' pip install -e . pip install -r dev-requirements.txt
The
ALLENNLP_VERSION_OVERRIDE environment variable ensures that the
allennlp dependency is unpinned so that your local install of
allennlp will be sufficient. If, however, you haven't installed
allennlp yet and don't want to manage a local install, just omit this environment variable and
allennlp will be installed from the main branch on GitHub.
Both
allennlp and
allennlp-models are developed and tested side-by-side, so they should be kept up-to-date with each other. If you look at the GitHub Actions workflow for
allennlp-models, it's always tested against the main branch of
allennlp. Similarly,
allennlp is always tested against the main branch of
allennlp-models. you can either use a prebuilt image from a release or build an image locally with any version of
allennlp and
allennlp-models.
If you have GPUs available, you also need to install the nvidia-docker runtime.
To build an image locally from a specific release, run
docker build \ --build-arg RELEASE=1.2.2 \ --build-arg CUDA=10.2 \ -t allennlp/models - < Dockerfile.release
Just replace the
RELEASE and
CUDA build args with what you need. You can check the available tags
on Docker Hub to see which CUDA versions are available for a given
RELEASE.
Alternatively, you can build against specific commits of
allennlp and
allennlp-models with
docker build \ --build-arg ALLENNLP_COMMIT=d823a2591e94912a6315e429d0fe0ee2efb4b3ee \ --build-arg ALLENNLP_MODELS_COMMIT=01bc777e0d89387f03037d398cd967390716daf1 \ --build-arg CUDA=10.2 \ -t allennlp/models - < Dockerfile.commit
Just change the
ALLENNLP_COMMIT /
ALLENNLP_MODELS_COMMIT and
CUDA build args to the desired commit SHAs and CUDA versions, respectively.
Once you've built your image, you can run it like this:
mkdir -p $HOME/.allennlp/ docker run --rm --gpus all -v $HOME/.allennlp:/root/.allennlp allennlp/models
Note: the
--gpus allis only valid if you've installed the nvidia-docker runtime. | https://docs.allennlp.org/models/main/ | 2021-04-10T14:04:40 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.allennlp.org |
You must configure each subzones with the Update Policy deployment option.
For more information about the Update Policy DNS deployment option, refer to Configuring zones to accept GSS-TSIG updates.
To configure subzones:
- Navigate to an AD subzone under default View> Top level zone (com)> Lower level zone (example.com)> Subzones (_msdcs.example.com).
- Click the Deployment Options tab.
- Under Deployment Options, click New and select DNS Option.
- Under General, set the following options and click Add.
- Option—Update Policy
- Privilege—grant
- Identity—select Name and type a client name in the text field.
- Nametype—subdomain
- Name—enter the name of the current subzone (for example, _msdcs.example.com)
- RR Types—ANY
- Under Server, select the server to which the option will apply.
- Under Change Control, add comments, if required.
- Click Add.
- Repeat this process for each of the AD subzones.
This completes all the necessary steps in Address Manager.
You need to deploy the configuration to a managed DNS Server. Perform a full DNS deployment to the DNS Server. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Configuring-zones-to-accept-GSS-TSIG-updates/8.2.0 | 2021-04-10T14:51:06 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bluecatnetworks.com |
displays the integrations and integration apps as tiles. To access data flows, go to the integration app settings as shown. Hover over the tile to see the options.
Turn on/Turn off data flows
Go to the data flow group, select the individual data flow and turn on / turn off. Grey color switch indicates the flow is turned off. The green color switch indicates the flow is turned on.
The following two sections, Settings, and Dashboard are the two main interfaces for running the flows, viewing flow statuses, and resolving errors, if any.
Settings
Each data flow group contains one or more data flows. Some flows are real-time, as whenever a new record is created or updated in the source system (Amazon), the same record is automatically updated in the target system (NetSuite). The flows can also be rescheduled to your required frequency. A switch is provided to turn on or turn off the data flow. Field mappings between the systems can be modified or new field mappings can be added. Settings enable you to tweak the data flow (filters are provided) before you decide to run the flow.
Dashboard
When you run a data flow from the 'settings' section, the dashboard page is displayed and provides the status of the data flow. The integration progresses through the following stages: in the queue, in progress, and completed.
You can filter integration jobs based on options such as date ranges, flows with errors, or based on other statuses, etc. You can hide empty jobs. The number of records exported is indicated. Failed flows or flows with errors are categorized by color code. Errors can be resolved from within the dashboard.
- Filters to view and organize your data flows
- Data flow description and associated components (import and export status)
- Take action on your data flows ( re-run the flows without going back to the Settings section)
- Settings section
- View errors (hover over the error) and resolve.
You can set up the Integration App to send and receive notifications, whenever the integration encounters any errors or when a connection goes offline. Refer topic email notifications in Integrator.io.
For detailed information on the platform capabilities, go to Integrator.io section.
Related topics
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/115004463707-How-to-use-the-Amazon-NetSuite-Integration-App-IO- | 2021-04-10T14:36:23 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/hc/article_attachments/115008644468/2017-03-20_12-14-51.jpg',
'2017-03-20_12-14-51.jpg'], dtype=object)
array(['/hc/article_attachments/115008644448/2017-03-20_12-14-30.jpg',
'2017-03-20_12-14-30.jpg'], dtype=object)
array(['/hc/article_attachments/360086099872/mceclip0.png',
'mceclip0.png'], dtype=object)
array(['/hc/article_attachments/360086184951/mceclip2.png',
'mceclip2.png'], dtype=object)
array(['/hc/article_attachments/360086099892/mceclip3.png',
'mceclip3.png'], dtype=object)
array(['/hc/article_attachments/360086184971/mceclip4.png',
'mceclip4.png'], dtype=object)
array(['/hc/article_attachments/360086184991/mceclip5.png',
'mceclip5.png'], dtype=object)
array(['/hc/article_attachments/360086185011/mceclip6.png',
'mceclip6.png'], dtype=object) ] | docs.celigo.com |
Corda for Project Planners
A deployment of Corda Enterprise requires a variety of machines and resources depending on the role of each member of the Corda project, and the architecture of the deployment. It’s important to define the role your organisation will play before beginning a project on Corda.
Roles in a Corda business network
Whether you are planning to build CorDapps for other organisations to use, or you are planning to take responsibility for a network as a Business Network Operator, you should get a good idea of the work and responsibilities of other enterprise users so you can see where you fit in. Keep in mind that some organisations may perform multiple roles on their network.
CorDapp Developer
In a Corda network, the same CorDapp must be deployed to all nodes that wish to transact with one another. CorDapps may be developed by a member of the business network, by the Business Network Operator, or by an entirely external organisation.
When developing CorDapps, an organisation should bear in mind the platform support matrix and the guidance on developing CorDapps.
To test CorDapps, use the network bootstrapper tool to quickly create Corda networks to test that the CorDapp performs as expected.
Node operator
A member of a Corda business network has a variety of considerations:
Deployment architecture
The architecture of the specific Corda deployment will change the resources required for an ongoing deployment, but for a production deployment, a node should have an HA implementation of the Corda Firewall, and an HSM compatible with the security policy of the organisation.
Testing environments
A node operator should operate or have access to a testing network, a UAT network, and their production network.
UAT and production networks should include a node, HA firewall, and an HSM, although it may not be necessary for more informal testing environments. In some cases, a Business Network Operator will provide access to a UAT environment that node operators may connect to.
Business Network Operator
The Business Network Operator is responsible for the infrastructure of the business network, they maintain the network map and identity services that allow parties to communicate, and - in many deployments - also operate the notary service.
Deployment architecture.
Development and testing environments
A Business Network Operator should have a variety of environments:
- A development environment including minimum. | https://docs.corda.net/docs/corda-enterprise/4.7/operations/project-planner/corda-planning.html | 2021-04-10T15:41:19 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.corda.net |
Get a Wallet – Beginners guide
What is a wallet?
Decentraland uses the Ethereum blockchain to record the ownership of all digital assets and tradable items.
Digital wallets are tools that work as a bridge between the blockchain and the dApp (decentralized applications). This means that with a wallet you will be able to monitor your available funds, transaction history and security options.
Do I need a wallet to play in Decentraland?
If you want to fully enjoy the Decentraland experience, we highly recommend you get yourself a digital wallet. Why? Because it will work as your personal account, allowing you to connect from different devices, keeping all your digital assets (such as names, collectibles, LANDs) and progress safe.
If you choose to experience Decentraland.
How do I get a digital wallet?
To enter Decentraland, you must use a wallet that is integrated to your web browser, so we recommend you MetaMask
Once you install it, you will see an icon like this:.
You can localize your wallet address clicking on the extension icon in your browser, and then clicking on your wallet name with the public key to copy it to clipboard:
Or by clicking on your account details: (ETH), and how do I send it to my wallet?
For executing transactions, you’ll need to put money in your wallet. dApps based on Ethereum, like Decentraland, use Ether: a digital currency that powers the Ethereum network. It acts like any other currency, in that its value fluctuates with the market.
- You need to convert your currency (e.g. USD, CAD, GBP) into Ether to pay for things such as a collectibles.
How do I get Ether?
For US citizens only:
You can purchase ETH for the MetaMask Browser Extension with the Coinbase service.
- Click the
Buybutton.
- Select the
Coinbaseoption.
- Click the
Continue to Coinbasebutton to purchase Ethereum.
For the rest of the World:
You need to buy ETH from Coinbase or another exchange using normal fiat currency.
- Copy your MetaMask address by clicking on your name account and address.
Copy Address to clipboard.
- Go to Coinbase or another exchange.
- Click
Accountsin your top navigation.
- Select your ETH wallet and click
buy.
- Follow the steps to
Add payment methodand paste your MetaMask address with the amount you’d like to transfer.
What is MANA and how do I get it?
MANA is Decentraland’s fungible (reproducible or interchangeable) cryptocurrency token. It is burned, or spent in exchange for LAND parcels, wearables and names.
Steps to buy MANA:
- First, you need to register with an exchange that lists MANA (such as Coinbase, Huobi, Binance).
- Secondly, you will need to deposit funds into your account. While things change rapidly in the crypto world, it’s not likely that there’s an exchange available to convert your USD directly for MANA. If that’s the case, you’ll first need to obtain a cryptocurrency listed in a currency pair with MANA, such as Ether (ETH), and then exchange it for Decentraland’s native token.
- Third, once logged into your exchange account, click on the “Markets” or “Exchange” link and search for your desired currency pairing. For example, MANA/ETH. In the “Buy” field, you can then specify the amount of MANA you want to buy or the amount of ETH you want to spend. Make sure you take a moment to review the full details of the transaction including any fees that apply and the total cost of completing your purchase..
- ‘Gas’ is composed of two parts: Gas Price and Gas Limit. Gas Price is what you offer to pay the miners (in a tiny measurement of ether called ‘gwei’) for each operation to execute the smart contract. Gas Limit is how many operations you let them do before they run out of gas and drop the transaction.
- 1 gwei = 1/1,000,000,000th of an Ether.
To summarize, Gas Price (gwei) is the amount of Ether offered per gas unit to pay miners to process your transaction. The higher the gas price you set, the faster your transaction will get processed. So, for more important transactions – such as a collectible that you really like ;D – think about increasing the suggested gas price.
For extra technical information, visit this link | https://docs.decentraland.org/examples/get-a-wallet/ | 2021-04-10T15:17:43 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/images/media/get-a-wallet-account.png', None], dtype=object)
array(['/images/media/get-a-wallet-wallet.png', None], dtype=object)
array(['/images/media/get-a-wallet-metamask-logo.png', None], dtype=object)
array(['/images/media/get-a-wallet-metamask-extension.png', None],
dtype=object)
array(['/images/media/get-a-wallet-public-key-1.png', None], dtype=object)
array(['/images/media/get-a-wallet-public-key-2.png', None], dtype=object)
array(['/images/media/get-a-wallet-ether.png', None], dtype=object)
array(['/images/media/get-a-wallet-mana.png', None], dtype=object)
array(['/images/media/get-a-wallet-gas.png', None], dtype=object)] | docs.decentraland.org |
Adding and removing nodes from a cluster with master nodes
Add data-only nodes to a cluster with master into the Admin UI on the new data node.
- In the Appliance Settings section, click Running Config.
- Click Edit config.
- Add an entry to the Running Config file by completing the following steps:
- Add a comma after the second to last curly brace (}).
- Press ENTER to create a new line.
- Paste the following code on the new line before the final curly brace:
master nodes
Complete the following steps to remove a node from an Explore cluster.
- Log into the Admin UI on the node you want to remove.
- In the Explore Cluster Settings section, click Cluster Members.
- Review the node role at the top of the page.
- Data-only nodes can be removed as needed.
- A single master-only node can be temporarily removed, but if you want to remove multiple master-only nodes, contact ExtraHop Support for help.
- In the Actions column, click Leave Explore Cluster next to the node that you are currently logged into, and then click OK.
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/7.9/exa-master-add-remove/ | 2021-04-10T14:55:49 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.extrahop.com |
The XEM8350 requires a clean, filtered, DC supply within the range of 5 V to 16 V. This supply may be delivered through the DC power connector (rated to 5 A max current) or through the mezzanine connectors (rated to 16 A max current).
The XEM8350 power distribution system is quite complex, with several supplies designed to provide suitable, efficient power for several systems and modules. A schematic diagram of the system follows, with input (+VDC) shown to the left and accessible supply rails shown to the right.
Supply Heat Dissipation (IMPORTANT!!)
Due to the limited area available on the small form-factor of the XEM8350 and the density of logic provided, heat dissipation may be a concern. This depends entirely on the end application and cannot be predicted in advance by Opal Kelly. Heat sinks may be required on any of the devices on the XEM8350. Of primary focus should be the FPGA (U15) and SDRAM (U24, U25, U26, U27, U28). Although the switching supplies are high-efficiency, they are very compact and consume a small amount of PCB area for the current they can provide.
If you plan to put the XEM8350 in an enclosure, be sure to consider heat dissipation in your design.
Power Supply
The XEM8350 is designed to be operated from a single 5-16-volt power source supplied through the DC power jack on the device. This provides power for the several high-efficiency switching regulators on-board to provide multiple DC voltages for various components on the device as well as three adjustable supplies for the peripheral.
DC Power Connector
The DC power connector on the XEM8350 is part number PJ-102AH from CUI, Inc. It is a standard “canon-style” 2.1mm / 5.5mm jack. The outer ring is connected to DGND. The center pin is connected to +VDC.
The PJ-102AH jack is rated for 5 A maximum continuous current. Applications requiring higher current must use the mezzanine connectors for providing power to the system (rated for a maximum of 16 A).
Power Budget
The table below can help you determine your power budget for each supply rail on the XEM83TH transceivers. These are independent and can be computed separately for power budget based on their assigned function.
Example XEM8350-KU060 FPGA Power Consumption
XPower Estimator version 14.3 was used to compute the following power estimates for the Vccint supply. These are simply estimates; your design requirements may vary considerably. The numbers below indicate approximately 80% utilization.
Heat Sink
The device has been fitted with two heat sink anchors, proximate to the FPGA for mounting a passive or active heat sink. The following heat sink has been tested with the XEM8350.
The active heat sink above includes a small fan which connects to the fan controller on-board for manual or automatic fan speed control. The fan is powered directly by the input supply to the XEM8350. The fan is specified for a nominal operating voltage of-40X40 is available for purchase directly from Opal Kelly. | https://docs.opalkelly.com/display/XEM8350/Powering+the+XEM8350 | 2021-04-10T14:31:55 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.opalkelly.com |
TouchManager Touch Modes
TouchManager allows you to control the touch mode of the UIElement in the visual tree. You can do this using the TouchMode attached property of the manager.
The IsTouchHitTestVisible and ShouldLockTouch properties of TouchManager are obsolete and replaced by the TouchMode property.
TouchMode is an enumeration that contains the following values:
- HitTestVisible (default value): The element is visible for touch input and events will route normally.
- HitTestHidden: The element is not visible for touch input. Touch events will be raised for the parents of the element, as if this element is not in the visual tree.
- Locked: The element is visible for touch input and it will capture the touch device on touch down. All touch events will be marked as handled, thus preventing event routing.
- None: The element will suppress all touch events. No touch events will be raised for touch input within the boundaries of the element.
Example 1: Setting TouchMode in XAML
<Border x:
Example 2: Setting TouchMode in code
TouchManager.SetTouchMode(this.element, TouchMode.HitTestVisible);
TouchManager.SetTouchMode(Me.element, TouchMode.HitTestVisible)
TouchMode examples
This section demonstrates the TouchModes with an example containing a few nested UIElements.
Figure 1: The logical tree of the example - parent Grid, a Border inside the grid and an Ellipse inside the border
Figure 2: TouchMode.HitTestVisible
Figure 3: TouchMode.HitTestHidden
Figure 4: TouchMode.Locked
Figure 5: TouchMode.None
| https://docs.telerik.com/devtools/silverlight/controls/touchmanager/touch-modes | 2021-04-10T15:14:55 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['images/touchmanager_touch_modes_01.png',
'TouchManager | Touch Modes Image 01'], dtype=object)
array(['images/touchmanager_touch_modes_02.png',
'TouchManager | Touch Modes Image 02'], dtype=object)
array(['images/touchmanager_touch_modes_03.png',
'TouchManager | Touch Modes Image 03'], dtype=object)
array(['images/touchmanager_touch_modes_04.png',
'TouchManager | Touch Modes Image 04'], dtype=object)
array(['images/touchmanager_touch_modes_05.png',
'TouchManager | Touch Modes Image 05'], dtype=object)] | docs.telerik.com |
will not be necessary.
If the webcam is currently being used by your local computer, it can be used by the remote desktop simultaneously. Also, if the webcam is being used by the remote desktop, it can be used by your local computer at the same time.
If you have more than one webcam connected to your local computer, you can configure a preferred webcam to use on your remote desktop. | https://docs.vmware.com/en/VMware-Horizon-Client-for-Mac/4.5/com.vmware.horizon.mac-client-45-doc/GUID-4E47975D-33F0-4CC1-83D1-D64C5E1D166E.html | 2021-04-10T15:36:38 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.vmware.com |
Scenario: Engage users who scroll to the bottom of a page
From Genesys Documentation
This topic is part of the manual Event tracking with tag managers for version Current of Genesys Predictive Engagement.
Contents
Learn how to create a custom event tag and a corresponding segment in Genesys Predictive Engagement. Then, use that segment with an action map and see the visitors in LiveNow that the action map engaged.
Scenario
You created a webpage announcing a new product. You want to create a segment of those visitors who view the entire page and start a chat with them.
Summary of steps
- Use JavaScript to create a ScrollToBottom event tag.
- Deploy your event tag with your preferred tag manager.
- Create a segment that uses the event tag.
- Create an action map to engage your visitors.
- Test your solution in LiveNow.
Create a ScrollToBottom event tag
In your preferred code editor, develop and validate a ScrolledToBottom event tag. For example:
<script> ac('dom', 'ready', function() { $(window).scroll(function(){ timeout = setTimeout( function() { if( $(window).scrollTop() + $(window).height() > $(document).height() - 100 ) { ac('record', 'scrollToBottom', 'User scrolled to bottom'); console.log('User scrolled to bottom'); } }) }) }) </script>
For more information about the available Journey JavaScript methods you can use to create event tags, see: ScrollToBottom.
Create the ScrolledToBottom action map
- In Action maps, click Create action map.
- Name the action map ScrolledToBottom.
- Under Select trigger, click Segment match.
- Click Select segments and select the ScrolledToBottom segment.
- Under Configure User Engagement, Webchat is selected by default.
- Click Configure and design the Webchat however you like.
Test your solution in LiveNow
- Open Live Now.
- Open your website.
- Start a visit and go to the page that you are tracking.
- Scroll to the bottom.
- Refresh Live Now and verify that a new visitor appears and that the visitor was added to the ScrolltoBottom segment. | https://all.docs.genesys.com/ATC/Current/Event/Scroll_to_bottom | 2021-04-10T15:20:24 | CC-MAIN-2021-17 | 1618038057142.4 | [] | all.docs.genesys.com |
Start System Services
Cacti requires the following system services (daemons) to be started.
- crond
starts cacti polling
- mysqld
stores all administrative data for cacti
- httpd
provides the cacti web interface
- snmpd (optional)
on the local server is required to poll its snmp based performance data
You should ensure, that on a system restart, those services are restarted as well. System start procedures are under heavy development these days. We have SysV init scripts, upstart and systemd. So things may change …
This example assumes that we're dealing with the httpd web service. Please apply the same procedure to all services above.
Here's how to start httpd as a service. Make sure, that httpd is listed as a service by issuing
chkconfig --list|grep httpd
Activate httpd by
chkconfig httpd on
to find sth like
chkconfig --list|grep httpd httpd 0:Off 1:Off 2:On 3:On 4:On 5:On 6:Off
Now start the service via
service httpd start
Verify via
service httpd status | https://docs.cacti.net/manual:088:1_installation.1_install_unix.5a_start_system_services | 2021-04-10T14:48:15 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.cacti.net |
While working in integrator.io with at least Manager access in Developer mode, you will see Settings within integrations, flows, connections, exports, and imports:
These settings serve two complementary functions: allowing data to persist among integration objects and defining form settings (also known as form elements, controls, fields, and inputs). Within a form, you can prompt a user for information, make changes based on those selections, and capture the results after the form is submitted.
Other users of this account will be able to see the settings defined in the forms, but they will not be able to edit the definitions for a form or settings unless they have at least Manager permissions.
Advanced integrator.io features also allow you to write JavaScript to customize the form content shown during integration installation or editing – and then make decisions based on the settings during flow runtime. For example, you could...
- Place code in hooks to retrieve data for populating form fields or basing logic on the custom settings returned
- Author installation forms for Integration Apps
- Handle values passed in load (formInit) or save (preSave) events
Contents
- Example A: Video tutorial of settings at integration level
- Example B: Build a form to request user info
Example A: Video tutorial of settings at integration level
In just a few steps, you can add settings to your integration:
You will then be able to access the selected values throughout the integration to make flow decisions at runtime:
Example B: Build a form to request user info
- Expand the Custom settings section.
- Click Launch form builder.
Form builder opens with placeholder JSON or any changes you had earlier saved. Let’s look at the basic elements of Form builder with a simple drop-down list added to a connection pane:
- JSON/Script: Toggle between the default JSON view (shown above) and the advanced Script view. When you click Script, the Form definition is expanded, as described in the Script example, below.
- Form definition: Build the JSON definition for each custom form field within fieldMap.
- Form preview: Your form definition is displayed as working form fields that will later appear in the Custom settings section of this form. Click Test form to see the…
- Form output: The results of the user’s interaction are shown as JSON custom settings.
- Preview/Auto preview: Click Auto preview to see the Form preview take shape as you type the Form definition. If you uncheck Auto preview, click the Preview button to make spot checks.
- Error (not shown): Any JSON parsing or JavaScript syntax errors are listed, along with their location in your code.
Before you commit changes to an integration shared with others, you have the chance to view your form in Dev playground. Here, too, you must have Developer mode enabled, and then you can try out various form field options, based on sample JSON definitions. To get started,
- From the Tools menu, select Dev playground.
- Under Editor examples, expand the Form builder options.
- Choose your sample data:
- Simple form: A custom form definition with three fields and built-in hide/show logic depending on selection
- Multi-column: A custom form definition that illustrates advanced layout
- Field containers: A custom form definition with fields grouped in available layouts
- Field dictionary: A lengthy custom form definition with the most common fields and options
The selected JSON opens up in a Form builder panel nearly identical to the one shown above, without the option to save. Personalize the JSON and any needed script, and make sure to copy your final version to a text file or the intended Custom [settings] section when you’re happy with the output shown.
Simple form expanded
Continuing with the simple sample above, let’s look at the JSON form definition:
{ "fieldMap": { "RolePrompt": { "id": "RolePrompt", "name": "RolePrompt", "type": "select", "options": [ { "items": [ "Admin", "User", "Monitor" ] } ], "label": "What is your role within this organization?", "helpText": "Permissions assigned to you by IT.", "required": true } }, "layout": { "fields": [ "RolePrompt" ] } }
First of all, the fields displayed must be children of fieldMap. In this case, the field defines a drop-down list, internally called RolePrompt. That’s apparent from the following descriptive fields:
- id (string, required): Typically the same as the object key, RolePrompt, in this case.
- name (string, required): In most cases, the same as the id, or RolePrompt, here. The name and id are never displayed in the form.
- type (string, required): A supported category of form fields. Enter "select" to create a drop-down list (known to HTML developers as a <select> tag).
When building a drop-down list, the choices are filled with an array of options.items, as demonstrated above. They are then sorted alphabetically in the resulting form field, and the empty default state is Please select.
Additional optional fields let you further refine the appearance of this drop-down list, such as...
- label (string): Provide an introductory prompt for this field – such as “What is your role within this organization?”
- required (Boolean): Not required by default. A required field – set to true – has these characteristics:
- The label appears in boldface text.
- It is followed by an asterisk (*).
- Its container – when you place the field inside of a collapsible section – is expanded by default.
- A selection is validated within the form, meaning that the submit buttons (such as Save) are disabled if nothing is selected; or, if it remains empty after interacting with the field, the user is alerted to provide input.
- helpText (string, HTML tags allowed): When you enter additional instructions for this field, the help (?) button shows the content when clicked.
Note: Though the layout object is optional and has no effect in this example, it’s included here for scripting access, below.
Script example
Opening this form again in Form builder, this time we’ll toggle the Script option at the upper right:
Notice the immediate changes:
- Form definition becomes Script input, where you can also edit the JSON
- fieldMap.RolePrompt is now displayed as a child of resource.settingsForm.form
- The script editor (lower left) lets you start coding or select an existing script and entry function
For demonstration purposes, we’ll make changes to the current field and also add a new field when the form is initialized, with the following function:
1 function formInit(options){ 2 let form = options.resource.settingsForm.form; 3 4 // If form doesn't exist, add an empty one 5 if (!form) form = {fieldMap:{}, layout:{fields:[]}}; 6 7 form.fieldMap.newBox = { 8 id: "newBox" 9 name: "newBox", 10 type: "checkbox", 11 label: "CTO notified" 12 }; 13 form.layout.fields.push('newBox'); 14 15 form.fieldMap.RolePrompt.label = "Select a security level"; 16 form.fieldMap.RolePrompt.required = false; 17 return form; 18 }
Viewing the preview, notice the following runtime changes as soon as the form is initialized:
- The drop-down list’s label changes (line 15)
- The drop-down list is no longer required (line 16)
- A new field is defined (lines 7 – 12) and added to the custom form (line 13)
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360058595552-Create-forms | 2021-04-10T15:22:17 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/hc/article_attachments/360090738592/cs-trans.png', None],
dtype=object)
array(['/hc/article_attachments/360090739452/custom-forms-7.png', None],
dtype=object)
array(['/hc/article_attachments/360090756732/custom-forms-8.png', None],
dtype=object)
array(['/hc/article_attachments/360090721971/custom-forms-9.png', None],
dtype=object)
array(['/hc/article_attachments/360090722031/custom-forms-10.png', None],
dtype=object)
array(['/hc/article_attachments/360090757152/custom-forms-11.png', None],
dtype=object) ] | docs.celigo.com |
Our PHP agent has a number of settings to fine-tune the types and amounts of data reported. For most users, the default values produce the best possible mix of overhead and utility. However, you can change the settings for your specific needs.
Important.
With New Relic's PHP agent, API settings override per-directory configuration settings. Per-directory settings override the
php.ini file settings. Server-side configuration is not applicable.:
Important.
Important
If the file
/etc/newrelic/newrelic.cfg exists, the agent ignores these settings, and the agent will not start the daemon automatically.
For more information about ways to start the daemon and when to use an external configuration file, see PHP daemon startup modes.
Sets the socket endpoint for agent to daemon communications.
This can be specified in four ways.
To use a specified file as a UNIX domain socket (UDS), provide an absolute path name as a string. This is the default on non-Linux systems.
To use a standard TCP port, specify a number in the range 1 to 65534.
To use an abstract socket, use the value
@newrelic-daemon(available for agent version 5.2.0.141 or higher). This is the default on Linux systems.
To connect to a daemon that is running on a different host (helpful for container environments), set this value to
host:port, where
hostdenotes either a host name or an IP, and
portdenotes a valid port number. Both IPv4 and IPv6 are supported. This is available for agent version 9.2.0.247 or higher.
Caution
Data transmitted from the agent to the daemon is not encrypted. The only exception to this is the SQL obfuscation that happens before sending data to the daemon. We recommend only using a private network connection between the agent and daemon (this only applies when the agent and daemon are running on different hosts)..
Sets the maximum time the agent should wait for the daemon to start after a daemon launch was triggered. A value of
0 causes the agent to not wait. Allowed units are
"ns",
"us",
"ms",
"s",
"m" and
"h".
The specified timeout value will be passed to the daemon via the
--wait-for-port flag. This causes daemon startup to block until a socket is acquired or until the timeout is elapsed.
Recommendation: If setting a timeout, the recommended value is
2s to
5s. It is recommended to only set this timeout when instrumenting long-lived background tasks, as in case of daemon start problems the agent will block for the given timeout at every transaction start.
Transaction tracer .ini settings
The values of these settings are used to control transaction traces.
Other tracer .ini settings
The values of these settings are used to control various tracer features.
Attribute settings
This section lists the settings that affect attribute collection and reporting.
Other .ini settings
This section lists the remaining newrelic.ini settings.. | https://docs.newrelic.com/docs/agents/php-agent/configuration/php-agent-configuration/ | 2021-04-10T14:22:14 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/static/66502cc6c05ca93aa4dc63e260a19739/8c557/php-config-cascade.png',
'php-config-order.png php-config-order.png'], dtype=object) ] | docs.newrelic.com |
Browser apps now show AJAX performance!
Mobile monitoring apps will now show handled exceptions!
Now supporting a new APM external services page
Now supporting a new APM database page for services that report database information
Now supporting multiple queries in Dashboard time series charts
Fixes issue where some charts may not be displayed in recently edited or created dashboards.
Adds in a feedback page on phone, that allows you to send us feedback on the mobile app
Fixed an issue where filters in dashboards would not be applied properly.
Added the ability to show and hide expected APM errors.Added the ability to go directly to APM errors from an incident with related errors.
Improved push notification support.
Now supporting sharing login information for the Insights TV app
Updated universal link supportFixed a visual issue with percentage queries in dashboards
Now supporting dashboards with pages!
Added in support to open universal links from New Relic One.
Added support to filter any dashboard!
Easier login flow to provide support for our newest customers
Introducing NRQL editing from within the iPad app
Improved Home screen look and feelAdded dashboards to the Home screenAdded the ability to filter to specific entity types on the Home screen (iPhone)Updated header colors in light mode
Introducing NRQL editing from within the iPhone app!
Introducing Dashboards on iPad. Now view all your dashboards right from the New Relic mobile app.
Improved time picker with custom start and end time selectionAdded time picker to Dashboards
View New Relic dashboardsAdded detailed pages that include new chart tooltip and legend behaviorLandscape support for widgets
3.53.1 is a hot fix for 3.53.0 which has an improved Mobile APM Http Errors view, to allow you to facet & filter down to the errors you care about.
We now have an improved Mobile APM Http Errors view, to allow you to facet & filter down to the errors you care about.
Support for iOS 13 dark mode!!!
The New Relic app is even better with an improved home screen, displaying all your favorite services, hosts, mobile and browser apps, key transactions, and more. This enhanced landing page also allows…
Improved and redesigned Browser product within the app!
Synchronizes app favorites with New Relic One for a consistent way to get access to those APM apps, Browser apps, Mobile apps, and Synthetics monitors wherever you might be.
Our new APM memory / threads section now works with Java & Elixir agents!
Additional information for Java instances and Elixir hosts in the APM application detail section.New host / instance drop down for APM applications showing summary information, and allowing you to fil…
Additional information for hosts of Ruby, Node.js, and Go APM applications.
Fixed the problem with muting, and enabling / disabling Synthetics monitors
Updated account settings screen enables seamless transition between users and accounts
This version contains enhanced notifications, with support for selecting unique sounds for different types of notifications, along with showing a chart image for incidents.
New Relic iOS now supports favoriting! Favorite your apps, monitors, key transactions, and plugins to keep what's important to you at the top of each list. And favorite syncs across all your iOS devic…
Added multiline support for Radar in violations
Support for incident context from New Relic Radar -- a tool for faster orientation during an incident.Incident context provides faster orientation during an incident by detecting unusual behavior in y…
Now supporting Infrastructure on the iPad!
Now supporting accounts located in the EU region!
APM Error Analytics in the app, group by different attributes and page through similar error traces
Fixes crash in iPad, when tapping on the transaction button for APM apps, that was introduced in 3.39.1
Now you can view APM Transaction Trace summaries and attributes.Swipe from left to pop views.
Added health status to the detail headers
Added Infrastructure host health status popup that shows recent violationsAdded Infrastructure host applications popup with the ability to filter to an application, or view it in APMAdded a new UI loo…
Added the Hosts screen in the Infrastructure product, which replaces the older Compute
Support for the new iPhone X
Added in universal link support for Radar
Introducing Radar
Updated tooltips on every chartImproved login flow for Touch ID
Added compare with yesterday and last week charts in the APM, Key Transaction & Browser sections. ** Toggle this from the time picker. At the top is a new 'Compare with yesterday and last week' button…
Adds additional animations to Health Map
As enterprises adopt more scalable microservice architectures, it becomes more difficult to pinpoint performance issues within the application stack. New Relic’s new health map feature brings together…
Fixes a crash when viewing traced errors without stack tracesFixes a sharing issue when a warning violation does not have an associated incident.
Adds in a sharing option for Alert Incidents & Violations.Improve the UI in the Errors detail view.RBAC checks for acknowledging alerts & resolving crashes
Added in summary KPI (Key Performance Indicator) & KPI charts to application incidents.Added acknowledgement of incidents in the push notificationAdded in the ability to scope the Applications page to…
Touch ID support for logging in, now you can login using only your finger!Support for Infrastructure universal links
Infrastructure table sorting
Network, Storage, and Processes views within the Infrastructure product.Each of these views can toggle between a charts or table representationThe table view allows you to select which items show up o…
Support for New Relic Infrastructure
Fixes some crashes and other UI bugs.
This version updates the very dated error details view, and re-enables you to view custom & request parameters for traced errors. It also fixes some account switching issues between SAML and regular a…
Hot fix crash on startup for non arm64 devices that was introduced in 3.25
Improved support for large numbers of applications & servers.See more summary details in the index views.
Crash and bug fixes
Support for NRQL alerts
Fix a crash on start for users running iOS 7
Universal link support for New Relic https url's.Login support for LastPass and 1Password extensions.
Users who have capitalized letters setup for their login email preferences should now be able to login again.
Improves multi-window support on iPad.
Support for New Relic Synthetics! You can now use your iOS device to view the health and status of all your current monitors, and view detailed information about specific failures. For more informatio…
Stability release to fix crashes and some UI bugs.
Add a setting allowing users to disable V3 alerts on the mobile apps
New V3 alerts support in the mobile apps
Bug fix release
New Saved Labels & Rollups on mobile
General bug fixes
Browser support
General bug fixes
Fix push notifications
New completely redesigned UI
Application Transactions views
Updated privacy policy
Improved Alerts detail view
Add SAML supportAdd support for idle session timeout
Fix bug that occurred when opening an application from notification
Performance improvements
New Relic for iPad!
New iOS 7 user interface
Platform plugin support
Fixed authentication issues
Performance improvements
Better chartsImproved UINew errors detail view
Added mobile monitoring
Apdex charts displayed for Apdex alerts
Fix non-standard characters in password issueFix mixed case email charactersBetter sub-account support
Initial release | https://docs.newrelic.com/docs/release-notes/mobile-apps-release-notes/new-relic-ios-release-notes/ | 2021-04-10T15:18:23 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.newrelic.com |
To get help generating an integration like this or this, follow these steps:
Install
opencollective-setup
$ npm install -g opencollective-setup
Get a personal token from GitHub's token page. Check all the
repo related permissions.
Create a file in your home directory (on Mac OS X or Linux) called
.opencollective.json and add token in it:
{ "github_token": "[YOUR_TOKEN]" }
Run cli for a given repo:
$ opencollective-setup setup -r [repo_owner/repo_name] -i
Ex: To integrate with MochaJS (), run:
opencollective-setup -r mochajs/mocha -i
-i makes it interactive.
Answer questions asked by the script - usually defaults are good to go with. Verify that the slug of project is same as the one in the database (script guesses at it and is usually right).
Script attempts to do several integrations across README.md, CONTRIBUTORS.md and ISSUE_TEMPLATE.md. Most important ones are the two integrations on README.md: backers and sponsor badges at the top and adding backer/sponsor section near the bottom. | https://docs.opencollective.com/help/contributing/development/readme-integration | 2021-04-10T15:18:12 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.opencollective.com |
Environment Types
SilverStripe knows three different environment types (or "modes"). Each of the modes gives you different tools
and behaviors. The environment is managed by the
SS_ENVIRONMENT_TYPE variable through an
environment configuration file.
The three environment types you can set are
dev,
test and
live.
Dev
When developing your websites, adding page types or installing modules you should run your site in
dev. In this mode
you will see full error back traces and view the development tools without having to be logged in as an administrator
user.
Test Mode
Test mode is designed for staging environments or other private collaboration sites before deploying a site live.
In this mode error messages are hidden from the user and SilverStripe includes BasicAuth integration if you
want to password protect the site. You can enable that by adding this to your
app/_config/app.yml file:
--- Only: environment: 'test' --- SilverStripe\Security\BasicAuth: entire_site_protected: true
The default password protection in this mode (Basic Auth) is an oudated security measure which passes credentials without encryption over the network. It is considered insecure unless this connection itself is secured (via HTTPS). It also doesn't prevent access to web requests which aren't handled via SilverStripe (e.g. published assets). Consider using additional authentication and authorisation measures to secure access (e.g. IP whitelists).
When using CGI/FastCGI with Apache, you will have to add the
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] rewrite rule to your
.htaccess file
Live Mode
All error messages are suppressed from the user and the application is in it's most secure state.
Checking Environment Type
You can check for the current environment type in config files through the
environment variant.
app/_config/app.yml
--- Only: environment: 'live' --- MyClass: myvar: live_value --- Only: environment: 'test' --- MyClass: myvar: test_value
Checking for what environment you're running in can also be done in PHP. Your application code may disable or enable certain functionality depending on the environment type.
use SilverStripe\Control\Director; if (Director::isLive()) { // is in live } elseif (Director::isTest()) { // is in test mode } elseif (Director::isDev()) { // is in dev mode } | https://docs.silverstripe.org/en/4/developer_guides/debugging/environment_types/ | 2021-04-10T15:23:41 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.silverstripe.org |
.
Several dashboards monitor SmartStore status. The dashboards are scoped either to a single instance or to the entire deployment. Find the dashboards under the Indexing menu and the SmartStore submenu:
- SmartStore Activity: Instance
- SmartStore Activity: Deployment
- SmartStore Cache Performance: Instance
- SmartStore Cache Performance: Deployment
SmartStore Activity dashboards
The SmartStore Activity dashboards provide information on activity related to the remote storage, such as:
- Remote storage connectivity
- Bucket upload/download activity
- Bucket upload/download failure count
The SmartStore Activity dashboards also include check boxes that you can select to show progress if you are currently performing data migration or bootstrapping.
SmartStore Cache Performance dashboards
The SmartStore Cache Performance dashboards provide information on the local caches, such as:
- The values for the
server.confsettings that affect cache eviction
- The bucket eviction rate
- Portion of search time spent downloading buckets from remote storage
- Cache hits and misses
- Repeat bucket downloads.
GCSClient. Communication with GCS.
StorageInterface. External storage activity (at a higher level than
S3Clientor
GCSClient).
CacheManager. Activity of the cache manager component.
CacheManagerHandler. Cache manager REST endpoint activity (both server and client side).
KeyProviderManager. Errors related to key provider setup and configuration. The key provider is used when the system has encrypted data on the remote store.
search.log . Examine these log channels:
CacheManagerHandler. Bucket operations with cache manager manager REST endpoint.
Use manager..
- Cache manager issues. If the problem persists beyond a search, the cause could be related to the cache manager. Examine
splunkd.logon the indexer issuing the error.
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.1.2/Indexer/TroubleshootSmartStore | 2021-04-10T15:26:41 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Item API Name: Item__c The item object represents a contracted, chargeable product included in a subscription. Label API Name Type Description Order No. Name Text Active Active__c Checkbox Controls whether or not this item is included in the invoice run. Additional Description AdditionalDescription__c LongTextArea(32768) Specifies an additional description for the item. Additional Title AdditionalTitle__c TextArea Specifies an additional title for the item. Aggregate Indiv. Priced Transactions AggregateIndividualPriced__c Checkbox Controls whether or not to aggregate transactions with individual prices into one transaction. Billing Period BillingPeriod__c Number(2, 0) Specifies the cycle (time in months) after which the item is included in the invoice run. Billing Type BillingType__c Picklist Determines the method for calculating quantities. Billing Unit BillingUnit__c Picklist Defines the time unit for the billing period (Day or Month), which is used for the price calculation of the billing period. Charge Model ChargeModel__c Picklist Allows to define pricing scenarios that consider a specific business logic (Mark Up/Mark Down). Commission Commission__c Percent(3, 2) Specifies a percentage commission that calculates the line item total based on the unit price and the defined percentage. Contract Value Correction ContractValueCorrection__c Currency(16, 2) Allows to specify a corrective value to be deducted from the contract value because it has already been invoiced. Contract Value Invoiced ContractValueInvoiced__c Currency(16, 2) Shows the actually invoiced contract value, which is updated each time an invoice or an invoice line item is changed or deleted. Contract Value Remaining ContractValueRemaining__c Currency(16, 2) Shows the remaining contract value, which is calculated based on the other three values (Remaining Contract Value = Contract Value - Contract Value Correction - Invoiced Contract Value). Contract Value ContractValue__c Currency(16, 2) Shows the overall value of the item for the whole subscription period. Decimal Places for Quantity DecimalPlacesForQuantity__c Number(2, 0) The number of decimal places for the quantity as displayed on the invoice. Decimal Places for Unit Price DecimalPlacesForUnitPrice__c Number(2, 0) The number of decimal places for the unit price as displayed on the invoice. Description Description__c LongTextArea(32768) A long description for this item, which can be printed on the invoice line item. Discount Discount__c Percent(3, 2) Specifies a percentage discount rate that is applied to the item price. Display Subtotal After This Item DisplaySubtotalAfter__c Checkbox Defines whether a subtotal is displayed after this item. End Date EndDate__c Date The date until which this item is active. Expected Revenue ExpectedRevenue__c Currency(16, 2) The monthly expected revenue. This is only valid for transactional items and will be analyzed for MRR/MRUR reporting. Include In Monthly Minimum GlobalMonthlyMinimum__c Checkbox Controls whether this item is included in the monthly minimum fee. Ignore Item Criterion IgnoreCriterion__c Checkbox If checked, the aggregation criterion of transactions (or other objects) is ignored, and the invoice line items are not grouped. Invoice Line Item Type InvoiceLineItemType__c Picklist Specifies the type of the invoice line item. If empty, defaults to product. Next Service Period Start NextInvoice__c Date Holds the next billing date for this item. 
Order By OrderBy__c Text(255) The API name of an invoice line item field by which the line items are to be ordered. Price Type PriceType__c Picklist Specifies the price calculation method of this item (Default or Flat), is ignored when using price tiers. Price Price__c Currency(13, 5) Specifies the item price, is ignored when using price tiers. Product Group ProductGroup__c Text(255) Allows for specifying the product group of this item. Quantity Quantity__c Number(13, 5) Specifies the quantity used to bill recurring and one-time items. Does not affect transactional items. Reverse Reverse__c Checkbox If checked, the display order of the invoice line items created from this item is reversed. Sequence Sequence__c Number(18, 0) Determines the position of this item in the item table. Source Child Id SourceChildId__c Text(18) Holds the id of the source child. The field is set during subscription building. Source Parent Id SourceParentId__c Text(18) Holds the id of the source parent. The field is set during subscription building. The field is used to channel subscription updates in case of items with duplicate order numbers. Start Date StartDate__c Date The start date from which this item is active. Subscription Subscription__c Lookup(Subscription__c) The subscription to which this item relates. Sync With SyncWith__c Picklist Specifies a specific date with which the billing period of a recurring item is to be syncronized. Title Title__c TextArea Specifies the title or name of a product. Transaction Aggregation Fields TransactionAggregationFields__c LongTextArea(32768) Defines additional fields of transactions that are to be aggregated for this item. Format: {"FIELDNAME1":"FUNCTION","FIELDNAME2":"FUNCTION"}. Valid functions include SUM, MIN, MAX, LAST. Transaction Commission Tier Price Field TransactionCommissionTierPriceField__c Text(255) Defines a source field for the commission tier price that is to be used during the transaction building. Transaction Price Field TransactionPriceField__c Text(255) Defines a source field for the price that is to be used during the transaction building. Transaction Price Tier Quantity Field TransactionPriceTierQuantityField__c Text(255) Defines a source field for the price tier quantity that is to be used during the transaction building. Transaction Quantity Field TransactionQuantityField__c Text(255) Defines a source field for the quantity that is to be used during the transaction building. Transaction Type TransactionType__c Picklist Specifies the type of transactions that are to be billed with this item. Unit Unit__c Picklist Specifies the unit that is associated with this item. | https://docs.juston.com/jo_objects/Item/ | 2018-08-14T17:24:16 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.juston.com |
By default all assemblies in the endpoint bin directory are scanned to find types implementing its interfaces so that it can configure them automatically.
NServiceBus.) is automatically included since it is required for endpoints to properly function.
Core. dll.) are not considered a core assembly and will need to be included when customizing assembly scanning.
RavenDB. dll
Nested Directories
Nested directories are not scanned for assemblies. Nested directoriescard using the following:, already loaded into the AppDomain, but not present in the applications base directory, are not scanned. The endpoint can be configured to also scan AppDomain assemblies using:
var scanner = endpointConfiguration.AssemblyScanner(); scanner.ScanAppDomainAssemblies = true;
Suppress scanning exceptions
By default, exceptions occurring during assembly scanning will be re-thrown. assembly scanning exceptions can be ignored using the following:
var scanner = endpointConfiguration.AssemblyScanner(); scanner.ThrowExceptions = false; | https://docs.particular.net/nservicebus/hosting/assembly-scanning?version=core_6 | 2018-08-14T17:08:31 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.particular.net |
If a system from which BMC BladeLogic Configuration Manager deploys software is available on the network and you have installed an EPI agent to interact with it, software can be deployed from it directly to newly provisioned machines. The requesting user can select which software to deploy or the blueprint can contain the specific jobs to be deployed on all machines provisioned from that blueprint.
Prerequisites
Install an EPI Agent for BMC BladeLogic.
Log in to the vRealize Automation EPI/BMC Agent host as a system administrator.
As the system administrator under which the EPI agent is running, log in to the BladeLogic console to configure the authentication profile to be used and to accept any BladeLogic security certificates, and then close the console. This prerequisite is required only once.
Procedure
- Select vRealize Automation EPI/BMC Agent service. , and stop the
- On the EPI agent installation host, which could be the same as the Manager Service host, change to the EPI agent installation directory, typically %SystemDrive%\Program Files (x86)\VMware\vCAC Agents\agent_name.
- Edit every file in the Scripts\nsh folder in the EPI agent directory and under the parameter list section of each .nsh file, update the values for the following variables. The description of each variable appears above the variable definitions.
USERNAME_USER=BLAdmin
AUTH_TYPE=SRP
PASSWORD_USER=password
APP_SERVER_HOST=bladelogic.dynamicops.local
ROLE_NAME=BLAdmins
- Edit the agent configuration file, VRMAgent.exe.config, in the EPI agent installation directory and replace
CitrixProvisioningUnregister.ps1with
DecomMachine.ps1.
-="CitrixProvisioningRegister. ps1" unregisterScript="DecomMachine.ps1"/>
- If you intend to provision by cloning with a static IP address assignment, you can enable BMC BladeLogic registration of provisioned machines by IP address rather than by machine name.
- Edit the files InstallSoftware.ps1 and DecomMachine.ps1 in theScripts folder in the EPI agent directory and change the line
$byip=$falseto
$byip=$true. edit the files InstallSoftware.ps1 and DecomMachine.ps1 in the Scripts folder in the EPI agent directory and change the line
$byip=$falseto
$byip=$true.
- If you enable registration by IP address by making the above change, you must provision by using static IP address assignment, otherwise, BMC BladeLogic integration fails.
- Select vRealize Automation Agent – agentname service). to start the EPI/BMC agent service (
- Place all the BMC BladeLogic jobs you want available to be selected by machine requestors or specified by blueprint architects under a single location within BMC BladeLogic Configuration Manager, for example, /Utility.
- Prepare a reference machine and convert it to a template for cloning.
- Install a BMC BladeLogic agent that points to the server on which BMC BladeLogic Configuration Manager is running.
- Verify that you are able to connect to the agent on the guest and successfully execute jobs as expected after provisioning.
Results
Tenant administrators and business group managers can now integrate BMC BladeLogic into clone blueprints. See Add BMC BladeLogic Integration to a Blueprint. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-502319AD-AACC-4A81-B5F0-2C898BC9469A.html | 2018-08-14T17:36:51 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.vmware.com |
Flows: showing common paths in a session
Flow: Use flows to analyze user (actor) actions over time, a sequence of actions, or a sequence of actions over time.
Paths – Over the last 3 months, what are the most common steps in a user session?
How many users perform each step and what is the % out of total users performing other steps? | https://docs.interana.com/3/3.x_Cookbook/Flows%3A_showing_common_paths_in_a_session | 2018-08-14T17:20:57 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.interana.com |
How do invoices become
Paid?
Invoices are the statements that document your payment requests against your customers.
According to the business needs, invoices have different statuses.
- Draft: New invoices have the status
Draft. You can check draft invoices for correctness and edit them as necessary.
- Open: If you approve of a draft invoice, you Finalize it. This sets the status to
Open, making the invoice effective, that is, due for payment (and unalterable).
- Paid: With incoming payments that make up for the due amounts, open invoices become
Paid.
Usual lifecycle of an invoice in JustOn
Basically, there are three ways for JustOn to cover the due amount and to set the invoice
Paid.
- Registering a payment entry and assigning it to the invoice: This creates a
Paymentbalance on the invoice, which counts against the open amount as recorded in the
Invoicebalance.
- Settling the invoice against a credit: This creates a
Clearingbalance on the invoice, which, again, counts against the open amount.
- Finalizing an invoice with the payment method
Cash: This creates a balance of the type
Cashon the invoice, which offsets the open amount.
In addition, you can manually create and assign balances like
Refund or
Prepayment, which also reduce an open invoice amount.
▶ If the sum of all balances for the invoice is
0, the invoice is considered
Paid.
Related information:
Managing Balances
Managing Payment Entries
Managing Settlements | https://docs.juston.com/en/jo_faq_paid/ | 2018-08-14T17:23:54 | CC-MAIN-2018-34 | 1534221209216.31 | [array(['../../images/invoice_lifecycle.png', 'invoice_lifecycle'],
dtype=object) ] | docs.juston.com |
Jamf Pro Server Logs
The Jamf Pro Server Logs settings allow you to view and download the Jamf Pro server log from the Jamf Pro web app. You can also use the Jamf Pro Server Logs settings to enable debug mode and statement logging.
Viewing and Downloading the Jamf Pro Server Log
Log in to Jamf Pro.
In the top-right corner of the page, click Settings
.
Click Jamf Pro Information.
Click Jamf Pro Server Logs
.
Click Edit.
Configure the options on the screen.
Click Save.
The Jamf Pro server log displays on the page.
(Optional) Click Download to download the log. The JAMFSoftwareServer.log is downloaded immediately.
Related Information
For related information, see the following Knowledge Base article:
Enabling Debug Mode
Find out how to enable debug mode for several Jamf products, as well as where to view logs from your Apple devices so that you can troubleshoot on a deeper level. | http://docs.jamf.com/10.6.0/jamf-pro/administrator-guide/Jamf_Pro_Server_Logs.html | 2018-08-14T17:57:41 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.jamf.com |
- 安全 >
- Encryption >
- Transport Encryption >
- Configure mongod and mongos for:
cd /etc/ssl/ openssl req -newkey rsa:2048 .解
- Specify <pem> with the full path name to the certificate.
- If the private key portion of
Or, if using the older older configuration file format: TLS name of the .pem file that contains the signed SSL certificate and key.
- CAFile with the name of the .pem file that contains the root certificate chain from the Certificate Authority. CAFile: /etc/ssl/ca.pem
Or, if using the older older configuration file format: TLS allowConnectionsWithoutCertificates run-time option with mongod and mongos. If the client does not present a certificate, no validation occurs. These connections, though not validated, are still encrypted using SSL.
For example, consider the following mongod with an SSL configuration that includes the allowConnectionsWithoutCertificates setting:
mongod --sslMode requireSSL --sslAllowConnectionsWithoutCertific.
在 2.6 版更改: In previous versions, you can only specify the passphrase with a command-line or a configuration file option.
注解
FIPS-compatible SSL is available only in MongoDB Enterprise. See Configure MongoDB for FIPS for more information.
See Configure MongoDB for FIPS for more details. | http://docs.mongoing.com/tutorial/configure-ssl.html | 2018-08-14T17:56:06 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.mongoing.com |
When you want to draw text in your game this text is drawn in a
standard Arial 12 points font, but to make more interesting or
unique looking texts you will probably want to use different fonts,
however. So, to use different fonts that you have on your computer
you must create a font resource in GameMaker: Studio. For
each font resource you specify a particular type of font from your
computer which can then be used in your game using the action
or code
to set a font for drawing to the screen.
To create a font resource in your game, use the item Create Font in the Resources menu or use the corresponding button on the toolbar, which will cause the following window to pop up:
As with all
resources, you should give your font resource a unique name so that
you (and GameMaker: Studio) can identify it while writing
your game. Next you should start selecting the fonts to preview
using the drop-down menu "Font" on the left. Beneath this section
you can set other things like the size and whether the font should
be drawn as bold or italic or have anti-aliasing
(edge smoothing) applied, and you also have the option to use
High Quality fonts, which will use a different rendering
technique for the font glyphs, giving a better, sharper look.
However it should be noted that some fonts may not look better, and
you should experiment with this option to see which you prefer.
The preview window on the right will show you different text ranges you have selected as they will look with the size and transforms you have specified, except the anti-aliasing which is not visible in the preview but will be in your game. Please note that font scaling (especially from small to large) can give artifacts when drawn, so try to avoid this where possible. There is also a check-box labelled "Include In Asset Package. If you are creating a package of fonts to upload to the Marketplace or to distribute as part of an extension, then you should tick this option otherwise the base font files will not be distributed with the package.
WARNING!: If you include a base font file in this way it must be licensed for distribution.
One final option that you have available to you is the ability to assign your font resource to a texture group. this can be very useful when it comes to optimising the way your game runs and the amount of texture swaps that must be done while the game is being played. for more information on texture groups please see the section Advanced Use - Texture Pages.
NOTE: Due to licensing issues, GameMaker: Studio does not store the fonts with the project file (when the game is finally finished the font is rendered to a texture page, so finished games will draw the text as designed). This means that if you wish to share the *.gmx or a zipped *.gmz file, you must include the font resource that you have used yourself, as not everyone will have the same fonts as you installed on their computer. The only exception to this is when creating asset packages (see above).
A font typically consist of 256 characters, numbered from 0 to
255, but in general you use only a small portion of these. This is
why GameMaker: Studio defaults to using only the characters
from 32 till 127 are stored in the font. You can, however, change
the character range used to help optimise your games ie: If you
only need the numbers from a specific font, then only select
the numbers. To do this, you should first click the Clear
button to clear the current range and then click on the "+" button
to add a new range. This will open the following window:
This window
has some buttons to help you establish a standard range for your
font -
- The Normal range from 32 till 127
- The All range from 0 till 255
- The Digits range that only contains the 10 digits
- The Lettersrange that contains all uppercase and lowercase letters only
Other ranges can be used by typing in the first and last
character index of the range you wish to set in the Character
Range input boxes (If a character does not lie in the range it
is replaced by a space). Apart from these buttons and input boxes,
you also have two further options there especially useful.
The first is the button marked From Code. If you click on this button and then click OK, GameMaker: Studio will automatically parse your game code for strings and then create different character ranges to cover all the text in your game. Note that it looks for all strings (either within "" or '') and so may also include file names in the character ranges. However you can remove any ranges from your font resource using the "-" button at the bottom of the window and so remove those unwanted characters.
The second button in this window is marked From File and it works similarly to that explained above. If you click on it, you will be asked to supply a file, and then once that is done you should click on OK. GameMaker: Studio will then parse the file and create character ranges for the text found within.. | http://docs.yoyogames.com/source/dadiospice/001_advanced%20use/003_fonts.html | 2018-08-14T17:10:47 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.yoyogames.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Constructs AmazonCloudFrontClient with the credentials loaded from the application's default configuration, and if unsuccessful from the Instance Profile service on an EC2 instance. Example App.config with credentials set.
Namespace: Amazon.CloudFront
Assembly: AWSSDK.CloudFront.dll
Version: 3.x.y.z
.NET Standard:
Supported in: 1.3
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudFront/MCloudFrontctor.html | 2018-08-14T17:55:03 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.aws.amazon.com |
- How Secure Peering Works
- Generating Security Keys and Certificates
- Configuring Secure Peering
Because an appliance with secure peering enabled will only compress connections with partner appliances with which it has a secure peering relationship, this procedure should be applied at the same time to all your appliances. | https://docs.citrix.com/en-us/cloudbridge/7-4/cb-features-wrapper-con/cb-secure-traffic-accel-con/br-adv-secure-peering-con/br-adv-to-conf-secure-peer-con.html | 2018-08-14T17:55:48 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.citrix.com |
Using the Refresh Token¶
When using the Authorization Code grant type, you will receive a refresh token in addition to the access token. Any time up until the expiration of the refresh token (currently 30 days past the access token expiration), you can generate a new access token without requiring the user to go through the authorization process. | https://carvoyant-api.readthedocs.io/en/latest/getting-started/oauth-example-usingrefreshtoken.html | 2021-07-24T02:24:32 | CC-MAIN-2021-31 | 1627046150067.87 | [] | carvoyant-api.readthedocs.io |
Strategy Constraints
Strategy constraints allow you to set pre-conditions on activation strategies that needs to be satisfied for the activation strategies to take effect.
#Constrain on a specific environment
The most common use case for strategy constraints is that you want an activation strategy to only take effect in a specific environment. For example, you could enable the feature for everyone in development, while you only expose the new feature to a few percent of users in production.
#Constrain on custom context fields
It is also possible to constrain an activation strategy configuration on custom context fields. A common use case is a multi-tenant service where you want to control roll-out on a tenant identifier. This allows you to decide which customer should get access to your new feature.
#Define your own custom fields
Starting with Unleash-enterprise version 3.2.28 customers can define their custom context fields via the user interface.
You can also define your own custom context fields that you can use together with strategy constraints. We have seen customers use multiple variants of custom context fields to control their feature roll-out:
- region
- country
- customerType
- tenantId
Combining strategy constraints with the “flexibleRollout” allows you to do a gradual roll-out to a specific segment of your user base.
#Step 1: Navigate to “Context Fields“
Locate “context fields in the menu
#Step 2: Define new context field
Next you can define your new context field. The minimum requirement is to give it a unique name. In addition, you can give it a description and define the legal values.
#What is “legal values”?
Legal values defines all possible values for the context field. this will be used in Unleash Admin UI to guide users when working with context fields to make sure they only use legal values.
| https://docs.getunleash.io/advanced/strategy_constraints/ | 2021-07-24T02:06:36 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['/assets/images/strategy-constraints-a9a932e8c3645af137957122331791db.png',
'Strategy constraints'], dtype=object)
array(['/assets/images/custom-constraints-2fad70fadecd049dd05e34573d8de3ed.png',
'Custom constraints'], dtype=object)
array(['/assets/images/context-fields-d3fb75e0be1bc56e8ee60e5336576aec.png',
'Context fields'], dtype=object)
array(['/assets/images/new_context_field-31331b03e67715452f5ffb543c81f492.png',
'New context fields'], dtype=object)
array(['/assets/images/constraints_legal_values-5c007d027cd11566656a314143d03060.png',
'New context fields'], dtype=object) ] | docs.getunleash.io |
Overview
ThreatLIST provides SIEM enrichment options for network, security, and incident response professionals.
Our database of IP addresses and domain IOCs can be used to enhance research and forensics, local reporting and network traffic correlation, and as a data enrichment tool for SIEM software (Splunk, QRadar, ArcSight, LogRhythm…).
ThreatSTOP’s ThreatLIST allows you to consume dynamically updated information about the threats targeting, or already present within your network.
ThreatLIST is easily configured to:
- Include the security policy categories meaningful to you.
- Integrate with your existing hardware and software platforms.
- Contain the desired historical and contextual meta data about IOCs.
Access to this feature
Access to this feature must be enabled in your product plan. Please contact your ThreatSTOP representative if your current plan doesn’t include it.
Data Formats
ThreatSTOP SIEM integration is available in Splunk, Suricata and Domain-Only formats.
- Splunk files are in a CSV format with the following headers:
"IOC","Category","SubCategory","Severity","FirstSeen","LastSeen","Geo","IOC Type"
Geo field is ISO-8859 encoded. All other fields are 7 bit ascii.
- Suricata files are formatted with one IoC per line with the following default format:
'alert ip [%ioc] any -> any (msg: "[%blocker_desc %blocker_type]"); priority: %priority; sid: %sid;)'
- Domain-Only files are formatted with one IoC per line with only the domain. This format only works for DNS Defense policies with domain IoCs.
Enabling ThreatLIST
ThreatList files are generated from one or more of your custom policies. Enabling the feature is done in two steps:
- Step 1: configure the global settings for the feature.
- Step 2: configure the policy to be exported.
Step 1 - Global settings
- Navigate to the SIEM Integration tab in the navigation menu.
Using the dropdown, choose between Splunk (default) and Suricata format.
- If you’ve chosen Suricata, you will need to describe the format of the line. Our default is:
alert ip [%ioc] any -> any (msg: "[%blocker_desc %blocker_type]"); priority: %priority; sid: %sid;)
There are several variables that will be replaced with information about each indicator of compromise (IoC) when your file is generated:
- %ioc is the ip address from ThreatSTOP.
- %blocker_desc is the name of source of the block.
- %blocker_type is the type of threat.
- %priority is how dangerous the threat is.
%sid is an incrementing sid for suricata.
- If you’ve chosen Suricata, you will also be able to config the starting sid.
Step 2 - Policy settings
- Browse to the policy configuration for each policy you want to export using ThreatList.
- Enable SIEM integration (Enabled checkbox).
- There are three configurations to set by choosing from the dropdowns.
- Set the Threatlist IOC Type. This determines what type of IoCs will be included in your SIEM file. Your choices are:
- IPs only.
- Domains only.
- All (both IPs and domains).
- Set the Threatlist Ioc Format. This determines if the system will generate separate files for each IoC type or a single file. Your choices are:
- Split IoC Types.
- All IoC Types in a single file.
- Save your changes.
Accessing the ThreatList files
The ThreatLIST files are produced every two hours and are made available for you at threatlist.threatstop.com. The files need to be accessed via SFTP. Credentials will be provided by our support team.
The files following this naming convention:
threatlist-<policy_name>-<ioc type>-<timestamp>.csv threatlist-<policy_name>-<ioc type>-latest.csv
where:
- Policy name is the name of the Policy being exported.
- ioc type is ip, domain, or all (for ThreatList setting requesting both IP addresses and domains).
- YYYYMMDD-hhmm is the timestamp at which the file was produced. The -latest file always points to the latest version.
For example:
threatlist-my_policy-ip-latest.csv | https://docs.threatstop.com/threatlist.html | 2021-07-24T02:19:35 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.threatstop.com |
Corporate media uses its role as "arbiter of credibility" to exploit the public — to manufacture consent — rather than to reflect and protect its values.
Ideamarket allows the public to bestow and revoke credibility trustlessly, without depending on trusted third parties like media corporations.
Therefore, we expect "actual trustedness," without reference to what corporate media tells you to trust, to be a reason people buy a listing.
Over time, as history happens and highly-ranked people are proven wrong (or liars), the market will correct.
Meanwhile, the largest profit opportunities will be found in discovering extremely trustworthy voices early, while still unknown. Market participants are rewarded for seeking out and elevating under-valued voices to compete with incumbents.
This combination of
1) markets correcting due to revealed failures and deceit
2) constant attempts to improve upon current leaders
will improve Ideamarket's signal over time.
Everyone agrees trust and trustworthiness are valuable. The fundamental value is there already. The challenge is identifying it, and building tools and metrics that help reveal it.
With tools like these and others in development, Ideamarket will help people identify trustworthiness and make price discovery judgments about it:
Locking tokens signals confidence in a listing's long-term strength. A high locked token % could distinguish potential blue chips from pump-and-dump schemes. (However, too high a locked % may indicate low buying interest, as if everyone able to sell a listing already has.)
Verified identity % could help distinguish listings bought by genuine grassroots movements, from those bought by Sheldon Adelson using 10,000 different ETH addresses. We may partner with someone like BrightID to verify identities on Ideamarket while maintaining user privacy.
Got more ideas? Share with us on Discord. | https://docs.ideamarket.io/philosophy/unmanufacturing-consent | 2021-07-24T01:24:26 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.ideamarket.io |
Demonstration: Testing the Transformation
It's time to test the set actions that we've specified in the previous sections. To test, perform the following actions:
Compile the DTL by clicking the Compile button.
Click the Tools tab on the right pane and choose Test Transformation.
Open ABC1.txt in a text editor. See the note below for the location of this file. Copy the contents and paste them into the Input Message area.
Click Test to run the data transformation. Verify that the MSH and PID segments have been copied, and that the first occupied field of the PID segment reads 77777777. The value in the test message is 16284718.
Note that there is a BuildMap error that has been thrown in this test. This is because the target message is missing some segments - we have only populated two of them so far. We will fix this in the next chapter.
If successful, this test shows that we are changing the value of the patient's ExternalID in the message.
ABC1.txt is in <ensemblesys>\Dev\tutorials\hl7messagerouting where <ensemblesys> is the directory where Ensemble is installed. | https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=THL7_DataTransforms_DemoFive | 2021-07-24T00:32:09 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.intersystems.com |
I log into the CGC using my eRA Commons ID. How can I push containers to the CGC image registry?
Posted in Add your tools by Zoidberg Wed Mar 02 2016 22:02:39 GMT+0000 (Coordinated Universal Time)·3·Viewed 43,835 times
I use my eRA Commons ID to log into the CGC. How do I use the Docker CLI to log in and push images to the CGC image registry? What do I need?
To push images to the CGC registry, you first need to login into your CGC account via the CLI:
docker login cgc-images.sbgenomics.com
You will be prompted to enter your CGC username. For password, use your authentication token (available in Account settings > Developer on the CGC).
When pushing your image to the CGC registry, you must prefix the image repository name with "cgc-images.sbgenomics.com/". You can do so when committing the container to an image:
docker commit <CONTAINER_ID> cgc-images.sbgenomics.com/<IMAGE>
Alternatively, you can retag an existing local image by associating the image ID with a new image name:
docker tag <IMAGE_ID> cgc-images.sbgenomics.com/<IMAGE>
When pushing, use the full image name with the CGC URL prefix:
docker push cgc-images.sbgenomics.com/<IMAGE>
Do I also need to include my cgc user name in the image name? is this case sensitive?
To log into the CGC using the Docker CLI, always use your exact, case-sensitive username as you made it on the CGC.
However, when naming your images to push to the CGC image registry, you must comply to Docker requirements. This means slightly modifying your image names, as Docker has requirements that repo names only lowercase letters and underscores.
For example, my username on the CGC is "gauravCGC". However, Docker doesn't allow capital letters in a repo name. Therefore, I will log into the CGC using my case-sensitive username (gauravCGC) but name my images as gauravcgc/image_name:tag, where I've replaced upper case letter with lower case.
If your username has other characters (such as .,+,@) replace them with underscores in your Docker image names. For example:
gaurav+CGC --> gaurav_cgc/image:tag
gaurav.kaushik --> gaurav_kaushik/image:tag | https://docs.cancergenomicscloud.org/v1.0/discuss/56d762ffad6eef25003f95b4 | 2021-07-24T01:29:24 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.cancergenomicscloud.org |
Viewing a list of backup copies of a configuration file
Download PDF of this page
You can view a list of the backup copies of ONTAP volumes that are defined in the configuration file. You can also get details about available backups and rename specific backups based on the requirement.
From the main menu of the Snap Creator GUI, select Data > Backups.
From the Profiles and Configurations pane of the Backups tab, expand a profile, and then select a configuration file.
The Backups tab displays a list of all of the backup copies of the ONTAP volumes that are defined in the configuration file. | https://docs.netapp.com/us-en/snap-creator-framework/administration/task_viewing_a_list_of_backup_copies_of_a_configuration_file.html | 2021-07-24T01:58:29 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.netapp.com |
Checking the status of the daemon
Contributors
Download PDF of this page
You can check the status of the daemon to see whether the daemon is running. If the daemon is already running, you do not need to restart it until the SnapDrive for UNIX configuration file has been updated.
You must be logged in as a root user.
Steps
Check the status of the daemon:
snapdrived status | https://docs.netapp.com/us-en/snapdrive-unix/aix/task_checking_the_status_of_the_daemon.html | 2021-07-24T02:22:34 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.netapp.com |
Mass Provisioning of the Chrome Extension for Gmail Users¶
Enterprise customers of Revenue Inbox for Salesforce and Gmail can make use of this mass provisioning scenario and the mass deployment scenario for Windows systems (via Active directory) to set up the product for all end users in an Org.
These scenarios are typically performed by the local email/system Administrator. If the company prefers not to grant Gmail/Google Workspace (G Suite) data access to Revenue Inbox for each individual end user’s mailbox due to security policies or the end users being too many.
Note
Handling of your email/calendar and CRM data by Revenue Inbox Chrome Extension for Gmail is based on the same set of Privacy and Security principles as Revenue Inbox Add-In for MS Outlook App or Web.
After the Revenue Inbox Chrome Extension has been installed in the end users’ Chrome browsers either individually or in bulk by the Admin, follow the below steps to configure and use a Gmail/Google Workspace (G Suite) Service account to mass-configure RI for the end users.
1. You should prepare an in-org Gmail/Google Workspace (G Suite) account which will be used only for providing Gmail data access for RI end users
2. Configure this account as a Service account. Follow guide 1 to create a Gmail service account or guide 2 to create a Google Workspace (G Suite) service account via Google console
3. Follow this Google guide to authorize the service account to access other accounts’ data
The scopes required for the service account on Step 4 for the above Google guide: [optional, if DisableDriveAttachments is enabled]
4. Next Log in to Revenue Inbox Admin panel with Admin credentials provided by RevenueGrid.com
5. On the Organizations tab, select the Organization which the end users belong to; see the articles on managing organizations and managing users in RI Admin panel
6. Select the E-mail configuration subtab for this organization and pick Google Service Account in the Mailbox Access Type box
7. An Upload JSON file line will appear; click the button Upload file next to it, browse your hard drive and select the JSON Web Token file which was generated on Step 2 of this guide
8. Now, if everything was configured correctly, you will see the Project ID and Client ID of your service account and its Connection status
We would love to hear from you | https://docs.revenuegrid.com/ri/fast/articles/Gmail-Users-Mass-Provisioning/ | 2021-07-24T01:48:26 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['../../assets/images/faq/fb.png', None], dtype=object)] | docs.revenuegrid.com |
Test runs
What is a Test run ?
Test run is the execution of defined test case, it can be single test case or multiple test cases. Test run can be executed manually or automated.
Test run in SOFY
SOFY enable software QA engineers to perform manual tests using SOFY Live or run a no-code test run that was create within SOFY platform.
Creating Test run in SOFY
- Click on Test Runs under Automated -> Test Runs
- Click on "New Run"
- Add the values of the test runs
- Test run name
- select an existing build or upload a new build
- Select the device(s) to run the automated test runs
- select the recorded test scenario to run
- Click on "New Run"
| https://docs.sofy.ai/understanding-sofy/test-runs | 2021-07-24T00:57:07 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['https://files.helpdocs.io/kr0v3v4pmt/articles/twuvfcdywa/1616531484437/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/kr0v3v4pmt/articles/twuvfcdywa/1616531493893/image.png',
None], dtype=object) ] | docs.sofy.ai |
theme
String
The gauge theme. This can be either a built-in theme or "sass". When set to "sass" the gauge will read the variables from the Sass-based themes.
The supported values are:
- "sass" - special value, see notes
- "black"
- "blueopal"
- "bootstrap"
- "default"
- "highcontrast"
- "metro"
- "metroblack"
- "moonlight"
- "silver"
- "uniform" | https://docs.telerik.com/kendo-ui/api/javascript/dataviz/ui/circulargauge/configuration/theme | 2021-07-24T02:41:16 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.telerik.com |
Use the Actions UI control to act on hosts in your cluster. Actions that you perform that comprise more than one operation, possibly on multiple hosts, are also known as bulk operations.
The Actions control comprises a workflow that uses a sequence of three menus to refine your search: a hosts menu, a menu of objects based on your host choice, and a menu of actions based on your object choice.
For example, if you want to restart the RegionServers on any host in your cluster on which a RegionServer exists:
Steps
In the Hosts page, select or search for hosts running a RegionServer:
Using the Actions control, click Fitered Hosts > RegionServers > Restart:
Click OK to start the selected operation.
Optionally, monitor background operations to follow, diagnose, or troubleshoot the restart operation.
More Information
Monitoring Background Operations | https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-operations/content/performing_host-level_actions.html | 2018-01-16T17:27:39 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.hortonworks.com |
In vRealize Automation, the Active Directory Sync logs go back only a couple days.
Problem
After two days, Active Directory Sync logs disappear from the management interface. Folders for the logs also disappear from the following vRealize Automation appliance directory.
/db/elasticsearch/horizon/nodes/0/indices
Cause
To conserve space, vRealize Automation sets the maximum retention time for Active Directory Sync logs to three days.
Procedure
- Log in to a console session on the vRealize Automation appliance as root.
- Open the following file in a text editor.
/usr/local/horizon/conf/runtime-config.properties
- Increase the analytics.maxQueryDays property.
- Save and close runtime-config.properties.
- Restart the identity manager and elastic search services.
service horizon-workspace restart service elasticsearch restart | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.install.upgrade.doc/GUID-3F748B03-C9F4-4D50-BD62-62AF6ACCBC06.html | 2018-01-16T17:03:55 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.vmware.com |
Groom your backlog
VSTS | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013
A great backlog conveys customer needs and value. Over the course of the project, your team will add detailed information to each backlog item, break them down into smaller items, prioritize and estimate them, and finally, implement them and deliver the results to your customers.
To get started, see Create your backlog.
Role of the product owner
Product owners play an important role in Scrum, primarily as the interface between customers and the team. To enable product owners to perform the following responsibilities, they need to be added to the Contributors group.
- Analyzing customer requirements and articulate them as user stories, features, or requirements
- Building, prioritizing, and refining the product backlog
- Representing customer and stakeholder requirements to the team and responding to questions your team has about them
- Meeting regularly with stakeholders to address their needs and keep them informed
- Helping stakeholders understand the decisions underlying the priority order of your backlog
- Responding to any and all requests from your team for more information concerning backlog priorities and requirements
If they will also be responsible for configuring team settings, add them as a team administrator.
A product owner can reduce the need for detailed specifications by being more responsive to the team's questions about implementation details and clearly articulating acceptance criteria within each requirement.
Acceptance criteria
Acceptance criteria define what "Done" means by describing the conditions that the team should use to verify whether a requirement or bug fix has been fully implemented. You can capture these criteria in the work item. Clear acceptance criteria help with estimating and developing requirements and with testing.
Product owners are the ultimate deciders of the criteria that create customer value.
Tips from the trenches: Start to love and embrace acceptance criteria.
Ask 10 mature agile teams "How do you know when you're 'done done'?" and you'll get the same answer from each one. . . get serious about writing acceptance criteria.
Acceptance criteria are the handshake between the product owner and the team on what "done done" really means.
Until the acceptance criteria are met, the team isn't done with the story. Period. However, the value of acceptance criteria only starts here.
Acceptance criteria provide the stage for some of most meaningful conversations and interactions that can happen on an agile team. On my own team we routinely have some of our best interactions as we start digging into the acceptance criteria for each story on our backlog. Inevitably we all start with our own ideas about what "done" means for a given story.
However, as we begin to discuss the acceptance criteria presented by the product owner what ensues is a series of "ah-ha moments." A shared understanding of the story begins to emerge. A comment one team member might elicit the following response from someone else. . . "Ah-ha, great point. . . I never thought of that."
Regardless of who is being enlightened, the power is in the fact that the product owner and the team are building together a shared understanding of what "done" means for each backlog item. And, this is happening before the team has written a single line of code… before any work has been done…
before commitments have been made… and before the sprint has begun.
By collaborating on acceptance criteria the team is minimizing risk and greatly increasing the chance of delivering successfully. I don't think it's a coincidence that the first bullet in the Agile Manifesto states ". . . we have come to value individual and interactions over processes and tools". Agile teams work together. And by working together, they create better software.
Start learning to love acceptance criteria and see if your team isn't more successful delivering software.
—Aaron Bjork, Principal Product Manager, Visual Studio Cloud Services, first published in the blog post: Agile Tip #5 – Learn to Love Acceptance Criteria
Refine your fully.
Refining your Agile backlogs for success provides a nice quality checklist to guide your backlog refinement efforts
Capture and manage spikes
In addition to new features and requirements to build, you can capture non-feature work that still needs to be done for a healthy ecosystem of delivery. This work can include necessary research, design, exploration, or prototyping. Any work done that doesn't directly lead to shippable software can be considered and captured as a spike.
As the need to perform this work arises, capture it along with other items on your backlog. To track that it is a spike, you can either preface the title with the word "[Spike]" or add the tag "Spike" to the work item. | https://docs.microsoft.com/en-us/vsts/work/backlogs/best-practices-product-backlog | 2018-01-16T17:47:07 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.microsoft.com |
Module futures::
stream
[−]
[src]
Asynchronous streams
This module contains the
Stream trait and a number of adaptors for this
trait. This trait is very similar to the
Iterator trait in the standard
library except that it expresses the concept of blocking as well. A stream
here is a sequential sequence of values which may take some amount of time
in between to produce.
A stream may request that it is blocked between values while the next value is calculated, and provides a way to get notified once the next value is ready as well.
You can find more information/tutorials about streams online at | https://docs.rs/futures/0.1/futures/stream/index.html | 2018-01-16T17:20:19 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.rs |
Job location preferences
There are a few different options for location preferences for jobs:
- Onsite - The employee or contractor to work onsite
- Onsite/Remote Hybrid - The employee or contractor starts out initially working on-site for several months. If things work out, then you move to remote after that initial period. This gives you the best of both worlds of being able to tap a much wider talent pool while at the same time building a strong in-person relationship early on.
- Remote (Specific Locations) - The employee or contractor can work remotely but only within specific locations - for example you want them to be anywhere in the UK.
- Remote (Anywhere) - You're open to remote workers pretty much anywhere - this gives you access to the widest talent pool. | http://docs.commercehero.io/article/105-job-location-preferences | 2018-01-16T17:31:45 | CC-MAIN-2018-05 | 1516084886476.31 | [] | docs.commercehero.io |
part_type_orientation( ind, ang_min, ang_max, ang_incr, ang_wiggle, ang_relative );
Returns: N/A
This function is used to determine the orientation of the
particle sprite when it is created and can also be used to make the
particle orientation increase or decrease over its lifetime. The
minimum and maximum orientation values default at 0 but these
values can be changed to randomize the orientation following the
standard GameMaker: Studio directions of 0 degrees being
right, 90 degrees being up, 180 degrees being left and 270 degrees
being down. If you set them to the same value the particles will
all be created with the same orientation.
You can also set an increment value which will add (if a positive number) or subtract (if a negative number) an amount of degrees to the orientation over its lifetime. This value can be a minimum of (+/-) 0.01.
You can set the "wiggle" factor too, the same as other particle functions. This is a value that will be added or subtracted randomly to the orientation each step of the particles lifetime. Obviously larger values are more pronounced than smaller ones, and this value can even be a negative with the maximum range being between -20 and 20.
Finally, you can choose to have the orientation relative or not, which means that while the particle has a direction (and speed) the particle sprite will be orientated around that vector. Bear in mind that if you have the particle speed set to reduce and it reaches 0, the lack of speed sets the direction to the default value of 0� and so a relative orientation will cause the particle sprite to "jump" to a different angle.
part_type_shape(particle2, pt_shape_spark);
part_type_size(particle2, 0.10, 0.50, 0.01, 0);
part_type_scale(particle2, 0.30, 0.30);
part_type_colour1(particle2, 8454143);
part_type_alpha1(particle2, 0.50);
part_type_speed(particle2, 4, 4, -0.07, 1);
part_type_direction(particle2, 0, 359, 0, 20);
part_type_orientation(particle2, 0, 359, 0, 20, 1);
part_type_blend(particle2, 1);
part_type_life(particle2, 1, 5);
The above code will set various particle values including the orientation which will be random value between 0� and 359�. It will also have a random amount added to it of anywhere between 0 and 20 each step too, and the orientation is relative to the direction of motion. | http://docs.yoyogames.com/source/dadiospice/002_reference/particles/particle%20types/part_type_orientation.html | 2019-01-16T02:33:05 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.yoyogames.com |
Assumptions
It is assumed you have already installed pfSense 2.4 amd64 with two NIC (LAN and WAN). Lan IP address is 10.0.0.1 netmask is 255.0.0.0. It is also assumed you have already done the initial login to the Web UI of pfSense, completed the initial setup wizard and successfully rebooted the pfSense box at least once.
| https://docs.diladele.com/tutorials/filtering_https_traffic_squid_pfsense/assumptions.html | 2019-01-16T01:23:30 | CC-MAIN-2019-04 | 1547583656577.40 | [array(['../../_images/assume.png', '../../_images/assume.png'],
dtype=object) ] | docs.diladele.com |
This page exists within the Old ArtZone Wiki section of this site. Read the information presented on the linked page to better understand the significance of this fact.
Product: DAZ Studio
Build: 3.0.1
Version Released: Beta June 1, 2009
Note: Not all features are available in all versions of DAZ Studio. Please check the version you have installed if you are not seeing a particular plug-in or feature available.
Refer to Known Issues for older known issues.
Visit our site for further technical support questions or concerns:
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170 | http://docs.daz3d.com/doku.php/artzone/pub/software/dazstudio/readme | 2019-01-16T02:03:28 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.daz3d.com |
The Service Processor (SP) is a remote management device that enables you to access, monitor, and troubleshoot a node remotely.
The key capabilities of the SP include the following:
The SP is powered by a standby voltage, which is available as long as the node has input power to at least one of its power supplies.
You can log in to the SP by using a Secure Shell client application from an administration host. You can then use the SP CLI to monitor and troubleshoot the node remotely. In addition, you can use the SP to access the serial console and run ONTAP commands remotely.
You can access the SP from the serial console or access the serial console from the SP. The SP enables you to open both an SP CLI session and a separate console session simultaneously.
For instance, when a temperature sensor becomes critically high or low, ONTAP triggers the SP to shut down the motherboard gracefully. The serial console becomes unresponsive, but you can still press Ctrl-G on the console to access the SP CLI. You can then use the system power on or system power cycle command from the SP to power on or power-cycle the node.
The SP monitors environmental sensors such as the node temperatures, voltages, currents, and fan speeds. When an environmental sensor has reached an abnormal condition, the SP logs the abnormal readings, notifies ONTAP of the issue, and sends alerts and "down system" notifications as necessary through an AutoSupport message, regardless of whether the node can send AutoSupport messages.
The SP also logs events such as boot progress, Field Replaceable Unit (FRU) changes, events generated by ONTAP, and SP command history. You can manually invoke an AutoSupport message to include the SP log files that are collected from a specified node.
Other than generating these messages on behalf of a node that is down and attaching additional diagnostic information to AutoSupport messages, the SP has no effect on the AutoSupport functionality. The AutoSupport configuration settings and message content behavior are inherited from ONTAP.
If SNMP is enabled, the SP generates SNMP traps to configured trap hosts for all "down system" events.
The SEL stores each audit log entry as an audit event. It is stored in onboard flash memory on the SP. The event list from the SEL is automatically sent by the SP to specified recipients through an AutoSupport message.
When messages are sent to the console, the SP stores them in the console log. The console log persists as long as the SP has power from either of the node power supplies. Because the SP operates with standby power, it remains available even when the node is power-cycled or turned off.
The service enhances ONTAP management of the SP by supporting network-based functionality such as using the network interface for the SP firmware update, enabling a node to access another node's SP functionality or system console, and uploading the SP log from another node.
You can modify the configuration of the SP API service by changing the port the service uses, renewing the SSL and SSH certificates that are used by the service for internal communication, or disabling the service entirely.
The following diagram illustrates access to ONTAP and the SP of a node. The SP interface is accessed through the Ethernet port (indicated by a wrench icon on the rear of the chassis): | http://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-10633784-DEFF-47A9-891A-AA57C2A8FC68.html | 2019-01-16T01:42:31 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.netapp.com |
CreateClusterSubnetGroup
Creates a new Amazon Redshift subnet group. You must provide a list of one or more subnets in your existing Amazon Virtual Private Cloud (Amazon VPC) when creating Amazon Redshift subnet group.
For information about subnet groups, go to Amazon Redshift Cluster Subnet Groups in the Amazon Redshift Cluster Management Guide.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- ClusterSubnetGroupName
The name for the subnet group. Amazon Redshift stores the value as a lowercase string.
Constraints:
Must contain no more than 255 alphanumeric characters or hyphens.
Must not be "Default".
Must be unique for all subnet groups that are created by your AWS account.
Example:
examplesubnetgroup
Type: String
Required: Yes
- Description
A description for the subnet group.
Type: String
Required: Yes
- SubnetIds.SubnetIdentifier.N
An array of VPC subnet IDs. A maximum of 20 subnets can be modified in a single request.
Type: Array of strings
Required: Yes
- Tags.Tag.N
A list of tag instances.
Type: Array of Tag objects
Required: No
Response Elements
The following element is returned by the service.
- ClusterSubnetGroup
Describes a subnet group.
Type: ClusterSubnetGroup object
Errors
For information about the errors that are common to all actions, see Common Errors.
- ClusterSubnetGroupAlreadyExists
A ClusterSubnetGroupName is already used by an existing cluster subnet group.
HTTP Status Code: 400
- ClusterSubnetGroupQuotaExceeded
The request would result in user exceeding the allowed number of cluster subnet groups. For information about increasing your quota, go to Limits in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
HTTP Status Code: 400
- ClusterSubnetQuotaExceededFault
The request would result in user exceeding the allowed number of subnets in a cluster subnet groups. For information about increasing your quota, go to Limits in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
HTTP Status Code: 400
- DependentServiceRequestThrottlingFault
The request cannot be completed because a dependent service is throttling requests made by Amazon Redshift on your behalf. Wait and retry the request.
HTTP Status Code: 400
- InvalidSubnet
The requested subnet is not valid, or not all of the subnets are in the same VPC.
HTTP Status Code: 400
- InvalidTagFault
The tag is invalid.
HTTP Status Code: 400
- TagLimitExceededFault
You have exceeded the number of tags allowed.
HTTP Status Code: 400
- UnauthorizedOperation
Your account is not authorized to perform the requested operation.
HTTP Status Code: 400
Example
Sample Request ?Action=CreateClusterSubnetGroup &ClusterSubnetGroupName=mysubnetgroup1 &Description=My subnet group 1 &SubnetIds.member.1=subnet-756a591f &SubnetIds.member.1=subnet-716a591b &Version=2012-12-01 &x-amz-algorithm=AWS4-HMAC-SHA256 &x-amz-credential=AKIAIOSFODNN7EXAMPLE/20130129/us-east-1/redshift/aws4_request &x-amz-date=20130129T192820Z &x-amz-signedheaders=content-type;host;x-amz-date
Sample Response
<CreateClusterSubnetGroupResponse xmlns=""> <CreateClusterSubnetGroupResult> <ClusterSubnetGroup> <VpcId>vpc-796a5913</VpcId> <Description>My subnet group 1</Description> <ClusterSubnetGroupName>mysubnetgroup1</ClusterSubnetGroupName> <SubnetGroupStatus>Complete</SubnetGroupStatus> <Subnets> <Subnet> <SubnetStatus>Active</SubnetStatus> <SubnetIdentifier>subnet-756a591f</SubnetIdentifier> <SubnetAvailabilityZone> <Name>us-east-1c</Name> </SubnetAvailabilityZone> </Subnet> </Subnets> </ClusterSubnetGroup> </CreateClusterSubnetGroupResult> <ResponseMetadata> <RequestId>0a60660f-6a4a-11e2-aad2-71d00c36728e</RequestId> </ResponseMetadata> </CreateClusterSubnetGroupResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/redshift/latest/APIReference/API_CreateClusterSubnetGroup.html | 2019-01-16T01:59:41 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.aws.amazon.com |
When needing to perform maintenance tasks on an active pool member it is preferrable to be able to remove that member from the pool in a graceful manner which does not abruptly terminate client connections. The usual approach to this is a process known as connection draining, where a member’s state is set so that it will no longer accept new connections requests. This allows for any existing connections to complete their current tasks and close, then once there are no remaining connections the member server can be worked on safely.
To achieve this on the Catalyst Cloud Load Balancer service set the
weight
for the target member to 0.
$ openstack loadbalancer member set http_pool login.example.com --weight 0
Once the member is ready to go back in to the pool simply reset its weight value back the the same as the other members in the pool.
To check the weight values for existing pool members run
$ openstack loadbalancer member list http_pool_2 -c name -c weight +------------------+--------+ | name | weight | +------------------+--------+ | shop.example.com | 1 | +------------------+--------+ | https://docs.catalystcloud.nz/load-balancer/connection-draining.html | 2019-01-16T01:25:57 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.catalystcloud.nz |
Aircall
Aircall allows you to receive and place phone calls. You can connect it to Gorgias to sync phone calls as tickets, and see them in the customer timeline.
Benefits.
Connecting Aircall to Gorgias
Please follow these steps if you want to create tickets on your Gorgias account for every call answered or launched from your Aircall account.
- On your Gorgias account, go to Integrations
- Click on Aircall
- Click Connect Aircall
- Copy the webhook url from the page
- Add a Webhook integration in your Aircall account, under integrations
- Paste the webhook url in the url field, and save
Tada! Now, when there's a new call on Aircall, it will create a ticket in Gorgias. If you're using Shopify, we'll match people you call with Shopify customers. | https://docs.gorgias.io/voice-and-phone/aircall | 2019-01-16T01:13:41 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.gorgias.io |
Service Auto Scaling
Your Amazon ECS service can optionally be configured to use Service Auto Scaling to adjust its desired count up or down in response to CloudWatch alarms. Service Auto Scaling is available in all regions that support Amazon ECS.
Amazon ECS publishes CloudWatch metrics with your service’s average CPU and memory usage. You can use these service utilization metrics to scale your service up to deal with high demand at peak times, and to scale your service down to reduce costs during periods of low utilization. For more information, see Service Utilization.
You can also use CloudWatch metrics published by other services, or custom metrics
that are
specific to your application. For example, a web service could increase the number
of tasks
based on Elastic Load Balancing metrics such as
SurgeQueueLength, and a batch job could increase
the number of tasks based on Amazon SQS metrics like
ApproximateNumberOfMessagesVisible.
You can also use Service Auto Scaling in conjunction with Auto Scaling for Amazon EC2 on your ECS cluster to scale your cluster, and your service, as a result to the demand. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms.
Service Auto Scaling Required IAM Permissions
Service Auto Scaling is made possible by a combination of the Amazon ECS, CloudWatch, and Application Auto Scaling APIs. Services are created and updated with Amazon ECS, alarms are created with CloudWatch, and scaling policies are created with Application Auto Scaling. IAM users must have the appropriate permissions for these services before they can use Service Auto Scaling in the AWS Management Console or with the AWS CLI or SDKs. In addition to the standard IAM permissions for creating and updating services, Service Auto Scaling requires the following permissions:
Copy
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "application-autoscaling:*", "cloudwatch:DescribeAlarms", "cloudwatch:PutMetricAlarm" ], "Resource": [ "*" ] } ] }
The Create Services and Update Services IAM policy examples show the permissions that are required for IAM users to use Service Auto Scaling in the AWS Management Console.
The Application Auto Scaling service needs permission to describe your ECS services
and CloudWatch alarms,
as well as permissions to modify your service's desired count on your behalf. You
must
create an IAM role (
ecsAutoscaleRole) for your ECS services to provide
these permissions and then associate that role with your service before it can use
Application Auto Scaling. also create
the
role by following the procedures in Amazon ECS Service Auto Scaling IAM Role.
Service Auto Scaling Concepts, Application Auto Scaling scales the desired count up to the minimum capacity value and then continues to scale out as required, based on the scaling policy associated with the alarm. However, a scale in activity will not adjust the desired count, because it is already below the minimum capacity value.
If a service's desired count is set above its maximum capacity value, and an alarm triggers a scale in activity, Application Auto Scaling scales the desired count down to the maximum capacity value and then continues to scale in as required, based on the scaling policy associated with the alarm. However, a scale out activity will cool down period.
Amazon ECS Console Experience
The Amazon ECS console's service creation and service update workflows support
Service Auto Scaling. The ECS console handles the
ecsAutoscaleRole and policy
creation, provided that the IAM user who is using the console has the permissions
described in Service Auto Scaling Required IAM Permissions, and
that they can create IAM roles and attach policies to them.
When you configure a service to use Service Auto Scaling in the console, your service is automatically registered as a scalable target with Application Auto Scaling so that you can configure scaling policies that scale your service up and down. You can also create and update the scaling policies and CloudWatch alarms that trigger them in the Amazon ECS console.
To create a new ECS service that uses Service Auto Scaling, see Creating a Service.
To update an existing service to use Service Auto Scaling, see Updating a Service.
AWS CLI and SDK Experience
You can configure Service Auto Scaling by using the AWS CLI or the AWS SDKs, but you must observe the following considerations. EC2 Container Service API Reference, the Amazon CloudWatch API Reference, and the Application Auto Scaling API Reference. For more information about the AWS CLI commands for these services, see the ecs, cloudwatch, and application-autoscaling sections of the AWS Command Line Interface Reference.
Before your service can use Service Auto Scaling, you must register it as a scalable target with the Application Auto Scaling RegisterScalableTarget API operation.
After your ECS service is registered as a scalable target, you can create scaling policies with the Application Auto Scaling PutScalingPolicy API operation to specify what should happen when your CloudWatch alarms are triggered.
After you create the scaling policies for your service, you can create the CloudWatch alarms that trigger the scaling events for your service with the CloudWatch PutMetricAlarm API operation. | http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html | 2017-09-19T17:27:45 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.aws.amazon.com |
Here at Intercom we have a set of tags that we use for 'the voice of our customer' i.e. you folks. Depending on what you're telling us, we'll tag the conversation as a “new feature request", a “bug report", or feedback around specific features like “feedback on A/B testing", or “feedback on events" and so on.
Or, a lot of the time we'll even get more specific with our tag names for known issues such as "request for multi-language support". When it's time to review this after a few weeks, you can visit the inbox and search for the tag. Then all the conversations you've tagged will be there in one place.
If you and your team are tagging consistently over time, you'll acquire valuable data that will directly inform how your product evolves. As a quick example, in Intercom, we get asked for a "reporting" feature and a "bulk exports" feature all the time. By tagging each over time, we were able to easily see which was being asked for most often. This helped to inform which we would build first.
Note: You can use our API to access your conversations and tags to analyze trends.
Make your research tags memorable
You’ll likely have lots of research projects to keep track of. So it’s important to add as much context as possible to each new tag you create. For example, instead of just calling your tag ‘Calendar Research’ you can add the month and the year so it reads something like ‘January_2016_Calendar_Research.’ This will help you find and manage tags you created during a specific time period when you’re revisiting or redesigning a feature, for example.
Here's more about how tags work | https://docs.intercom.com/learning-from-your-customers/organizing-the-feedback-you-get-with-tags | 2017-09-19T16:58:36 | CC-MAIN-2017-39 | 1505818685912.14 | [array(['https://uploads.intercomcdn.com/i/o/20860774/279395095e6ad03dbef4a88a/tag+-+packaging+2.0.png',
None], dtype=object) ] | docs.intercom.com |
This document outlines the changes that were made in ModeShape 5.3.0.Final. We hope you enjoy it!
This release addresses 11 bugs and 2 enhancements, most notably: - adding replica set support for the Mongo DB binary store to both the JSON and Wildfly configuration (see MODE-2635).3.0.Final to help us identify problems. Specifically, we ask that you test the following areas:
ModeShape 5.3.0.Final has these features:
All of the JCR 2.0 features supported in previous versions are still supported:
ModeShape also has features that go beyond the JCR API:
The following are the bugs, features and other issues that have been fixed in the 5.3.0.Final release: | http://docs.jboss.org/modeshape/5.3.0.Final/release.html | 2017-09-19T17:04:31 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.jboss.org |
Information for "Screen.trashmanager.15" Basic information Display titleHelp15 talk:Screen.trashmanager.15 Default sort keyScreen.trashmanager.15 Page length (in bytes)1,182 Page ID1093 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page1 Page protection EditAllow all users MoveAllow all users Edit history Page creatorDrmmr763 (Talk | contribs) Date of page creation19:08, 24 March 2008 Latest editorChris Davenport (Talk | contribs) Date of latest edit19:26, 1 February 2010 Total number of edits6 Total number of distinct authors4 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Help15_talk:Screen.trashmanager.15&action=info | 2015-06-30T05:33:38 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Revision history of "JMailHelper::cleanSubject::cleanSubject/11.1 to API17:JMailHelper::cleanSubject without leaving a redirect (Robot: Moved page) | https://docs.joomla.org/index.php?title=JMailHelper::cleanSubject/11.1&action=history | 2015-06-30T06:17:53 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Manage articles using the Front-end of Joomla! 1.5
From Joomla! Documentation
Contents
- 1 Manage articles using the Front-end of the Web site
- 2 Where Next?
- 3 Further information?
The Front-end of a Joomla! site is the public interface. It is the part seen by any visitor to the site and you have already used it for adding and altering its content. Authors, Editors and Publishers can login to this part of the site, as you have already seen.
-
There are other documents in this series that show you how to add new articles and how to alter them. (cross-refs) Most Joomla! sites allow this. The alternative is that content is added in the Back-end, thus limiting the types of people who can update the site.?
Further information
--Lorna Scammell December 2010 | https://docs.joomla.org/index.php?title=J1.5:Manage_articles_using_the_Front-end_of_Joomla!_1.5&oldid=36578 | 2015-06-30T06:28:01 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
available for injection.
- IOCBeanManagerProvider : Makes Errai's client-side bean manager, ClientBeanManager, available for injection.
- MessageBusProvider : Makes Errai's client-side MessageBus singleton available for injection.
- RequestDispatcherProvider : Makes an instance of the RequestDispatcher available for injection.
- RootPanelProvider : Makes GWT's RootPanel singleton injectable.
- SenderProvider : Makes MessageBus Sender<T> objects available for injection.
Implementing a Provider is relatively straight-forward. Consider the following two classes:
TimeService.java
TimeServiceProvider.java
If you are familiar with Guice, this is semantically identical to configuring an injector like so:
As shown in the above example code, the annotation @IOCProvider is used to denote top-level providers.
The classpath will be searched for all annotated providers at compile time. | https://docs.jboss.org/author/display/ERRAI/Container+Wiring | 2015-06-30T05:24:43 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.jboss.org |
Difference between revisions of "Adding a new Poll"
From Joomla! Documentation
Revision as of 12:19, 1 September 2012
The Joomla! Poll Manager allows you to create polls using the multiple choice format on any of your Web site pages. They can be either published in a module position using the Poll module or in a menu item using the Poll component.
Contents. | https://docs.joomla.org/index.php?title=Adding_a_new_Poll&diff=73478&oldid=8142 | 2015-06-30T05:46:55 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
You can optionally provide a dialog box in which the user can visually edit a property. The most common use of property editors is for properties that are themselves classes. An example is the Font property, for which the user can open a font dialog box to choose all the attributes of the font at once.
To provide a whole-property editor dialog box, override the property-editor class's Edit method.
Edit methods use the same Get and Set methods used in writing GetValue and SetValue methods. In fact, an Edit method calls both a Get method and a Set method. Because the editor is type-specific, there is usually no need to convert the property values to strings. The editor generally deals with the value "as retrieved."
When the user clicks the '...' button next to the property or double-clicks the value column, the Object Inspector calls the property editor's Edit method. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/cwgeditingthepropertyasawhole_xml.html | 2012-05-26T22:45:46 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
Dynamic introduced or overridden by a particular class. (The virtual method table, in contrast, includes all of the object's virtual methods, both inherited and introduced.) Inherited dynamic methods are dispatched by searching each ancestor's dynamic method list, working backwards through the inheritance tree.
To make a method dynamic, add the directive dynamic after the method declaration. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/cwgdynamicmethods_xml.html | 2012-05-26T22:45:40 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
The essential element of a graphic control is the way it paints its image on the screen. The abstract type TGraphicControl defines a method called Paint that you override to paint the image you want on your control.
The Paint method for the shape control needs to do several things:
type TSampleShape = class(TGraphicControl) . . . protected procedure Paint; override; . . . end;
class PACKAGE TSampleShape : public TGraphicControl { . . . protected: virtual void __fastcall Paint(); . . . };
Then write the method in the implementation part of the unit:
procedure TSampleShape.Paint; begin with Canvas do begin Pen := FPen; { copy the component's pen } Brush := FBrush; { copy the component's brush } case FShape of sstRectangle, sstSquare: Rectangle(0, 0, Width, Height); { draw rectangles and squares } sstRoundRect, sstRoundSquare: RoundRect(0, 0, Width, Height, Width div 4, Height div 4); { draw rounded shapes } sstCircle, sstEllipse: Ellipse(0, 0, Width, Height); { draw round shapes } end; end; end;
void __fastcall TSampleShape::Paint() { int X,Y,W,H,S; Canvas->Pen = FPen; // copy the component's pen Canvas->Brush = FBrush; // copy the component's brush W=Width; // use the component width H=Height; // use the component height X=Y=0; // save smallest for circles/squares if( W<H ) S=W; else S=H; switch(FShape) { case sstRectangle: // draw rectangles and squares case sstSquare: Canvas->Rectangle(X,Y,X+W,Y+H); break; case sstRoundRect: // draw rounded rectangles and squares case sstRoundSquare: Canvas->RoundRect(X,Y,X+W,Y+H,S/4,S/4); break; case sstCircle: // draw circles and ellipses case sstEllipse: Canvas->Ellipse(X,Y,X+W,Y+H); break; default: break; } }
Paint is called whenever the control needs to update its image. Controls are painted when they first appear or when a window in front of them goes away. In addition, you can force repainting by calling Invalidate, as the StyleChanged method does. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/cwgdrawingthecomponentimage_xml.html | 2012-05-26T22:45:34 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
When:
The VCL registers a method called MainWndProc as the window procedure for each type of component in an application. MainWndProc contains an exception-handling block, passing the message structure from Windows to a virtual method called WndProc and handling any exceptions by calling the application class's HandleException method.
MainWndProc is a nonvirtual method that contains no special handling for any particular messages. Customizations take place in WndProc, since each component type can override the method to suit its particular needs.
WndProc methods check for any special conditions that affect their processing so they can "trap" unwanted messages. For example, while being dragged, components ignore keyboard events, so the WndProc method of TWinControl passes along keyboard events only if the component is not being dragged. Ultimately, WndProc calls Dispatch, a nonvirtual method inherited from TObject, which determines which method to call to handle the message.
Dispatch uses the Msg field of the message structure to determine how to dispatch a particular message. If the component defines a handler for that particular message, Dispatch calls the method. If the component does not define a handler for that message, Dispatch calls DefaultHandler. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/cwgdispatchingmessages_xml.html | 2012-05-26T22:45:24 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
Table of Contents
Sitka Snippet Video - Copy Buckets (4:33)
Buckets is the name for a batch change functionality in Evergreen, or for a function that groups records in one place. Batch changes allow you to group together many records and enact changes on them all at once, instead of individually editing them. Buckets can also be used to create pull lists. Buckets allow you to track and work with your materials in arbitrary ways and more easily collaborate with others.
This chapter will demonstrate a variety of ways in which you can manage your copies with Buckets.
Currently there are copy and title record buckets. You may work on copy records with Copy Buckets and MARC records with Record Buckets.
Some possible uses for buckets are batch editing items, deleting items, and grouping like items temporarily to change their status or to create bibliographies and pull lists. While you can batch edit records a variety of ways in Evergreen, using common Windows functions such as select all and edit, buckets are useful for keeping records together over a period of time. For example, if you scan 20 items into the Item Status screen you can batch edit or delete from there by selecting all, but you have to enact those changes right then while records are all together on the screen. By utilizing Evergreen’s bucket functionality, you can create a bucket and add records to that bucket, and they stay there until you are ready to work with them, whether that be immediately or days later. Adding items to a bucket is like creating and saving a query. The record being in a bucket does not affect normal library functions such as circulation, as being in a bucket is not a status.
Buckets can be shared or private, and are associated with a login account.
Deleted records are not automatically removed from buckets. It is recommended that you always display Deleted? field in Bucket View.
When a bucket is retrieved, all information about the records in it is transferred to the workstation. It consumes the computer’s resources. It is recommended that a copy bucket contain no more than a few hundred records.
There are two ways to create a copy bucket. You can either create a bucket first, without accessing any copies, or you can access a copy record and choose to create the bucket from that view. We will demonstrate both methods here.
Create a Copy Bucket on Copy Buckets view
Select New Bucket from the Buckets dropdown list.
Type in a name and some description, if needed. Click Create Bucket.
The newly created bucket is the active bucket in Bucket View. Note that the bucket is numbered, and creating owner identified.
Create a Copy Bucket when Adding a Copy Record to a Bucket
You can also create a bucket from within a copy record.
When a copy is displayed on a screen such as Checkin or Item Status, you can add it to a copy bucket by choosing Actions → Add Items to Bucket.
You are prompted to add the record to an existing bucket or a new one. To add to a new bucket, type in a name in Name For New Bucket box, then click Add to New Bucket.
To delete a copy bucket, retrieve it on Bucket View, then select Delete Bucket on Buckets dropdown list. | http://docs.libraries.coop/sitka/cat-copy-bucket.html | 2018-11-12T20:17:57 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.libraries.coop |
DeleteDomain
Deletes the specified domain recordset and all of its domain records.
Request Syntax
{ "domainName": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- domainName
The specific domain name to delete.
Type: String
Required: Yes
Response Syntax
{ "operation": { : | https://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_DeleteDomain.html | 2018-11-12T20:41:41 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.aws.amazon.com |
Getting Started: 6 steps to setup success
Follow these 6 steps and you’ll get your Help Scout account up and running in no time. If you'd like to get a full tour of Help Scout and all it's
- Forward copies of your email into Help Scout
- Customize your Mailbox
- Collaborate with a friend
- Get organized and optimized
- Integrate your favorite apps
- What’s up, Docs?
Take 10-20 minutes on each step per day to get set up in a week, or be an overachiever and knock through all of them in a couple of hours.
Step step:
You have plenty of time to play and explore the rest of the options later :)
Edit Mailbox
Edit Mailbox gives you control of some basic Mailbox settings, like who has ownership over conversations after a reply, adding aliases to your Mailbox, or designing your Mailbox signature.
Here’s a 2 minute video that goes over each option in Edit Mailbox: to protect your emails from landing in the jaws of spam folders.
Send with your own servers, either by SMTP or Google oAuth:
- Pros: No need to give us permission, you’re sending your own mail! Your emails run the least risk of landing in spam.
- Cons: If something goes wrong,? Here’s some stuff to try:
- Add a note
- Assign a conversation to your friend, and check out your default Mailbox folders
- Change the status on a conversation to Pending or Closed, and see how that affects where the conversation goes in your folders.
- @mention your friend and check out the notification station
- Change your redirect settings on Send Reply or Add Note to find your ideal flow
- Pop in the same conversation at the same time to check out Traffic Cop
- Start a New Conversation
Step 4. Get organized and optimized
We know why Help Scout caught your eye, and it’s all about making life easier for you and your support team. The good life starts with good organizational tools: Tags and Folders.
When should I use a Tag?
Apply Tags any time you want to group or categorize conversations for reporting or tracking purposes, or if it would be helpful to get a visual indicator in the queue for the type of conversation coming in.
An example for when to use Tags are for feature requests, helpful to track the volume of the request for product roadmap purposes. You can also use Tags to trigger automatic Workflows, which means you can follow up with all of the conversations tagged when that feature is released.
Learn how to manage Tags: Add and Manage Tags
When should I use a Folder?
Use a Folder anytime you want quick and easy access to a particular group of conversations. Folders are better for time sensitive groupings like priority, or conversations you want to track accomplishment on, not just track in reports.
Examples of when to use Folders are things like conversations that have been waiting over X amount of time or conversations that might have received a negative customer satisfaction rating and need follow up.
Add your first Folder: Create Folders to organize Conversations
When should I use both?
Any time you have something time sensitive that requires immediate attention that you also want to track the progress on long term.
Examples of this include VIP customers or urgent conversations.
How can I make all this less work?
That’s the best question of all, my friend. Once you’ve got a couple of Tags and added a Folder, explore Workflows to get repeatable tasks out of the way with some simple automations: Getting started with Workflows.
Here’s a couple of Workflow templates you can use to inspire you on ways to make your life easier: Workflow templates and scenarios
Step 5. Integrate your favorite apps
Connect the dots: coms, lys, and ios. Help Scout plays nicely with many of your favorite Apps, or you can build custom integrations to pull in information from your internal CRM or CMS. Here are a few integrations we offer and how you’ll find them useful:
- Chat Integrations: Bring your chat history into Help Scout so your team has all customer interactions under the same roof. We sync with Olark, SnapEngage, Chatra, and more.
- Phone: Manage voicemails from several popular providers, or connect a call center software like Aircall or TalkDesk if you need something more robust.
- CRM: Sync your sales and support efforts by connecting your CRM to Help Scout conversations. Capsule and Pipedrive are the most popular.
-)
- Ecommerce: Shopify, WooCommerce, and Magento. Get your customer’s order information connected to every Help Scout conversation. That’s what we have in store for your store.
- Custom Apps: Maybe Todd in IT hacked together some internal management system years ago, or Beth in ops found a perfect CMS solution for you a little off the beaten path. Connect with all the oddball internal systems or anything your heart desires with a Custom App: Build a Custom App.
- Requires slightly above average technical knowledge, just FYI.
Step 6. What’s up, Docs?
The secret to offering speedy support is a well stocked knowledge base. Why? You have all the ingredients for success at both your customers and your support team’s fingertips. If knowledge is power, a knowledge base is your key to the kingdom.
This video overview covers everything you need to know to get your Docs site up and running:
Ok, you got us here…this step takes more than 20 minutes as a great knowledge base evolves over time. Use Reports to see what customer’s are looking for in your Docs or what Saved Replies makes your team sound like a broken record to inspire new documentation.
Once you’ve got a couple articles in your Docs, your team can use the Docs search bar to easily share with customers in replies.
But wait…there’s more!
Once you’ve got these steps under control, there’s plenty more to explore in Help Scout. Set up Beacon to give customers help where they need it most, set up Office Hours to filter reports, or start seeing results with your customers using satisfaction ratings.
Continue your journey with Help Scout by checking out our Docs site or reaching out to our helpful customer success team to hone your Help Scout experience. | https://docs.helpscout.net/article/831-6-steps-to-setup-success | 2018-11-12T19:58:29 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.helpscout.net |
Hybrid Cloud Management Classic 2018.05 - Ultimate
Hybrid Cloud Management Classic - Ultimate (HCM Ultimate) is an integrated automation and management solution for private and hybrid cloud environments that accelerates transformation of applications and IT service delivery with efficiency and speed. The HCM Ultimate enables IT to provide Infra/App Services and enable deployment by brokering across multiple environments (traditional, private, public, etc.). End-user consumers can be business users through service catalogs, developers and IT operators using APIs or the user interface. HCM Ultimate enables Infra & Ops (I/O) pros to get visibility, governance, and operational control through a single pane of glass UI or programmatically.
Suite Components
The following capabilities are available:
Additional Resources
Micro Focus Software Support Online | https://docs.microfocus.com/itom/Hybrid_Cloud_Management_Classic:2018.05_Ultimate/Home | 2018-11-12T20:48:46 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.microfocus.com |
This section includes topics relating directly to Spring Boot applications. Actuator app, look at the
conditionsendpoint (
/actuator/conditionsor the JMX equivalent) for the same information.
Actuator app, look at the
configpropsendpoint.
bindmethod on the
Binderto pull configuration values explicitly out of the
Environmentin a relaxed manner. It is often used with a prefix.
@Valueannotations that bind directly to the
Environment.
@ConditionalOnExpressionannotations that switch features on and off in response to SpEL expressions, normally evaluated with placeholders customizations:
addListenersand
addInitializersmethods on
SpringApplicationbefore you run it.
context.initializer.classesor
context.listener.classesproperties.
“Section 23); } } }
You can use the
ApplicationBuilder class to create parent/child
ApplicationContext
hierarchies. See “Section 23,. | https://docs.spring.io/spring-boot/docs/current/reference/html/howto-spring-boot-application.html | 2018-11-12T21:01:22 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.spring.io |
VAST Integration
To integrate your ad server with AWS Elemental MediaTailor, your ad server must send XML that conforms to the IAB specifications for the supported versions of VAST and VMAP. You can use a public VAST validator to ensure that these tags are well-formed.
Make sure that your ad server's VAST response contains IAB compliant
TrackingEvents elements and
standard
event types like
impression. If you don't include
standard tracking events, AWS Elemental MediaTailor rejects the VAST response and
doesn't provide an ad
fill for the break.
VAST 3.0 introduced support for ad pods, which is the delivery of a set of sequential linear ads. With AWS Elemental MediaTailor if a specific ad in an ad pod is not available, AWS Elemental MediaTailor logs an error on CloudWatch, in the interactions log of the ad decision servers, and tries to insert the next ad in the pod. In this way, AWS Elemental MediaTailor iterates through the ads in the pod until it finds one that it can use.
Targeting
To target specific players for your ads, you can create templates for your ad tags and URLs. For more information, see Dynamic Ad Variables in AWS Elemental MediaTailor.
AWS Elemental MediaTailor proxies the player's
user-agent and
x-forwarded-for headers when it sends the ad server VAST request
and when it makes the server-side tracking calls. Make sure that your ad server can
handle these headers. Alternatively, you can use
[session.user_agent]
or
[session.client_ip] and pass these values in query strings on the ad
tag and ad URL. For more information, see Session Data.
Ad Calls
MediaTailor calls your VAST ads URL as defined in your configuration, substituting any player-specific or session-specific parameters when making the ad call. MediaTailor follows up to three levels of VAST wrappers and redirects in the VAST response. In live streaming scenarios, MediaTailor makes ad calls simultaneously at ad break start for connected players. In practice, due to jitter, these ad calls can be spread out over a few seconds. Make sure that your ad server can handle the number of concurrent connections this type of calling requires. MediaTailor does not currently support pre-fetching VAST responses.
Creative Handling
When AWS Elemental MediaTailor receives the ADS VAST response, for each creative it
identifies the
highest bit rate
MediaFile for transcoding and uses this as its source.
It sends this file to the on-the-fly transcoder for transformation into renditions
that fit the player's master manifest bit rates and resolutions. For best results,
make sure that your highest bit rate media file is a high-quality MP4 asset with
valid manifest presets. When manifest presets are not valid, the transcode jobs
fail, resulting in no ad shown. Examples of presets that are not valid include
unsupported input file formats, like ProRes, and certain rendition specifications,
like the resolution 855X481.
Creative Indexing
AWS Elemental MediaTailor uniquely indexes each creative by the value of the
id
attribute provided in the
<Creative> element. If a creative's ID
is not specified, AWS Elemental MediaTailor uses the media file URL for the index.
The following example declaration shows the creative ID:
<Creatives> <Creative id="57859154776" sequence="1">
If you define your own creative IDs, use a new, unique ID for each creative. Do not reuse creative IDs. AWS Elemental MediaTailor stores creative content for repeated use, and finds each by its indexed ID. When a new creative comes in, the service first checks its ID against the index. If the ID is present, AWS Elemental MediaTailor uses the stored content, rather than reprocessing the incoming content. If you reuse a creative ID, AWS Elemental MediaTailor uses the older, stored ad and does not play your new ad. | https://docs.aws.amazon.com/mediatailor/latest/ug/vast-integration.html | 2018-11-12T20:21:07 | CC-MAIN-2018-47 | 1542039741087.23 | [] | docs.aws.amazon.com |
New in version 0.7.
Plugin Name: UdpOutput
Output plugin that delivers Heka message data to a specified UDP or Unix datagram socket location.
Config:
Network type to use for communication. Must be one of “udp”, “udp4”, “udp6”, or “unixgram”. “unixgram” option only available on systems that support Unix datagram sockets. Defaults to “udp”.
Address to which we will be sending the data. Must be IP:port for net types of “udp”, “udp4”, or “udp6”. Must be a path to a Unix datagram socket file for net type “unixgram”.
Local address to use on the datagram packets being generated. Must be IP:port for net types of “udp”, “udp4”, or “udp6”. Must be a path to a Unix datagram socket file for net type “unixgram”.
Name of registered encoder plugin that will extract and/or serialized data from the Heka message.
Maximum size of message that is allowed to be sent via UdpOutput. Messages which exceeds this limit will be dropped. Defaults to 65507 (the limit for UDP packets in IPv4).
Example:
[PayloadEncoder] [UdpOutput] address = "myserver.example.com:34567" encoder = "PayloadEncoder" | https://hekad.readthedocs.io/en/v0.10.0/config/outputs/udp.html | 2018-11-12T19:46:59 | CC-MAIN-2018-47 | 1542039741087.23 | [] | hekad.readthedocs.io |
Plugin Name: RegexSplitter
A RegexSplitter considers any text that matches a specified regular expression to represent a boundary on which records should be split. The regular expression may consist of exactly one capture group. If a capture group is specified, then the captured text will be included in the returned record. If not, then the returned record will not include the text that caused the regular expression match.
Config:
Regular expression to be used as the record boundary. May contain zero or one specified capture groups.
Specifies whether the contents of a delimiter capture group should be appended to the end of a record (true) or prepended to the beginning (false). Defaults to true. If the delimiter expression does not specify a capture group, this will have no effect.
Example:
[mysql_slow_query_splitter]
type = "RegexSplitter"
delimiter = '\n(# User@Host:)'
delimiter_eol = false
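The effect of the capture group and of delimiter_eol can be demonstrated outside Heka with plain regular expressions. The following Python snippet is only a conceptual illustration of the splitting rule described above, not the splitter's actual implementation:

# Conceptual demo of capture-group record splitting (not Heka's implementation).
import re

log = "\n# User@Host: alice\nSELECT 1;\n# User@Host: bob\nSELECT 2;\n"
delimiter = r"\n(# User@Host:)"

# re.split keeps the captured delimiter text as separate list items ...
parts = re.split(delimiter, log)
# ... which we prepend to the record that follows, mirroring delimiter_eol = false.
records = [parts[i] + parts[i + 1] for i in range(1, len(parts) - 1, 2)]
for rec in records:
    print(repr(rec))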
Event ID 1388 or 1988: A lingering object is detected
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
Log Name:      Directory Service
Source:        Microsoft-Windows-ActiveDirectory_DomainService
Date:          5/3/2008 3:34:01 PM
Event ID:      1388
Task Category: Replication
Level:         Error
Keywords:      Classic
User:          ANONYMOUS LOGON
Computer:      DC3.contoso.com
Description:
Another domain controller (DC) has attempted to replicate into this DC an object which is not present in the local Active Directory Domain Services database. The object may have been deleted and already garbage collected (a tombstone lifetime or more has passed since the object was deleted) on this DC.
Log Name:      Directory Service
Source:        Microsoft-Windows-ActiveDirectory_DomainService
Date:          2/7/2008 8:20:11 AM
Event ID:      1988
Task Category: Replication
Level:         Error
Keywords:      Classic
User:          ANONYMOUS LOGON
Computer:      DC5.contoso
dn: CN=94fdebc6-8eeb-4640-80de-ec52b9ca17fa,CN=Operations,CN=ForestUpdates,CN=Configuration,DC=<ForestRootDomain>
changetype: add
objectClass: container
showInAdvancedViewOnly: TRUE
name: 94fdebc6-8eeb-4640-80de-ec52b9ca17fa
objectCategory: CN=Container,CN=Schema,CN=Configuration,DC=<ForestRootDomain>
Crate sample
A crate of fundamentals for audio PCM DSP.
- Use the Sample trait to remain generic across bit-depth.
- Use the Frame trait to remain generic over channel layout.
- Use the Signal trait for working with Iterators that yield Frames.
- Use the slice module for working with slices of Samples and Frames.
- See the conv module for fast conversions between slices, frames and samples.
- See the types module for provided custom sample types.
- See the interpolate module for sample rate conversion and scaling.
BarNet offers its members access to the Oxford English Dictionary from within the BarNet network. Using the OED outside the BarNet network is not directly possible. By following the steps in this document, you can set up a workaround for this problem.
This document is for people who ...
With this solution, it is no longer necessary to set up a VPN connection to the BarNet network to use the OED.
Configure your browser to use the BarNet proxy.pac file for automatic proxy configuration. Then log in to the BarNet proxy server with your BarNet username and password.
We have encountered difficulties with this solution on the following platforms:
When browsing to the OED site, you suddenly see this screen instead of the familiar screen...
In the Tools menu, go to Internet Options.
Go to the Connections tab. If you are connected to an Ethernet network, select LAN settings at the bottom. If you are dialling in, select the dial-up method at the top and then select the Settings button.
Tick the Use automatic configuration script box, and enter the address of the BarNet proxy.pac file there.
Then close all the dialog boxes (via the OK buttons) and reload the page.
Then you are asked for a password for the BarNet proxy server.
Fill in your BarNet username and password.
The authentication is valid for two hours. After that you're asked
again for your username and password.
And you have access to the OED again!
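If you prefer to verify the proxy credentials from a script rather than a browser, a quick check along these lines can help. The proxy host and port below are placeholders — substitute the details BarNet provides; this is not an official BarNet example:

# Hypothetical command-line check that the proxy credentials work.
# The proxy host and port below are placeholders, not BarNet values.
import requests  # pip install requests

proxies = {
    "http":  "http://USERNAME:PASSWORD@proxy.example.com:3128",
    "https": "http://USERNAME:PASSWORD@proxy.example.com:3128",
}
resp = requests.get("http://www.oed.com/", proxies=proxies, timeout=10)  # the OED address you normally browse to
print(resp.status_code)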
Frequently Asked Questions
Frequently asked questions on the usage of schedule.
How to execute jobs in parallel?
I am trying to execute 50 items every 10 seconds, but from my logs it says it executes every item in a 10-second schedule serially. Is there a workaround?
By default, schedule executes all jobs serially. The reasoning behind this is that it would be difficult to find a model for parallel execution that makes everyone happy.
You can work around this restriction by running each of the jobs in its own thread:
import threading
import time
import schedule

def job():
    print("I'm running on thread %s" % threading.current_thread())

def run_threaded(job_func):
    job_thread = threading.Thread(target=job_func)
    job_thread.start()

schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)
schedule.every(10).seconds.do(run_threaded, job)

while 1:
    schedule.run_pending()
    time.sleep(1)
If you want tighter control over the number of threads, use a shared job queue and one or more worker threads:
import Queue  # on Python 3, this module is named `queue` (lowercase)
import time
import threading
import schedule

def job():
    print("I'm working")

def worker_main():
    while 1:
        job_func = jobqueue.get()
        job_func()
        jobqueue.task_done()

jobqueue = Queue.Queue()

schedule.every(10).seconds.do(jobqueue.put, job)
schedule.every(10).seconds.do(jobqueue.put, job)
schedule.every(10).seconds.do(jobqueue.put, job)
schedule.every(10).seconds.do(jobqueue.put, job)
schedule.every(10).seconds.do(jobqueue.put, job)

worker_thread = threading.Thread(target=worker_main)
worker_thread.start()

while 1:
    schedule.run_pending()
    time.sleep(1)
This model also makes sense for a distributed application where the workers are separate processes that receive jobs from a distributed work queue. I like using beanstalkd with the beanstalkc Python library.
How to continuously run the scheduler without blocking the main thread?
Run the scheduler in a separate thread. Mrwhick wrote up a nice solution to this problem here (look for run_continuously()).
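A minimal version of that pattern looks roughly like the following sketch, which runs run_pending() in a daemon thread; it is modeled on the referenced solution rather than copied from it:

import threading
import time
import schedule

def run_continuously(interval=1):
    """Run schedule.run_pending() in a background thread until the returned event is set."""
    stop_event = threading.Event()

    def loop():
        while not stop_event.is_set():
            schedule.run_pending()
            time.sleep(interval)

    threading.Thread(target=loop, daemon=True).start()
    return stop_event

schedule.every(5).seconds.do(lambda: print("background job"))
stop = run_continuously()
# ... the main thread stays free; call stop.set() to shut the scheduler down.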
Does schedule support timezones?
Vanilla schedule doesn’t support timezones at the moment. If you need this functionality please check out @imiric’s work here. He added timezone support to schedule using python-dateutil.
What if my task throws an exception?
Schedule doesn’t catch exceptions that happen during job execution. Therefore any exceptions thrown during job execution will bubble up and interrupt schedule’s run_xyz function.
If you want to guard against exceptions you can wrap your job function in a decorator like this:
import functools
import schedule

# Because this decorator takes arguments, it needs an extra wrapping level.
def catch_exceptions(cancel_on_failure=False):
    def catch_exceptions_decorator(job_func):
        @functools.wraps(job_func)
        def wrapper(*args, **kwargs):
            try:
                return job_func(*args, **kwargs)
            except:
                import traceback
                print(traceback.format_exc())
                if cancel_on_failure:
                    return schedule.CancelJob
        return wrapper
    return catch_exceptions_decorator

@catch_exceptions(cancel_on_failure=True)
def bad_task():
    return 1 / 0

schedule.every(5).minutes.do(bad_task)
Another option would be to subclass Schedule like @mplewis did in this example.
How can I run a job only once?
def job_that_executes_once():
    # Do some work ...
    return schedule.CancelJob

schedule.every().day.at('22:30').do(job_that_executes_once)
How can I cancel several jobs at once?
You can cancel the scheduling of a group of jobs by selecting them with a unique identifier.
def greet(name):
    print('Hello {}'.format(name))

schedule.every().day.do(greet, 'Andrea').tag('daily-tasks', 'friend')
schedule.every().hour.do(greet, 'John').tag('hourly-tasks', 'friend')
schedule.every().hour.do(greet, 'Monica').tag('hourly-tasks', 'customer')
schedule.every().day.do(greet, 'Derek').tag('daily-tasks', 'guest')

schedule.clear('daily-tasks')
Will prevent every job tagged as daily-tasks from running again.
I’m getting an AttributeError: 'module' object has no attribute 'every' when I try to use schedule. How can I fix this?
This happens if your code imports the wrong schedule module. Make sure you don’t have a schedule.py file in your project that overrides the schedule module provided by this library.
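A quick way to check which module is actually being imported is to print its file path; if it points into your project instead of site-packages, you have found the conflict:

import schedule
print(schedule.__file__)  # should point into site-packages, not your project directory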
How can I add generic logging to my scheduled jobs?
The easiest way to add generic logging functionality to your schedule job functions is to implement a decorator that handles logging in a reusable way:
import functools
import time
import schedule

# This decorator can be applied to any job function to log its start and completion.
def with_logging(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print('LOG: Running job "%s"' % func.__name__)
        result = func(*args, **kwargs)
        print('LOG: Job "%s" completed' % func.__name__)
        return result
    return wrapper

@with_logging
def job():
    print('Hello, World.')

schedule.every(3).seconds.do(job)

while 1:
    schedule.run_pending()
    time.sleep(1)
Pre- and post-send signals
Anymail provides pre-send and post-send signals you can connect to trigger actions whenever messages are sent through an Anymail backend.
Be sure to read Django’s listening to signals docs for information on defining and connecting signal receivers.
Pre-send signal
You can use Anymail’s pre_send signal to examine or modify messages before they are sent.
For example, you could implement your own email suppression list:
from anymail.exceptions import AnymailCancelSend
from anymail.signals import pre_send
from django.dispatch import receiver
from email.utils import parseaddr

from your_app.models import EmailBlockList

@receiver(pre_send)
def filter_blocked_recipients(sender, message, **kwargs):
    # Cancel the entire send if the from_email is blocked:
    if not ok_to_send(message.from_email):
        raise AnymailCancelSend("Blocked from_email")
    # Otherwise filter the recipients before sending:
    message.to = [addr for addr in message.to if ok_to_send(addr)]
    message.cc = [addr for addr in message.cc if ok_to_send(addr)]

def ok_to_send(addr):
    # This assumes you've implemented an EmailBlockList model
    # that holds emails you want to reject...
    name, email = parseaddr(addr)  # just want the <email> part
    try:
        EmailBlockList.objects.get(email=email)
        return False  # in the blocklist, so *not* OK to send
    except EmailBlockList.DoesNotExist:
        return True  # *not* in the blocklist, so OK to send
Any changes you make to the message in your pre-send signal receiver will be reflected in the ESP send API call, as shown for the filtered “to” and “cc” lists above. Note that this will modify the original EmailMessage (not a copy)—be sure this won’t confuse your sending code that created the message.
If you want to cancel the message altogether, your pre-send receiver function can raise an AnymailCancelSend exception, as shown for the “from_email” above. This will silently cancel the send without raising any other errors.
Post-send signal
You can use Anymail’s post_send signal to examine messages after they are sent. This is useful to centralize handling of the sent status for all messages.
For example, you could implement your own ESP logging dashboard (perhaps combined with Anymail’s event-tracking webhooks):
from anymail.signals import post_send
from django.dispatch import receiver

from your_app.models import SentMessage

@receiver(post_send)
def log_sent_message(sender, message, status, esp_name, **kwargs):
    # This assumes you've implemented a SentMessage model for tracking sends.
    # status.recipients is a dict of email: status for each recipient
    for email, recipient_status in status.recipients.items():
        SentMessage.objects.create(
            esp=esp_name,
            message_id=recipient_status.message_id,  # might be None if send failed
            email=email,
            subject=message.subject,
            status=recipient_status.status,  # 'sent' or 'rejected' or ...
        )
anymail.signals.post_send
Signal delivered after each EmailMessage is sent.
If you register multiple post-send receivers, Anymail will ensure that all of them are called, even if one raises an error.
Your post_send receiver must be a function with this signature:
def my_post_send_handler(sender, message, status, esp_name, **kwargs):
(You can name it anything you want.)
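If you prefer not to use the @receiver decorator, you can also connect the handler explicitly with Django's standard signal API:

from anymail.signals import post_send

post_send.connect(my_post_send_handler)  # equivalent to using @receiver(post_send)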
Using Scripting Variables Example
This example creates a simple mini-report inside the log file for each target in the job. This simple example demonstrates the use of the scripting variables. In this case, the values of the variables are displayed as part of the report using the echo command built into the DOS command-line interface.
The following figure shows some of the variable names surrounded by single quotes. You do not need to enclose variable names in quotes. These quotes are there only to show emphasis on the variable values in the mini-report example.
The following output shows the mini-report displayed immediately after the target starts to be built. You could add this same script as a post-build step for the target and a similar mini-report would be created when the target is done being generated.
Building target: WebWorks Help 5.0
--- Target Mini-Report for WebWorks Help 5.0 ---
‘WebWorks Help 5.0’ is running from the ‘Scripting Demo’ job located at C:\Documents and Settings\doc\My Documents\WebWorks Automap\Jobs\Demo.
Currently running the ‘PreBuild’ script for the WebWorks Help 5.0 target. ‘WebWorks Help 5.0’ will deploy to ‘Online Help’ located in the C:\AutoMapOutput\Help Systems\Online Help folder.
So far, 0 errors have occurred.
--- End of WebWorks Help 5.0 mini-report ---
Generation started at 9:57:15 AM
Initializing file information
Updating documents.
Applying settings to WebWorks.doc, 1 of 1.
Contributing
Getting Involved
So you’d like to contribute? That’s awesome! We would love to have your help, especially in the following ways:
- Making Pull Requests for code, tests, or docs
- Commenting on open issues and pull requests
- Suggesting new features
Pull Requests
Start by submitting a pull request on GitHub against the master branch of the repository. Your pull request should provide a good description of the change you are making, and/or the bug that you are fixing. This will then trigger a build in Travis-CI where your contribution will be tested to verify it does not break existing functionality.
Running Tests Locally
You can make use of tox >= 1.8 to test the entire matrix of options:
- with / without lxml
- pygeoif vs shapely
- py26,py27,py32,py33,py34
as well as pep8 style checking in a single call (this approximates what happens when the package is run through Travis-CI)
# Install tox
pip install tox>=1.8

# Run tox
tox

# Or optionally
# (to skip tests for Python versions you do not have installed)
tox --skip-missing-interpreters
This will run through all of the tests and produce an output similar to:
______________________________________________________ summary ______________________________________________________
SKIPPED: py26: InterpreterNotFound: python2.6
  py27: commands succeeded
SKIPPED: py32: InterpreterNotFound: python3.2
SKIPPED: py33: InterpreterNotFound: python3.3
  py34: commands succeeded
SKIPPED: py26-shapely: InterpreterNotFound: python2.6
SKIPPED: py26-lxml: InterpreterNotFound: python2.6
  py27-shapely: commands succeeded
  py27-lxml: commands succeeded
SKIPPED: py32-shapely: InterpreterNotFound: python3.2
SKIPPED: py32-lxml: InterpreterNotFound: python3.2
SKIPPED: py33-shapely: InterpreterNotFound: python3.3
SKIPPED: py33-lxml: InterpreterNotFound: python3.3
  py34-shapely: commands succeeded
  py34-lxml: commands succeeded
SKIPPED: py26-shapely-lxml: InterpreterNotFound: python2.6
  py27-shapely-lxml: commands succeeded
SKIPPED: py32-shapely-lxml: InterpreterNotFound: python3.2
SKIPPED: py33-shapely-lxml: InterpreterNotFound: python3.3
  py34-shapely-lxml: commands succeeded
  pep8: commands succeeded
congratulations :)
You are primarily looking for the congratulations :) line at the bottom, signifying that the code is working as expected on all configurations available.
The mobile time clock provides a way for your employees to punch in/out from anywhere.
Not all schedules have the mobile time clock enabled.
As a manager you can disable the mobile time clock add-on. You can control what add-ons are enabled for your schedule from Settings -> Add Ons.
Punching In/Out
To punch in/out go to the Today page and click on Clock In/Out. This will bring out the time clock.
You may see a screen asking you to enable location tracking if your manager has enabled this feature.
You can select the position and location you are punching in for and add a note if you would like.
If you punch in successfully you should see 'You are clocked in' in red.
For Managers
You can enable GPS tracking and set a maximum distance threshold from Settings -> Add On Settings -> Mobile Time Clock.
Mobile Time Clock FAQs
Is there a GPS locator or picture capability when clocking in?
Yes, and no. The mobile time clock has the capability to save the GPS position of the user when they clock in and out. But we do not track this constantly.
We currently do not offer the ability to take a picture of the user when they clock in or out.