content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Connect to Workbooks¶
First things first: to be able to talk to Excel from Python or call
RunPython in VBA, you need to establish a connection with
an Excel Workbook.
Python to Excel¶
There are various ways to connect to an Excel workbook from Python:
wb = Workbook()connects to a a new workbook
wb = Workbook.active()connects to the active workbook (supports multiple Excel instances)
wb = Workbook('Book1')connects to an unsaved workbook
wb = Workbook('MyWorkbook.xlsx')connects to a saved (open) workbook by name (incl. xlsx etc.)
wb = Workbook(r'C:\path\to\file.xlsx')connects to a saved (open or closed) workbook by path
Note
When specifying file paths on Windows, you should either use raw strings by putting
an
r in front of the string or use double back-slashes like so:
C:\\path\\to\\file.xlsx.
Excel to Python (RunPython)¶
To make a connection from Excel, i.e. when calling a Python script with
RunPython, use
Workbook.caller(), see
Call Python with “RunPython”.
Check out the section about Debugging to see how you can call a script from both sides, Python and Excel, without
the need to constantly change between
Workbook.caller() and one of the methods explained above.
User Defined Functions (UDFs)¶
UDFs work differently and don’t need the explicit instantiation of a
Workbook, see UDF Tutorial.
However,
xw.Workbook.caller() can be used in UDFs although just read-only. | http://docs.xlwings.org/en/v0.7.1/connect_to_workbook.html | 2018-12-10T03:48:28 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.xlwings.org |
Protecting File Downloads on WP Engine
By default, Easy Digital Downloads will protect download files inside of the wp-content/uploads/edd/ folder with a .htaccess, but this will only work if your site is running on Apache. If your site is running on NGINX, as WP Engine uses, admin area of your WP Engine account.
This redirect can be added from the Redirects menu. | https://docs.easydigitaldownloads.com/article/683-protecting-file-downloads-on-wp-engine | 2018-12-10T03:58:54 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/54f9d692e4b086c0c096bde7/file-SH5mBGvIf9.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
Getting Started
RadDataForm is easily setup by adding the control to a Page and setting its Item property to an object which properties are to be visualized in the Data Form. In order to use the control in your application you need to reference the following binaries:
- Telerik.Core.dll
- Telerik.Data.dll
- Telerik.UI.Xaml.Input.dll
- Telerik.UI.Xaml.Primitives.dll
- Telerik.UI.Xaml.Controls.Data.dll
After referencing the required assemblies you can proceed with setting up the control itself. The only property that is required to be set in order to visualize any data in our application is the Item property. It is of type object and gets or sets the business item that is about to be visualized. It is important to keep in mind that the control will resolve only the public properties of the business class by default.
How the control works?
For each property exposed by the object set as Item the RadDataForm control creates an EntityPropertyControl. This control is responsible for visualizing the information held by the respective property. The RadDataForm supports out of the box the following types and creates the respective EntityPropertyControl:
- bool - BooleanEditor
- string - StringEditor
- double - NumericEditor
- DateTime - DateEditor or TimeEditor
- enum - EnumEditor
Example
Create the business item
public class UserData { private string name; public string Name { get { return name; } set { name = value; } } private string lastName; public string LastName { get { return lastName; } set { lastName = value; } } private int age; public int Age { get { return age; } set { age = value; } } private DateTime birthDate; public DateTime BirthDate { get { return birthDate; } set { birthDate = value; } } public UserData(string name = "", string lastName = "", int age = -1, DateTime birthDate = new DateTime()) { this.Name = name; this.LastName = lastName; this.Age = age; this.BirthDate = birthDate; } }
Initialize the RadDataForm and add it to the page
public MainPage() { this.InitializeComponent(); var form = new RadDataForm(); form.Item = new UserData("FirstName", "LastName", 19, new DateTime(1995, 10, 24)); this.root.Children.Add(form); }
The result should be similar to the picture below.
Figure 1:
| https://docs.telerik.com/windows-universal/controls/raddataform/dataform-gettingstarted | 2018-12-10T03:46:08 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['images/dataform-overview.png', None], dtype=object)] | docs.telerik.com |
The following commands and special variables may be used anywhere in a command string except inside a [//] or {//} prompt. Examples of the use of all these commands and special variables can be found in the XST files in \X_WIN95\X_LANSA\SOURCE shipped with Visual LANSA to provide custom execute dialogs from within the LANSA development environment:
Note: If any of these commands are mistyped and the '%' is the first character on the line, the command will be taken as a Symbol Name and the rest of the line will be set as the value of that symbol. The resulting command line will look as if the whole line has been ignored. | https://docs.lansa.com/14/en/lansa015/content/lansa/depb3_0160.htm | 2018-12-10T04:04:58 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.lansa.com |
Exporting Sales / Earnings PDF
Under Downloads → Reports → Export, at the top of the page you'll see a section like this:
Pressing the button will either render a PDF in your browser or ask you to download one. It shows the same information as HTML chart under the Reports tab, simply in a PDF instead of HTML.
Currently you can only get all data for current year, but more options are on the way.
Here are the first and last pages of a report:
| https://docs.easydigitaldownloads.com/article/903-exporting-sales-earnings-pdf | 2018-12-10T04:46:10 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5570b37be4b027e1978e54a9/file-u2nWVRGh8Q.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5570bfb5e4b01a224b4290de/file-QoQ727cNNJ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5570bfc7e4b01a224b4290e1/file-c4VZP0LAMT.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
Exporting File Download Records to CSV
Under Downloads → Reports → Export, at the bottom of the page you'll find the box for "Export Download History in CSV". This feature lists every time any file is downloaded, no matter how many times.
Simply choose a month and press the "Generate CSV" button and you'll be asked to save a CSV file. The fields in that CSV are:
- Date
- Downloaded by
- IP Address
- Product File | https://docs.easydigitaldownloads.com/article/907-exporting-file-download-records-to-csv | 2018-12-10T04:23:09 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/5570e5e9e4b027e1978e55df/file-pwKstUAodt.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
Extra Invalidation Handlers¶
This library provides decorators that build on top of the
CacheInvalidator
to simplify common operations.
Tag Handler¶
New in version 1.3: The tag handler was added in FOSHttpCache 1.3. If you are using an older
version of the library and can not update, you need to use
CacheInvalidator::invalidateTags.
The tag handler helps you to mark responses with tags that you can later use to
invalidate all cache entries with that tag. Tag invalidation works only with a
CacheInvalidator that supports
CacheInvalidator::INVALIDATE.
Setup¶
Note
Make sure to configure your proxy for tagging first.
The tag handler is a decorator around the
CacheInvalidator. After
creating the invalidator with a proxy client
that implements the
BanInterface, instantiate the
TagHandler:
use FOS\HttpCache\Handler\TagHandler; // $cacheInvalidator already created as instance of FOS\HttpCache\CacheInvalidator $tagHandler = new TagHandler($cacheInvalidator);
Usage¶
With tags you can group related representations so it becomes easier to invalidate them. You will have to make sure your web application adds the correct tags on all responses. You can add tags to the handler using:
$tagHandler->addTags(array('tag-two', 'group-a'));
Before any content is sent out, you need to send the tag header:
header(sprintf('%s: %s', $tagHandler->getTagsHeaderName(), $tagHandler->getTagsHeaderValue() ));
Tip
If you are using Symfony with the FOSHttpCacheBundle, the tag header is set automatically. You also have additional methods of defining tags with annotations and on URL patterns.
Assume you sent four responses:
You can now invalidate some URLs using tags:
$tagHandler->invalidateTags(array('group-a', 'tag-four'))->flush();
This will ban all requests having either the tag
group-a /or/
tag-four.
In the above example, this will invalidate
/two,
/three and
/four.
Only
/one will stay in the cache. | https://foshttpcache.readthedocs.io/en/1.4/invalidation-handlers.html | 2018-12-10T04:45:05 | CC-MAIN-2018-51 | 1544376823303.28 | [] | foshttpcache.readthedocs.io |
End a scroll view begun with a call to BeginScrollView.
See Also: GUILayout.BeginScrollView
Scroll View in the Game View..
using UnityEngine;
public class ExampleScript : MonoBehaviour { // The variable to control where the scrollview 'looks' into its child elements. Vector2 scrollPosition;
// The string to display inside the scrollview. 2 buttons below add & clear this string. string longString = "This is a long-ish string";
void"; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/GUILayout.EndScrollView.html | 2018-12-10T05:24:18 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.unity3d.com |
single-line text field where the user can edit a string.
Text field in the GameView.
using UnityEngine;
public class ExampleScript : MonoBehaviour { string stringToEdit = "Hello World";
void OnGUI() { // Make a text field that modifies stringToEdit. stringToEdit = GUILayout.TextField(stringToEdit, 25); } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/GUILayout.TextField.html | 2018-12-10T04:19:25 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.unity3d.com |
Adding files, such as images, that you can use in your course has the following steps.
For more information, see the following topics.
You manage most files for your course, including image files, on the Files & Uploads page. This page lists the files that you upload, along with the following information about the files.
The file name.
A preview of the file (if the file is an image) or an icon that represents the file.
Note
If you do not want to see a preview of your files, select the Hide File Preview checkbox in the left pane. To show a preview again, clear this checkbox.
The file type.
The date the file was added.
The URLs that you use to make your files visible in your course or on the internet.
An indication of whether the file is locked.
This page also includes a Search option to help you find specific files. For more information, see Find an Uploaded File.
Note
For textbooks or for course handouts that you want to make available in the sidebar of the Course page, see Textbooks or Add a Course Handout.
The maximum size for each file is 10 MB. We recommend that you use standard compression tools to reduce PDF and image file sizes before you add the files to your course.
You can upload multiple files at a time.
If you have very large audio files or large data sets to share with your students, do not use the Files & Uploads page to add these files to your course. Instead, use another hosting service to host these files.
Note
Ensure that you obtain copyright permissions for files and images you upload to your course, and that you cite sources appropriately.
To upload one or more files, follow these steps.
Create or locate the files on your computer.
Note
When Studio generates URLs for an uploaded file, the file name becomes part of the URL and is visible to students when they access the file. Avoid using file names that contain information that you do not want to share about the file contents, such as “Answerkey.pdf”.
On the Content menu, select Files & Uploads.
On the Files & Uploads page, drag your file or files into the Drag and Drop area.
Note
If you upload a file that has the same name as an existing course file, the original file is overwritten without warning.
The Files & Uploads page refreshes to show the uploaded file or files.
To upload additional files, drag more files into the Drag and Drop area of the page.
The Files & Uploads page lists up to 50 files at one time. If your course has more than 50 files, additional files are listed on other pages.
To find a file on the Files & Uploads page, you can use the Search option, or you can view the page that lists the file.
To use the Search option, enter one of the following search terms in the Search field, and then select the magnifying glass icon.
For example, if the file is named FirstCourseImage.jpg, you can enter any of the following search terms in the Search field.
FirstCourseImage.jpg
.jpg
First
Image
First
.jpg
To view the page that lists the file, select Previous or Next to view the previous or next page, or select the number of the page that you want to view.
You can also sort files by name, type, or date added, or filter files by type. For more information, see Sort Files or Filter Files.
On the Files & Uploads page, you can sort your files by name, type, or date added. To sort by one of these columns, select the name of the column. For example, to sort your files by type, select the Type column name.
The arrow or arrows to the right of the column name indicate the column sort order. Files are sorted by the column that has one arrow. The direction of the arrow indicates whether the order is ascending or descending.
To change between ascending and descending order, select the column name again.
You can filter the list of files by type so that only a selected type of file is visible. The list remains in the current sort order.
You can filter by the following file types.
To filter the list of files by type, follow these steps.
The list refreshes to show only the type or types of file you selected. You can sort the resulting list by name, type, and date added.
To reset the list and view files of all types, clear all checkboxes.
When you upload a file, Studio assigns a Studio URL and a web URL to the file. The Copy URLs column on the Files & Uploads page lists these URLs. To use an uploaded file, you add a link to the Studio URL or the web URL in your content.
Note
If you do not want to allow access to a file from outside your course, you can lock the file so that only learners who are signed in and enrolled in your course can access the file. For more information, see Lock a File.
To add a file or image inside the course, such as to a component, a course update, or a course handout, follow these steps.
On the Files & Uploads page, select the Studio option in the Copy URLs column.
The Studio option text briefly changes to Copied.
In the component or other content, paste the Studio URL.
For more information, see Add an Image to an HTML Component.
To add a file or image outside the course, such as to a bulk email message that you send from the LMS, follow these steps.
On the Files & Uploads page, select the Web option in the Copy URLs column.
The Web option text briefly changes to Copied.
In the external content, paste the web URL.
Note
Be sure that you do not use the Studio URL in an email message. For more information about sending email messages, see Send an Email Message to Course Participants. access the file.
To lock a file, select the lock icon in the row for the file.
To delete a file, select the delete icon in the row for file, and then select Permanently delete in the confirmation dialog box.
Warning
After you delete a file, any links to the file from inside or outside the course are broken. You must update links to files that you delete. | https://edx.readthedocs.io/projects/edx-partner-course-staff/en/latest/course_assets/course_files.html | 2018-12-10T05:31:25 | CC-MAIN-2018-51 | 1544376823303.28 | [] | edx.readthedocs.io |
Sched.
- reason
- A string that will be used as the reason for the triggered argument.
- createAbsoluteSourceStamps
- bb:sched
- reason
- See Configuring Schedulers.
-
reason
codebases
- createAbsoluteSourceStamps
- See Configuring Schedulers. Note that fileIsImportant, change_filter and createAbsoluteSourceStamps
reasonString
A string that will be used to create the build reason for the forced build. This string can contain the placeholders '%(owner)s' and '%(reason)s', which represents the value typed into the reason field..
buttonName
The name of the "submit" button on the resulting force-build form. This defaults to "Force¶
FixedParameter(name="branch", default="trunk"),
This parameter type will not be shown on the web form, and always generate a property with its default value.
StringParameter¶ overiding the 'get)
InheritBuildParameter¶), ])
BuildslaveChoiceParameter¶
This parameter allows a scheduler to require that a build is assigned to the chosen buildslave. The choice is assigned to the slavename property for the build. The enforceChosenSlave functor must be assigned to the canStartBuild parameter for the Builder.
Example:
from buildbot.process.builder import enforceChosenSlave # schedulers: ForceScheduler( # ... properties=[ BuildslaveChoiceParameter(), ] ) # builders: BuilderConfig( # ... canStartBuild=enforceChosenSlave, ) | http://docs.buildbot.net/0.8.9/manual/cfg-schedulers.html | 2018-12-10T04:52:41 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.buildbot.net |
Properties of Oracle Instance (Details)
When you create a new instance, use this dialog box to add the details of the instance.
For an existing instance, use this tab to view or change the details of the selected instance.
Connect String
- When you create a new instance, specify the database connect string.
- For an existing instance, you can change the database connect string by entering 1) database user ID, 2) password for the user ID @ 3) Oracle instance name in the three spaces provided. The user ID must have SYSDBA, ALTER SYSTEM and SELECT ANY TABLE system.
Use Catalog Connect
When selected, a connection is established between the target database and the Recovery Catalog database using the specified connect string, and the Recovery Catalog database will be used for backups.
When cleared, there will be no connection between the target database and the Recovery Catalog database, and the target database Control Files will be used for backups.
The three fields are used to identify/create the connect string for the Recovery Catalog database. You can change the connect string by using entering 1) Recovery Catalog database user ID, 2) password for the user ID @ Recovery Catalog service name.
TNS_ADMIN folder
Identifies the path to the TNS Admin directory. If you have not provided the path, a default path \network\admin is appended to the path of the $ORACLE_HOME directory.
For example, if $ORACLE_HOME is \opt2\oracle, then TNS_ADMIN is \opt2\oracle\network\admin.
Click to establish or change the designated TNS_ADMIN directory.
Disable RMAN cross check
When selected, the CommServe database will not be cross verified with the RMAN catalog during a data aging operation.
Ctrl File Autobackup
Lists the configuration options available AUTOBACKUP of control file so that every time a BACKUP or COPY command is executed in RMAN, an autobackup of the control file is performed.
Not Configure - Disables autobackup of the control file.
Configure On - If the backup includes a datafile, then RMAN will not automatically include the current control file in the datafile backupset, but will write the control file and server parameter file to a separate autobackup piece. If the backup does not include a datafile, the control file and server parameter file will be written to their own autobackup piece.
Configure Off - If the backup includes a datafile, then RMAN automatically includes the current control file and server parameter file in the datafile backupset. RMAN will not create a separate autobackup piece containing the control file and server parameter file.
Block Size
Specify the block size for backup and restore operations on the selected instance. You can disable the block size specification by setting the value to 0. In such cases, the default RMAN block size value will be used for backup and restore operations. | http://docs.snapprotect.com/netapp/v11/article?p=en-us/universl/instances/oracle/oracle_details.htm | 2020-03-29T00:03:00 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.snapprotect.com |
Table of Contents
1: Active player indicator
2, 3: Card stacks for player one and player two
4: Stack of won cards (if any)
5: Gameplay area
6: Score board with player's name, points in the current game, the overall score of the player, and the amount of wins per game
7: Player's controller (Human/PC)
8: The 'trump' in the current game | https://docs.kde.org/stable5/en/kdegames/lskat/rules_and_tips.html | 2020-03-29T01:04:01 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['/stable5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)
array(['lskat_screen_01.png',
'Lieutenant Skat default game reference screen.'], dtype=object)] | docs.kde.org |
Sometimes, you may miss some product reviews after transferring data from old version to the new one then please follow our this instruction below to take them back
- Log in your account on Ryviu.
- Go to the link to show the old data.
- Select the missing products on the new data and click the Transfer button.
4. Update metadata for you product by clicking Update product info button in this link:
Sorry for the inconvenience.
And as always, if you need more help or any questions, please reach out to us via our email or Live Chat. | https://docs.ryviu.com/en/articles/3005444-how-to-take-back-missing-reviews-after-updating-to-the-new-version | 2020-03-29T00:39:23 | CC-MAIN-2020-16 | 1585370493121.36 | [array(['https://downloads.intercomcdn.com/i/o/123048944/e17e0813f69536bfa5971b47/transfer.png',
None], dtype=object) ] | docs.ryviu.com |
Chart Descriptions
This is a list of most charts in Database Performance Monitor with their title, unique ID, and a brief description.
MongoDB Charts
MySQL Charts
PostgreSQL Charts
Agent OS Metrics
These are metrics from the host where the DPM agent is installed. When monitoring RDS or Aurora, refer to the CloudWatch charts for metrics like CPU and disk utilization. | https://docs.vividcortex.com/general-reference/chart-descriptions/ | 2020-03-29T00:02:52 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.vividcortex.com |
Converts an Octal number as text to a Binary number as text.
oct2bin( value, [place] )
Text
value: (Text) Octal value as text to convert, such as
771.
place: (Number) The number of places in the result.
Invalid digits for a value parameter, including signs, are ignored.
The default value for place is however many places are necessary to represent the number.
You can experiment with this function in the test box below.
Test Input
Test Output
Test Output
oct2bin(144) returns
1100100
On This Page | https://docs.appian.com/suite/help/19.4/fnc_base_conversion_oct2bin.html | 2020-03-29T01:00:29 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.appian.com |
Cloudera Data Platform Data Center Edition 7 is now generally available:
As always, we welcome your feedback. Please send your comments and suggestions to the user group through our community forums. You can also report issues through our external Jira projects on issues.cloudera.org. | https://docs.cloudera.com/cdpdc/7.0/announcement.html | 2020-03-29T00:05:35 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.cloudera.com |
The Sendex™ value is an indicator of the quality of an email address. Just because an email exists and is syntactically correct, does not mean it's quality. For example, [email protected] can generally be seen as a higher quality email address than [email protected]. Kickbox uses a number of algorithms, derived from the millions of email transactions it performs each hour, to determine the overall quality of an email address. Here is an overview of just some of the characteristics Kickbox considers when classifying the quality of an email address:
- How similar the address appears to known, high-quality, valid email addresses. The Kickbox platform is always evaluating and learning what patterns good email addresses employ and uses that evolving data to provide insight into the overall quality of a given address.
- Whether or not the domain appears to be a commercial domain (example: acme.com) or a personal domain (example: yahoo.com).
- If the email address appears to be associated with a role (example: [email protected]) instead of a person (example: [email protected])
While the Sendex score should never be taken alone when determining whether or not to deliver a message, it is an indicator that the address may receive a lower response rate and potentially higher spam reports and bounce rates. That said, we recommend sending with caution to addresses with low Sendex scores.
The Sendex value ranges between a 0 (no quality) to 1 (excellent quality). So what's a good Sendex score, and what should be considered bad? The answer largely relies on your specific purpose, but we generally classify Sendex scores like so:
For Transactional Email
1.00-0.55
Good
0.54-0.20
Fair
0.19-0.00
Poor
1.00-0.70
Good
0.69-0.40
Fair
0.39-0.00
Poor
Updated about a year ago | https://docs.kickbox.com/docs/the-sendex | 2020-03-28T23:45:38 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.kickbox.com |
Features
This article lists the supported features by RadDomainUpDown:
- Data Binding: via the DataSource property you can bind to a collection of custom objects. Additional information is available in the Data Binding.
- Unbound Mode: it is possible to populate the Items collection manually either at design time or programmatically.
Auto-Complete: the boolean AutoComplete property indicates whether the auto-complete behavior will be enabled or not while typing in the editable part. As the user types, the next item in the list that matches the user input is automatically appended to the characters the user has already typed.
Figure: 1 Auto-complete functionality
ReadOnly: when read-only mode is enabled, the user should not be allowed to type in the editor, still selecting items should be possible.
Right-to-Left: RadDomainUpDown fully supports right-to-left (RTL) language locales. You can enable/disable the right-to-left support using RightToLeft enumeration, which has the following members:
Yes: Content is aligned from right to left.
No: Content is aligned from left to right.
Inherit: Direction will be determined by the parent control.
Figure: 2 Right-to-Left mode
Navigation: The predefined set of items can be navigated by using the arrow buttons, the up and down arrow keys or by double clicking in the editor. The Wrap property determines if the selected item will revert to first item after reaching the last item and vice versa while navigating the list. The SelectNextOnDoubleClick property controls whether to rotate items on double click in the edit box part. | https://docs.telerik.com/devtools/winforms/controls/editors/domainupdown/features | 2020-03-29T01:17:35 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.telerik.com |
v1.04 "East Friesian"
Release date:
With this update we’re bringing some much-needed improvements to Search, as well as the introduction of the classic Mm Picks.
What's New?
- Mm Picks - In this collection, you'll find some of our favourite creations, which we at Media Molecule have hand-picked as the very best examples of what's possible in Dreams. Mm Picks can be found in the Dream Surfing section of the Dreamiverse.
Other Improvements
- Updated: Added new Filters and Sort orders to Search to help you find more of the dreams you want.
- Updated: Added additional information to Search results, to help you see what dreams you've already played, and which ones have been updated recently.
- Updated: When the "re-centre your imp" reminder appears, so that it's not as frequent (there will be an option in a future update to turn it off completely).
- Updated: Added some subtle movement to the Early Access watermark, to avoid potential screen burn on some TVs.
Info circled
The Dreams User Guide is a work-in-progress. Keep an eye out for updates as we add more learning resources and articles over time. | https://docs.indreams.me/en/updates/dreams/v104-east-friesian | 2020-03-29T00:08:39 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.indreams.me |
Download Azure Stack Hub tools from GitHub
AzureStack-Tools is a GitHub repository that hosts PowerShell modules for managing and deploying resources to Azure Stack Hub. If you're planning to establish VPN connectivity, you can download these PowerShell modules to the Azure Stack Development Kit (ASDK), or to a Windows-based external client. To get has PowerShell modules that support the following functionalities for Azure Stack Hub:
Next steps
Feedback | https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-powershell-download?view=azs-1910 | 2020-03-29T01:38:05 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.microsoft.com |
nRF91 nrfx example¶
Overview¶
This sample demonstrates the usage of nrfx library in Zephyr. GPIOTE and DPPI are used as examples of nrfx drivers.
The code shows how to initialize the GPIOTE interrupt in Zephyr so that the nrfx_gpiote driver can use it. Additionally, it shows how the DPPI subsystem can be used to connect tasks and events of nRF peripherals, enabling those peripherals to interact with each other without any intervention from the CPU.
Zephyr GPIO driver is disabled to prevent it from registering its own handler
for the GPIOTE interrupt. Button 1 is configured to create an event when pushed.
This calls the
button_handler() callback and triggers the LED 1 pin OUT task.
LED is then toggled.
Requirements¶
This sample has been tested on the NordicSemiconductor nRF9160 DK (nrf9160_pca10090) board.
Building and Running¶
The code can be found in samples/boards/nrf/nrfx.
To build and flash the application:
west build -b nrf9160_pca10090 samples/boards/nrf/nrfx west flash
Push Button 1 to toggle the LED 1.
Connect to the serial port - notice that each time the button is pressed,
the
button_handler() function is called.
See nrfx_repository for more information about nrfx. | https://docs.zephyrproject.org/latest/samples/boards/nrf/nrfx/README.html | 2020-03-29T00:11:21 | CC-MAIN-2020-16 | 1585370493121.36 | [] | docs.zephyrproject.org |
is to avoid regressions and minimize the total time spent debugging Zulip. We do that by trying to catch as many possible future bugs as possible, while minimizing both latency and false positives, both of which can waste a lot of developer time. There are a few implications of this overall goal:
- If a test is failing nondeterministically in Travis CI, we consider that to be an urgent problem.
- If the tests become a lot slower, that is also an urgent problem.
- Everything we do in CI should also have a way to run it quickly (under 1 minute, preferably under 3 seconds), in order to iterate fast in development. Except when working on the job
fails, it’ll be reported as “Failed” failure (red in their emails).
Note that Travis CI’s web UI seems to make no visual distinction
between these.
An important detail is that Travis CI will by default hide most phases other than the actual test; you can see this easily by looking at the line numbers in the Travis CI output. There are actually a bunch of phases (e.g. the project’s setup job, downloading caches near the beginning, uploading caches at the end, etc.), and if you’re debugging our configuration, you’ll want to look at these closely.
Useful debugging tips and tools¶
- Zulip uses the
tstool to log the current time on every line of the output in our Travis CI scripts. You can use this output to determine which steps are actually consuming a lot of time.
- For performance issues, this statistics tool can give you test runtime history data that can help with determining when a performance issue was introduced and whether it was fixed. Note you need to click the “Run” button for it to do anything.
- You can sign up your personal repo for Travis CI so that every remote branch you push will be tested, which can be helpful when debugging something complicated.
Performance optimizations¶
Caching¶, so Zulip should always be using the same version of dependencies it would have used had the cache not existed. In practice, bugs are always possible, so be mindful of this possibility.
A consequence of this caching is that test jobs for branches which
modify
package.json,
requirements/, and other key dependencies
will be significantly slower than normal, because they won’t get to
benefit from the cache.
Uninstalling packages¶
In the production suite, we run
apt-get upgrade at some point
(effectively, because the Zulip installer does). This carries a huge
performance cost in Travis CI, because (1) they don’t keep their test
systems up to date and (2) literally everything is installed in their
build workers (e.g. several copies of Postgres, Java, MySQL, etc.).
In order to make Zulip’s tests performance reasonably well, we
uninstall (or mark with
apt-mark hold) many of these dependencies
that are irrelevant to Zulip in
tools/travis/setup-production. | https://zulip.readthedocs.io/en/stable/testing/travis.html | 2018-12-10T06:59:19 | CC-MAIN-2018-51 | 1544376823318.33 | [] | zulip.readthedocs.io |
Advanced troubleshooting for Stop error or blue screen error issue
Note
If you're not a support agent or IT professional, you'll find more helpful information about Stop error ("blue screen") messages in Troubleshoot blue screen errors.
What causes Stop errors?
A Stop error is displayed as a blue screen that contains the name of the faulty driver, such as any of the following example drivers:
- atikmpag.sys
- igdkmd64.sys
- nvlddmkm.sys online for the specific Stop error codes to see whether there are any known issues, resolutions, or workarounds for the problem.
As a best practice, we recommend that you do the following:
a. Make sure that you install the latest Windows updates, cumulative updates, and rollup updates. To verify the update status, refer to the appropriate update history for your system:
- Windows 10, version 1803
- Windows 10, version 1709
- Windows 10, version 1703
- Windows Server 2016 and Windows 10, version 1607
- Windows 10, version 1511
- Windows Server 2012 R2 and Windows 8.1
Windows Server 2008 R2 and Windows 7 SP1
b. Make sure that the BIOS and firmware are up-to-date.
c.. changes or reverting to the last-known working state. For more information, see Roll Back a Device Driver to a Previous Version..
- Stop and disable Automatic System Restart Services (ASR) to prevent dump files from being written.
- If the server is virtualized, disable auto reboot video:
More information on how to use Dumpchk.exe to check your dump files:
Pagefile Settings
- Introduction of page file in Long-Term Servicing Channel and Semi-Annual Channel of Windows
- How to determine the appropriate page file size for 64-bit versions of Windows
- How to generate a kernel or a complete memory dump file in Windows Server 2008 and Windows Server 2008 R2
Memory dump analysis
Finding the root cause of the crash may on analyzing dump file..
Don’t try to verify all the drivers at one time. This can degrade performance and make the system unusable. This also limits the effectiveness of the tool.
Use the following guidelines when you use Driver Verifier:
-.
Common Windows Stop errors
This section doesn't contain a list of all error codes, but since many error codes have the same potential resolutions, your best bet is to follow the steps below to troubleshoot your error.
The following table lists general troubleshooting procedures for common Stop error codes. | https://docs.microsoft.com/en-us/windows/client-management/troubleshoot-stop-errors | 2018-12-10T07:56:55 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.microsoft.com |
Analytics¶
Zulip has a cool analytics system for tracking various useful statistics
that currently power the
/stats page, and over time will power other
features, like showing usage statistics for the various streams. It is
designed around the following goals:
- Minimal impact on scalability and service complexity.
- Well-tested so that we can count on the results being correct.
- Efficient to query so that we can display data in-app (e.g. on the streams page) with minimum impact on the overall performance of those pages.
- Storage size smaller than the size of the main Message/UserMessage database tables, so that we can store the data in the main postgres database rather than using a specialized database platform.
There are a few important things you need to understand in order to effectively modify the system.
Analytics backend overview¶
There are three main components:
- models: The UserCount, StreamCount, RealmCount, and InstallationCount tables (analytics/models.py) collect and store time series data.
- stat definitions: The CountStat objects in the COUNT_STATS dictionary (analytics/lib/counts.py) define the set of stats Zulip collects.
- accounting: The FillState table (analytics/models.py) keeps track of what has been collected for which CountStats.
The next several sections will dive into the details of these components.
The *Count database tables¶
The Zulip analytics system is built around collecting time series data in a set of database tables. Each of these tables has the following fields:
- property: A human readable string uniquely identifying a CountStat object. Example: “active_users:is_bot:hour” or “messages_sent:client:day”.
- subgroup: Almost all CountStats are further sliced by subgroup. For “active_users:is_bot:day”, this column will be False for measurements of humans, and True for measurements of bots. For “messages_sent:client:day”, this column is the client_id of the client under consideration.
- end_time: A datetime indicating the end of a time interval. It will be on an hour (or UTC day) boundary for stats collected at hourly (or daily) frequency. The time interval is determined by the CountStat.
- various “id” fields: Foreign keys into Realm, UserProfile, Stream, or nothing. E.g. the RealmCount table has a foreign key into Realm.
- value: The integer counts. For “active_users:is_bot:hour” in the RealmCount table, this is the number of active humans or bots (depending on subgroup) in a particular realm at a particular end_time. For “messages_sent:client:day” in the UserCount table, this is the number of messages sent by a particular user, from a particular client, on the day ending at end_time.
- anomaly: Currently unused, but a key into the Anomaly table allowing someone to indicate a data irregularity.
There are four tables: UserCount, StreamCount, RealmCount, and InstallationCount. Every CountStat is initially collected into UserCount, StreamCount, or RealmCount. Every stat in UserCount and StreamCount is aggregated into RealmCount, and then all stats are aggregated from RealmCount into InstallationCount. So for example, “messages_sent:client:day” has rows in UserCount corresponding to (user, end_time, client) triples. These are summed to rows in RealmCount corresponding to triples of (realm, end_time, client). And then these are summed to rows in InstallationCount with totals for pairs of (end_time, client).
Note: In most cases, we do not store rows with value 0. See Performance Strategy below.
CountStats¶
CountStats declare what analytics data should be generated and stored. The
CountStat class definition and instances live in
analytics/lib/counts.py.
These declarations specify at a high level which tables should be populated
by the system and with what data.
The FillState table¶
The default Zulip production configuration runs a cron job once an hour that updates the *Count tables for each of the CountStats in the COUNT_STATS dictionary. The FillState table simply keeps track of the last end_time that we successfully updated each stat. It also enables the analytics system to recover from errors (by retrying) and to monitor that the cron job is running and running to completion.
Performance strategy¶
An important consideration with any analytics system is performance, since it’s easy to end up processing a huge amount of data inefficiently and needing a system like Hadoop to manage it. For the built-in analytics in Zulip, we’ve designed something lightweight and fast that can be available on any Zulip server without any extra dependencies through the carefully designed set of tables in Postgres.
This requires some care to avoid making the analytics tables larger than the rest of the Zulip database or adding a ton of computational load, but with careful design, we can make the analytics system very low cost to operate. Also, note that a Zulip application database has 2 huge tables: Message and UserMessage, and everything else is small and thus not performance or space-sensitive, so it’s important to optimize how many expensive queries we do against those 2 tables.
There are a few important principles that we use to make the system efficient:
- Not repeating work to keep things up to date (via FillState)
- Storing data in the *Count tables to avoid our endpoints hitting the core Message/UserMessage tables is key, because some queries could take minutes to calculate. This allows any expensive operations to run offline, and then the endpoints to server data to users can be fast.
- Doing expensive operations inside the database, rather than fetching data to Python and then sending it back to the database (which can be far slower if there’s a lot of data involved). The Django ORM currently doesn’t support the “insert into .. select” type SQL query that’s needed for this, which is why we use raw database queries (which we usually avoid in Zulip) rather than the ORM.
- Aggregating where possible to avoid unnecessary queries against the Message and UserMessage tables. E.g. rather than querying the Message table both to generate sent message counts for each realm and again for each user, we just query for each user, and then add up the numbers for the users to get the totals for the realm.
- Not storing rows when the value is 0. An hourly user stat would otherwise collect 24 * 365 * roughly .5MB per db row = 4GB of data per user per year, most of whose values are 0. A related note is to be cautious about adding queries that are typically non-0 instead of being typically 0.
Backend Testing¶
There are a few types of automated tests that are important for this sort of system:
- Most important: Tests for the code path that actually populates data into the analytics tables. These are most important, because it can be very expensive to fix bugs in the logic that generates these tables (one basically needs to regenerate all of history for those tables), and these bugs are hard to discover. It’s worth taking the time to think about interesting corner cases and add them to the test suite.
- Tests for the backend views code logic for extracting data from the database and serving it to clients.
For manual backend testing, it sometimes can be valuable to use
./manage.py dbshell to inspect the tables manually to check that things look right; but
usually anything you feel the need to check manually, you should add some
sort of assertion for to the backend analytics tests, to make sure it stays
that way as we refactor.
LoggingCountStats¶
The system discussed above is designed primarily around the technical problem of showing useful analytics about things where the raw data is already stored in the database (e.g. Message, UserMessage). This is great because we can always backfill that data to the beginning of time, but of course sometimes one wants to do analytics on things that aren’t worth storing every data point for (e.g. activity data, request performance statistics, etc.). There is currently a reference implementation of a “LoggingCountStat” that shows how to handle such a situation.
Analytics UI development and testing¶
Setup and Testing¶
The main testing approach for the /stats page UI is manual testing.
For most UI testing, you can visit
/stats/realm/analytics while
logged in as Iago (this is the server administrator view of stats for
a given realm). The only piece that you can’t test here is the “Me”
buttons, which won’t have any data. For those, you can instead login
as the
[email protected] in the
analytics realm and visit
/stats there (which is only a bit more work). Note that the
analytics realm is a shell with no streams, so you’ll only want to
use it for testing the graphs.
If you’re adding a new stat/table, you’ll want to edit
analytics/management/commands/populate_analytics_db.py and add code
to generate fake data of the form needed for your new stat/table;
you’ll then run
./manage.py populate_analytics_db before looking at
the updated graphs.
Adding or editing /stats graphs¶
The relevant files are:
- analytics/views.py: All chart data requests from the /stats page call get_chart_data in this file. The bottom half of this file (with all the raw sql queries) is for a different page (/activity), not related to /stats.
- static/js/stats/stats.js: The JavaScript and Plotly code.
- templates/analytics/stats.html
- static/styles/stats.css and static/styles/portico.css: We are in the process of re-styling this page to use in-app css instead of portico css, but there is currently still a lot of portico influence.
- analytics/urls.py: Has the URL routes; it’s unlikely you will have to modify this, including for adding a new graph.
Most of the code is self-explanatory, and for adding say a new graph, the answer to most questions is to copy what the other graphs do. It is easy when writing this sort of code to have a lot of semi-repeated code blocks (especially in stats.js); it’s good to do what you can to reduce this.
Tips and tricks:
- Use
$.getto fetch data from the backend. You can grep through stats.js to find examples of this.
- The Plotly documentation is at (check out the full reference, event reference, and function reference). The documentation pages seem to work better in Chrome than in Firefox, though this hasn’t been extensively verified.
- Unless a graph has a ton of data, it is typically better to just redraw it when something changes (e.g. in the various aggregation click handlers) rather than to use retrace or relayout or do other complicated things. Performance on the /stats page is nice but not critical, and we’ve run into a lot of small bugs when trying to use Plotly’s retrace/relayout.
- There is a way to access raw d3 functionality through Plotly, though it isn’t documented well.
- ‘paper’ as a Plotly option refers to the bounding box of the graph (or something related to that).
- You can’t right click and inspect the elements of a Plotly graph (e.g. the bars in a bar graph) in your browser, since there is an interaction layer on top of it. But if you hunt around the document tree you should be able to find it.
/activity page¶
- There’s a somewhat less developed /activity page, for server administrators, showing data on all the realms on a server. To access it, you need to have the
is_staffbit set on your UserProfile object. You can set it using
manage.py shelland editing the UserProfile object directly. A great future project is to clean up that page’s data sources, and make this a documented interface. | https://zulip.readthedocs.io/en/stable/subsystems/analytics.html | 2018-12-10T06:18:03 | CC-MAIN-2018-51 | 1544376823318.33 | [] | zulip.readthedocs.io |
API Usage
This section provides a high-level overview of the Amazon SimpleDB API. It describes API conventions, API versioning used to minimize the impact of service changes, and API-specific information for making REST requests.
API Conventions
Overview
This topic discusses the conventions used in the Amazon SimpleDB API reference. This includes terminology, notation, and any abbreviations used to describe the API.
The API reference is broken down into a collection of Actions and Data Types.
Actions
Actions encapsulate the possible interactions with Amazon SimpleDB. These can be viewed as remote procedure calls and consist of a request and response message pair. Requests must be signed, allowing Amazon SimpleDB to authenticate the caller.
Data Types
Values provided as parameters to the various operations must be of the indicated type.
Standard XSD types (like
string,
boolean,
int) are prefixed with xsd:. Complex types defined by the
Amazon SimpleDB WSDL are prefixed with
sdb:.
WSDL Location and API Version.
API Versions
Specifying the API Version]
API Error Retries
This section describes how to handle client and server errors.
Note
For information on specific error messages, see API Error Codes
Client Errors
REST client errors are indicated by a 4xx HTTP response code.
Do not retry client errors. Client errors indicate that Amazon SimpleDB found a problem with the client request and the application should address the issue before submitting the request again.
Server Errors
For server errors, you should retry the original request.
REST server errors are indicated by a 5xx HTTP response code.
Retries and Exponential Backoff
The AWS SDKs that support Amazon SimpleDB implement retries and exponential backoff. For more information, see Error Retries and Exponential Backoff in the AWS General Reference. | https://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/APIUsage.html | 2018-12-10T06:45:17 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.aws.amazon.com |
When a user logs into the dispatch dashboard, the Main Order Screen is the default home screen - and the New Load screen tab of the company order life cycle is front and center.
Other clickable screen tabs include Assigned, Picked up, Delivered, Billed, Paid, and Archived. Circles appear within each screen tab listing the number of orders within that specific order status.
As orders are assigned to drivers, picked up and delivered by drivers, and then billed and paid, they will move from tab to tab based on their current status. To change from tab to tab, simply click on the desired status (screen tab) to view that category of orders.
If you need Help or further assistance, please contact Support chat on the bottom right of the screen. | https://docs.mysuperdispatch.com/everything-dispatch-dashboard/dashboard/logging-into-the-main-order-screen-of-the-dashboard | 2018-12-10T07:00:05 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.mysuperdispatch.com |
Transpilers, Babel and TypeScript
JavaScript transpilers (or transcompilers) are tools that convert a specific language code into ES JavaScript. It takes the source code of a program written in one programming language as its input and produces the equivalent source code in another programming language.
Clarive supports using transpilers to filter and preprocess JS code before being executed. Clarive also ships with a few transpilers pre-installed:
- Babel (ES2015)
- TypeScript
Using Transpilers¶
To have your code processed through one of the installed transpilers use a special JavaScript pragma created for this.
The pragma to activate transpiling is this:
"use transpiler([transpiler name])";
This will have the full current body of source code transpiled, independently of where the pragma is declared.
Notice that althought transpiling is not a very expensive operation, everytime new unique source code is created, it has to be transpiled, which means loading the transpiler, parsing and converting to JavaScript. Clarive caches the transpiled code, so the next execution will only cost the same as if it were written in plain JavaScript, which can mean a 100x or more improvement in execution speed.
If you believe you're having issues with transpiled code cached, try wiping the cache to force it to be transpiled again.
Babel¶
Babel allows you to use the latest developments in the EcmaScript standard (ie. ES6) in the Clarive JS VM, which conforms up to ES5.
"use transpiler(babel)"; // Expression bodies evens = [2,4,6,8] var odds = evens.map(v => v + 1); var nums = evens.map((v, i) => v + i); // Interpolate variable bindings var name = "Bob", time = "today"; print(`Hello ${name}, how are you ${time}?`); class Animal { constructor(name, voice) { this.name = name; this.voice = voice; this._eyes = 2; } get eyes() { return this._eyes; } speak() { console.log(`The ${this.name} says ${this.voice}.`); } } var foo = new Animal('dog', 'woof'); foo.speak(); // The dog says woof.
For more information, checkout the Babel documentation at
TypeScript¶
TypeScript is a strict superset of JavaScript, and adds optional static typing and class-based object-oriented programming to the language
"use transpiler(typescript)"; class Greeter { greeting: string; constructor(message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } let greeter = new Greeter("world"); print( greeter.greet() );
For more information:
Creating Transpilers¶
Transpilers can be created as part of a plugin, by adding a special procesor anonymous function to the
[plugin]/transpiler/ folder.
For creating a transpiler from a standard transpiler library, it is recommended you install first the library in
[plugin]/modules/, then write a lightweight transpiler that
require()s the library.
This is the transpiler procesor skelleton, ie
[plugin]/transpiler/mytranspiler.js:
module.exports = function(code){ var tr = require('../transpiler/mytranspiler-lib.js'); return tr.transpile(code); };
The anonymous function receives
code as an argument, which is the full body of the pre-transpiled code.
The function must return a string with the transpiled code.
Now, we can use it in our Clarive JS DSL:
"use transpiler(mytranspiler)"; // .. my pre-transpiled code .. | http://docs.clarive.com/devel/transpilers/ | 2018-12-10T06:41:08 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.clarive.com |
Canu FAQ¶
- What resources does Canu require for a bacterial genome assembly? A mammalian assembly?
- How do I run Canu on my SLURM / SGE / PBS / LSF / Torque system?
- My run stopped with the error
'Failed to submit batch jobs'
- My run of Canu was killed by the sysadmin; the power going out; my cat stepping on the power button; et cetera. Is it safe to restart? How do I restart?
- My genome size and assembly size are different, help!
- What parameters should I use for my reads?
- Can I assemble RNA sequence data?
- My assembly is running out of space, is too slow?
- My assembly continuity is not good, how can I improve it?
- What parameters can I tweak?
- My asm.contigs.fasta is empty, why?
- Why is my assembly is missing my favorite short plasmid?
- Why do I get less corrected read data than I asked for?
- What is the minimum coverage required to run Canu?
- My circular element is duplicated/has overlap?
- My genome is AT (or GC) rich, do I need to adjust parameters? What about highly repetitive genomes?
- How can I send data to you?
What resources does Canu require for a bacterial genome assembly? A mammalian assembly?¶
Canu will detect available resources and configure itself to run efficiently using those resources. It will request resources, for example, the number of compute threads to use, Based on the genome size being assembled. It will fail to even start if it feels there are insufficient resources available.
A typical bacterial genome can be assembled with 8GB memory in a few CPU hours - around an hour on 8 cores. It is possible, but not allowed by default, to run with only 4GB memory.
A well-behaved large genome, such as human or other mammals, can be assembled in 10,000 to 25,000 CPU hours, depending on coverage. A grid environment is strongly recommended, with at least 16GB available on each compute node, and one node with at least 64GB memory. You should plan on having 3TB free disk space, much more for highly repetitive genomes.
Our compute nodes have 48 compute threads and 128GB memory, with a few larger nodes with up to 1TB memory. We develop and test (mostly bacteria, yeast and drosophila) on laptops and desktops with 4 to 12 compute threads and 16GB to 64GB memory.
How do I run Canu on my SLURM / SGE / PBS / LSF / Torque system?¶
Canu will detect and configure itself to use on most grids. Canu will NOT request explicit time limits or queues/partitions. You can supply your own grid options, such as a partition on SLURM, an account code on SGE, and/or time limits with
gridOptions="<your options list>"which will passed to every job submitted by Canu. Similar options exist for every stage of Canu, which could be used to, for example, restrict overlapping to a specific partition or queue.
To disable grid support and run only on the local machine, specify
useGrid=false
It is possible to limit the number of grid jobs running at the same time, but this isn’t directly supported by Canu. The various gridOptions parameters can pass grid-specific parameters to the submit commands used; see Issue #756 for Slurm and SGE examples.
My run stopped with the error
'Failed to submit batch jobs'¶
The grid you run on must allow compute nodes to submit jobs. This means that if you are on a compute host,
qsub/bsub/sbatch/etcmust be available and working. You can test this by starting an interactive compute session and running the submit command manually (e.g.
qsubon SGE,
bsubon LSF,
sbatchon SLURM).
If this is not the case, Canu WILL NOT work on your grid. You must then set
useGrid=falseand run on a single machine. Alternatively, you can run Canu with
useGrid=remotewhich will stop at every submit command, list what should be submitted. You then submit these jobs manually, wait for them to complete, and run the Canu command again. This is a manual process but currently the only workaround for grids without submit support on the compute nodes.
My genome size and assembly size are different, help!¶
The difference could be due to a heterozygous genome where the assembly separated some loci. It could also be because the previous estimate is incorrect. We typically use two analyses to see what happened. First, a BUSCO analysis will indicate duplicated genes. For example this assembly:INFO C:98.5%[S:97.9%,D:0.6%],F:1.0%,M:0.5%,n:2799 INFO 2756 Complete BUSCOs (C) INFO 2740 Complete and single-copy BUSCOs (S) INFO 16 Complete and duplicated BUSCOs (D)
does not have much duplication but this assembly:INFO C:97.6%[S:15.8%,D:81.8%],F:0.9%,M:1.5%,n:2799 INFO 2732 Complete BUSCOs (C) INFO 443 Complete and single-copy BUSCOs (S) INFO 2289 Complete and duplicated BUSCOs (D)
does. We have had some success (in limited testing) using purge_haplotigs to remove duplication. Purge haplotigs will also generate a coverage plot which will usually have two peaks when assemblies have separated some loci.
What parameters should I use for my reads?¶
Canu is designed to be universal on a large range of PacBio (C2, P4-C2, P5-C3, P6-C4) and Oxford Nanopore (R6 through R9) data. Assembly quality and/or efficiency can be enhanced for specific datatypes:
- Nanopore R7 1D and Low Identity Reads
-
With R7 1D sequencing data, and generally for any raw reads lower than 80% identity, five to ten rounds of error correction are helpful:canu -p r1 -d r1 -correct corOutCoverage=500 corMinCoverage=0 corMhapSensitivity=high -nanopore-raw your_reads.fasta canu -p r2 -d r2 -correct corOutCoverage=500 corMinCoverage=0 corMhapSensitivity=high -nanopore-raw r1/r1.correctedReads.fasta.gz canu -p r3 -d r3 -correct corOutCoverage=500 corMinCoverage=0 corMhapSensitivity=high -nanopore-raw r2/r2.correctedReads.fasta.gz canu -p r4 -d r4 -correct corOutCoverage=500 corMinCoverage=0 corMhapSensitivity=high -nanopore-raw r3/r3.correctedReads.fasta.gz canu -p r5 -d r5 -correct corOutCoverage=500 corMinCoverage=0 corMhapSensitivity=high -nanopore-raw r4/r4.correctedReads.fasta.gz
Then assemble the output of the last round, allowing up to 30% difference in overlaps:canu -p asm -d asm correctedErrorRate=0.3 utgGraphDeviation=50 -nanopore-corrected r5/r5.correctedReads.fasta.gz
- Nanopore R7 2D and Nanopore R9 1D
- The defaults were designed with these datasets in mind so they should work. Having very high coverage or very long Nanopore reads can slow down the assembly significantly. You can try the
overlapper=mhap utgReAlign=trueoption which is much faster but may produce less contiguous assemblies on large genomes.
- Nanopore R9 2D and PacBio P6
- Slightly decrease the maximum allowed difference in overlaps from the default of 14.4% to 12.0% with
correctedErrorRate=0.120
- PacBio Sequel
-
Only add the second parameter (
corMhapSensivity=normal) if you have >50x coverage.
- Nanopore R9 large genomes
- Due to some systematic errors, the identity estimate used by Canu for correction can be an over-estimate of true error, inflating runtime. For recent large genomes (>1gbp) with more than 30x coverage, we’ve used
'corMhapOptions=--threshold 0.8 --num-hashes 512 --ordered-sketch-size 1000 --ordered-kmer-size 14'. This is not needed for below 30x coverage.
Can I assemble RNA sequence data?¶
Canu will likely mis-assemble, or completely fail to assemble, RNA data. It will do a reasonable job at generating corrected reads though. Reads are corrected using (local) best alignments to other reads, and alignments between different isoforms are usually obviously not ‘best’. Just like with DNA sequences, similar isoforms can get ‘mixed’ together. We’ve heard of reasonable success from users, but do not have any parameter suggestions to make.
Note that Canu will silently translate ‘U’ bases to ‘T’ bases on input, but NOT translate the output bases back to ‘U’.
My assembly is running out of space, is too slow?¶
We don’t have a good way to estimate of disk space used for the assembly. It varies with genome size, repeat content, and sequencing depth. A human genome sequenced with PacBio or Nanopore at 40-50x typically requires 1-2TB of space at the peak. Plants, unfortunately, seem to want a lot of space. 10TB is a reasonable guess. We’ve seen it as bad as 20TB on some very repetitive genomes.
The most common cause of high disk usage is a very repetitive or large genome. There are some parameters you can tweak to both reduce disk space and speed up the run. Try adding the options
corMhapFilterThreshold=0.0000000002 corMhapOptions="--threshold 0.80 --num-hashes 512 --num-min-matches 3 --ordered-sketch-size 1000 --ordered-kmer-size 14 --min-olap-length 2000 --repeat-idf-scale 50" mhapMemory=60g mhapBlockSize=500 ovlMerThreshold=500. This will suppress repeats more than the default settings and speed up both correction and assembly.
It is also possible to clean up some intermediate outputs before the assembly is complete to save space. If you already have a
`*.ovlStore.BUILDING/1-bucketize.successsfile in your current step (e.g.
correct`), you can clean up the files under
1-overlapper/blocks. You can also remove the ovlStore for the previous step if you have its output (e.g. if you have
asm.trimmedReads.fasta.gz, you can remove
trimming/asm.ovlStore).
My assembly continuity is not good, how can I improve it?¶
The most important determinant for assembly quality is sequence length, followed by the repeat complexity/heterozygosity of your sample. The first thing to check is the amount of corrected bases output by the correction step. This is logged in the stdout of Canu or in canu-scripts/canu.*.out if you are running in a grid environment. For example on a haploid H. sapiens sample:-- BEGIN TRIMMING -- ... -- In gatekeeper store 'chm1/trimming/asm.gkpStore': -- Found 5459105 reads. -- Found 91697412754 bases (29.57 times coverage). ...
Canu tries to correct the longest 40X of data. Some loss is normal but having output coverage below 20-25X is a sign that correction did not work well (assuming you have more input coverage than that). If that is the case, re-running with
corMhapSensitivity=normalif you have >50X or
corMhapSensitivity=high corMinCoverage=0otherwise can help. You can also increase the target coverage to correct
corOutCoverage=100to get more correct sequences for assembly. If there are sufficient corrected reads, the poor assembly is likely due to either repeats in the genome being greater than read lengths or a high heterozygosity in the sample. Stay tuned for mor information on tuning unitigging in those instances.
What parameters can I tweak?¶
For all stages:
-
rawErrorRateis the maximum expected difference in an alignment of two _uncorrected_ reads. It is a meta-parameter that sets other parameters.
-
correctedErrorRateis the maximum expected difference in an alignment of two _corrected_ reads. It is a meta-parameter that sets other parameters. (If you’re used to the
errorRateparameter, multiply that by 3 and use it here.)
-
minReadLengthand
minOverlapLength. The defaults are to discard reads shorter than 1000bp and to not look for overlaps shorter than 500bp. Increasing
minReadLengthcan improve run time, and increasing
minOverlapLengthcan improve assembly quality by removing false overlaps. However, increasing either too much will quickly degrade assemblies by either omitting valuable reads or missing true overlaps.
For correction:
-
corOutCoveragecontrols how much coverage in corrected reads is generated. The default is to target 40X, but, for various reasons, this results in 30X to 35X of reads being generated.
-
corMinCoverage, loosely, controls the quality of the corrected reads. It is the coverage in evidence reads that is needed before a (portion of a) corrected read is reported. Corrected reads are generated as a consensus of other reads; this is just the minimum coverage needed for the consensus sequence to be reported. The default is based on input read coverage: 0x coverage for less than 30X input coverage, and 4x coverage for more than that.
For assembly:
-
utgOvlErrorRateis essentially a speed optimization. Overlaps above this error rate are not computed. Setting it too high generally just wastes compute time, while setting it too low will degrade assemblies by missing true overlaps between lower quality reads.
-
utgGraphDeviationand
utgRepeatDeviationwhat quality of overlaps are used in contig construction or in breaking contigs at false repeat joins, respectively. Both are in terms of a deviation from the mean error rate in the longest overlaps.
-
utgRepeatConfusedBPcontrols how similar a true overlap (between two reads in the same contig) and a false overlap (between two reads in different contigs) need to be before the contig is split. When this occurs, it isn’t clear which overlap is ‘true’ - the longer one or the slightly shorter one - and the contig is split to avoid misassemblies.
For polyploid genomes:
Generally, there’s a couple of ways of dealing with the ploidy.
-
Avoid collapsing the genome so you end up with double (assuming diploid) the genome size as long as your divergence is above about 2% (for PacBio data). Below this divergence, you’d end up collapsing the variations. We’ve used the following parameters for polyploid populations (PacBio data):
corOutCoverage=200 "batOptions=-dg 3 -db 3 -dr 1 -ca 500 -cp 50"
This will output more corrected reads (than the default 40x). The latter option will be more conservative at picking the error rate to use for the assembly to try to maintain haplotype separation. If it works, you’ll end up with an assembly >= 2x your haploid genome size. Post-processing using gene information or other synteny information is required to remove redundancy from this assembly.
-
Smash haplotypes together and then do phasing using another approach (like HapCUT2 or whatshap or others). In that case you want to do the opposite, increase the error rates used for finding overlaps:
corOutCoverage=200 correctedErrorRate=0.15
When trimming, reads will be trimmed using other reads in the same chromosome (and probably some reads from other chromosomes). When assembling, overlaps well outside the observed error rate distribution are discarded.We typically prefer option 1 which will lead to a larger than expected genome size. We have had some success (in limited testing) using purge_haplotigs to remove this duplication.
For metagenomes:
The basic idea is to use all data for assembly rather than just the longest as default. The parameters we’ve used recently are:
corOutCoverage=10000 corMhapSensitivity=high corMinCoverage=0 redMemory=32 oeaMemory=32 batMemory=200
For low coverage:
- For less than 30X coverage, increase the alllowed difference in overlaps by a few percent (from 4.5% to 8.5% (or more) with
correctedErrorRate=0.105for PacBio and from 14.4% to 16% (or more) with
correctedErrorRate=0.16for Nanopore), to adjust for inferior read correction. Canu will automatically reduce
corMinCoverageto zero to correct as many reads as possible.
For high coverage:
- For more than 60X coverage, decrease the allowed difference in overlaps (from 4.5% to 4.0% with
correctedErrorRate=0.040for PacBio, from 14.4% to 12% with
correctedErrorRate=0.12for Nanopore), so that only the better corrected reads are used. This is primarily an optimization for speed and generally does not change assembly continuity.
My asm.contigs.fasta is empty, why?¶
Canu creates three assembled sequence output files:
<prefix>.contigs.fasta,
<prefix>.unitigs.fasta, and
<prefix>.unassembled.fasta, where contigs are the primary output, unitigs are the primary output split at alternate paths, and unassembled are the leftover pieces.
The contigFilter parameter sets several parameters that control how small or low coverage initial contigs are handled. By default, initial contigs with more than 50% of the length at less than 3X coverage will be classified as ‘unassembled’ and removed from the assembly, that is,
contigFilter="2 0 1.0 0.5 3". The filtering can be disabled by changing the last number from ‘3’ to ‘0’ (meaning, filter if 50% of the contig is less than 0X coverage).
Why is my assembly is missing my favorite short plasmid?¶
In Canu v1.6 and earlier only the longest 40X of data (based on the specified genome size) is used for correction. Datasets with uneven coverage or small plasmids can fail to generate enough corrected reads to give enough coverage for assembly, resulting in gaps in the genome or even no reads for small plasmids. Set
corOutCoverage=1000(or any value greater than your total input coverage) to correct all input data.
An alternate approach is to correct all reads (
-correct corOutCoverage=1000) then assemble 40X of reads picked at random from the
<prefix>.correctedReads.fasta.gzoutput.
More recent Canu versions dynamically select poorly represented sequences to avoid missing short plasmids so this should no longer happen.
Why do I get less corrected read data than I asked for?¶
Some reads are trimmed during correction due to being chimeric or because there wasn’t enough evidence to generate a quality corrected sequence. Typically, this results in a 25% loss. Setting
corMinCoverage=0will report all bases, even low those of low quality. Canu will trim these in its ‘trimming’ phase before assembly.
What is the minimum coverage required to run Canu?¶
For eukaryotic genomes, coverage more than 20X is enough to outperform current hybrid methods. Below that, you will likely not assemble the full genome.
My circular element is duplicated/has overlap?¶
This is expected for any circular elements. They can overlap by up to a read length due to how Canu constructs contigs. Canu provides an alignment string in the GFA output which can be converted to an alignment to identify the trimming points.
An alternative is to run MUMmer to get self-alignments on the contig and use those trim points. For example, assuming the circular element is in
tig00000099.fa. Run:nucmer -maxmatch -nosimplify tig00000099.fa tig00000099.fa show-coords -lrcTH out.delta
to find the end overlaps in the tig. The output would be something like:1 1895 48502 50400 1895 1899 99.37 50400 50400 3.76 3.77 tig00000001 tig00000001 48502 50400 1 1895 1899 1895 99.37 50400 50400 3.77 3.76 tig00000001 tig00000001
means trim to 1 to 48502. There is also an alternate writeup.
My genome is AT (or GC) rich, do I need to adjust parameters? What about highly repetitive genomes?¶
On bacterial genomes, no adjustment of parameters is (usually) needed. See the next question.
On repetitive genomes with with a significantly skewed AT/GC ratio, the Jaccard estimate used by MHAP is biased. Setting
corMaxEvidenceErate=0.15is sufficient to correct for the bias in our testing.
In general, with high coverage repetitive genomes (such as plants) it can be beneficial to set the above parameter anyway, as it will eliminate repetitive matches, speed up the assembly, and sometime improve unitigs.
How can I send data to you?¶
FTP to. This is a write-only location that only the Canu developers can see.
Here is a quick walk-through using a command-line ftp client (should be available on most Linux and OSX installations). Say we want to transfer a file named
reads.fastq. First, run
ftp, specify
anonymousas the user name and hit return for password (blank). Then
cd incoming/sergek,
put reads.fastq, and
quit.
That’s it, you won’t be able to see the file but we can download it. | https://canu.readthedocs.io/en/stable/faq.html | 2018-12-10T06:07:01 | CC-MAIN-2018-51 | 1544376823318.33 | [] | canu.readthedocs.io |
KeyableModel¶
The concept of a keyable model was “inspired” by the Ruby on Rails cache_key system where you cache partials based on the updated timestamp of the instance. This leverages memcached LRU algorithm where unused cache items are not a problem and are discarded after a while of not being used.
Installation¶
To make a model a Keyable Model you need to inherit that class in your model:
from cache_tools.models import KeyableModel # ... class Profile(KeyableModel): # Your model stuff
Then sync your db or create your south migration:
python manage.py schemamigration front add_keyable_model --auto | https://django-cache-tools.readthedocs.io/en/latest/keyable_model.html | 2018-12-10T07:14:03 | CC-MAIN-2018-51 | 1544376823318.33 | [] | django-cache-tools.readthedocs.io |
Contents Now Platform Custom Business Applications Previous Topic Next Topic OAuth 2.0 tutorial - configure the Google service as an OAuth provider ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share OAuth 2.0 tutorial - configure the Google service as an OAuth provider Use the Google Developer Cosole to set up an OAuth 2.0 provider. Before you beginRole required: NoneThis procedure is performed within the Google Developer Console. You must have a Google account to access this console. About this taskConfigure the Google service in order to obtain a client ID and client secret, and specify your ServiceNow instance URL as the OAuth redirect URL.Note: This information describes the state of the Google Developer Console and Contacts API as of July 22, 2015. Changes made after that date may not be included in this document. Procedure Navigate to the Google Developer Console (). Log in using your Google credentials. Click Select a project. Click Create a project. Enter a Project name. Click Create. After Google creates the project, the project dashboard appears. Navigate to APIs & auth > APIs. Select the Contacts API. Click Enable API. Navigate to APIs & auth > Credentials. Click Create new Client ID. Ensure the Web application radio button is selected and click Configure consent screen. Enter a descriptive Product name. This name appears when you authorize the OAuth token in your instance. Click Save. In the Create Client ID window, add the OAuth redirect URI for your instance to the Authorized redirect URIs field. This URI follows the format https://<instance>.service-now.com/oauth_redirect.do Click Create Client ID. The client ID information appears. Record the Client ID and Client secret values. You will need these values to configure the Google service as an OAuth provider in your instance. Next TopicOAuth 2.0 tutorial - create an OAuth provider and profile On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-application-development/page/integrate/outbound-rest/task/t_OAuthDemoConfigureGoogle.html | 2018-12-10T07:28:12 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.servicenow.com |
#include <wx/grid.h>
Represents coordinates of a grid cell.
An object of this class is simply a (row, column) pair.
Default constructor initializes the object to invalid state.
Initially the row and column are both invalid (-1) and so operator!() for an uninitialized wxGridCellCoords returns false.
Constructor taking a row and a column.
Return the column of the coordinate.
Return the row of the coordinate.
Checks whether the coordinates are invalid.
Returns false only if both row and column are -1. Notice that if either row or column (but not both) are -1, this method returns true even if the object is invalid. This is done because objects in such state should actually never exist, i.e. either both coordinates should be -1 or none of them should be -1.
Inequality operator.
Assignment operator for coordinate types.
Equality operator.
Set the row and column of the coordinate.
Set the column of the coordinate.
Set the row of the coordinate. | https://docs.wxwidgets.org/3.0/classwx_grid_cell_coords.html | 2018-12-10T06:50:39 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.wxwidgets.org |
Quickstart: Deploy your first IoT Edge module from the Azure portal to a Windows device - preview
In this quickstart, use the Azure IoT Edge cloud interface to deploy prebuilt code remotely to an IoT Edge device. To accomplish this task, first use your Windows device to simulate an IoT Edge device, then you can deploy a module to it.
In this quickstart you learn how to:
- Create an IoT Hub.
- Register an IoT Edge device to your IoT hub.
- Install and start the IoT Edge runtime on your device.
- Remotely deploy a module to an IoT Edge device and send telemetry to IoT Hub.
The module that you deploy in this quickstart is a simulated sensor that generates temperature, humidity, and pressure data. The other Azure IoT Edge tutorials build upon the work you do here by deploying modules that analyze the simulated data for business insights.
Note
The IoT Edge runtime on Windows is in public preview.
If you don't have an active
IoT Edge device:
- A Windows computer or virtual machine to act as your IoT Edge device. Use a supported Windows version:
- Windows 10 or newer
- Windows Server 2016 or newer
- If it's a Windows computer, check that it meets the system requirements for Hyper-V.
- If it's a virtual machine, enable nested virtualization and allocate at least 2 GB memory.
- Install Docker for Windows and make sure it's running.
Tip
There is an option during Docker setup to use Windows containers or Linux containers. This quickstart describes how to configure the IoT Edge runtime for use with Linux containers.
Create an IoT hub
Start the quickstart by creating your --device-id myEdgeDevice --hub-name {hub_name} - connection string and save it. You'll use this value to configure the IoT Edge runtime in the next section.
Install and start the IoT Edge runtime
Install the Azure IoT Edge runtime on your IoT Edge device and configure it with a device connection string.
The IoT Edge runtime is deployed on all IoT Edge devices. It has three components. The IoT Edge security daemon starts each time installation, you're asked for a device connection string. Use the string that you retrieved from the Azure CLI. This string associates your physical device with the IoT Edge device identity in Azure.
The instructions in this section configure the IoT Edge runtime with Linux containers. If you want to use Windows containers, see Install Azure IoT Edge runtime on Windows to use with Windows containers.
Connect to your IoT Edge device
The steps in this section all take place on your IoT Edge device. If you're using your own machine as the IoT Edge device, you can skip this part. If you're using a virtual machine or secondary hardware, you want to connect to that machine now.
Download and install the IoT Edge service
Use PowerShell to download and install the IoT Edge runtime. Use the device connection string that you retrieved from IoT Hub to configure your device.
On your IoT Edge device, run PowerShell as an administrator.
Download and install the IoT Edge service on your device.
. {Invoke-WebRequest -useb aka.ms/iotedge-win} | Invoke-Expression; ` Install-SecurityDaemon -Manual -ContainerOs Linux
When prompted for a DeviceConnectionString, provide the string that you copied in the previous section. Don't include quotes around the connection string.
View the IoT Edge runtime status
Verify that the runtime was successfully installed and configured.
Check the status of the IoT Edge service.
Get-Service iotedge
If you need to troubleshoot the service, retrieve the service logs.
# Displays logs from today, newest at the bottom. Get-WinEvent -ea SilentlyContinue ` -FilterHashtable @{ProviderName= "iotedged"; LogName = "application"; StartTime = [datetime]::Today} | select TimeCreated, Message | sort-object @{Expression="TimeCreated";Descending=$false} | format-table -autosize -wrap
View all the modules running on your IoT Edge device. Since the service just started for the first time, you should only see the edgeAgent module running. The edgeAgent module runs by default, and helps to install and start any additional modules that you deploy to your device. modules to your IoT Edge devices from the cloud. An IoT Edge module is an executable package implemented as a container. In this section, we'll be deploying a pre-built module from the IoT Edge Modules section of the Azure Marketplace. This module generates telemetry for your simulated device.
In the Azure portal, enter
Simulated Temperature Sensorinto the search and open the Marketplace result.
In the Subscription field, select the subscription with the IoT Hub you're using, if it's not already.
In the IoT Hub field, select the name of the IoT Hub you're using, if it's not already.
Click on Find Device, select your IoT Edge device (named
myEdgeDevice), then select Create.
In the Add Modules step of the wizard, click on the SimulatedTemperatureSensor module to verify its configuration settings, click Save and select Next.
In the Specify Routes step of the wizard, verify the routes are properly set up with the default route that sends all messages from all modules to IoT Hub (
$upstream). If not, add the following code then select Next.
{ "routes": { "route": "FROM /messages/* INTO $upstream" } }
In the Review Deployment step of the wizard, select Submit.
Return to the device details page and select Refresh. In addition to the edgeAgent module that was created when you first started the service, you should see another runtime module called edgeHub and the SimulatedTemperatureSensor module listed.
It may take a few minutes for the new modules to show up. The IoT Edge device has to retrieve its new deployment information from the cloud, start the containers, and then report its new status back to IoT Hub.
View generated data
In this quickstart, you created a new IoT Edge device and installed the IoT Edge runtime on it. Then, you used the Azure portal to push an IoT Edge module to run on the device without having to make changes to the device itself. In this case, the module that you pushed creates environmental data that you can use for the tutorials.
Confirm that the module deployed from the cloud is running on your IoT Edge device.
iotedge list
View the messages being sent from the tempSensor module to the cloud.
iotedge logs tempSensor -f
You can also watch the messages arrive at your IoT hub by using the Azure IoT Toolkit extension for Visual Studio Code.
Clean up resources
If you want to continue on to the IoT Edge tutorials, you can use the device that you registered and set up in this quickstart. Otherwise, you can delete the Azure resources that you created and remove the IoT Edge runtime from your device.
Delete Azure resources
Remove the IoT Edge runtime
If you plan on using the IoT Edge device for future testing, but want to stop the tempSensor module from sending data to your IoT hub while not in use, use the following command to stop the IoT Edge service.
Stop-Service iotedge -NoWait
You can restart the service when you're ready to start testing again
Start-Service iotedge
If you want to remove the installations from your device, use the following commands.
Remove containers that were created on your device by the IoT Edge runtime. Change the name of the tempSensor container if you called it something different.
docker rm -f tempSensor docker rm -f edgeHub docker rm -f edgeAgent
Next steps
In this quickstart, you created a new IoT Edge device and used the Azure IoT Edge cloud interface to deploy code onto the device. Now, you have a test device generating raw data about its environment.
You are ready to continue on to any of the other tutorials to learn how Azure IoT Edge can help you turn this data into business insights at the edge. | https://docs.microsoft.com/en-us/azure/iot-edge/quickstart | 2018-12-10T06:57:53 | CC-MAIN-2018-51 | 1544376823318.33 | [array(['media/quickstart/install-edge-full.png',
'Diagram - Quickstart architecture for device and cloud'],
dtype=object)
array(['media/quickstart/create-iot-hub.png',
'Diagram - Create an IoT hub in the cloud'], dtype=object)
array(['media/quickstart/register-device.png',
'Diagram - Register a device with an IoT Hub identity'],
dtype=object)
array(['media/quickstart/start-runtime.png',
'Diagram - Start the runtime on device'], dtype=object)
array(['media/quickstart/deploy-module.png',
'Diagram - deploy module from cloud to device'], dtype=object)
array(['media/quickstart/iotedge-list-2.png',
'View three modules on your device'], dtype=object)
array(['media/quickstart/iotedge-logs.png',
'View the data from your module'], dtype=object)] | docs.microsoft.com |
, click the Yammer settings icon
, and then click Network Admin.
Invite users to Yammer
Users are not part of the Yammer network until they have clicked the Yammer tile from Office 365 or logged in once to Yammer. Only employees with a company email address can be invited from this screen.
If you have chosen to enforce Office 365 identity, only users with a Yammer license can join the Yammer click Invite.
Invite users in bulk just specifying their email address
In the Yammer admin center, go to Users > Invite Users.
Click logged in. Pending users can be added to groups even before they join the Yammer network.
Pending users receive announcement notification emails from group admins. If users don't want to receive announcements from a particular group they can log into their Yammer account and leave the group or follow the unsubscribe link in the email to unsubscribe from all Yammer emails.
As in all Office products, pending users will be visible in the group member list even if they have never logged.
In the Yammer admin center, go to Users > Remove Users.
Enter an existing user's name.
Select an action to take:
Deactivate this user:
If the user is not using Azure Active Directory (AAD) credentials, this.
If the user is using AAD.
Click Submit.
If there are any deactivated users, they are listed on the Remove Users page. You can reactivate or delete a user from this list.
Users can also delete their own Yammer account. For more information, see Change my Yammer profile and settings.
Monitor account activity and device usage for a single user
In the Yammer admin center, go to Users > Account Activity.
Type a user's name. You'll see what devices they are currently logged in, when they last logged in on each device, and which IP address was used.
You can also log the user off Office click Deactivate.
Bulk update users by importing a .CSV file
*.
Click Export.
This provides a .ZIP file containing 3 clicking the three dots in the top navigation bar and going to Edit Profile > Notifications. They can also go to Edit Profile > Preferences to adjust their own message and time zone settings.
See also
Delete your Yammer account
Manage Yammer users across their lifecycle from Office 365 | https://docs.microsoft.com/en-us/yammer/manage-yammer-users/add-block-or-remove-users | 2018-12-10T07:05:53 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.microsoft.com |
You can download the vRealize Automation Designer installer from the vRealize Automation appliance.
Prerequisites
Log in to the Windows machine as a local administrator.
If you are using Internet Explorer, verify that Enhanced Security Configuration is not enabled. See res://iesetup.dll/SoftAdmin.htm.
Procedure
- Open a browser.
- Navigate to the Windows installer download page by using the host name of the ().
- Click vRealize Automation Designer.
- When prompted, save the installer.
What to do next
Install vRealize Automation Designer. | https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vra.extensibility.doc/GUID-4CE2080C-7FAE-40F5-8AD0-B5CF0BA50D3E.html | 2018-12-10T06:38:02 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.vmware.com |
Class
DefaultSeedConfigDefault configuration for seeds without score.
Default configuration for seeds without score.
Member Typedef Overview
(...).
Detailed Description
The default definition is as follows. You use this as a blueprint for your own TConfig struct for a Seed or SeedSet class.
struct DefaultSeedConfig { typedef size_t TPosition; typedef size_t TSize; typedef MakeSigned_<size_t>::Type TDiagonal; typedef int TScoreValue; };
See Also
Member Typedef Detail
(...). | http://docs.seqan.de/seqan/develop/class_DefaultSeedConfig.html | 2018-12-10T06:06:56 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.seqan.de |
of
Travis CI GmbH, Rigaer Straße 8, 10247 Berlin, registered with the commercial register of the local court in Charlottenburg (Berlin) under HRB 140133 B (hereinafter “Travis CI”),
regarding the software Travis CI (hereinafter “Travis CI Software”)
as of 05/2018
Subject matter #
- The subject matter of these terms and conditions (hereinafter “Terms and Conditions”) is the use of Travis CI Software. The Terms and Conditions regulate all relations between Travis CI and Travis CI’s customer (hereinafter “Customer”) regarding the use of Travis CI Software and are part of the Agreement as defined below. Standard business conditions and/or general terms and conditions of the Customer do not apply, regardless of whether or not Travis CI has expressly objected to them in a particular case.
- Travis CI communicates with the GitHub hosting service (hereinafter “GitHub”), which is offered by GitHub Inc. on the Customer’s behalf. Condition to the proper use of Travis CI Software is a valid contract with GitHub Inc. on the use of GitHub, which may lead to costs on the Customer’s sole responsibility. The Customer will provide Travis CI with his GitHub account information (hereinafter “GitHub Sign-In”) when signing in through GitHub via travis CI-ci.com (hereinafter the “Website”) automatically. He allows Travis CI to access the Customer’s GitHub account. Travis CI will directly communicate in the name of the Customer with GitHub and the Customer authorizes Travis CI to act on his behalf towards GitHub Inc. The Customer is solely liable for any costs or damages that GitHub Inc. associates with the GitHub Sign-In.
Scope of services #
- Travis CI provides Travis CI Software as a service. This means, that Travis CI Software may be used via the internet only.
- The Travis CI Software is a hosted continuous integration service that provides the infrastructure for testing software projects subject to downtime and service suspensions as described below. The functional range of the service is dependent on the package the Customer chooses and will be displayed on the Website.
- Travis CI offers its services for (i) closed software projects, where the projects cannot be viewed publicly and (ii) open software projects, where the projects including their source codes can be viewed publicly.
- The Travis CI Software is integrated with GitHub and offers support for several programming languages. The entire list of supported programming languages is available at about.travis CI-ci.org.
- No consultancy, training, trouble shooting or support is within the scope of the services offered by Travis CI under this Agreement if and to the extent they are not part of the agreed warranty as set out in section 9 hereunder.
- Travis CI offers unpaid and paid services. If a service by Travis CI is offered free of charge, section 8 shall apply.
Concluding of the Agreement #
- Using the Travis CI Software requires the opening of an account via the Website by using the Customer’s GitHub Sign-In (hereinafter the “Account”). Travis CI offers the use of the Travis CI Software only to entrepreneurs within the meaning of section 14 (1) of the German Civil Code.
- The opening of an Account by the Customer is deemed an offer to conclude an agreement under these Terms and Conditions with Travis CI (hereinafter the “Agreement”). Travis CI may at its own discretion accept this offer by explicitly accepting it or rendering services under this Agreement.
- An Account may only be used by one single person. The Customer is entitled to create separate Accounts for his employees.
- The person opening the Account represents that he/she has the legal authority to bind the legal entity he/she acts for to this Agreement and may in knowledge of this Agreement provide the GitHub Sign-Ins to Travis CI.
- Travis CI Software;
- immediately inform Travis CI in case of loss, theft or other disclosure of the Account data to a third party or in a suspicion of misuse of the Account data and to immediately change the password;
- allow the use of the Account data only designated administrators to be specified in the registration procedure.
Obligations of the customer #
- The Customer is obliged to make agreed payments in due time.
- The Customer must not interfere or intent to interfere in any manner with the functionality or proper working of the Travis CI Software.
- The Customer must take care for regular backups of his software. When delivering its services Travis CI assumes that the software builds of the Customer tested are copies only and will not be used in real environments.
- The Customer is obliged to use the Travis CI Software and services in an appropriate way. In particular, any use and/or actions considered ‘Bitcoin-Mining’ is not accepted by Travis CI.
- When using testing data the Customer will make sure that these data do not contain unanonymized personal data of real people.
- The Customer will indemnify and hold harmless Travis CI, its officers and directors, employees and agents from any and all third party claims, damages, costs and (including reasonable attorneys fees) arising out of the Customer’s use of the Travis CI Software in a manner not authorized by this Agreement, and/or applicable law, or the Customer’s or it’s employees’ or personnel’s negligence or willful misconduct.
Downtime and services suspensions #
- Adjustments, changes and updates of the Travis CI Software that help to avoid or maintain dysfunctions of the Travis CI Software may lead to temporary service suspensions. Travis CI will try to limit downtime of the service or restrictions of accessibility to 10 hours a month. Travis CI will try to do regular maintenance works during the weekend or at times between 10 p.m. and 6 a.m. (CET). Outside of the said hours, Travis CI will inform the Customer about upcoming maintenance works.
- The Customer is aware that the service relies on a working internet infrastructure. Additional downtime of the service can occur if the website is not available and at any other time with restrictive access to the internet of the Customer.
- The Customer is aware that the Travis CI Software does not work if GitHub is not properly available (be it to Travis CI or the Customer).
- For the avoidance of doubt, in case of dysfunctions of the Travis CI Software caused by Travis CI through willful intent or gross negligence, Travis CI’s liability for such dysfunctions is not excluded by this section 5
Rights to use #
- TSubject to the full payment of due fees for the services under the Agreement, Customer is granted a limited, non-exclusive, non-transferable, non-sublicenseable right to use the Travis CI Software as software as a service via the internet according to these Terms and Conditions.
- The Customer is not granted any additional right to the Travis CI Software or any other intellectual property of Travis CI. This especially means that the Customer shall not be entitled to make copies of the Travis CI Software. The Customer shall not translate the program code into other forms of code (decompilation) or employ other methods aimed at revealing the Travis CI Software’s code in the various stages of its development (reverse engineering).
- The Customer is not entitled to remove or make alterations to copyright notices, serial numbers or other features which serve to identify the Travis CI Software.
- To the extent the Travis CI Software includes any open source software, Customer’s right to use with respect to each item of the open source software will be governed exclusively by the applicable open source software license associated with the respective open source software, regardless of any other provisions of these Terms and Conditions. Without limiting the foregoing sentence, Customer recognizes that the only warranties and representations respecting the open source software are those provided in the applicable open source software license and Travis shall bear no responsibility or liability whatsoever related to its supply of or Customer’s use of such software. Identification of and licenses for open source software may be found in the Travis CI Software, or in libraries provided with the Travis CI Software, or in links provided in the software or such libraries, or on the Travis-ci.org website, or in other documentation provided or linked to by Travis.
Payments #
- The compensation of the paid services rendered by Travis CI is calculated per month. The current prices are shown in the current price list of Travis CI that is available on the Website. The compensation is due monthly in advance.
- Invoices will be issued via email. Payments shall become due immediately upon issuance of the invoice. Payment must be made using the payment methods provided by Travis CI from time to time and chosen by the Customer in his Account settings.
- All prices in the price list are net-price. Value Added Tax will be added in the invoice if applicable.
- Travis CI may alter the current price list and/or the structuring of prices with at least one month notice to the end of each quarter. Travis CI will inform the Customer via email about the price change. If the Customer does not expressly disagree in writing within a month from the notification of change this is deemed to be his acceptance of the change. The Customer will be informed about this circumstance in the notification of change.
Free Services #
For any services offered by Travis CI free of charge (“Free Services”), the following shall apply in derogation of sections 5, 9 and 10 hereunder: the Free Service is provided on an “as is” and “as available” basis with no right to any warranty given by Travis CI or indemnification hereunder. For the avoidance of doubt, sections 5.1 Sentences 2 to 4 and 5.4 (Downtime and Service Suspensions), section 9 (Warranty) and section 10 (Limitation of Liability) shall not apply to Free Services.
Warranty #
- Defects in the supplied Travis CI Software shall be remediated within a reasonable time following a detailed notification of such defect being given to Travis CI by the Customer.
- For the purpose of remedying defects, Travis CI may choose to replace the defective Travis CI Software with a version of the Travis CI Software which is free of defects. In case of defects of updated, upgraded or new versions (the aforementioned hereinafter each a “New Version”), the right of defect with regard to a New Version shall be limited to the new features of New Version of the Travis CI Software compared to the previous version release.
- Unless Travis CI fails to repair or replace the Travis CI Software, the right of the Customer to terminate the contract due to an inability to use the Travis CI Software shall be excluded.
- The limitation period for all warranty claims shall be 12 months commencing with the first coming to show of the defect.
Limitation of Liability #
- Travis CI’s liability for damages caused by or related to the exercise of rights and obligations under this Agreement shall be excluded. The limitation of liability shall not cover
- damage from injury to life, body or health caused by Travis CI;
- damages caused by Travis CI that are are a result of wilful intent or gross negligence;
- damage caused by Travis CI as a result of slight negligence in the event of Travis CI’s breach of an essential contractual obligation which is indispensable for the duly execution of the contract and thereby jeopardizes the achievement of the contract purpose and such damage is typically foreseeable at the time of the infringement;
- Travis CI’s liability in the event of the assumption of a warranty if an obligation infringement covered thereby triggers Travis CI’s liability.
- If the liability of Travis CI is excluded or restricted, this also applies to the personal liability of its employees, representatives, and agents.
- Liability under the Product Liability Act (Produkthaftungsgesetz) shall remain unaffected.
- Travis CI will not be liable hereunder by reasons of any failure to timely perform its services due to an event beyond its reasonable control, including acts of God.
Data protection, References and Confidentiality #
- Travis CI stores, processes and uses Customer data in accordance with the applicable (data protection) law(s). For further information please refer to Travis CI privacy policy under. In the event Travis CI processes personal data controlled by Customer the data processing agreement under applies.
- Travis CI may identify Customer as a customer of Travis CI and display Customer’s name and logo solely for such purpose on its customer lists, its website, and its marketing and promotional materials provided that Customer may request that Travis CI cease such use at any time upon written notice to Travis CI.
Term and Termination #
- The Agreement runs for an indefinite time and will remain in effect until terminated by one of Parties in accordance with this section 12.
- The Parties may terminate this Agreement for any or no reason at their convenience with a 30 day notice to the end of each month. Termination may be issued in writing or by using the provided account closing mechanism, if provided by Travis CI.
- In addition, each party’s right to terminate this Agreement for a good cause remains unaffected. A good cause for termination of the Agreement by Travis CI shall include, but is not limited to, the following:
- a serious breach of the obligations arising from this Agreement by the Customer;
- a default in payment of the Customer with an amount that equals at least the compensation of two month;
- a serious breach of contract leading to the loss of mutual trust or renders the continuation of this Agreement in consideration of the purpose of the Agreement unreasonable;
- an attempt a denial of service attack on any of the Services by the Customer or any attempt to hack or break any security mechanism on any of the services;
- determination that the Customer’s use of the services poses a security or service risk to Travis CI, or to any user of services offered by Travis CI;
- inappropriate use of the Travis CI Software and/or services, including but not limited to Bitcoin-Mining;
- a major change in the working of GitHub that makes it unreasonable for Travis CI to adapt the Travis CI Software accordingly;
- a major change in the co-operation of GitHub and Travis CI that makes the further offering of Travis CI unreasonable for Travis CI;
- an application for the initiation of insolvency proceedings concerning the Customer, as well as the refusal to open insolvency proceedings for lack of assets, or the issue of a declaration in lieu of an oath, or any similar proceedings.
- In case of the termination of the Agreement, any rights of use granted to Customer for the Travis CI Software shall expire immediately and Customer shall cease to use the Travis CI Software.
Disputes, Applicable Law, Notices #
- This Agreement (and any dispute, controversy, proceedings or claim of whatever nature arising out of or in any way relating to this Agreement or its formation) shall be governed by the laws of the Federal Republic of Germany. The UN Convention on Contracts for the International Sale of Goods (CISG) shall not apply.
- The parties agree that the courts of the seat of Travis CI shall have exclusive jurisdiction to settle any dispute arising out of this Agreement.
- Notices made by Travis CI to the Customer may be posted on the Website and/or send to the email-address specified by the Customer when registering or to any updated email-address the Customer provides. Notices to Travis CI must be directed to [email protected] and/or Travis CI GmbH, Rigaer Str. 8, 10247 Berlin, Germany.
- The official text of this Agreement and any annexes attached here to and any notices given here shall be in English. However communication between Travis CI and the Customer may be in English or German.
Final provisions #
- This Agreement, together with any documents referred to in it, or expressed to be entered into in connection with it, constitutes the whole agreement between the Parties concerning the subject matter of this Agreement.
- The Customer may set off only legally, binding and recognized claims. The rights and obligations arising from this Agreement are generally not transferable. However, Travis CI may transfer this Agreement with all rights and obligations to a company of its choice.
- If any provision of this Agreement is or later becomes invalid, or contains omissions, the validity of the other provisions shall remain unaffected. The parties shall agree upon a new provision, which shall resemble the invalid provision as closely as possible in purpose and meaning considering the interests of the parties and the legal regulations,.
- These General Terms and Conditions may be modified by Travis CI at any time. Travis CI will inform the Customer via email that these General Terms and Conditions have altered by including the new version and/or description of the alteration(s) in this email. If the Customer does not expressly objects in writing within a month from the notification of change to these General Terms and Conditions this is deemed to be his acceptance of the change(s). The Customer will expressly be informed about this circumstance and the significance of his silence in the notification of change. | https://docs.travis-ci.com/legal/terms-of-service/ | 2018-12-10T06:28:45 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.travis-ci.com |
The code below is not JeeNode-specific, it can also be used with other Arduino’ish boards.
The EtherCard library on GitHub can be used to send and receive Ethernet packets via an Ether Card board, or any other expansion board with an ENC28J60 chip. It’s based on an initial implementation by Guido Socher and Pascal Stang, but has since evolved considerably.
The RTC library on GitHub can be used to connect to a variety of I2C-based Real Time Clock chips. It includes to ability to set the RTC from the last-compiled date, which makes it easy to get the clock running at the (aproximately) correct time.
The GLCD library on GitHub contains a driver for ST7656-based Graphics LCD screens, such as the Graphics Board for JeeNodes. This code was derived from the ST7565 library by AdaFruit.
GLCDlib includes a large set of monospaced and proportional fonts, as well as some optimisations to improve refresh rates when only parts of the display change.
The IDE-hardware repository on GitHub is not so much a library, but an add-on for version 1.5 of the Arduino IDE to support the ATtiny84 found on the JeeNode Micro.
This code was adapted from the arduino-tiny project, but has not kept up with changes in the way the Arduino IDE now handles hardware platform differences. | https://docs.jeelabs.org/nodelib/other-libs/ | 2018-12-10T07:20:15 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.jeelabs.org |
Overview of PingOne Updated 599 11,567 people found this helpful Add to MyDocs | Hide Show Table of Contents Table of Contents Expand | Collapse PingOne SSO for SaaS Apps PingOne SSO for SaaS Apps PingOne SSO for SaaS Apps is for Service Providers (SPs) who want to provide customers secure, single sign-on (SSO) to cloud applications. Your applications are published to our Application Catalog for either private or public availability. | https://docs.pingidentity.com/bundle/p1_overview_aps/page/apsAdminOverview.html | 2018-12-10T06:50:14 | CC-MAIN-2018-51 | 1544376823318.33 | [] | docs.pingidentity.com |
To use your own nameservers you have to:
To be compliant with IANA name server requirements, you MUST have at least 2 redondant name-servers with two differents IPs. (see below for more info)
If you own only one server, you cannot build a compliant name server and you will do it at your own risk.
Sentora does not advise or approve building uncompliant nameservers.
Once you are logged in to the Sentora interface for your server:
- go to Domain>Domains and ensure your root domain (yourdomain.tld) is added. If not, add it.
- go to Domain>DNS Manager and select your root domain from the drop down box, then click on "Select". If there is "No records were found ..." create the default records with the button "Create Records". The default records are now created for that domain.
If your server uses IPv6 instead of IPv4, you have to replace the IPv4 records in tab "A" by using the same records (except IPv6 value) in tab "AAAA" and to remove "A" records.
(NOTE: you are strongly advised to create an spf record manually, see also Online tools to check anything, "To setup and test SPF record").
The mandatory records to act as nameservers are the two ns1 and ns2 "A" (or "AAAA") records, plus both "NS" records.
Ensure that port 53 is open on your server, else Bind will never receive any request!
You can check it with Port forwarding tester
Login to your registrar domain handling tool pages.
2.1) Find the page which shows the "nameservers" list for your domain, something like:
(this list was gathered from gandi.net registrar manager. Sentora has no special agreement with Gandi).
Enter the page or form that enable you to change this list content and replace the server list by your nameservers URLs :
2.2) Enter the page that enables you to chage the "Glue records". For each nameserver (ns1, ns2) enter its name and IP as required on the form.
Glue record entries are MANDATORY when the subdomain used for a nameserver is inside the same domain that the nemaserver resolves itself.
Example :
You want to use the subdomain ns1.yourdomain.tld for the main name server that resolves your domain yourdomain.tld.
Imagine the dialog between an application that want to access to yourdomain.tld and the nameservers:
=> Glue record is mandatory to break the self resolving loop:
2.3) Wait for propagation to complete. (Can take up to 48 hours.)
See also Setting up DNS and Online tools to check anything
A normal query to resolve a domain (request for IP from domain name) is normally handled in a few milliseconds.
When a name server is down, it requires all servers along the chain in the internet to wait until a final time-out occurs (usualy between 2 000 to 15 000 milliseconds), which locks ram and processes during this time. And this state is propagated all around the world.
Using the same computer to host both primary and secondary nameservers (per example with a virtual server hosted on the same computer in order to have another IP) does not offer any redundancy: if the the computer is halted, both nameservers will be down at the same time and all requests to resolve all the domains that they host will fail in time-out.
So, setting your system up this way is only cheating. And worse, on the computer side, an extra VPS adds a significant load for a task that is completely useless.
Currently, the IANA only requires that a nameserver have redundancy, but does not penalizes nameservers that are not redundant (yet).
Due to the number of newbie servers and nameservers exploding across the web. It is possible that the IANA may choose one day to ban nameservers that are the source of too many problems (Down time, connection loss, etc.).
So, each nameserver owner must be 100% RESPONSIBLE for their servers and nameservers... and do the best they can to ensure that the resolution of a domain is always a success (and is correct), because it impacts not only his website and domains, but also the whole World Wide Web.
Cheating cannot be a solution. Hosting a world wide public server, selling hosting space, and more, nameservers, is NOT a game! | http://docs.sentora.org/?node=53 | 2017-02-19T14:23:15 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.sentora.org |
GetChildren Method (ADO)
Returns a Recordset whose rows represent the children of a collection Record.
Syntax
Set recordset = record.GetChildren
Return Value
A Recordset object for which each row represents a child of the current Record object. For example, the children of a Record that represents a directory would be the files and subdirectories contained within the parent directory.
Remarks
The provider determines what columns exist in the returned Recordset. For example, a document source provider always returns a resource Recordset. | https://docs.microsoft.com/en-us/sql/ado/reference/ado-api/getchildren-method-ado | 2017-02-19T14:55:08 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.microsoft.com |
Bounces¶
When a message to an email address bounces, Mailman’s bounce runner will register a bounce event. This registration is done through a utility.
>>> from zope.component import getUtility >>> from zope.interface.verify import verifyObject >>> from mailman.interfaces.bounce import IBounceProcessor >>> processor = getUtility(IBounceProcessor) >>> verifyObject(IBounceProcessor, processor) True
When a bounce occurs, it’s always within the context of a specific mailing list.
>>> mlist = create_list('[email protected]')
The bouncing email contains useful information that will be registered as well. In particular, the Message-ID is a key piece of data that needs to be recorded.
>>> msg = message_from_string("""\ ... From: [email protected] ... To: [email protected] ... Message-ID: <first> ... ... """)
There is a suite of bounce detectors that are used to heuristically extract the bouncing email addresses. Various techniques are employed including VERP, DSN, and magic. It is the bounce runner’s responsibility to extract the set of bouncing email addresses. These are passed one-by-one to the registration interface.
>>> event = processor.register(mlist, '[email protected]', msg) >>> print(event.list_id) test.example.com >>> print(event.email) [email protected] >>> print(event.message_id) <first>
Bounce events have a timestamp.
>>> print(event.timestamp) 2005-08-01 07:49:23
Bounce events have a flag indicating whether they’ve been processed or not.
>>> event.processed False
When a bounce is registered, you can indicate the bounce context.
>>> msg = message_from_string("""\ ... From: [email protected] ... To: [email protected] ... Message-ID: <second> ... ... """)
If no context is given, then a default one is used.
>>> event = processor.register(mlist, '[email protected]', msg) >>> print(event.message_id) <second> >>> print(event.context) BounceContext.normal
A probe bounce carries more weight than just a normal bounce.
>>> from mailman.interfaces.bounce import BounceContext >>> event = processor.register( ... mlist, '[email protected]', msg, BounceContext.probe) >>> print(event.message_id) <second> >>> print(event.context) BounceContext.probe | http://mailman.readthedocs.io/en/release-3.0/src/mailman/model/docs/bounce.html | 2017-02-19T14:10:00 | CC-MAIN-2017-09 | 1487501169776.21 | [] | mailman.readthedocs.io |
Add a node to the active tree and link to an existing socket
Add a file node to the current node editor
Add a node to the active tree
Add a reroute node
Search for named node and allow to select and activate it
Edit node group
Insert selected nodes into a node group
Make group from selected nodes
Separate selected nodes from the node group
Activate and view same node type, step by step
Update shader script node with new sockets and options from the script
Sort the nodes and show the cyclic dependencies between the nodes
Toggles tool shelf display
Move nodes and attach to frame
Go to parent node tree
Add an input or output socket to the current node tree
Move a socket up or down in the current node tree’s sockets stack
Remove an input or output socket to the current node tree
Resize view so you can see all nodes
Resize view so you can see selected nodes
Set the boundaries for viewer operations | https://docs.blender.org/api/blender_python_api_2_67_1/bpy.ops.node.html | 2017-02-19T23:15:28 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.blender.org |
java.lang.Object
org.springframework.jdbc.datasource.DelegatingDataSourceorg.springframework.jdbc.datasource.DelegatingDataSource
org.springframework.jdbc.datasource.TransactionAwareDataSourceProxyorg.springframework.jdbc.datasource.TransactionAwareDataSourceProxy
public class TransactionAwareDataSourceProxy
Proxy for a target JDBC
DataSource, adding awareness of
Spring-managed transactions. Similar to a transactional JNDI DataSource
as provided by a J2EE server.
Data access code that should remain unaware of Spring's data access support
can work with this proxy to seamlessly participate in Spring-managed transactions.
Note that the transaction manager, for example
DataSourceTransactionManager,
still needs to work with the underlying DataSource, not with this proxy.
Make sure that TransactionAwareDataSourceProxy is the outermost DataSource
of a chain of DataSource proxies/adapters. TransactionAwareDataSourceProxy
can delegate either directly to the target connection pool or to some
intermediary proxy/adapter like
LazyConnectionDataSourceProxy or
UserCredentialsDataSourceAdapter..
As a further effect, using a transaction-aware DataSource will apply remaining transaction timeouts to all created JDBC (Prepared/Callable)Statement. This means that all operations performed through standard JDBC will automatically participate in Spring-managed transaction timeouts.
NOTE: This DataSource proxy needs to return wrapped Connections
(which implement the
ConnectionProxy interface) in order to handle
close calls properly. Therefore, the returned Connections cannot be cast
to a native JDBC Connection type like OracleConnection or to a connection
pool implementation type. Use a corresponding
NativeJdbcExtractor
to retrieve the native JDBC Connection.
DataSource.getConnection(),
Connection.close(),
DataSourceUtils.doGetConnection(javax.sql.DataSource),
DataSourceUtils.applyTransactionTimeout(java.sql.Statement, javax.sql.DataSource),
DataSourceUtils.doReleaseConnection(java.sql.Connection, javax.sql.DataSource)
public TransactionAwareDataSourceProxy()
DelegatingDataSource.setTargetDataSource(javax.sql.DataSource)
public TransactionAwareDataSourceProxy(DataSource targetDataSource)
targetDataSource- the target DataSource
public void setReobtainTransactionalConnections(boolean reobtainTransactionalConnections)
The default is "false". Specify "true" to reobtain transactional Connections for every call on the Connection proxy; this is advisable on JBoss if you hold on to a Connection handle across transaction boundaries.
The effect of this setting is similar to the "hibernate.connection.release_mode" value "after_statement".
public Connection getConnection() throws SQLException
The returned Connection handle implements the ConnectionProxy interface, allowing to retrieve the underlying target Connection.
getConnectionin interface
DataSource
getConnectionin class
DelegatingDataSource
SQLException
DataSourceUtils.doGetConnection(javax.sql.DataSource),
ConnectionProxy.getTargetConnection()
protected Connection getTransactionAwareConnectionProxy(DataSource targetDataSource)
close()calls to DataSourceUtils.
targetDataSource- DataSource that the Connection came from
Connection.close(),
DataSourceUtils.doReleaseConnection(java.sql.Connection, javax.sql.DataSource)
protected boolean shouldObtainFixedConnection(DataSource targetDataSource)
The default implementation returns
true for all
standard cases. This can be overridden through the
"reobtainTransactionalConnections"
flag, which enforces a non-fixed target Connection within an active transaction.
Note that non-transactional access will always use a fixed Connection.
targetDataSource- the target DataSource | http://docs.spring.io/spring/docs/2.5.5/javadoc-api/org/springframework/jdbc/datasource/TransactionAwareDataSourceProxy.html | 2017-02-20T00:05:29 | CC-MAIN-2017-09 | 1487501170286.6 | [] | docs.spring.io |
java.lang.Object | +--net.rim.device.api.system.Clipboard
Provides a global clipboard for cut and paste operations.
// Retrieve the Clipboard object. Clipboard cp = Clipboard.getClipboard(); // Copy to clipboard. cp.put(str); // Retrieve the clipboard's current contents String str = (String) cp.get(); // Clean the clipboard. cp.put(null);
public Object get()
Notice that this get is not destructive: the clipboard remains loaded with this object until a new one gets put to replace it.
public static Clipboard getClipboard()
public Object put(Object o)
This operation replaces the existing object with your new one. Accordingly, you can clear out the clipboard by passing null into this method.
Note: This method throws a
SecurityException if
third-party applications provide an object other than a
String or
StringBuffer as parameter to this method.
o- New object to put on the clipboard.
public String toString()
This method invokes toString() on the clipboard's current contained object. If the contents are null, then an empty string is returned.
toStringin class. | http://docs.blackberry.com/en/developers/deliverables/6022/net/rim/device/api/system/Clipboard.html | 2014-03-07T08:33:04 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.blackberry.com |
This page contains some coding conventions for the Groovy-Eclipse project. This is still a work in progress. If you feel there are some conventions that are missing from here, please raise a jira issue for it.
We are only trying to coordinate some high level conventions that maintain the quality of the project across releases and when there are multiple developers. For anything that is not specified here, use your common sense. If you think that there is a major issue of concern, send a message to the [email protected] mailing list.
Copying code from Eclipse and JDT
In general, we want to avoid copying code from JDT and Eclipse. Instead, the preferred way to extend or borrow Eclipse functionality is to use sub-classing and reflection. Where this approach is not possible or it is unwieldy, it may be reasonable to copy an entire file. Here are the guidelines for doing this:
- Add the copied file to the jdt_patch source folder in the plugin. (If this doesn't exist yet, then create it.)
- Change the package name by prefixing 'greclipse." to the package. Do not change the type name.
- First line of the file should describe where it comes from (fully qualified name, plugin, etc).
- If there are visibility problems that mean you cannot change the package name, then leave the package as is and suffix the type name with PATCH. This should be a last resort.
- Surround all changes (apart from the package statement and imports) with "// GROOVY start" and "// GROOVY end"
- If suitable, add a textual description of why you are doing what you are doing after "// GROOVY start"
- If you have been really keen include the bugzilla entry raised against JDT related to the change you are making "prNNNNNN"
- Where you are replacing existing code with new code, use this format:
This ensures comparisons and merges are easier because the old code will precisely match what is in the original file. If the old code includes /* */ comments just apply common sense and think what will be easiest to merge again later.
In some cases, you may want to copy code from JDT as a starting point, but you have no intention of keeping the versions synchronized across Eclipse releases. This situation is fine, but you must keep the original EPL license at the top of the file.
Code formatting, templates, and code cleanups
The Groovy-Eclipse plugins use a set of pre-defined code formatting, templates, and code cleanups. They are set in each project. Also, copies of these preferences are included in the org.codehaus.groovy.eclipse.core project. When creating a new project, be sure to use these preferences.
Task tags in code
Do not use TODO tags. Instead use something like FIXADE (where instead of ADE, use your initials). And after the FIXADE tag, you can optionally include a version number that this change should be applied to (eg- // FIXADE (2.1.0) ). You may also want to add the tag as one that is recognized by the JDT compiler. Go to: Preferences -> Java Compiler -> Task Tags and add what you need.
Error handling
In general, unless there is a good reason not to, errors should be logged to the error log. To do this, in the catch clause, add the following code:
GroovyCore.logException(String, Throwable)
And when in the org.eclipse.jdt.groovy.core project, use this instead (or one of the variants of the method):
org. | http://docs.codehaus.org/display/GROOVY/Groovy-Eclipse+coding+conventions?showComments=true%20showCommentArea=true | 2014-03-07T08:27:39 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.codehaus.org |
Installation
- Install the plugin through the Update Center or download it into the SONARQUBE_HOME/extensions/plugins directory
- Restart the SonarQube server
Known Limitations
Most of the coding rules are currently not translated.
Change Log
Release 1.6 (1 issues)
Release 1.5 (2 issues)
Release 1.4 (2 issues)
Release 1.3 (1 issues)
Release 1.2 (1 issues)
Release 1.1 (1 issues)
| http://docs.codehaus.org/pages/viewpage.action?pageId=235044924 | 2014-03-07T08:29:07 | CC-MAIN-2014-10 | 1393999638008 | [array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
We removed our free Sandbox April 25th.
You can read more on our blog.
MobWrite¶
Google MobWrite is a “Real-time Synchronization and Collaboration Service”. The original project can be found at. This tutorial is based on, which did the really important updates, like enabling MobWrite to run over WSGI instead of mod_python.
All the code of the tutorial is available on GitHub, at. To test it on dotCloud, all you have to do is:
git clone git://github.com/jpetazzo/google-mobwrite.git cd google-mobwrite dotcloud create mobwrite dotcloud push
Moreover, you can read the.
Since MobWrite is a Python web app, we will use a single “python” service.
The role and syntax of the dotCloud Build File is explained with further detail in the documentation, at.
dotcloud.yml:
www: type: python
wsgi.py File¶
The “python” service dispatches HTTP requests to a WSGI-compatible callable which must be found at wsgi.application. In other words, we need to have a application function in the wsgi.py file.
The MobWrite gateway has already been “WSGI-fied”, and contains a suitable application callable. However, the gateway code is located in the daemon directory, and expects to import other modules from this directory. Therefore, we will add the daemon directory to the Python Path (instead of doing e.g. from daemon.gateway import application).
wsgi.py:
import sys sys.path.append('/home/dotcloud/current/daemon') from gateway import application
supervisord.conf File¶
The MobWrite “gateway” (the WSGI application that we enabled in the previous step) actually talks to the MobWrite “daemon”. We need to start this “daemon”. While we could use dotcloud run www to log into the service and start it manually, we will rather write a supervisord.conf file, as explained in the Background Processes Guide.
supervisord.conf:
[program:mobwrite] directory=/home/dotcloud/current/daemon command=/home/dotcloud/current/daemon/mobwrite_daemon.py
Update Shebangs¶
All the Python programs in the original code start with #!/usr/bin/python. That generally works, unless you are using a non-default Python install, or a wrapper like virtualenv. Guess what: dotCloud uses virtualenv, so we have to replace all those #!/usr/bin/python with the more standard #!/usr/bin/env python.
We modify daemon/mobwrite_daemon.py, lib/diff_match_patch.py, lib/json_validator_test.py, lib/mobwrite_core_test.py, tools/download.py, tools/loadtest.py, tools/nullify.py, and tools/upload.py.
Final Touch¶
At that step, you can already dotcloud push and get a working app; but you still need to display a custom HTML form to start it. To make testing more convenient, we will do a minor change to the code, so that going to the MobWrite app will show the editor if no valid POST parameter is supplied.
After this change, you can (re-)push the code, and this time, when you go to the app URL, you should see the edit form. Opening the app from multiple tabs or browsers will allow you to edit the form concurrently (like Etherpad or recent versions of Google Docs Text Editor).
daemon/gateway.py:
out_string = form['q'].value # Client sending a sync. Requesting text return. elif form.has_key('p'): out_string = form['p'].value # Client sending a sync. Requesting JS return. + else: + # Nothing. Redirect to the editor. + response_headers = [ ('Location', environ['SCRIPT_NAME']+'?editor') ] + start_response('303 See Other', response_headers) + return [] in_string = "" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) | http://docs.dotcloud.com/0.9/tutorials/python/mobwrite/ | 2014-03-07T08:26:00 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.dotcloud.com |
geom_smooth(mapping = NULL, data = NULL, stat = "smooth", position = "identity", ...)
aesor
aes_string. Only needs to be set at the layer level if you are overriding the plot defaults.
layer. This can include aesthetics whose values you want to set, not map. See
layerfor more details.
Add a smoothed conditional mean.
geom_smooth understands the following aesthetics (required aesthetics are in bold):
x
y
alpha
colour
fill
linetype
size
weight
# See stat_smooth for examples of using built in model fitting # if you need some more flexible, this example shows you how to # plot the fits from any model of your choosing qplot(wt, mpg, data=mtcars, colour=factor(cyl))
model <- lm(mpg ~ wt + factor(cyl), data=mtcars) grid <- with(mtcars, expand.grid( wt = seq(min(wt), max(wt), length = 20), cyl = levels(factor(cyl)) )) grid$mpg <- stats::predict(model, newdata=grid) qplot(wt, mpg, data=mtcars, colour=factor(cyl)) + geom_line(data=grid)
# or with standard errors err <- stats::predict(model, newdata=grid, se = TRUE) grid$ucl <- err$fit + 1.96 * err$se.fit grid$lcl <- err$fit - 1.96 * err$se.fit qplot(wt, mpg, data=mtcars, colour=factor(cyl)) + geom_smooth(aes(ymin = lcl, ymax = ucl), data=grid, stat="identity")
stat_smoothsee that documentation for more options to control the underlying statistical transformation. | http://docs.ggplot2.org/0.9.3/geom_smooth.html | 2014-03-07T08:25:33 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.ggplot2.org |
Definition
Performs a hot deployment of a Deployable
Explanation
You use a
Deployer when you wish to deploy a Deployable into a running container (this is known as Hot Deployment). There are 2 types of Deployers:
- Local Deployer: A local deployer deploys in a locally installed container. Usually, local deployers use the file system to perform the deployment by copying the deployable to a container-specific target directory.
- Remote Deployer: A remote deployer is used to deploy to a container that can be on the same machine or on some remote machine. Deploying remotely requires passing information such as username and password of the user to use for deploying, etc. These information are passed using a Runtime Configuration.
Deployer features
Example using the Java API
To instantiate a
Deployer you need to know its class name. A
Deployer is specific to a container (you can find the class names on the container page listing all containers).
The deployment is done using one of the
Deployer.deploy(...) APIs. Some
deploy(...) signatures accept a
DeployableMonitor which is used to wait till the container has not finished deploying. Cargo currently offers a
URLDeployableMonitor which waits by polling a provided URL (see below in the example). Whent the URL becomes available the monitor considers that the
Deployable is fully deployed. In the future, Cargo will provide other
DeployableMonitor such as a
Jsr88DeployableMonitor.
Example without using a DeployableMonitor
Hot-deploying a WAR on Resin 3.0.9 without waiting for the deployment to finish:
Please note that the
Deployer.deploy() method call does not wait for the
Deployable to be fully deployed before returning.
Example using a URLDeployableMonitor
Hot-deploying an WAR on Resin 3.0.9 and waiting for the deployment to finish:
The must point to a resource that is serviced by the
Deployable being deployed.
Example using the Ant. | http://docs.codehaus.org/pages/viewpage.action?pageId=207552522 | 2014-03-07T08:29:39 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.codehaus.org |
.
Download Now:
Recommended for almost all users. Report bugs and submit patches for this version.
The 2.3.x branch of GeoTools has improved the handling of imagery. This represents the first release made in collaboration with the OSGeo foundation.
Our plans for GeoTools can be found in this document.
If your project needs to interact with our project goals and milestones please talk to us. This is an open development process and your contributions can make the difference.
Please click here for a complete list of organizations supporting GeoTools.
The following organizations ask that their logos be included on our front page.
The GeoTools Project is:
More News....
Everyone is welcome to edit this web site; but we need to ask you to sign in first.
For more information read up on Confluence and Documentation in our Developers Guide. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=77872 | 2014-03-07T08:27:33 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.codehaus.org |
Step-by-step instructions for getting Maven up and running to support your development.
Use other versions at your peril.
Note: Mac OS X comes pre-installed with a copy of Maven 2.2.1; be sure to double-check your version from the command line after following the steps below, or things will break.
You can download that here. No other plugins are required. (Version 2.2.1 zip file)
Mac
Explode the zip on your system to /usr/local
Windows
Extract the downloaded Maven zip file to the root of the C: drive, this will create a maven-2.2.1 folder..
Mac
notch:~ gw$ mvn --version
Maven version: 2.2.1
Java version: 1.5.0_13 OS user home directory as follows:.
This enables manual license reporting. You can get the jar files from this site. | http://docs.jivesoftware.com/jive/6.0/community_admin/topic/com.jivesoftware.help.sbs.online_6.0/developer/HowToInstallMaven.html | 2014-03-07T08:26:12 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.jivesoftware.com |
unpermitted resources or operations. Security can be enforced at different integration points: for example, you may restrict users' access to certain URLs, secure access to the data itself or make specific checks in an operation of the controlling page class. See an example of a simple, custom Hibernate-based entity realm (service).:: for declaring and configuring filters. a future versions of tapestry-security, we'll use permissions to specify association/instance (as opposed to role/type) permissions (such as current user can only edit his own profile information). The exact syntax is yet to be defined.:
More examples
For more extensive examples, take a look at our full-featured integration test web app. See for example the Index page template and class and the AlphaService. Also, take a look and using permissions. In the example, one realm is responsible for only authenticating users while another one is responsible for authorizing them. Check out the source from or browse the sources, starting from AppModule | http://docs.codehaus.org/pages/viewpage.action?pageId=193331223 | 2014-03-07T08:30:29 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.codehaus.org |
FEST-Swing can take a screenshot of the desktop when a JUnit GUI test fails, either when running tests using Ant or inside an IDE (e.g. Eclipse.)
To take screenshots of failed GUI tests, regardless of how they are executed, please follow these steps:
org.fest.swing.annotation.GUITest
FEST-Swing's JUnit extension requires JUnit 4.3.1.
In order to take screenshots of failed GUI tests with Ant please follow these steps:
festreport
org.fest.swing.junit.ant.ScreenshotOnFailureResultFormatterinside the
junitAnt task
festreportinstead of
junitreport, and specify in its classpath where the fest-swing-junit-{VERSION}.jar file is.
The following is a screenshot of a JUnit HTML report:
The Ant task
festreport works exactly as
junitreport: it can generate HTML reports with or without frames. It has been tested with Ant 1.7 and requires Apache-Commons Codec 1.3.. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=117900329 | 2014-03-07T08:30:51 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.codehaus.org |
java.lang.String REFERENCE_REQUEST
resolveReference(java.lang.String), Constant Field Values
static final java.lang.String REFERENCE_SESSION
resolveReference(java.lang.String), Constant Field Values
java.lang.Object getAttribute(java.lang.String name, int scope)
name- the name of the attribute
scope- the scope identifier
nullif not found
void setAttribute(java.lang.String name, java.lang.Object value, int scope)
name- the name of the attribute
scope- the scope identifier
value- the value for the attribute
void removeAttribute(java
java.lang.String[] getAttributeNames(int scope)
scope- the scope identifier
void registerDestructionCallback(java.lang.String name, java.lang
java.lang.Object resolveReference(java.lang.String key)
At a minimum: the HttpServletRequest/PortletRequest reference for key "request", and the HttpSession/PortletSession reference for key "session".
key- the contextual key
nullif none found
java.lang.String getSessionId()
null
java.lang.Object getSessionMutex()
null | http://docs.spring.io/spring-framework/docs/3.2.0.M1/api/org/springframework/web/context/request/RequestAttributes.html | 2014-03-07T08:32:27 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.spring.io |
Chain Fiddler to an Upstream ProxyTo configure Fiddler to send and receive web traffic to and from another proxy between Fiddler and the destination server:Close Fiddler.Open Internet Explorer > Options > Internet Options > Connections > LAN Settings.Click the check box by Use a proxy server for your LAN.Type the address and port number for the upstream proxy.Restart Fiddler.You should now see the upstream proxy listed in the Fiddler About dialog.See AlsoUnderstanding the Fiddler Proxy | http://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/ChainToUpstreamProxy | 2014-03-07T08:25:07 | CC-MAIN-2014-10 | 1393999638008 | [] | docs.telerik.com |
Although named point for historical reasons, this light source can model light either from a (theoretical) point source, or from a sphere (the latter being more realistic in most cases, and producing less sharp shadows). The light is cast evenly in all directions.
The Lights page has more detail about the controls. In addition to honoring the standard Maya light attributes, the Attribute Editor will also show the following attributes under the Arnold group:
Radius
The radius of the light's spherical surface. Although the name of this light is 'point' for historical reasons, it really is an emissive sphere, unless radius is set to zero, in which case it becomes a true point light of no physical size. | https://docs.arnoldrenderer.com/pages/viewpage.action?pageId=81726843&navigatingVersions=true | 2020-05-25T02:24:13 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.arnoldrenderer.com |
AWS.
When you send HTTP requests to AWS, you must sign the requests so AWS can identify the sender. You sign requests with your AWS access key, which consists of an access key ID and a secret access key. We strongly recommend you don't create an access key for your root account. Anyone who has the access key for your root account has unrestricted access to all the resources in your account. Instead, create an access key for an IAM user account with permissions required for the task at hand. As another option, use AWS Security Token Service to generate temporary security credentials, and use those credentials to sign requests.
To sign requests, you must use Signature Version 4. If you have an existing application that uses Signature Version 2, you must update it to use Signature Version 4.
When you use the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs to make requests to AWS, these tools automatically sign the requests for you with the access key that you specify when you configure the tools.
Support and Feedback for AWS Secrets Manager
We welcome your feedback. Send your comments to [email protected].
This document was last published on May 22, 2020. | https://docs.aws.amazon.com/secretsmanager/latest/apireference/Welcome.html | 2020-05-25T02:42:54 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.aws.amazon.com |
Marketing Tools tab can be used by the affiliates to generate affiliate links for the products from your store that they want to promote. Affiliates can also access and use the media assets that the merchant has provided them.
Here, in the Product Links section, the affiliate can paste the Product's URL into the product page link bar and a referral link will be generated automatically.
The Media Assets section will show the media files that the merchant has provided for use in promotions.
Since every social media platform is distinct from one another, it is best to design and upload creative media that would be suitable for each platform. This would help affiliates in the promotion of your products on different platforms.
To see how to upload media in the Creatives tab:
After you have uploaded a media file in the creatives tab, it will appear in the Marketing Tools tab in the affiliate portal (below the product links).
The media files that the merchant uploads in the Creatives tab will appear in the Marketing Tools tab in the affiliate portal. These media files can include banners or logos that would go well with social media platforms such as Instagram, Facebook, etc.
The uploaded media will appear in the Media Assets section, which can be used by the affiliate to download the media file. | https://docs.goaffpro.com/affiliate-portal/marketing-tools | 2020-05-25T02:07:58 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.goaffpro.com |
Convert Extreme Searches to Machine Learning Toolkit in Splunk Enterprise Security
If you need to convert any locally modified XS searches to MLTK, use the following information to help guide your decisions.
Converting XS commands
The most common common XS commands that have MLTK equivalents in ES follow.
xsWhere
The
xsWhere command is approximately equivalent to the
`mltk_apply` macro. These apply data to a model, compare against thresholds, and find outliers for a field. For each value, given the provided threshold, the macros tell you if the value is an outlier. See Abnormally High Number of HTTP Method Events By Src - Rule in DA-ESS-NetworkProtection.
xsFindBestConcept
The
xsFindBestConcept command is approximately equivalent to the
`mltk_findbest` macro. They are almost the opposite of the
xsWhere and
applycommands. For each value, these tell you in which threshold range the value falls on the distribution curve. For example: the high range is between 0.05 - 0.01, and the extreme range is between 0.01 - 0.000000001. See Access - Total Access Attempts in DA-ESS-AccessProtection.
xsCreateDDContext
The
xsCreateDDContext command is approximately equivalent to the
fit command. These both generate a new model each time the search is run. See Access - Authentication Failures By Source in SA-AccessProtection
xsUpdateDDContext
Each time this is run, it will combine the new training with the existing model. There is no
xsUpdateDDContext equivalent in MLTK at this time. There are no models/contexts that are updated additively. All model-generation searches wipe out the old model and produce a new model based on the data retrieved in the dispatch window.
To accommodate this change, the dispatch times of the Model Gen searches that were converted from
xsUpdateDDContext XS searches have been increased to generate the model from more data, to get more reliable models.
Converting a Context Gen Search
As an example of converting a context gen search, consider Access - Authentication Failures By Source - Context Gen as three lines.
Line one
Line one starts by counting the authentication failures per hour:
| tstats `summariesonly` count as failures from datamodel=Authentication.Authentication where Authentication.action="failure" by Authentication.src,_time span=1h.
Line two
Line two contains
stats median(failures) as median, min(failures) as min, count as count | eval max = median*2, which is putting the results of the search into the input format that the XS
xsUpdateDDContext command requires. In some searches you see the macro
`context_stats` used instead, such as
`context_stats(web_event_count, http_method)`.
Line three
Line three uses the XS
xsUpdateDDContext command to build a data-defined historical view context, puts it in an
app context, gives it a
name, assigns a
container, and a
scope.
Consider the MLTK version of the search is Access - Authentication Failures By Source - Model Gen as two lines.
The steps for converting this search from a context gen search to a model gen search follow:
- Line one starts the same way for both searches, by counting the authentication failures per hour. Keep this when converting to MLTK.
- The fit command takes tables as inputs, thus it is not necessary to include
| stats median(failures) as median, min(failures) as min, count as count | eval max = median*2
- In line two for the MLTK version of the search, do the following:
- Replace the XS command
xsUpdateDDContextwith the approximate equivalent of
fit DensityFunction.
- Include the
failurefield that you're counting in the first part of the search.
- Add the
dist=normto represent the normal distribution bell curve of the density function.
- Use
intofor passing the data into the model.
- Keep the
namefrom the original search because it is also the model name for MLTK.
- All MLTK model names should include the
app:prefix, which properly saves the model into the shared application namespace.
- In this example, append it to the name "failures_by_src_count_1h" so that it resembles
app:failures_by_src_count_1h.
Converting a Correlation Search
As an example of converting a correlation search, consider Access - Brute Force Access Behavior Detected - Rule as four lines.
Line one
Line one starts by searching the authentication data model:
| from datamodel:"Authentication"."Authentication"
Line two
Line two contains
| stats values(tag) as tag,values(app) as app,count(eval('action'=="failure")) as failure,count(eval('action'=="success")) as success by src, which is counting authentication failures followed by success.
Line three
Line three searches for successes greater than 0.
Line four
Line four uses the XS
xswhere command to match a concept within a specified context and determine compatibility, in this case
authentication is above medium.
Consider the MLTK version of the search Access - Brute Force Access Behavior Detected - Rule as four lines.
The steps for converting this search to MLTK:
- Keep line 1 as-is.
- Keep line 2 as-is.
- Keep line 3 as-is.
- In line four, do the following:
- Replace the XS command
xswherewith the approximate equivalent of the
`mltk_apply_upper`macro.
- The macro wraps the MLTK
applyfunction and filters the results based on whether the values are above or below a certain threshold.
- Include the argument for the model name
app:failures_by_src_count_1hfrom the model gen search that builds the model.
- Include the argument for the qualitative_id of
medium.
- Include the argument for the
failurefield that you're counting in the first part of the search.
Converting a Key Indicator Search
To convert a Key Indicator search to use MLTK, you have to first convert the corresponding Model Gen search. The Key Indicator search references the ML model name created by the Model Gen search.
As an example of converting a correlation search, consider Risk - Median Risk Score as seven lines.
Line one
Line one starts by searching for data from the current day.
Line two
Line two starts by searching data from the previous day.
Line three
Line three calculates the delta as a percentage between current_count and historical_count (today's value and yesterday's value). So if yesterday's value was 100 and today's is 125, then the delta = 25% and the direction = increasing.
Line four
Line four evaluates the statistics counts.
Line five
Line five finds the delta percentage for the key indicator in the risk analysis dashboard.
Converting Risk - Median Risk Score to MLTK.
Lines one through three remain as-is. The last two lines are replaced with the MLTK equivalent:
- In line four, replace the
xsfindbestconcept current_countwith the approximate equivalent of
`mltk_findbest`macro. This is a macro that wraps the MLTK
applyfunction. For each value, this macro tells you in which threshold range the value falls on the distribution curve. Notice that this model doesn't need a field name for a specific field that you're applying it on. This is because the field is determined during the
fit, so you only need to make sure that the field exists in the results when doing the
apply.
- In line five, replace the
xsfindbestconcept deltawith the approximate equivalent of the
`get_percentage_qualitative`macro. This applies a qualitative term to the delta between the current count and the historical count, such as extremely, moderately, greatly. You will see these as indicators in the risk analysis dashboard.
You cannot rename current_count, as this is expected.
This documentation applies to the following versions of Splunk® Enterprise Security: 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.1.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/ES/6.1.0/Admin/MLTKconversion | 2020-05-25T02:08:00 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
The bootstrap approach to deploying a content model involves modifying Alfresco content repository XML configuration files to register the content model.
- To.. | https://docs.alfresco.com/4.0/tasks/deploy-bootstrap.html | 2020-05-25T02:30:33 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.alfresco.com |
- background
- channel
- direct
- element
- global
- lighting
- lightselect
- pass
- probabilistic
- rawtotallighting
- rgb
- vraynoiselevel
To add a label to the list of required labels, choose '+ labelname' from Related Labels.
To remove a label from the required labels, choose '- labelname' from above.
- There are no pages at the moment. | https://docs.chaosgroup.com/label/VRAY4MAX/background+channel+direct+element+global+lighting+lightselect+pass+probabilistic+rawtotallighting+rgb+vraynoiselevel | 2020-05-25T03:06:07 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.chaosgroup.com |
- color
- diffuse
- diffusefilter
- level
- pass
- rawdiffusefilter
- rawrefractionfilter
- rgb
- temperature
- user-defined
To add a label to the list of required labels, choose '+ labelname' from Related Labels.
To remove a label from the required labels, choose '- labelname' from above.
- There are no pages at the moment. | https://docs.chaosgroup.com/label/color+diffuse+diffusefilter+level+pass+rawdiffusefilter+rawrefractionfilter+rgb+temperature+user-defined | 2020-05-25T02:50:48 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.chaosgroup.com |
Developer Guide¶
- Building on honeycomb
- Major components
- Modules
- Developing a device specific translation unit
- Reading of CLI and device configuraiton
This document provides developer-level details for the FRINX CLI southbound plugin, both for the framework itself as well as for the pluggable translation units.
Pre-requisite reading¶
- Honeycomb design documentation:
- CLI plugin available presentations: CLI plugin user guide
Building on honeycomb¶
The essential idea behind the CLI southbound plugin comes from Honeycomb. Honeycomb defines, implements and uses the same pipeline and the same framework to handle data. The APIs, some implementations and also SPIs used in the CLI southbound plugin’s translation layer come from Honeycomb. However, the CLI southbound plugin creates multiple instances of Honeycomb components and encapsulates them behind a mount point.
The following series of diagrams shows the evolution from Opendaylight to Honeycomb and back into Opendaylight as a CLI mountpoint:
High level Opendaylight overview with its concept of a Mountpoint:
High level Honeycomb overview:
Honeycomb core (custom MD-SAL implementation) overview:
How Honeycomb is encapsulated as a mount point in Opendaylight:
Major components¶
The following diagram shows the major components of the CLI southbound plugin and their relationships:
Developing a device specific translation unit¶
This section provides a tutorial for developing a device specific translation unit.
The easiest way how to develop a new transaction unit is to copy existing one and change what you need to make it work. E.g. if you are creating an interface translation unit, the best way is to copy existing interface translation unit for some other device, that is already implemented. You can find existing units on github ,
What you need to change:
- .pom file of the unit
- point to correct unit parent
- dependencies
- name of the unit should be in format
<device>-<domain>-unit(e.g. ios-interface-unit, xr-acl-unit)
- package name should be in format
io.frinx<cli|netconf>., device name and domain (eg. io.frinx.cli.unit.ios.interface)
What you need to add:
- add your unit as a dependency to artifacts/pom
- add your unit as a karaf feature
Installing to Opendaylight¶
For how to run Opendaylight with the CLI southbound plugin, please refer to the user guide. To install a bundle with a new unit (e.g. previously built with maven) it is sufficient to run the following command in the karaf console:
bundle:install -s
Now the new unit should be reported by the CLI southbound plugin as being available. To verify its presence from RESTCONF, use the provided postman collection, CLI registry folder.
It is also possible to include this bundle into a karaf feature and make it install with that particular feature instead of using the bundle:install command.
Testing¶
Please see the user guide for how to mount a CLI device. If there is a new unit installed in Opendaylight, it will be possible to use the new unit’s YANG model and its handlers.
Choosing the right YANG models¶
Before writing a custom YANG model for a unit, it is important to check whether such a model doesn’t already exist. There are plenty of YANG models available, modeling many aspects of network device management. The biggest groups of models are:
- Openconfig
- IETF
It is usually wiser to choose an existing YANG model instead of developing a custom one. Also, it is very important to check for existing units already implemented for a device. If there are any, the best approach will most likely be to use YANG models from the same family as existing units use.
Implementing handlers¶
There are 2 types of handlers. Those which handle writes of configuration data and those which handle reads of operational data. The responsibility of a handler is just to transform between CLI commands and the YANG data. There is nothing more a handler needs to do. For an example, refer to the section discussing unit archetype.
A writer may be registered with or without dependency on another writer. The dependency between writers reflects the actual dependency between CLI commands for a specific device.
The following sample shows a CLI translation unit with dependency between 2 writers. The unit is dedicated for interface configuration on a Cisco IOS device.
R2(config)#interface loopback 1 R2(config-if)#ip address 10.0.0.1 255.255.255.255
As the example shows, the ip address command must be executed after the interface command.
IOS CLI translation unit based on openconfig-interfaces YANG model is here. This CLI translation unit contains InterfaceConfigWriter translating the interface command and Ipv4ConfigWriter translating the ip address command. IosInterfaceUnit contains registration of these writers where dependency between writers is described:
wRegistry.add(new GenericWriter<>(IIDs.IN_IN_CONFIG, new InterfaceConfigWriter(cli))); wRegistry.addAfter(new GenericWriter<>(SUBIFC_IPV4_CFG_ID, new Ipv4ConfigWriter(cli)), IIDs.IN_IN_CONFIG);
Registration of Ipv4ConfigWriter by using the addAfter method ensures that the OpenConfig ip address data is translated after OpenConfig interface data. That means CLI commands are executed in the desired order.
Writers can be registered by using methods:
- add - no dependency on another writer, execution order is not guaranteed
- addAfter - execute registered writer after dependency writer
- addBefore - execute registered writer before dependency writer
Implementing RPCs¶
An RPC handler is a special kind of handler, different to the data handlers. RPC handler can encapsulate any commands. The biggest difference is that any configuration processing in RPCs is not part of transactions, reconciliation etc.
Mounting and managing IOS devices from an application¶
Besides mounting using Postman collections of RESTCONF calls (see the user guide ) it is also possible to manage an IOS device in a similar fashion from within an OpenDaylight application. It is however necessary to acquire an appropriate mountpoint instance from MD-SAL’s mountpoint service.
To do so, first make sure to generate an appropriate Opendaylight application using the archetype.
Next make sure to add a Mountpoint service as a dependency of the application, so update your blueprint:
<reference id="mountpointService" interface="org.opendaylight.mdsal.binding.api.MountPointService"/>
and add an argument to your component:
<bean id="SOMEBEAN" class="PACKAGE.SOMEBEAN" init- <argument ref="dataBroker" /> ... <argument ref="mountpointService"/> </bean>
Also add that argument to your constructor:
final MountPointService mountpointService
So now to get a connected mountpoint from the service:
Optional [MountPoint] mountPoint = a.getMountPoint(InstanceIdentifier.create(NetworkTopology.class) .child(Topology.class, new TopologyKey(new TopologyId("cli"))) .child(Node.class, new NodeKey(new NodeId("IOS1")))); if(mountPoint.isPresent()) { // Get DATA broker Optional<DataBroker> dataBroker = mountPoint.get().getService(DataBroker.class); // Get RPC service Optional<RpcService> rpcService = mountPoint.get().getService(RpcService.class); if(!dataBroker.isPresent()) { // This cannot happen with CLI mountpoints throw new IllegalArgumentException("Data broker not present"); } }
And finally DataBroker service can be used to manage the device:
ReadWriteTransaction readWriteTransaction = dataBroker.get().newReadWriteTransaction(); // Perform read // reading operational data straight from device CheckedFuture<Optional<Version>, ReadFailedException> read = readWriteTransaction.read(LogicalDatastoreType.OPERATIONAL, InstanceIdentifier.create(Version.class)); try { Version version = read.get().get(); } catch (InterruptedException | ExecutionException e) { e.printStackTrace(); } Futures.addCallback(readWriteTransaction.submit(), new FutureCallback<Void>() { @Override public void onSuccess(@Nullable Void result) { // Successfully invoked TX } @Override public void onFailure(Throwable t) { // TX failure } });
In this case Version operational data is being read from the device. In order to be able to do so, make sure to add a maven dependency on the IOS unit containing the appropriate YANG model.
Process of reading CLI configuration from device¶
The diagram below shows the general use of the process
Reading of configuration from CLI network device - different scenarios¶
The diagram below shows four specific scenarios:
- Configuration is read using show running-config pattern for the first time
- Another configuration is read using running-config pattern - cache can be used
- BGP configuration/state is read using “show route bgp 100” - the running-config pattern is not used
- BGP configuration/state is read using “show route bgp 100” again - cached can be used
| https://docs.frinx.io/FRINX_ODL_Distribution/oxygen/developer-guide/cli-service-module.html | 2020-05-25T00:51:10 | CC-MAIN-2020-24 | 1590347387155.10 | [array(['../../../_images/ODL2.png', 'ODL'], dtype=object)
array(['../../../_images/HC12.png', 'HC'], dtype=object)
array(['../../../_images/HCsMdsal2.png', "Honeycomb's core"], dtype=object)
array(['../../../_images/cliMountpoint5.png',
"Honeycomb's core as mountpoint"], dtype=object)
array(['../../../_images/cliInComponents2.png', 'CLI plugin components'],
dtype=object)
array(['../../../_images/Process-of-reading-of-CLI-configuration-from-device1.png',
'Reading CLI conf from device'], dtype=object)
array(['../../../_images/Reading-of-configuration-from-CLI-network-device-different-scenarios1.png',
'Different scenarios'], dtype=object) ] | docs.frinx.io |
Installation from package
Debian
Although there is a package of Gitea in Debian’s contrib, it is not supported directly by us.
Unfortunately, the package is not maintained anymore and broken because of missing sources. Please follow the deployment from binary guide instead.
Should the packages get updated and fixed, we will provide up-to-date installation instructions here.
Alpine Linux
Alpine Linux has gitea in its community repository. It follows the latest stable version. for more information look at.
install as usual:
apk add gitea
config is found in /etc/gitea/app.ini
Windows
There is a Gitea package for Windows by Chocolatey.
choco install gitea
Or follow the deployment from binary guide.
macOS
Currently, the only supported method of installation on MacOS is Homebrew.
Following the deployment from binary guide may work,
but is not supported. To install Gitea via
brew:
brew tap gitea/tap brew install gitea
FreeBSD
A FreeBSD port
www/gitea is available. To install the pre-built binary package:
pkg install gitea
For the most up to date version, or to build the port with custom options, install it from the port:
su - cd /usr/ports/www/gitea make install clean
The port uses the standard FreeBSD file system layout: config files are in
/usr/local/etc/gitea,
bundled templates, options, plugins and themes are in
/usr/local/share/gitea, and a start script
is in
/usr/local/etc/rc.d/gitea.
To enable Gitea to run as a service, run
sysrc gitea_enable=YES and start it with
service gitea start.
Cloudron
Gitea is available as a 1-click install on Cloudron. For those unaware, Cloudron makes it easy to run apps like Gitea on your server and keep them up-to-date and secure.
The Gitea package is maintained here.
There is a demo instance (username: cloudron password: cloudron) where you can experiment with running Gitea.
Third-party
Various other third-party packages of Gitea exist. To see a curated list, head over to awesome-gitea.
Do you know of an existing package that isn’t on the list? Send in a PR to get it added! | https://docs.gitea.io/en-us/install-from-package/ | 2020-05-25T01:39:33 | CC-MAIN-2020-24 | 1590347387155.10 | [] | docs.gitea.io |
- global
- illumination
- light
- material
- mtlwrapper
- outdoor
- override
- particlemtl
- spherical
- sss
- surface
- vismat
To add a label to the list of required labels, choose '+ labelname' from Related Labels.
To remove a label from the required labels, choose '- labelname' from above.
- There are no pages at the moment. | https://docs.chaosgroup.com/label/VRAY4MAX/global+illumination+light+material+mtlwrapper+outdoor+override+particlemtl+spherical+sss+surface+vismat | 2020-07-02T12:45:50 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.chaosgroup.com |
nodetool setcompactionthroughput
Sets the throughput capacity for compaction in the system, or disables throttling.
cassandra.yaml
- The cassandra.yaml file is located in the installation_location/conf directory. [connection_options] setcompactionthroughput [--] throughput_cap_in_mb.
- throughput_cap_in_mb
- Corresponds to the compaction_throughput_mb_per_sec in cassandra.yaml.
- positive number - Throttles compaction to the specified MB per second across the instance.
- 0 - disable throttling
Examples
Disable compaction throughput
nodetool setcompactionthroughput 0
Set compaction throughput to 16 MB
nodetool setcompactionthroughput 16 | https://docs.datastax.com/en/ddac/doc/datastax_enterprise/tools/nodetool/toolsSetCompactionThroughput.html | 2020-07-02T13:00:52 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.datastax.com |
Description
Provides a complete overview of how local storage attached to Cloud Servers operates, including an explanation of the Disk Speed feature which provides users with the ability to apply different levels of disk performance to local Cloud Disks.
All CloudControl Usage uses Base 2 ("gibibyte") Methodology
When provisioning and managing assets in CloudControl, a "Gigabyte" (GB) is actually based on Base 2 (binary) methodology, meaning that 1 GB = 1024^3 bytes ("gibibyte"). For example, if you provision a "100 GB" local disk, the system provisions a local disk of 100 x 1024^3 bytes = 107374182400 bytes.
Usage reporting follows the same methodology. For more details, see Introduction to Usage Reporting.
Cloud Server Disks
CloudControl supports local storage via storage volumes (i.e. "disks") that are attached to a variety of virtual controllers which emulate a specific hardware type. Most Images and Servers use SCSI controllers for this purpose, but the system also supports IDE and SATA controllers as well as read-only files on CD-ROM and Floppy devices. Being a virtualized environment, each storage volume ("disk") is delivered via a VMDK file that's deployed to a Datastore on the underlying storage infrastructure. Each disk is assigned a "disk speed" which determines the performance characteristics of the underlying storage that will be used for that specific local storage volume. Local disks using the same disk speed may or may not be deployed to different Datastores on the storage infrastructure. ISO and FLP files do not have disk speeds and are always on the Standard disk speed.
From the Operating System, each local disk appears to be an external hard disk attached to a specific spot on the controller. When a Cloud Server is deployed, it will inherit the same virtual controllers, disk sizes, and disk locations as the Source Image. However, users can choose to modify the speed of a disk on a deployed Server to be different than that which was used when the Image was created. For details, see the instructions on how to deploy Servers at How to Deploy a Cloud Server from a Guest OS Customization Image and How to Deploy a Cloud Server from a Non-Guest OS Customization Image.
Note: Client Images always use Standard Storage and are always billed as Standard Storage. Client Images have “disk speed" information associated with each disk, but these speeds are simply metadata that the system defaults to when you deploy a server from that Image. The Images themselves are actually stored on Standard storage. Since the system "defaults" to the disk speed of the Source Image, users can also modify the default disk speeds associated with a Client Image as described in How to Manage Client and Long-Term Retention Images.
Note: Most Server BIOS are programmed to boot the Operating System from SCSI Controller 0, Position 0, but users can modify the BIOS through the Console to change this behavior.
Virtual SCSI Controllers and Adapters
The system supports up to four virtual SCSI controllers per Cloud Server. Each SCSI controller uses a specific "adapter" that defines how the SCSI controller is perceived by the guest Operating System. There are four adapters supported by vSphere:
- BusLogic Parallel - This was one of the first two emulated vSCSI controllers made available on the VMware platform, and remains commonly used on older versions of Windows as the driver is available by default.
- LSI Logic Parallel - This was the other emulated vSCSI controller made available on the VMware platform and remains commonly used on UNIX Operating Systems
- LSI Logic SAS - This is a newer evolution of the parallel driver supported (and in some cases required) for newer versions of Microsoft Windows. It was designed to replace the BusLogic Parallel adapter and provides better performance than the original BusLogic Parallel adapter.
- VMware Paravirtual - This is a VMware-specific driver that is virtualization-aware and designed to support very high throughput with minimal processing cost and is, therefore, the most efficient driver. However, this driver requires that VM Tools be installed and running in order to function properly. There are some other restrictions, particularly on older operating systems - review the appropriate VMware documentation for more details.
- More details on each of the adapters described above are available the VMware Blog article Which vSCSI controller should I choose for performance?
Note that although there are four adapters supported, VMware does not support all of them on every Operating System. When adding a Controller, the system will only make available adapters approved for the Server's Operating System. The Supported Operating Systems dashboard will help you identify what adapters are "available" and which one is "recommended" by VMware for a given operating system as described in Navigating the Supported Operating Systems Dashboard
Each SCSI controller provides 15 "positions" in which a local storage "disk" can be attached. These positions are numbered from 0 through 15, with the 7 position reserved as it is used by the virtual SCSI adapter. This means that with the full complement of four SCSI controllers, there are 60 potential positions where local disks can be placed.
Users can add or remove SCSI controllers for a Cloud Server as described in:
Virtual SATA Controllers
The system supports up to four virtual SATA controllers per Cloud Server. All SATA controllers emulate a standard AHCI 1.0 Serial ATA Controller (Broadcom HT 1000) - there are no separate adapters as there are with SCSI controllers. As described in VMware SCSI Controller Options, use of SATA controllers is not recommended for high I/O environments.
Each SATA controller provides 30 "positions" in which either a local storage "disk" or a CD-ROM device can be attached. These positions are numbered from 0 through 29. This means that with the full complement of four SATA controllers, there are 120 potential positions where either local disks or CD-ROM devices can be placed.
SATA controllers are supported only if already present on Images or Servers, you cannot add or remove SATA controllers on an existing Cloud Server. Therefore, if you wish to use SATA, you will need to import a Client Image with the desired number of SATA controllers already present. SATA is not supported on all Operating Systems. Check the VMware Compatibility Guide for more details.
Virtual IDE Controllers
The system supports up to two virtual IDE controllers per Cloud Server. All IDE controllers emulate a standard Intel 82371AB/EB PCI Bus Master IDE controller - there are no separate adapters as there are with SCSI controllers.
Each IDE controller provides only two "positions" in which either a local storage "disk" or a CD-ROM device can be attached - slots 0 and 1. IDE works under a Master/Slave configuration, so the "1" position can be used only if either a "disk" or CD-ROM device is in the "0" position of the same controller. Therefore, the system will prevent the deletion of a local disk in position 0 if there is a local disk in position 1.
VMware's import process inserts IDE controllers on all images imported through the OVF process, so the controllers are present on almost all Images and Cloud Servers. The system does not support the addition or removal of IDE controllers.
NOTE: IDE Controllers do not support "expanding" a disk and require the server to be powered off in order to add a disk to the controller.
CD-ROM Devices
CD-ROM devices may be present on IDE or SATA controllers only if already present on Images or Servers, you cannot add or remove the CD-ROM devices from the controller. Therefore, if you are looking to use a CD-ROM device as described below, you will need to import a Client Image with the desired CD-ROM devices and/or ISO files already present.
CD-ROM devices may have an ISO file attached, in which case they provide read-only access to the ISO file through the virtual CD-ROM device. Otherwise, the CD-ROM device does not provide any use. All such ISO files are placed on Standard Disk Speed (see below for details) and billed based on their file size as if they were a local disk on Standard disk speed. A server or image can have as many CD-ROM devices and ISO files as supported by the controllers, but the total combined size of all ISO and FLP (see below) on a Cloud Server/Image cannot exceed the Maximum Disk Size (GB) for the data center location.
Currently, the system allows ISO only if it is already present on the Image or Server, so they need to be included on an Imported Image. ISO files may be permanently removed from a Cloud Server, but they cannot be added, modified, or replaced. You will have to import a new Image if you want to make changes.
Floppy Controllers
The system does support up to two Floppy Controllers that can have an FLP file attached, in which case they provide read-only access to the FLP file through the virtual floppy device in the same manner as ISO support. The system allows Floppy and associated FLP files only if it already present on the Image or Server, so they need to be included on an Imported Image. FLP files may be permanently removed from a Cloud Server, but they cannot be added, modified, or replaced. You will have to import a new Image if you want to make changes.
Disk Minimums and Maximums
Beyond that, there are four sets of limits associated with local storage on a Cloud Server, each of which varies by User-Manageable Cluster or Data Center location and can be identified as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location:
- The size of each individual disk must fall within a specified range of Minimum Disk Size and Maximum Disk Size
- The total number of disks must fall within the range dictated by the data center location's Minimum Disk Count and Maximum Disk Count
- Maximum Total Storage defines the maximum aggregate amount of "local" storage associated with a Cloud Server
- There is a smaller Maximum Total Storage for an Image limit on the total size of the Cloud Server if it can be cloned to create a Client Image.
The following 'default' limits are set in most (but not all) Public Cloud locations. We are providing this list for convenience but we recommend reviewing the specifics of the location for as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location for a more accurate answer.
When disks are added to a Cloud Server, they will appear as unformatted drives that need to be formatted for use. A User can choose a specific controller and position on which to install the drive or have the system insert the drive in the "next available" SCSI position. For additional details on how to add a disk, see:
These disks can also be removed from the Cloud Server as described in:
Any individual disk attached to a deployed Cloud Server can be increased in size should greater capacity be required – though note that like a newly-added disk, the additional storage is delivered as an unformatted increase in the size of the volume and therefore needs to be formatted by the OS for use. For more details, see:
Disk Speeds (Tiered Storage)
Local storage ("disks") each have a specific "disk speed" that defines the performance characteristics associated with the local storage. CloudControl will allow you to decide which performance characteristic a given Cloud Server disk should utilize based on the disk speed. Each local storage volume is treated independently in terms of its performance, allowing you to "mix and match" performance on different disks in the same Cloud Server. Disk speeds allow the user to select a level of performance that is focused on the intended function of the disk. For example, log file storage might require a lower level of performance while database files may require a faster level of performance. Because the disk speed can be set at an individual disk level, a given Cloud Server can have different disk speeds depending on how they plan to use the disks.
The "disk speed" associated with each Server is visible in the Manage Server dialog of the Admin UI in a row that is shown underneath the associated Controller with the position and size of the individual disk:
The disk speeds available vary by data center location and can vary by hypervisor cluster within a location. For more details, see:
- You can choose the specific speed of each disk when a Server is deployed. By default, the UI will present the "default" disk speeds associated with an OS or Client Image. You can modify these "default" speeds through a dialog on the Server as described in:
- Client Images inherit the "default speed" of each disk based on the speed of the disks on the Cloud Server when the Image was created. You can edit the "default speed" of each disk associated with a Client Image as described in:
- How to Manage Client and Long-Term Retention Images
- Note the Client Images lose any disk speed characteristics when a Client Image is exported, as the OVF format does not support disk speed information. This means that all Imported Images will initially have all disk speeds set to Standard.
- Once a Server is deployed, you can manage the performance characteristics of an individual disk as described in:
- Each disk speed is reported and tracked separately from a usage perspective. For the Provisioned IOPS disk speeds, two usage elements are calculated: one based on the committed IOPS assigned to each disk using the speed and another based the size of each disk using the speed. Standard, High Performance, and Economy disk speeds are calculated with a single usage element solely based on the size of the disk. For more details, see Introduction to Usage Reporting.
- From a reporting perspective, the system provides reporting for each disk speed's element:
- The Summary Usage Report provides a daily location-level summary on the aggregate storage usage of all Cloud Servers in a Geographic Region. For details, see How to Create a Summary Usage Report
- The Detailed Usage Report provides a daily asset-level report how storage usage was calculated for each Cloud Servers in a Geographic Region. For details, see How to Create a Detailed Usage Report
Each different "disk speed" will be charged at its own rate. Refer to your Cloud Provider's rate card for more information on the specific rates for each disk speed a given data center location.
Overview of Disk Speed Types
There are two types of available disk speeds:
- Provisioned IOPS allow users to choose a committed IOPS and corresponding throughput performance level for a given disk, meaning the storage infrastructure is designed to deliver the user-specified IOPS and Throughput value at all times. Users can also change the IOPS and Throughput performance values of a given disk even after the disk is initially deployed.
- Standard, High Performance, and Economy Speeds provide different performance characteristics but do not include a committed performance level as the storage infrastructure is not designed to deliver consistent IOPS at all times but the different speeds will provide differing performance characteristics. We are in the process of implementing an infrastructure change to provide more consistent performance within these speeds. See the detailed section below for more details.
More details on these types are described in the sections below.
Understanding IOPS and Throughput Performance
Real-world storage performance is governed by a multitude of factors, including application and OS variables, storage latency, and other factors. However, at a high level, one of the key factors is Disk Throughput, where:
- Disk Throughput = IOPS (Inputs/Outputs per second) x Block Size
In the context of CloudControl, when the system provisions disk speed performance based on IOPS, it assumes a block size and sets a corresponding Disk Throughput limit based on the IOPS and block size. These limits are enforced in the hypervisor on each individual disk, meaning that disks are governed by both a maximum IOPS limit and a separate Disk Throughput limit where the Throughput limit is equal to IOPS x block size.
Effective Nov. 22 2019, Block Size Increased to 32 KB for New / Modified Disks
For disks added or modified starting on November 22, 2019, the system will calculate throughput based on a 32 KB block size, effectively doubling Throughput. Any change (disk speed change, IOPS change, or size change) to an existing disk on or after this date will result in a new Throughput value based on the new 32 KB block size.
Prior to November 22, the system calculated all Throughput values based on a 16 KB block size, so Throughput = IOPS x 16 KB.
The net effect is that only one limiter is likely to establish the maximum performance at any given time. If a user's actual block size is less than the 16/32 KB size, IOPS will be the limiting factor and corresponding Disk Throughput will be less than the maximum Disk Throughput allowed. If the actual block size is greater than the 16/32 KB size, Throughput will be the limiting factor and the corresponding IOPS will be less than the maximum IOPS allowed.
- Example: User sets up a Provisioned IOPS disk of 100 GB with 1,000 IOPS. CloudControl will provision the disk with a limit of 1000 IOPS and a Disk Throughput limit of 32,000 KB/second (1000 IOPS x 32 KB)
- If 32 KB Block Size is consistently used, the maximums are in perfect alignment. 1,000 IOPS (IOPS limit) x 32 KB block size = 32,000 KB/second (same as disk throughput limit)
- If an 8 KB Block Size is consistently used, 1,000 IOPS (maximum IOPS limit) x 8 KB block size = Disk Throughput 8,000 KB/second. So overall Disk Throughput matches the expected value given the IOPS and KB block size, but this provides less Throughput than the theoretical maximum.
- If a 64 KB Block Size is consistently used, then at 500 IOPS, 500 IOPS x 64 KB block size = 32,000 KB/second (maximum Throughput value). Since overall Disk Throughput has hit the maximum, only 500 IOPS can be achieved.
In real-world applications, the block sizes often vary and other OS and application variables will have an effect on actual performance. Therefore, the IOPS and Throughput settings represent maximum IOPS and Throughput performance in locations and on disk speeds where these values are enforced. In the case of Provisioned IOPS where the values are committed, users can expect consistent performance based on the IOPS value assigned to a given disk. In the case of Standard/Economy/ High Performance disk speeds, these IOPS values are the maximum "burstable" limit that can be achieved at any given time.
Provisioned IOPS Disk Speed Details
Provisioned IOPS is designed to provide a specific user-defined IOPS and Throughput performance value to a given disk at all times. The IOPS and Throughput limits are enforced in the hypervisor and the underlying storage infrastructure is designed to commit itself to deliver those values at all times as there is no oversubscription on the underlying Datastores.
The user-defined committed IOPS value must conform to a set of rules based on the size of the disk. The rules for the Provisioned IOPS disk speed are the same in all Public Cloud locations where the disk speed is available but may vary in Private Cloud locations. The Public Cloud location values are listed below in parenthesis but you can identify the specific characteristics of any location as described in How do I Identify Hardware Specifications and Capabilities Available in a Data Center Location.
- Min IOPS Per GB (3 IOPS/GB) - When setting the IOPS value, you must provision a minimum of 3 IOPS per GB based on the size of the disk. So if the disk is 100 GB in size and using Provisioned IOPS, you must assign at least 300 IOPS.
- Max IOPS Per GB (15 IOPS/GB) - When setting the IOPS value, you can provision a maximum of 15 IOPS per GB based on the size of the disk. So if the disk is 100 GB in size and using Provisioned IOPS, you cannot assign more than 1,500 IOPS to the disk unless you expand the size.
- Min Total IOPS (16 IOPS) - This is the minimum IOPS value that can be assigned to any disk. This limit does not come into play much in Public Cloud locations except that you cannot use Provisioned IOPS with a disk size of 1 GB as the Max IOPS per GB setting of 15 GB does not reach the Minimum IOPS limit of 16.
- Max Total IOPS (15,000 IOPS) - This is the maximum IOPS value that can be assigned to any disk. In Public Cloud locations, this limit does not come into play but in locations where disk sizes of greater than 1,000 GB are allowed, it means you may not be able to provision the full 15 IOPS/GB maximum with large size disks.
When setting IOPS values for a Variable IOPS disk, the UI will present a slider with the available IOPS values based on the location's setting and the current disk size:
Once a disk is assigned a Provisioned IOPS disk speed, you can modify the IOPS as described in How to Manage a Local Storage Disk on a Cloud Server. However, note that you cannot expand the size of the disk and modify the IOPS value at the same time. In scenarios where a user is trying to drastically expand the size of a small disk, users may need to expand the disk size and modify the IOPS value in steps to stay within the Provisioned IOPS Min/Max IOPS/GB rules.
- Example: Suppose you have a 100 GB Provisioned IOPS disk with 1,000 IOPS and you want to expand that disk to 1,000 GB in size. The Maximum IOPS for a 100 GB Provisioned IOPS disk is 1,500 IOPS (15 IOPS/GB). The minimum IOPS for a 1,000 GB Provisioned IOPS disk is 3,000 IOPS (3 IOPS/GB). Therefore, the system will not allow you to expand the 100 GB directly to 1,000 GB. If you set the IOPS of the 100 GB disk to the 1,500 IOPS maximum, you can expand the size to 500 GB (maximum size given 3 IOPS/GB minimum and 1,500 IOPS). You can then modify the IOPS of the now 500 GB disk to anything between 3,000 IOPS (minimum IOPS needed for a 1,000 GB disk) and 7,500 IOPS (maximum given 15 IOPS/GB). The system will now allow you to expand to 1,000 GB in size, after which you can set the IOPS to anything within the 3,000 IOPS minimum and 15,000 IOPS maximum for a disk this size.
Standard, High Performance, and Economy Disk Speed Details
Standard, High Performance, and Economy disk speeds differ from Provisioned IOPS in that the storage infrastructure does not commit a specific performance level to the disk at all times. However, they do provide differing performance levels based on the disk speed chosen and the size of the disk. There are currently two types of storage infrastructure currently supporting these disk speeds, each of which uses a separate methodology to differentiate performance. The Standard/High Performance/Economy Disk Speed Architecture Matrix below identifies the methodology for each data center location.
- "Traditional" Standard, High Performance, and Economy Disk Speed Architecture provides differing performance characteristics based on differences in the underlying storage architecture. In such locations, there are no specific IOPS/Throughput limits enforced on disks
- "Burstable" IOPS Standard, High Performance, and Economy Disk Speed Architecture uses the same hypervisor-based IOPS and Throughput methodology used for the Provisioned IOPS disk speed, but the limits applied are based on the size of the disk and the disk speed definition rather than being user-defined. In addition, the values represent "burstable" maximums rather than committed performance levels
"Traditional" Standard, High Performance, and Economy Disk Speed Architecture
In this legacy architecture, performance characteristics are primarily based on the underlying disk storage infrastructure supporting the disk speed. To deliver each disk speed, the following infrastructure is used:
- Standard - Cloud disks deployed as Standard speed are deployed on Datastores powered by a Hybrid Disk configuration consisting of 2 TB 7200 RPM Nearline SAS disks in a RAID5 configuration that is fronted by an extensive "Fast Cache" Solid-State Disk infrastructure.
- High Performance - Cloud disks deployed as High Performance speed are deployed on Datastores powered by a Hybrid Disk configuration consisting of 600 GB 15000 RPM Nearline SAS disks in a RAID5 configuration that is fronted by an extensive "Fast Cache" Solid-State Disk infrastructure.
- Economy - Cloud disks deployed as High Performance speed are deployed on Datastores powered by a disk configuration consisting of 3 TB 7200 RPM Nearline SAS disks in a RAID5 configuration. There is NO "Fast Cache" fronting for this storage level.
Users of different disk speeds will see differing performance characteristics based on the underlying storage infrastructure, but there are no specific IOPS or Throughput limitations enforced by the hypervisor. Actual performance will be based on a combination of factors, including how "busy" the infrastructure is servicing other disks on the same storage Datstores and/or storage infrastructure. To increase IOPS and Throughput performance for a given volume, users can upgrade to a higher-performing disk speed to change the underlying infrastructure but there is no specific change in maximum performance level.
Over time, we are migrating locations off this infrastructure onto the new "Burstable IOPS" architecture in order to provide improved and more predictable performance. The Standard/High Performance/Economy Disk Speed Architecture Matrix below outlines the implementation dates for this migration.
A few Cloud locations offer a "SSD" disk speed that predates the Provisioned IOPS disk speed described below. The characteristics of this speed vary by location - contact Support if you have questions about the IOPS and Throughput configuration in a given location.
"Burstable IOPS" Standard, High Performance, and Economy Disk Speed Architecture
The Burstable IOPS disk speed architecture is designed to provide a clearly defined maximum performance level based on the IOPS and Throughput setting enforced within the hypervisor. However, these maximums have limited oversubscription on the underlying storage so the maximum performance level represents a "burstable" maximum rather than a committed value. In addition to the disk speed, the architecture is designed to provide greater performance to larger sized disks, meaning the maximum performance is defined by both the disk speed and the disk size. This means that to increase IOPS and Throughput performance for a given volume, users can either increase the size of the volume or upgrade to a higher-performing disk speed as either action will increase the maximum IOPS and Throughput performance according to the table below.
In locations where the "Burstable" architecture is used, the system will apply IOPS and Throughput limits to each local disk based on the GREATER of:
- Minimum IOPS/Throughput per Disk value for the disk speed (regardless of disk size)
- Size Calculated IOPS/Throughput per Disk value based on the disk speed and the size of the disk
The speeds are enforced in Public Cloud locations according to the following table. Private Cloud locations may use different values. (See the Private Disk Speed Architecture section below.)
- Example: User sets up a 180 GB disk. Using burstable IOPS disk speeds, the maximum IOPS and Throughput assigned to this disk will be the GREATER of the Minimum or Size Calculated IOPS based on the 180 GB size and the disk speed:
- 180 GB Disk using Standard Disk Speed
- Min IOPS is 500
- Size Calculated IOPS is 540 (180 GB x 3 IOPS/GB)
- Disk maximum will be 540 IOPS and 17,280 KB/second (540 x 32 KB) as the Size Calculated IOPS value is higher.
- 180 GB using High Performance Disk Speed
- Min IOPS is 800
- Size Calculated IOPS is 1080 (180 GB x 6 IOPS/GB)
- Disk maximum will be 1080 IOPS and 34,560 KB/second (1080 x 32 KB) as the Size Calculated IOPS value is higher.
- 180 GB using Economy Disk Speed
- Min IOPS is 100.
- Size Calculated IOPS is 90 (180 GB x 0.5 IOPS/GB)
- Disk maximum will be 100 IOPS and 3,200 KB/second (100 x 32 KB) as the Min IOPS value is higher than Size Calculated IOPS.
Standard/High Performance/Economy Disk Speed Architecture Matrix
The matrix below identifies which locations are currently using which architecture. In the event of a change from Traditional to Burstable IOPS, a maintenance announcement will be issued to notify users of the change and effective implementation date.
MCP 2.0 Location Matrix
MCP 1.0 Location Matrix
Private Disk Speed in Enterprise Private Cloud Locations
Because the NUTANIX infrastructure used in Enterprise Private Cloud shares storage across all nodes, the architecture does not provide any differentiation of performance similar to what can be done in Burstable IOPS or Traditional architectures. However, it is important that individual disks do not cause "noisy neighbor" problems that would interfere with the performance of other disks. Therefore, such locations offer a single PRIVATE disk speed with fairly high IOPS and Throughput limits that exist solely to prevent excessive I/O usage by a given disk.
These limits are enforced in Private Cloud locations according to the following table. The limits work in the same manner as Burstable IOPS described about. However, note the Private Disk Speed uses a 16 KB block size for calculating Throughput.
Adding, Expanding and Modifying the Performance Characteristic of Disks
Users can add disks and modify existing disks as described in
- How to Add Additional Local Storage (Disk) to a Cloud Server
- How to Manage a Local Storage Disk on a Cloud Server
The rules regarding adding or modifying disks vary depending on the storage controller and the disk speed associated with the disk a well as the server's running state. In most cases, the system will allow disk changes on a server in a running state. If the disk speed is either Provisioned IOPS or a disk speed in a location using Burstable IOPS Architecture, the system must update the disk's associated Throughput setting. When performed on a running server, this requires the system to relocate the Cloud Server between physical ESXi hosts in order to implement the new Throughput settings. This requirement adds some unique impacts to changes on a running server.
If the system does not have enough excess ESXi host capacity to ensure such a relocation can be accomplished, the system will block changes to a Provisioned IOPS or Burstable IOPS Architecture disk speed on a running server. Users can either choose to try the change at a later time or stop the server and initiate the change. Should this situation occur, the error will look like:
In the unlikely case the relocation between ESXi hosts fails, and the correct Throughput setting is not applied, the Cloud Server will be flagged with a status of "Requires Restart". Users will not be able to take any action against this Server until the server is either restarted via CloudControl or shutdown.
Restart must be initiated through CloudControl but Shutdown can be Performed Locally
When Restarting a Server to clear a "Requires Restart" state, users must use the "Restart" function in CloudControl. If the Server is restarted from within the Guest OS, the System will not be able to detect that and the Server will remain in "Requires Restart" status. For details on how to Restart a Server, refer to the Restart/Reset Server section in How to Manage a Cloud Server.
However, shutting down a server within the Guest OS will clear the "Requires Restart" state, as that change will be detected by the system.
The chart below summarizes the rules:
Recently Updated | https://docs.mcp-services.net/pages/viewpage.action?pageId=3015251 | 2020-07-02T12:07:38 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.mcp-services.net |
Getting your customer to come one more time to your store and one less time to your competitor’s is how the battle will be won or lost.
So, in today's day and age, of self driving cars, and 3D printing organs, and most importantly everyone carrying a super computer in their pocket - the smartphone - why are do businesses still think that paper stamp cards are going to win the battle?
In this video Paul vents his frustration and asks why? | http://docs.loopyloyalty.com/en/articles/1027715-why-are-we-still-accepting-paper-stamp-cards-in-2017 | 2020-07-02T12:23:29 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.loopyloyalty.com |
.
Don't confuse table compression with compact storage of columns, which is used for backward compatibility of old applications with CQL.. Cassandra compresses existing SSTables when the normal Cassandra compaction process occurs. Force existing SSTables to be rewritten and compressed by using nodetool upgradesstables (Cassandra 1.0.4 or later) or nodetool scrub. | https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/operations/opsWhenCompress.html | 2020-07-02T12:19:25 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.datastax.com |
Web Service Security Guidance
With the release of the WSE 3.0 for Visual Studio 2005 and .NET Framwork 2.0, Microsoft released the Web Service Security Guidance. This guidance captures the most common scenarios related to Web Service Security such as:
- Public Web Services
- Intranet Web Services
- Internet Business to Business
- Multiply Internet Web Services
I personally like chapter 3, that talks about security at the transport and the message layer. In fact one one of the biggest feature that is so cool about WSE 3.0 compared with WSE 2.0 is the introduction of turnkey solutions. WSE 3.0 now include 5 turnkey security profiles that can be used for the most common scenarios leaving the developers with more time to concentrate on the business logic of the service. All of these turnkey solutions can be customized.
For those that are thinking of using Windows Communication Foundation (WCF)--formerly know as "Indigo"-- for writing distributed applications in the future then I would consider WSE 3.0 now and not later. First WSE 3.0 will integrate with WCF services or clients and second, many of the transport and message turnkey solutions build in in WSE 3.0 are similar if not that the same as those provided by WCF. Therefore, your knowledge of WSE 3.0 in terms of security will pay off when you move to WCF distributed applications.
WSE 3.0 will support side by side execution with WSE 2.0. Therefore, your development machines and production services can fully support both WSE 2.0 and WSE 3.0 applications. | https://docs.microsoft.com/en-us/archive/blogs/dansellers/web-service-security-guidance | 2020-07-02T13:24:17 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.microsoft.com |
Are you planning to Deploy Office 2007? If so, you'll want to check this out.
There is a new WebCast available on titled, Deploying the 2007 Microsoft Office system in an enterprise computing environment. Eric Ellis is the presenter. He is a Support Escalation Engineer on my team. Eric spent the bulk of the last year and a half working directly with the product development teams on the area of Office setup and deployment. He has a ton of expertise in this area and is an excellent speaker. Although towards the end of the presentation you'll notice his voice fading just a bit, because he has been fighting off a cold.
The WebCast covers best practices and techniques used to deploy Microsoft Office in an enterprise environment. You'll learn about the changes that have taken place in Microsoft Office 2007 over previous version, as well as learn about utilities and methods used to customize and simplify the deployment of Office 2007.
Even though the WebCast is targeted at Enterprise customers, even customers that are smaller in scale can benefit from the contents of the WebCast.
Kudos to Eric for doing a great job.
Check out for a listing of all the WebCast coming in the next couple of months. | https://docs.microsoft.com/en-us/archive/blogs/microsoft_office_support_communication_blog/are-you-planning-to-deploy-office-2007-if-so-youll-want-to-check-this-out | 2020-07-02T13:12:23 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.microsoft.com |
You can access various Web sample applications, by browsing them on the left side menu, or by building and running them in the Tizen Studio. The samples are complete with source code and resources. If you need the code in your own application, you can easily copy and paste it.
Getting Web Samples in the Tizen Studio
To create a sample application project in the Tizen Studio, to allow you to build and run the samples:
- Launch the Tizen Studio.
- Select File > New > Tizen Project.
- Select Sample and click Next.
- Select Mobile or Wearable, Web Application, and the sample you want. | https://docs.tizen.org/development/sample/web | 2020-07-02T12:58:44 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.tizen.org |
About Drawing Hierarchy Rigs
T-RIG-001-003_0<<.
. | https://docs.toonboom.com/help/harmony-17/advanced/rigging/about-drawing-hierarchy-rig.html | 2020-07-02T12:45:07 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../Resources/Images/HAR/Stage/Breakdown/an_parentingarm.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Breakdown/HAR11/HAR11_Arm_Hierarchy.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Breakdown/an_hierarchyrotation_001.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Breakdown/parenting_disadvantage.png',
None], dtype=object) ] | docs.toonboom.com |
Viewing the List of Layers
The go-to way of selecting and managing layers in a panel is to use the Layer panel, which is, by default, in the right section of the Stage and Camera views.
However, it is also possible to add the Layers view to your workspace, which works the same way. You can then hide the Layer panel by clicking on its
Collapse button in its top-left corner, leaving more space to work in the Stage and Camera views.
You can also manage layers using the Thumbnails view. If the size of the Thumbnails view is big enough, each panel in it will have a vertical list of layers on its right edge, which allows you to scroll through and select layers. Left of the Thumbnails view, under the Tools toolbar, is a Layer toolbar, which allows you to add and remove layers to the selected panel.
There are several ways to display layers in Storyboard Pro. The two main ways are in the Layer panel of the Stage and Camera views, and the Layers view. Both views display a thumbnail of each layer, followed by their name and some toggle buttons to control the behavior of your layers.
- In the Stage or Camera view, do one of the following:
- If the Layer panel is displayed, click on the Collapse
button in its top-left corner to collapse the panel.
- If the Layer panel is hidden, click on the Expand
button in the top-right corner of the view to display the panel.
- Do one of the following:
- In the upper-right corner of a view, click the Add View
button and select Layers.
- Select Windows > Layers.
The Layers panel displays all the layers in the selected panel.
| https://docs.toonboom.com/help/storyboard-pro-7/storyboard/layer/view-layer-list.html | 2020-07-02T12:17:18 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../../Resources/Images/SBP/Layers/layer-panel-in-stage-view.png',
None], dtype=object)
array(['../../Resources/Images/SBP/alternate-layer-management-layers-view.png',
None], dtype=object)
array(['../../Resources/Images/SBP/alternate-layer-management-thumbnails-view.png',
None], dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Resources/Images/SBP/layers_view.png', None], dtype=object)] | docs.toonboom.com |
pg_available_extension_versions
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 5.x documentation.
pg_available_extension_versions
The pg_available_extension_versions view lists the specific extension versions that are available for installation. The pg_extension system catalog table shows the extensions currently installed.
The view is read only. | https://gpdb.docs.pivotal.io/5270/ref_guide/system_catalogs/pg_available_extension_versions.html | 2020-07-02T13:43:38 | CC-MAIN-2020-29 | 1593655878753.12 | [] | gpdb.docs.pivotal.io |
Performance Comparison of Dimension Reduction Implementations¶
Different dimension reduction techniques can have quite different computational complexity. Beyond the algorithm itself there is also the question of how exactly it is implemented. These two factors can have a significant role in how long it actually takes to run a given dimension reduction. Furthermore the nature of the data you are trying to reduce can also matter – mostly the involves the dimensionality of the original data. Here we will take a brief look at the performance characterstics of a number of dimension reduction implementations.
To start let’s get the basic tools we’ll need loaded up – numpy and pandas obviously, but also tools to get and resample the data, and the time module so we can perform some basic benchmarking.
Next we’ll need the actual dimension reduction implementations. For the purposes of this explanation we’ll mostly stick with scikit-learn, but for the sake of comparison we’ll also include the MulticoreTSNE implementation of t-SNE, which has significantly better performance than the current scikit-learn t-SNE.
Next we’ll need out plotting tools, and, of course, some data to work
with. For this performance comparison we’ll default to the now standard
benchmark of manifold learning: the MNIST digits dataset. We can use
scikit-learn’s
fetch_mldata to grab it for us.
Now it is time to start looking at performance. To start with let’s look at how performance scales with increasing dataset size.
Performance scaling by dataset size¶
As the size of a dataset increases the runtime of a given dimension
reduction algorithm will increase at varying rates. If you ever want to
run your algorithm on larger datasets you will care not just about the
comparative runtime on a single small dataset, but how the performance
scales out as you move to larger datasets. We can similate this by
subsampling from MNIST digits (via scikit-learn’s convenient
resample utility) and looking at the runtime for varying sized
subsamples. Since there is some randomness involved here (both in the
subsample selection, and in some of the algorithms which have stochastic
aspects) we will want to run a few examples for each dataset size. We
can easily package all of this up in a simple function that will return
a convenient pandas dataframe of dataset sizes and runtimes given an
algorithm.
Now we just want to run this for each of the various dimension reduction implementations so we can look at the results. Since we don’t know how long these runs might take we’ll start off with a very small set of samples, scaling up to only 1600 samples.
Now let’s plot the results so we can see what is going on. We’ll use seaborn’s regression plot to interpolate the effective scaling.
We can see straight away that there are some outliers here. The scikit-learn t-SNE is clearly much slower than most of the other algorithms. It does not have the scaling properties of MDS however; for larger dataset sizes MDS is going to quickly become completely unmanageable. At the same time MulticoreTSNE demonstrates that t-SNE can run fairly efficiently. It is hard to tell much about the other implementations other than the fact that PCA is far and away the fastest option. To see more we’ll have to look at runtimes on larger dataset sizes. Both MDS and scikit-learn’s t-SNE are going to take too long to run so let’s restrict ourselves to the fastest performing implementations and see what happens as we extend out to larger dataset sizes.
At this point we begin to see some significant differentiation among the different implementations. In the earlier plot MulticoreTSNE looked to be slower than some of the other algorithms, but as we scale out to larger datasets we see that its relative scaling performance is far superior to the scikit-learn implementations of Isomap, spectral embedding, and locally linear embedding.
It is probably worth extending out further – up to the full MNIST digits dataset. To manage to do that in any reasonable amount of time we’ll have to restrict out attention to an even smaller subset of implementations. We will pare things down to just MulticoreTSNE, PCA and UMAP.
Here we see UMAP’s advantages over t-SNE really coming to the forefront. While UMAP is clearly slower than PCA, its scaling performance is dramatically better than MulticoreTSNE, and for even larger datasets the difference is only going to grow.
This concludes our look at scaling by dataset size. The short summary is that PCA is far and away the fastest option, but you are potentially giving up a lot for that speed. UMAP, while not competitive with PCA, is clearly the next best option in terms of performance among the implementations explored here. Given the quality of results that UMAP can provide we feel it is clearly a good option for dimension reduction. | https://umap-learn.readthedocs.io/en/latest/benchmarking.html | 2020-07-02T12:20:34 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['_images/performance_14_1.png', '_images/performance_14_1.png'],
dtype=object)
array(['_images/performance_17_1.png', '_images/performance_17_1.png'],
dtype=object)
array(['_images/performance_20_1.png', '_images/performance_20_1.png'],
dtype=object) ] | umap-learn.readthedocs.io |
Built-in views¶
- class
contact_form.views.
ContactFormView¶
The base view class from which most custom contact-form views should inherit. If you don’t need any custom functionality, and are content with the default
ContactFormclass, you can also just use it as-is (and the provided URLConf,
contact_form.urls, does exactly this).
This is a subclass of Django’s FormView, so refer to the Django documentation for a list of attributes/methods which can be overridden to customize behavior.
One non-standard attribute is defined here:
recipient_list¶
The list of email addresses to send mail to. If not specified, defaults to the
recipient_listof the form.
Additionally, the following standard (from
FormView) methods and attributes are commonly useful to override (all attributes below can also be passed to
as_view()in the URLconf, permitting customization without the need to write a full custom subclass of
ContactFormView):
form_class¶
The form class to use. By default, will be
ContactForm. This can also be overridden as a method named
form_class(); this permits, for example, per-request customization (by inspecting attributes of
self.request).
template_name¶
The template to use when rendering the form. By default, will be
contact_form/contact_form.html.
get_success_url()¶
The URL to redirect to after successful form submission. By default, this is the named URL
contact_form.sent. In the default URLconf provided with django-contact-form, that URL is mapped to
TemplateViewrendering the template
contact_form/contact_form_sent.html.
get_form_kwargs()¶
Returns additional keyword arguments (as a dictionary) to pass to the form class on initialization.
By default, this will return a dictionary containing the current
HttpRequest(as the key
request) and, if
recipient_listwas defined, its value (as the key
recipient_list).
Warning
If you override
get_form_kwargs(), you must ensure that, at the very least, the keyword argument
requestis still provided, or
ContactForminitialization will raise
TypeError. The simplest approach is to use
super()to call the base implementation in
ContactFormView, and modify the dictionary it returns.
Warning
Implementing
form_invalid()
To work around a potential performance issue in Django 1.9,
ContactFormViewimplements the
form_invalid()method. If you choose to override
form_invalid()in a subclass of
ContactFormView, be sure to read the implementation and comments in the source code of django-contact-form first. Note that Django 1.9.1, once released, will not be affected by this bug. | http://django-contact-form.readthedocs.io/en/1.3/views.html | 2017-11-17T23:12:23 | CC-MAIN-2017-47 | 1510934804019.50 | [] | django-contact-form.readthedocs.io |
Product: Cloud City Masters Series Tutorial
Product Code: br_sc003
DAZ Original: YES
Created By: Joe Vinton
Released: April 28, 2004
Product Information
* Required Products: Bryce 5
* You can find the tutorial in the following folder(s):
* Tutorials:Masters Series:Cloud City
* Support files (materials, objects) are located in the following folders:
* Materials - Presets:Materials:Masters Series
* Objects - Presets:Objects:Masters Series
* Skies - Content:Masters Series
* Scene - Scene Files:Masters Series
Product Notes
* To access the tutorial, open the Index.html page located in the Bryce 5:Tutorials:Masters Series:Cloud City folder.
* If you installed the tutorial directly into your Bryce 5 program folder, the materials and objects are all located in the Materials and Objects Library, ready to load and examine.
* The skies will need to be manually imported into the Sky Library and are located in the Content:Masters Series folder. | http://docs.daz3d.com/doku.php/artzone/azproduct/3197 | 2017-11-17T23:09:11 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.daz3d.com |
Pivotal Greenplum® Command Center 3.3.1 Release Notes
- About This Release
- Supported Platforms
- Pivotal Documentation
- About Pivotal Greenplum Command Center
- Enhancements and Changes in Greenplum Command Center 3.3.1
- Enhancements and Changes in Greenplum Command Center 3.3.0
-.3
- Enhancements and Changes in Greenplum Workload Manager 1.8.2
-.3.1
Greenplum Workload Manager version: 1.8.3
Published: October, 2017
About This Release
Pivotal Greenplum Command Center release 3.3.1 contains Greenplum Command Center release 3.3.1 and Greenplum Workload Manager release 1.8.3.
See Enhancements and Changes in Greenplum Command Center 3.3.1 for information about new features and improvements in the Greenplum Command Center 3.3.1 release.
See Enhancements and Changes in Greenplum Workload Manager 1.8.3 for information about new features and improvements in the Workload Manager 1.8.3.1
New Features and Enhancements
Updated the Admin > Permissions page to allow Administrators to enable or disable anonymous users’ access to the Query Monitor page. Command Center users with operator or operator_basic permission see a message “Guest access is Off” or “Guest access is On”, but cannot change the state of the toggle. The Allow guests to view Query Monitor toggle is on by default.
Restructured the navigation and landing page for non-administrator users.
Cleaned up the user experience for users with basic and self-only permissions.
Changed the Admin message on the Query Monitor page to persist after updating GPCC and migrating GPCC instances.
Changed Hardware Status page to reflect custom host names.
Improved error messaging for Query Monitor guest view.
Improved error messaging when an invalid GPCC URL is received.
Improved error messaging when an instance is started with bad HTTPS certificate files.
Fixes
Disk usage now shows the correct “Free” and “Used” numbers.
Segment node storage status correctly reflects nodes after exapandingh GPDB.
Multi-cluster view layout issue fixed.
Fixed layout issues with IE11.
Dialhome alerts now show correct information for GPDB 4.3.x.
Enhancements and Changes in Greenplum Command Center 3.3.0
Greenplum Command Center 3.3.0 contains the following enhancements:
- Anyone with access to the Command Center web server can sign in anonymously and view the Query Monitor page. Anonymous users cannot cancel queries, view the query text or explain plan, or access any other Command Center pages or features.
- Administrators can post a message for all Command Center users to see on the Query Monitor page.
- Enabling SSL when creating a new Command Center instance enables WebSocket connections for improved security and performance.
- When the Command Center Console is running on a remote host, Workload Manager views are available if GPCC detects Workload Manager when the Command Center user signs in. If Workload Manager becomes available after a user has signed in to Command Center, the user must sign out and in again to view Workload Manager pages in GPCC.
- Explain plans are no longer generated automatically to reduce system load.
- Users can generate an explain plan by clicking Run explain with Pivotal Query Optimizer or Run explain with legacy optimizer. The legacy optimizer will generate plans for queries that normally fall back to the legacy optimizer, even when you choose the Pivotal Query Optimizer.
- The Greenplum Command Center version has been added to the header.
- Many useability improvements have been made in the Command Center user interface.
- The Command Center Database reference documentation has been relocated to the Greenplum Database Reference Guide. See The gpperfmon Database Reference for the most recent update to this reference..3
Following are changes in Greenplum Workload Manager 1.8.3.
Performance improvements in the Workload Manager messaging layer provide a significant reduction in CPU usage by the messaging system.
Significant reduction in CPU usage of the Workload Manager messaging process (beam.smp), both when there is no load and when under heavy load.
Enhancements and Changes in Greenplum Workload Manager 1.8.2
The following are changes in Greenplum Workload Manager 1.8.2.
Enhanced gp_wlm_records view
Three columns have been added to the
gp_wlm_records table:
The following
gp_wlm_records columns are renamed:
The data type of the
query_start column is changed from
text to
timestamptz.
The columns in the table are reordered. See Querying Workload Manager Record Data for the new order.
When you install Workload Manager 1.8.2, a new external CSV file is created for the
gp-wlm-records table with a new name. If you later downgrade to an earlier release, you can restore the original CSV files and then start the previous Workload Manager release.
Enhanced gp_wlm_events view
The following
gp_wlm_events columns are renamed:
The columns in the view are reordered. See Querying Workload Manager Event Data for the new order.
When you install Workload Manager 1.8.2, new external CSV files are created for the
gp_wlm_events view with a new name. If you later downgrade to an earlier release, you can restore the original CSV files and then start the previous Workload Manager release.
Improved Installation Process
The Workload Manager installer
--dbname-records command line option is changed to
--dbname.
The Workload Manager installer now creates the
gp_wlm_events view and the external files it references. The view is created in the same database where the
gp_wlm_records table is created — the database specified with the
--dbname command line option.
The upgrade process has been completely automated for both
gp_wlm_events and
gp_wlm_records. Upon upgrade or uninstall, the
gp_wlm_events and
gp_wlm_records tables and views are automatically dropped from the database in which they were previously installed.
Configuration default changed for idle process and session tracking
The Workload Manager configuration parameters
gpdb_stats:publish_idle_sessions and
systemdata:publish_idle_processes are now set to ‘false’ by default. Upgrading Workload Manager does not change the current values of these parameters.
The effect of this is two-fold. First, only the actual query processes are tracked. Second, the additional load caused by the extra processing is no longer incurred. If you would like to track inactive sessions, you can set these parameters to 'true’ or query session_state.session_level_memory_consumption.idle_start. (Querying
idle_start is not a Workload Manager feature.) (timestampt 'false’. or upgrade Greenplum Workload Manager, run the Workload Manager installer found in the Greenplum Command Center installation directory. See Installing Greenplum Workload Manager in the Greenplum Workload Manager documentation for command syntax and usage.. | http://gpcc.docs.pivotal.io/330/gpcc/relnotes/GPCC-331-release-notes.html | 2017-11-17T23:08:27 | CC-MAIN-2017-47 | 1510934804019.50 | [array(['/images/icon_gpdb.png', None], dtype=object)] | gpcc.docs.pivotal.io |
The View PCoIP ADM template file contains group policy settings that configure PCoIP settings that affect the use of the keyboard. Table 1. View PCoIP Session Variables for the Keyboard Setting Description Disable sending CAD when users press Ctrl+Alt+Del When this policy is enabled, users must press Ctrl+Alt+Insert instead of Ctrl+Alt+Del to send a Secure Attention Sequence (SAS) to the remote desktop during a PCoIP session. You might want to enable this setting if users become confused when they press Ctrl+Alt+Del to lock the client endpoint and an SAS is sent to both the host and the guest. This setting applies to Horizon Agent only and has no effect on a client. When this policy is not configured or is disabled, users can press Ctrl+Alt+Del or Ctrl+Alt+Insert to send an SAS to the remote desktop. Use alternate key for sending Secure Attention Sequence Specifies an alternate key, instead of the Insert key, for sending a Secure Attention Sequence (SAS). You can use this setting to preserve the Ctrl+Alt+Ins key sequence in virtual machines that are launched from inside a remote. This setting applies to Horizon Agent only and has no effect on a client. Parent topic: PCoIP Policy Settings | https://docs.vmware.com/en/VMware-Horizon-7/7.0/com.vmware.horizon-view.desktops.doc/GUID-2FA7564D-FF3E-472B-AD2D-575CCCD82410.html | 2017-11-17T23:18:57 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
Install the Windows guest agent on a Windows reference machines to run as a Windows service and enable further customization of machines.
Prerequisites
Identify or create the reference machine..
Procedure
- Navigate to the vCloud Automation Center Appliance management console installation page.
For example:.
- Click Guest and software agents page in the vRealize Automation component installation section of the page.
For example:.
The Guest and Software Agent Installers page opens, displaying links to available downloads.
- Download and save the Windows guest agent installation file to the C drive of your reference machine.
Windows guest agent files (32-bit.)
Windows guest agent files (64-bit.)
- Install the guest agent on the reference machine.
- Right-click the file and select Properties.
- Click General.
- Click Unblock.
- Extract the files..
Results
The name of the Windows service is VCACGuestAgentService. You can find the installation log VCAC-GuestAgentService.log in C:\VRMGuestAgent.
What to do next
Convert your reference machine into a template for cloning, an Amazon machine image, or a snapshot so your IaaS architects can use your template when creating blueprints. | https://docs.vmware.com/en/vRealize-Automation/7.1/com.vmware.vrealize.automation.doc/GUID-C5BD3D30-FCCC-4125-971E-4E6A27617BF8.html | 2017-11-17T23:20:11 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
part_emitter_destroy_all( ps );
Returns: N/A
This function will remove all defined emitters from the given
system and clear them from memory (this will also stop any
particles from being produced by the given emitter, but it does
NOT remove them from the room). This function should always
be called when the emitters are no longer needed for the system to
prevent memory leaks and errors.
if lives = 0
{
part_emitter_destroy_all(global.Sname);
room_goto(rm_Menu);
}
The above code checks the built in global variable "lives" and if it is 0, it destroys all particle emitters and then changes room. | http://docs.yoyogames.com/source/dadiospice/002_reference/particles/particle%20emitters/part_emitter_destroy_all.html | 2017-11-17T23:04:55 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.yoyogames.com |
To add a new policy to an existing Knox service:
On the Service Manager page, select an existing service under Knox.
The List of Policies page appears.
Click.
The Create Policy page appears.
Complete the Create Policy page as follows:
For reference information, see: Wildcard Characters and {USER} Variable..
Click. | https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/knox_policy.html | 2017-11-17T23:10:43 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.hortonworks.com |
Textpattern themes available on GitHub TODO
With the popularity of GitHub, comes a natural tendency for Textpattern theme designers to make their theme code available there. In this page we attempt to keep track of them all until we figure out something better.
Front-side themes
Textpattern CMS default theme
Latest version of the official default Textpattern front-side theme, including the Sass files used for development of the theme. Demo here.
A port of the WordPress theme ‘Yoko’ to Textpattern. Demo here.
This is a simple lightweight theme; developed for use on a blog.
Admin-side themes
Textpattern Hive admin theme\ This is an official admin-side theme shipping with v4.5 onwards. Maintained by Phil Wareham.
*Textpattern Moderne\ A clean and fresh Textpattern admin theme from Naz Hamid. Also turned into an installable theme package version by TXP Builders.
*Textpattern Khaki\ Khaki is an admin theme from TXP Builders based on the Remora theme.
*Steel\ Steel admin-side theme by redbot. | https://docs.textpattern.io/themes/themes-on-github | 2017-11-17T23:07:12 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
Connecting to an On-Premise Directory
You have two options for connecting to your on-premises directory: can either use AWS Directory Service AD Connector, or you can use AWS Microsoft AD.
Note
If you are part of a compliance program, such as PCI, FedRAMP, or DoD, you must set up a Microsoft AD Directory to meet compliance requirements. | http://docs.aws.amazon.com/workdocs/latest/adminguide/connect_directory.html | 2017-11-17T23:26:55 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.aws.amazon.com |
Quick start guide¶
First you’ll need to have Django and django-contact-form installed; for details on that, see the installation guide.
Once that’s done, you can start setting up
django-contact-form. Since it doesn’t provide any database models
or use any other application-config mechanisms, you do not need to
add django-contact-form to your
INSTALLED_APPS setting; you
can simply begin using it right away.
URL configuration¶
The easiest way to set up the views in django-contact-form is to
just use the provided URLconf, found at
contact_form.urls. You can
include it wherever you like in your site’s URL configuration; for
example, to have it live at the URL
from django.conf.urls import include, url urlpatterns = [ # ... other URL patterns for your site ... url(r'^contact/', include('contact_form.urls')), ]
If you’ll be using a custom form class, you’ll need to manually set up your URLs so you can tell django-contact-form about your form class. For example:
from django.conf.urls import include, url from django.views.generic import TemplateView from contact_form.views import ContactFormView from yourapp.forms import YourCustomFormClass urlpatterns = [ # ... other URL patterns for your site ... url(r'^contact/$', ContactFormView.as_view( form_class=YourCustomFormClass), name='contact_form'), url(r'^contact/sent/$', TemplateView.as_view( template_name='contact_form/contact_form_sent.html'), name='contact_form_sent'), ]
Important
Where to put custom forms and views
When writing a custom form class (or custom
ContactFormView
subclass), don’t put your custom code inside
django-contact-form. Instead, put your custom code in the
appropriate place (a
forms.py or
views.py file) in an
application you’ve written.
Required templates¶
The two views above will need two templates to be created:
contact_form/contact_form.html
- This is used to display the contact form. It has a
RequestContext(so any context processors will be applied), and also provides the form instance as the context variable
form.
contact_form/contact_form_sent.html
- This is used after a successful form submission, to let the user know their message has been sent. It has a
RequestContext, but provides no additional context variables of its own.
You’ll also need to create at least two more templates to handle the
rendering of the message:
contact_form/contact_form_subject.txt
for the subject line of the email to send, and
contact_form/contact_form.txt for the body (note that the file
extension for these is
.txt, not
.html!). Both of these will
receive a
RequestContext with a set of variables named for the
fields of the form (by default:
name,
body), as
well as one more variable:
site, representing the current site
(either a
Site or
RequestSite instance, depending on whether
Django’s sites framework is installed).
Warning
Subject must be a single line
In order to prevent header injection attacks, the subject
must be only a single line of text, and Django’s email framework
will reject any attempt to send an email with a multi-line
subject. So it’s a good idea to ensure your
contact_form_subject.txt template only produces a single line
of output when rendered; as a precaution, however,
django-contact-form will split the output of this template at
line breaks, then forcibly re-join it into a single line of text. | http://django-contact-form.readthedocs.io/en/1.3/quickstart.html | 2017-11-17T23:12:13 | CC-MAIN-2017-47 | 1510934804019.50 | [] | django-contact-form.readthedocs.io |
Product: Genesis Evolution: Body Morphs
Product Code: 12994 (ds_mr008)
DAZ Original: Yes
Created by: DAZ 3D
Released: July 2011
Required Products: the Genesis figure (currently only included in DAZ Studio 4)
Visit our site for further technical support questions or concerns:
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170 | http://docs.daz3d.com/doku.php/artzone/azproduct/12994 | 2017-11-17T23:11:52 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.daz3d.com |
Remove-SCVMHost
Syntax
Remove-SCVMHost [-VMHost] <Host> [-VMMServer <ServerConnection>] [-Credential <VMMCredential>] [-RemoveHostWithVMs] [-RunAsynchronously] [-PROTipID <Guid>] [-JobVariable <String>] [-WhatIf] [-Confirm] [<CommonParameters>]
Remove-SCVMHost [-VMHost] <Host> [-VMMServer <ServerConnection>] [-Force] [-RunAsynchronously] [-PROTipID <Guid>] [-JobVariable <String>] [-WhatIf] [-Confirm] [<CommonParameters>], nor will VMM attempt to connect to the host and uninstall the VMM agent. Hence, using the Force parameter is recommended only when removing stale host records from the VMM database.
This cmdlet returns the object upon success (with the property MarkedForDeletion set to $True) or returns an error message upon failure.
Examples
Example 1: Remove a specific domain-joined host from VMM
PS C:\> $Credential = Get-Credential PS C:\> $VMHost = Get-SCVMHost -ComputerName "VMHost01" PS C:\> Remove-SCVMHost -VMHost $VMHost -Credential $Credential -Confirm.
Example 3: Remove a specific host that you can no longer access from VMM
PS C:\> $VMHost = Get-SCVMHost -ComputerName "VMHost03" PS C:\> Remove-SCVMHost -VMHost $VMHost -Force -Confirm.
Required Parameters
Forces the command to run without asking for user confirmation.
Specifies a virtual machine host object. VMM supports Hyper-V hosts, VMware ESX hosts, and Citrix XenServer hosts.
For more information about each type of host, see the Add-SCVMHost cmdlet.
Optional Parameters
Prompts you for confirmation before running the cmdlet..
Specifies that job progress is tracked and stored in the variable named by this parameter.
Specifies the ID of the Performance and Resource Optimization tip (PRO tip) that triggered this action. This parameter lets you audit PRO tips.
For more information about the PSCredential object, type
Get-Help Get-Credential.
For more information about Run As accounts, type
Get-Help New-SCRunAsAccount.
Indicates that the job runs asynchronously so that control returns to the command shell immediately.
Specifies a VMM server object.
Shows what would happen if the cmdlet runs. The cmdlet is not run. | https://docs.microsoft.com/en-us/powershell/module/virtualmachinemanager/remove-scvmhost?view=systemcenter-ps-2016 | 2017-11-17T22:55:24 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.microsoft.com |
Links panel
The Links panel provides a way of keeping track of links and lists of resources (often external to the site) so they may be included in article content or Form templates in a convenient way.
On this page:
- Why store links in this link repository?
- Create a new link
- Editing links
- Search function
- List of existing links
- Pages and links listed per page
On this panel you can do two things:
- Create a new link and edit existing ones
- Manage your existing repository of links
So the Links panel in a way combines what for articles is divided in two panels ‘Write’ and ‘Articles’.
Why store links in this link repository?
While you could place links to external resources (or internal pages) directly into any article, this would mean a lot of work keeping track and maintaining links and lists of resources. Textpattern provides a better way: manage your links (and lists of links) in a central place and then include them in article content or Form templates with Textpattern tags. This way changes will take effect at every occurrence of a link and - for example - additions to a topic list of resource links will automatically be included wherever you placed the list.
The linklist tag will output links from the link repository, with filter criteria and presentation settings applied.
Create a new link
This button will take you to the Link property editor (see below) where you can generate a new link along with its properties.
Editing links
Link details
Each link has various pieces of information associated with it, as described here:
Title: a title for the link, which can be harnessed by tags (such as linkdesctitle).
Sort Value: assigns listing priorities by number or letter to your links. See Link sorting rules below for a full explanation.
URL: the hyperlink value assigned to the link.
Category: a category for the link. Categories are used to generate lists of links.
Description: Text that tells something about the link, and can be harnessed by tags (such as link_description).
After editing, you have to Save your edits.
Link sorting rules
When Textpattern sets the list sequence of links it uses the following method: Numbers are considered lower than letters, and values are sequenced lowest to highest, top to bottom.
The first character on the left is considered first and all links are sequenced. Then the second character is considered, and those link values that have the same first character are sequenced like a subcategory according to the second character. Then the third character is considered, then the fourth, etc.
Thus the sort values:
1,
1B,
2,
10,
11,
100,
101,
A,
B would be sequenced as follows.
1
10
100
101
11
1B
2
A
B
Search function
Because the links list can get pretty long, a search function is available at the top of the list. You can use the search function to locate a link directly by a search phrase or to filter the view on your links by particular criteria, thus reducing the number of links links which meet the criteria.
As a default the search will find matches for all criteria. But you can do more refined searches by selecting another area to search in via the drop-down-list toggle button.
List of existing links
Beneath the ‘Create link’ button and search area there is the table, or list, of existing links. Each row is one link.
Columns
The default view shows these columns:
ID: the unique ID number of the link.
Name: the title of the link - click this to edit the link.
Category: if the link was assigned a category, then it will reflect that here.
URL: the actual hyperlink value of the link - click this to open the target destination in a new browser window.
Author: the author who created the link record (only if more than one author exists in the Users panel).
At the top of the list there is an option ‘Show detail’. When marked additional columns (and additional info) will be presented:
Description: what has been told about the link, its character, any recommendations.
Date: the day and time of when this link was created.
Perform changes on selected links
In the first column you will find a checkbox for each link. Here you can select links you want to change in a bulk manner. You can mark links by checking the checkbox or you can use the checkbox in the head bar of the list to mark all links on that page.
In order to quickly select ranges of links click the checkbox of the first link you want to mark, press and hold the shift key, then click the checkbox of the last link in the range. All links links listed per page
At the very bottom of the list you will find a pagination and links for next and previous pages if there are more pages. You can also change the number of links listed per page by selecting another value from the number range. | https://docs.textpattern.io/administration/links-panel | 2017-11-17T22:49:45 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.textpattern.io |
All core vRealize Automation workflows are executed in a distributed execution environment.
The vRealize Automation runtime environment consists of one or more DEM Worker instances that can execute any workflow installed in the core engine. Additional Worker instances can be added as needed for scalability, availability and distribution.
Skills can be used to associate DEMs and workflows, restricting execution of a given workflow to a particular DEM or set of DEMs with matching skills. Any number and combination of skills can be associated with a given workflow or DEM. For example, workflow execution can be restricted to a specific datacenter, or to environments that support a specific API the workflow requires. The vRealize Automation Designer and the CloudUtil command-line tool provide facilities for mapping skills to DEMs and workflows.
For more information about distributed execution and working with skills, see Life Cycle Extensibility. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-051ABB8B-B180-42F2-9DC8-9DC5DF329253.html | 2017-11-17T23:10:52 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
+= b;
Similarly, you can subtract it using -=, multiply it using *=, divide it using /=, or use bitwise operators using |=, &=, or ^=. You can also add or subtract one from a value using ++, --. | http://docs.yoyogames.com/source/dadiospice/002_reference/001_gml%20language%20overview/401_03_assignments.html | 2017-11-17T23:01:12 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.yoyogames.com |
You can add an existing virtual hard disk to a virtual machine. You can copy or move the disk to your virtual machine, or share it between virtual machines.
About this task
Caution:
Moving the virtual hard disk can break other virtual machines that are using the virtual hard disk, because this is the equivalent of removing the hard disk from one physical computer and installing it in another.
Procedure
- Select a virtual machine in the Virtual Machine Library window and click Settings.
- Click Add Device.
- Click Existing Hard Disk.
- Click Add Device.
- In the Open dialog, navigate to the location of the existing .vmdk hard disk file.
- Select the method for adding the virtual hard disk file.
- Click Open.
- Click Apply.
Results
Fusion displays a progress dialog box if you select to copy the virtual disk. | https://docs.vmware.com/en/VMware-Fusion/10.0/com.vmware.fusion.using.doc/GUID-4B3FC5A0-E4A5-4C1B-9F0C-9536628CF4DB.html | 2017-11-17T23:35:56 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
You use the ThinApp Setup Capture wizard to capture and package your applications.
Prerequisites
Download the ThinApp software from and install it on a clean computer. View supports ThinApp version 4.6 and later.
Familiarize yourself with the ThinApp software requirements and application packaging instructions in the ThinApp User's Guide.
Procedure
- Start the ThinApp Setup Capture wizard and follow the prompts in the wizard.
- When the ThinApp Setup Capture wizard prompts you for a project location, select Build MSI package.
- If you plan to stream the application to remote desktops, set the MSIStreaming property to 1 in the package.ini file.
MSIStreaming=1
Results
The ThinApp Setup Capture wizard encapsulates the application, all of the necessary components to run the application, and the application itself into an MSI package.
What to do next
Create a Windows network share to store the MSI packages. | https://docs.vmware.com/en/VMware-Horizon-7/7.3/horizon-administration/GUID-BABBC405-4AAC-429C-89C6-0B4AFD3B8961.html | 2017-11-17T23:36:05 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
You use Horizon Administrator to configure the use of the secure tunnel and PCoIP Secure Gateway. These components ensure that only authenticated users can communicate with remote desktops and applications.
About this task
Clients that use the PCoIP display protocol can use the PCoIP Secure Gateway. Clients that use the RDP display protocol can use the secure tunnel.
For information about configuring the Blast Secure Gateway, see Configure the Blast Secure Gateway.
A typical network configuration that provides secure connections for external clients includes a security server. To enable or disable the secure tunnel and PCoIP Secure Gateway on a security server, you must edit the Connection Server instance that is paired with the security server.
In a network configuration in which external clients connect directly to a Connection Server host, you enable or disable the secure tunnel and PCoIP Secure Gateway by editing that Connection Server instance in Horizon Administrator.
Prerequisites
If you intend to enable the PCoIP Secure Gateway, verify that the Connection Server instance and paired security server are View 4.6 or later.
If you pair a security server to a Connection Server instance on which you already enabled the PCoIP Secure Gateway, verify that the security server is View 4.6 or later.
Procedure
- In Horizon Administrator, select .
- In the Connection Servers panel, select a Connection Server instance and click Edit.
- Configure use of the secure tunnel.
The secure tunnel is enabled by default.
- Configure use of the PCoIP Secure Gateway.
The PCoIP Secure Gateway is disabled by default.
- Click OK to save your changes. | https://docs.vmware.com/en/VMware-Horizon-7/7.3/horizon-installation/GUID-0FC59EB4-45DA-4B3C-A611-F6B5597E9F0C.html | 2017-11-17T23:35:47 | CC-MAIN-2017-47 | 1510934804019.50 | [] | docs.vmware.com |
GSOC 2013 Project Ideas
From Joomla! Documentation
Contents
- 1 Welcome!
- 2 Ideas
- 2.1 Joomla CMS
- 2.1.1 Project: Build New Media Manager
-.2 Joomla Framework: Build New Media Manager
- Brief Explanation: The current media manager is outdated and limited. Build a new). The manager should also include ACL permissions, programmatic implementations and configurations, and be easily reusable into other Joomla extensions.
- Expected Results: A new Media Manager component
Joomla Framework, or, as intended, be an interface between JData and a full blown ORM.:
- and
-
-
-
-
"As I'm using Doctrine when building complex Joomla extensions, I'd love to mentor this project" ~ Herman Peeren GSOC 2013 Project template | https://docs.joomla.org/index.php?title=GSOC_2013_Project_Ideas&oldid=82424 | 2015-06-30T05:18:46 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Revision history of "JHtmlSliders:: loadBehavior:: loadBehavior/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JHtmlSliders::_loadBehavior== ===Description=== Load the JavaScript behavior. {{Description:JHtmlSliders::_loadBehavior}} <span class="..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JHtmlSliders::_loadBehavior/1.6&action=history | 2015-06-30T05:52:34 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Making your site Search Engine Friendly"
From Joomla! Documentation
Revision as of 05:19, 12 July 2013}}.
Using a Sitemap
Update Frequency. recommends that you follow these key principles when creating a title:
-. | https://docs.joomla.org/index.php?title=Making_your_site_Search_Engine_Friendly&diff=101577&oldid=101576 | 2015-06-30T05:59:20 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Creating a basic index file"
From Joomla! Documentation
Revision as of 10:23, 2 May 2008
The index.php file becomes the core of every page that Joomla! delivers. Essentially, you make a page (like any html page) but place php code where the content of your site should go. Here is the bare bones/mynewtemplate/css/css.css" type="text/css" /> </head>
The first line gets Joomla to put the correct header information in. The second creates a link to your style sheet. You will need to change this to the directory name that you chose..
Finish it off - one last bit:
</html> <pre> | https://docs.joomla.org/index.php?title=Creating_a_basic_index_file&diff=5844&oldid=5843 | 2015-06-30T06:01:30 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Difference between revisions of "Menus Options"
From Joomla! Documentation
Latest revision as of 19:09, 28 April 2013. | https://docs.joomla.org/index.php?title=Help17:Menus_Options&diff=next&oldid=83942 | 2015-06-30T06:30:16 | CC-MAIN-2015-27 | 1435375091751.85 | [] | docs.joomla.org |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.