AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the CreateSqlInjectionMatchSet operation.
Namespace: Amazon.WAF.Model
Assembly: AWSSDK.WAF.dll
Version: 3.x.y.z
The CreateSqlInjectionMatchSetRequest type exposes the following members.
Important
New Docs are available at
All new features, starting with 2.4.0, will be documented there.
The current working version is aws-parallelcluster-2.x. On Windows, also make sure that your Python installation directories (for example, C:\Python36-32;C:\Python36-32\Scripts) are included in the PATH environment variable.
Now it should be possible to run the following within a command prompt window:
C:\> pip install aws-parallelcluster
To upgrade an older version of AWS ParallelCluster, you can use either of the following commands, depending on how it was originally installed:
$ sudo pip install --upgrade aws-parallelcluster
Remember when upgrading to check that the existing config is compatible with the latest version installed.
First, you'll need to set up your IAM credentials; see AWS CLI for more information.
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [us-east-1]: us-east-1
Default output format [None]:
Once installed, you will need to set up some initial configuration. The easiest way to do this is shown below:
$ pcluster configure
This configure wizard will prompt you for everything you need to create your cluster. You will first be prompted for your cluster template name, which is the logical name of the template you will create a cluster from.
Cluster Template [mycluster]:
Next, the wizard asks for the AWS Region in which to launch the cluster and lists the acceptable region IDs (for example, us-west-1, eu-central-1, and sa-east-1):
AWS Region ID []:
Choose a descriptive name for your VPC. Typically, this will be something like
production or
test.
VPC Name [myvpc]:
Next, you will need to choose a key pair that already exists in EC2 in order to log into your master instance. If you do not already have a key pair, refer to the EC2 documentation on EC2 Key Pairs.
Acceptable Values for Key Name:
keypair1
keypair-test
production-key
Key Name []:
Choose the VPC ID into which you’d like your cluster launched.
Acceptable Values for VPC ID:
vpc-1kd24879
vpc-blk4982d
VPC ID []:
Finally, choose the subnet in which you'd like your master server to run. When the wizard finishes, create your first cluster:
$ pcluster create mycluster
Once the cluster reaches the “CREATE_COMPLETE” status, you can connect using your normal SSH client/settings. For more details on connecting to EC2 instances, check the EC2 User Guide.
AWS ParallelCluster is an enhanced and productized version of CfnCluster.
If you are a previous CfnCluster user, we encourage you to start using and creating new clusters only with AWS ParallelCluster. Although you can still use CfnCluster, it will no longer be developed.
The main differences between CfnCluster and AWS ParallelCluster are listed below.
AWS ParallelCluster CLI manages a different set of clusters
Clusters created by
cfncluster CLI cannot be managed with
pcluster CLI.
The following commands will no longer work on clusters created by CfnCluster:
pcluster list
pcluster update cluster_name
pcluster start cluster_name
pcluster status cluster_name
You need to use the
cfncluster CLI to manage your old clusters.
If you need an old CfnCluster package to manage your old clusters, we recommend you install and use it from a Python Virtual Environment.
Distinct IAM Custom Policies
Custom IAM Policies, previously used for CfnCluster cluster creation, cannot be used with AWS ParallelCluster. If you require custom policies you need to create the new ones by following IAM in AWS ParallelCluster guide.
Different configuration files
The AWS ParallelCluster configuration file resides in the
~/.parallelcluster folder, unlike the CfnCluster one
that was created in the
~/.cfncluster folder.
You can still use your existing configuration file but this needs to be moved from
~/.cfncluster/config to
~/.parallelcluster/config.
If you use the
extra_json configuration parameter, it must be changed as described below:
extra_json = { "cfncluster" : { } }
has been changed to
extra_json = { "cluster" : { } }
Ganglia disabled by default
Ganglia is disabled by default.
You can enable it by setting the
extra_json parameter as described below:
extra_json = { "cluster" : { "ganglia_enabled" : "yes" } }
and changing the Master SG to allow connections to port 80.
The
parallelcluster-<CLUSTER_NAME>-MasterSecurityGroup-<xxx> Security Group has to be modified by
adding a new Security Group Rule
to allow Inbound connection to the port 80 from your Public IP.
Use these instructions to add a data source from Graphite to use with Grafana.
- Open your browser and enter http://<Grafana-host>:3000.
- Click + Add data source.
- Enter a Name for this data source.
- Select Graphite from the Type menu.
- Input the URL of the Graphite server. If you're using a proxy, input an IP address that's accessible from the Grafana backend; for example, use the subnet private IP when deployed in AWS. If you have a direct connection, enter a publicly accessible IP.
- Use Basic Auth to start with. Graphite's default user and password are root:root.
- Click Save & Test.
You should see the message: Data source is working.
Notifications
Receiving beacon notifications on Android and iOS smartphones
2 articles in this collection
Written by
Sharat Potharaju
and
Sneh Choudhary
Send beacon notifications in multiple languages
How to target beacon notifications based on the language preference of a smartphone
Written by
Sharat Potharaju
Updated over a week ago
How to create an effective notification
Best practices to create a notification
Written by
Sneh Choudhary
Updated over a week ago
Copy a File to a Device
8/28/2008
This code sample is named Pput. It demonstrates how to programmatically copy a file from the desktop computer to a mobile device using the Remote API (RAPI) interface.
Feature Area
Relevant APIs
- CeCloseHandle (RAPI) function
- CeRapiInit (RAPI) function
- CeRapiUninit (RAPI) function
- CeReadFile (RAPI) function
- CeWriteFile (RAPI) function
To build and run the sample, open the solution file located in the Pput sample folder.
Microsoft Visual Studio 2005 launches and loads the solution.
Build the solution (Ctrl+Shift+B).
Deploy the solution (F5).
Using the application
For example, if you want to copy a file from "c:\readme.txt" on the desktop computer to "\windows\readme.txt" on the mobile device, execute the command:
Pput c:\readme.txt \windows\readme.txt
Remarks
The code sample runs on the desktop host computer.
Development Environments
SDK: Windows Mobile 6 Professional SDK and Windows Mobile 6 Standard SDK
Development Environment: Visual Studio 2005.
ActiveSync: Version 4.5.
See Also
Concepts
Code Samples for Windows Mobile
Copy a File from a Device
Other Resources
ActiveSync
Remote API (RAPI)
[Platform applicability: a checkmark in the original article indicates that this topic and the APIs listed are supported on Windows Mobile; Windows Embedded CE is not supported.]
Heatmap¶
A heatmap represents data in a tabular format as a range of color. A darker, or more intense, color represents a larger aggregated value for a particular data point.
Heatmap Encoding Channels¶
Heatmaps provide the following encoding channels:
Use Cases¶
Heatmaps reveal patterns or trends within your data. Use heatmaps when the exact data values are not as important as depicting higher-level trends and relationships between your data points. They can also highlight any significant outliers, or points which strongly go against the general direction of your data. A heatmap is a good choice to display:
- A comparison of average room rental prices based on location and property type.
- Geographic data, such as election results across different districts or population density.
- The number of customer orders across various store locations by month of the year.
Example¶
The following chart visualizes data pertaining to movies. Each document in the collection represents a movie and contains general information about the film as well as ratings from critics. This heatmap shows the mean (average) Metacritic rating for different movie genres (Y Axis) over time (X Axis):
We bin the years along the X Axis into
decades and aggregate to find the mean
metacritic score
of films from each genre released in each decade.
The intensity field of
metacritic shades each grid
element based on the mean
metacritic field of all of the
intersecting documents based on the X and
Y axes fields. Based on the chart, we see that
from
1930-1940 there are a few genres with very high
average Metacritic scores, and over time a more even distribution of
film ratings begins to occur.
Note
If the space is white, there are no movies from that decade of that particular genre in the dataset.
The alarm history displays the status changes of all alarm rules in the last 30 days. You can view and trace alarm records in a unified and convenient manner.
On the Alarm History page, you can view the status changes of all alarm rules from the last 30 days.
In the upper right corner of the alarm history list is a calendar, with which you can set the alarm history of any desired time range within 30 days.
On the top of the alarm history list, you can click All resource types, All severities, or All statuses to view the alarm history.
Contents IT Service Management Previous Topic Next Topic Requirements to associate a software installation to PVU mapping Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Requirements to associate a software installation to PVU mapping Meeting recommended requirements ensures that you receive the highest quality results with PVU mapping. Use a discovery tool, such as ServiceNow Discovery, to identify hardware and populate the configuration management database (CMDB) with the configuration items you want to manage with IBM PVU licensing. Use a discovery tool, such as ServiceNow Discovery, to identify software installations. Check that the added CPU information is correct. Activate the Software Asset Management plugin - IBM PVU Process Pack plugin. This also activates the Software Asset Management plugin if it is not already active. Refresh processor definitions. Ensure that the software models you want to manage with IBM PVU licensing have the correct license type: Per installation - IBM PVU. Create software counters to calculate IBM PVU licenses. Count licenses to determine compliance with IBM PVU guidelines. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/asset-management/task/t_ReqAssocSWInstToPVUMapping.html | 2019-07-16T04:46:06 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.servicenow.com |
Before you configure replication tasks to and from a cloud site, you establish a connection between both of your cloud organizations.
You can also defer authentication until access to that cloud is needed. For more information, see Working with the vCloud Availability Portal .
Prerequisites
The sites occupied by the organizations must be paired. For more information about pairing cloud sites, see vCloud Availability for Cloud-to-Cloud DR Installation and Configuration Guide.
You must be an organization administrator at both of the organizations to perform remote replication operations. To only view remote replications, administration credentials are not required.
There must be an established connection between both of your cloud organizations.
Procedure
- Log in to the vCloud Availability Portal with organization administrator credentials.
- Select Paired Clouds and locate the Actions column for each of the sites.
- Click the Authenticate icon for the site you want to connect to.
The Authenticate a Cloud window appears.
- Enter the credentials for the organization.
The connection to the cloud organization appears in the Authentication Status column.
SharePoint reporting and statistics
Applies to: Forefront Security for SharePoint
Forefront Security for SharePoint provides various mechanisms to help administrators analyze the state and performance of the Forefront Security for SharePoint services through the SharePoint Forefront Server Security Administrator interface.
Incidents database
The Incidents database (Incidents.mdb) contains all virus and filter detections for a Microsoft® SharePoint server, regardless of the scan job that caught the infection or performed the filtering. Results are stored in the database by FSCController and are not dependent on the Forefront Server Security Administrator remaining open.
To view the Incidents database, click REPORT in the Shuttle Navigator, and then click the Incidents icon. The Incidents work pane appears.
This is the information that Forefront Security for SharePoint reports for each incident:
Time The date and time of the incident.
State The action taken by Forefront Security for SharePoint.
Name The name of the scan job that reported the incident.
Folder The name of the folder where the file was found.
File The name of the virus or the name of the file that matched a file filter or keyword filter.
Incident The type of incident that occurred. The categories are: VIRUS and FILE FILTER. Each is followed by either the name of the virus caught or the name of the filter that triggered the event.
Author Name The name of the author of the document.
Author's E-mail The e-mail address of the author of the document.
Last Modified By The name of the last user to modify the document.
Modified User's E-mail The e-mail address of the last user to modify the document.
Note
The last four fields will be reported as N/A for the Realtime Scan Job because FSSP does not have access to this information during a real-time scan.
VirusLog.txt
Incidents can also be written to a text file called VirusLog.txt file, located in the Microsoft Forefront Security for SharePoint installation path. To enable this feature select Enable Forefront Virus Log in General Options (it is disabled by default). When enabled, all virus incidents are written to the VirusLog.txt text file, under the FSSP installation path (InstalledPath).
The following is a sample entry from the VirusLog.txt file:
Wed Dec 14 12:56:13 2005 (3184), "Information: Realtime scan found virus:
Folder: WorkSpace1\SavedFiles
File: Eicar.com
Incident: VIRUS=EICAR-STANDARD_AV_TEST_FILE
State: Cleaned"
Forefront Security for SharePoint incidents
The following table describes the various incidents FSSP reports. Most of the reported incidents are controlled through settings in General Options.
Statistics
Forefront Security for SharePoint tracks statistics for each scan job. Several kinds of statistics are maintained for documents.
- Documents Scanned The number of documents scanned by Forefront Security for SharePoint since the last restart of the services.
- Documents Detected The number of documents scanned that contained a virus or matched a filter since the last restart of the services.
- Documents Cleaned The number of documents cleaned by Forefront Security for SharePoint since the last restart of the services.
- Documents Removed The number of documents removed by Forefront Security for SharePoint since the last restart of the services. (Action set to Purge – Eliminate Message.)
- Total Documents Scanned The number of documents scanned by Forefront Security for SharePoint since the product was installed.
- Total Documents Detected The number of documents scanned that contained a virus or matched a file or content filter since the product was installed.
- Total Documents Cleaned The number of documents cleaned by Forefront Security for SharePoint since the product was installed.
- Total Documents Removed The number of documents removed by Forefront Security for SharePoint since the product was installed.
Managing statistics
To reset all statistics for a scan job, click the 'x' next to the scan job's name in the Statistics section of the Incidents work pane. You will be asked to confirm the reset. Click Yes to reset all the statistics for the selected scan job.
To save the report and the statistics to a text file, click the Export button (on the Incidents work pane).
Quarantine
Forefront Security for SharePoint, by default, creates a copy of every detected file in its original form (that is, before a Clean, Delete, or Skip action occurs). These files are stored in an encoded format in the Quarantine folder under the Forefront Security for SharePoint DatabasePath folder (which defaults to the installation folder). The actual file name of the detected file, the name of the infecting virus or the file filter name, information about the author of the file and about the person who last modified it, as well as other bookkeeping information, are saved in the file Quarantine.mdb in the Quarantine folder. The Quarantine database is configured as a system data source name (DSN) with the name Forefront Quarantine. This database can be viewed and manipulated using third-party tools.
Quarantine options
An administrator can access the Quarantine pane to delete or extract quarantined items. To view the Quarantine log, click REPORT in the Shuttle Navigator, and then click the Quarantine icon. The Quarantine work pane appears.
The quarantine list reports the date the file was quarantined, the name of the file, the type of incident that triggered the quarantine (such as virus or filter match), the name of the infecting virus or the filter name, the author name, the author e-mail address, last modified by, and modified user’s e-mail
Saving quarantine database items to disk
Use the Save As button on the Quarantine work pane to detach and decode a selected file to disk. You can select multiple items from the quarantine list. Each is saved as a separate file.
ExtractFiles tool
Quarantined files can also be extracted from the command line with the ExtractFiles tool, which detaches and decodes the selected quarantined files to disk.
This is the syntax of ExtractFiles:
extractfiles <path> <type>
- <Path> The absolute path of the folder in which to save the extracted quarantined files.
- <Type> The type of quarantined files to extract. This can be the specific name of a virus, a specific extension, or all quarantined files. For example:
Jerusalem.Standard: Extracts files that were infected with the virus named Jerusalem.Standard.
*.doc: Extracts quarantined files having a .doc extension.
*.*: Extracts all quarantined files.
Examples:
extractfiles C:\temp\quarantine Jerusalem.Standard
extractfiles C:\extract\ *.doc
Maintaining the databases
There are several other tasks you can perform with the Incidents or Quarantine databases. You can clear the databases, export database items, purge database items, filter database views, and move the databases.
Clearing the databases
When one of these databases grows too large, Forefront Security for SharePoint sends a notification with the subject "Forefront Security for SharePoint Database Warning".
The body of the message reads:
The Microsoft Forefront Security for SharePoint <<database name>> database is greater than 1.5 GB (with a maximum size of 2 GB). Its current size is x GB.
If this database grows to 2 GB, updates to the <<database name>> will not occur. Please see the user guide for information about database maintenance.
If for some reason the notification cannot be sent, the failure is ignored and is noted in the program log. One attempt to send the message is made during each compaction cycle for the specific database.
Clearing the Incidents database
The Incidents database can be cleared when it becomes too large.
To clear the Incidents database
On the Incidents work pane on the REPORT section of the Shuttle Navigator, click Clear Log. This clears all the items from the Incidents work pane. You will be asked to confirm your decision.
In the OPERATE section of the Shuttle Navigator, select Run Job. The cleared items no longer appear in the work panes; however, they are actually deleted from the Incidents database only after they have been cleared in both locations, as indicated above.
Note
If a large number of entries is selected, the deletion process can take a long time. In this case, you will be asked to confirm the deletion request.
Clearing the Quarantine database
The Quarantine database can be cleared when it becomes too large.
To clear the Quarantine database, click Clear Log on the Quarantine work pane on the REPORT section of the Shuttle Navigator. This clears the Quarantine listing.
Note
If a large number of entries is selected, the deletion process can take a long time. In this case, you will be asked to confirm the deletion request.
Exporting database items
Click Export on the Incidents or Quarantine work panes to save all the results from the Incidents or Quarantine databases as a text file. Clicking Export displays a standard Windows Save dialog box, in which you select a location for the Incidents.txt or Quarantine.txt file.
In addition to the Export button, the Quarantine pane has a Save As button, used to detach and decode a selected file to disk. You can select multiple items from the Quarantine list. Each is saved as a separate file.
Purging database items
You can instruct Forefront Security for SharePoint to remove items from the databases after they are a certain number of days old. The number of days is indicated by the Purge field on both the Incidents and Quarantine work panes. Each database can have a separate purge value (or none at all). If the purge function is enabled for a database, all files older than the specified number of days are flagged for removal from that database.
To purge database items after a certain number of days
On either the Incidents or the Quarantine work pane in the REPORT section of the Shuttle Navigator, select the Purge check box. This causes the Days field to become available.
In the Days field, enter the number of days after which items should be purged. If you later clear the Purge check box, the value in the Days field will remain, but no purging will take place until Purge is selected again.
Filtering database views
You can filter the Incidents or Quarantine views to see only certain items. The filter has no effect on the database itself, just on which records are displayed.
To filter the database view
On the Incidents or Quarantine work pane, select the Filtering check box.
Select the items you want to see with the Field option. Each choice in Field corresponds to one of the columns in the display. (For example, you can show only those Incidents whose State is “Deleted”.) If you select any column other than Time (on the Incidents pane) or Date (on the Quarantine pane), the Value field appears. If you select Time or Date, you get entry fields for beginning date and time, and ending date and time.
If you selected Time or Date, enter the beginning and ending date and time. Otherwise, enter a string in the Value field. Wildcard characters can be used. They are those used by the Microsoft Jet database OLE DB driver. The wildcard characters are:
_ (underscore) Matches any single character. (The * and ? characters, which are common wildcard characters, are literals in this instance.)
[ ] Denotes a set or a range. Matches any single character within the specified set (for example, [abcdef]) or range (for example, [a-f]).
[!] Denotes a negative set or range. Matches any single character not within the specified set (for example, [!abcdef]) or range (for example, [!a-f]).
Click Save to apply the filter. The only items you now see are those that match your parameters.
To see all the items again, remove the filter by clearing the Filtering check box and clicking Save.
Moving the databases
You can move the Quarantine and Incidents databases. However, for FSSP to function properly, you must move both databases, as well as all related databases and support files.
To move the databases and all related files
Create a new folder in a new location (for example: C:\Moved Databases).
Set the permissions for the new folder:
- Right-click the new folder, and then select Properties.
- On the Security tab, add Network Service with Full Control privileges.
- Enable all permissions for Administrators and System.
Stop SharePoint and any Forefront Security for SharePoint services that might still be running after the SharePoint server is stopped.
Copy the entire contents of the Data folder, including the subfolders, from Microsoft Forefront Security\SharePoint server into the folder created in step 1. (This results in a folder called, for example, C:\Moved Databases\Data.)
Change the path in the DatabasePath registry key to point to the new data folder location. This key is found at:
For 32-bit systems:
HKLM\SOFTWARE\Microsoft\Forefront Server Security\SharePoint\DatabasePath
For 64-bit systems:
HKLM\SOFTWARE\Wow6432Node\Microsoft\Forefront Server Security\SharePoint\DatabasePath
Restart the SharePoint services.
Windows event viewer
Forefront Security for SharePoint stores virus detections, stop codes, system information, and other general application events in the Windows application log. Use Windows Event Viewer to access the log.
Additionally, these events are stored in ProgramLog.txt located in the Data subdirectory of Microsoft Forefront Security\SharePoint. The maximum size of the ProgramLog.txt file is controlled by the Max Program Log Size field in General Options.
Performance
All Forefront Security for SharePoint statistics can be displayed using the Performance snap-in (Perfmon.exe) provided by Windows and usually found in Administrative Tools. The performance object is called Microsoft Forefront Server Security.
Reinstalling Forefront Security for SharePoint performance counters
In the event that the Forefront Security for SharePoint performance counters are deleted, they can be reinstalled in two ways:
- By reinstalling Forefront Security for SharePoint.
- By issuing PerfMonitorSetup from a command prompt
To reinstall performance counters from a command prompt
Open a Command Prompt window.
Navigate to the Forefront Security for SharePoint installation folder (default: C:\Program Files(x86)\Microsoft Forefront Security\SharePoint).
Enter the following command: PerfMonitorSetup -install
cursor.skip()¶
Definition¶
cursor.skip(<offset>)¶
Call the cursor.skip() method on a cursor to control where MongoDB begins returning results. This approach may be useful in implementing paginated results.
Note
You must apply cursor.skip() to the cursor before retrieving any documents from the database.
The cursor.skip() method has the following parameter:
Pagination Example¶
Using
cursor.skip()¶
The following JavaScript function uses
cursor.skip() to
paginate a collection in natural order:
The
cursor.skip() method requires the server to scan from the
beginning of the input results set before beginning to return results.
As the offset increases,
cursor.skip() will become slower.
Using Range Queries¶
Range queries can use indexes to avoid scanning
unwanted documents, typically yielding better performance as the offset
grows compared to using
cursor.skip() for pagination.
Descending Order¶
Use this procedure to implement pagination with range queries:
- Choose a field such as _id which generally changes in a consistent direction over time and has a unique index to prevent duplicate values,
- Query for documents whose field is less than the start value using the $lt and cursor.sort() operators, and
- Store the last-seen field value for the next query.
For example, the following function uses the above procedure to print
pages of student names from a collection, sorted approximately in order
of newest documents first using the
_id field (that is, in
descending order):
You may then use the following code to print all student names using this
pagination function, using
MaxKey to start
from the largest possible key:
Note
While ObjectId values should increase over time, they are not necessarily monotonic. This is because they only contain one second of temporal resolution, and they are generated by clients, which may have differing system clocks.
Contents Now Platform Administration Previous Topic Next Topic Scheduled data import scripting options Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Scheduled data import scripting options Multiple JavaScript objects are available in the Scheduled Data Import Pre script and Post script fields. Table 1. Data import scripting options Object Description Example cancel Set this object to true to stop the import action. Use the Pre script field to evaluate the conditions of the import and determine whether to cancel the import process. To cancel the import process, use the following call: cancel = true; import_set Get the GlideRecord object for the new import set. This variable allows you to query the following columns from the sys_import_set table: number sys_id state table_name If you want to use information from the import set, you can specify one of the properties of the import_set variable. var x = import_set.number; data_source GlideRecord of the data source to be used for the scheduled import. Typically, you define the data source with the Scheduled Data Import record. If you want to access this data source or modify the data source in certain conditions, you can use the following. data_source.import_set_table_name = 'new_set_from_scheduler'; data_source.update(); On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/import-sets/reference/r_DataImportScriptingOptions.html | 2019-07-16T04:54:55 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.servicenow.com |
Open the Customizer by clicking on Customize, go to Site Identity, and select the image that Sara has prepared. It must be an icon of 180 or 196px in size.
Constants
Go to Customize > YITH Docs Customizations > YITH Docs Constants:
and enter the plugin slug into the following three fields:
- Slug on yithemes.com: this is for the View Product Page button on the top bar.
- P
- Text domain: this is for the Localization page in the documentation.
How to Run a GemFire Cache Transaction that Coordinates with an External Database
Coordinate a GemFire cache transaction with an external database by using CacheWriter/CacheListener and TransactionWriter/TransactionListener plug-ins, to provide an alternative to using JTA transactions.
There are a few things you should be careful about while working with GemFire cache transactions and external databases. So that the callbacks operate on the same database transaction, look up the connection (from a user managed Map) based on cacheTransactionManager.getTransactionId().
Make sure that the prepare transaction feature is enabled in your external database. It is disabled in PostgreSQL by default. In PostgreSQL, the following property must be modified to enable it:
max_prepared_transactions = 1 # 1 or more enables, zero (default) disables this feature.
Use the following procedure to write a GemFire cache transaction that coordinates with an external database:
- Configure GemFire regions as necessary as described in How to Run a GemFire Cache Transaction.
- Begin the transaction.
- If you have not previously committed a previous transaction in this connection, start a database transaction by issuing a BEGIN statement.
- Perform GemFire cache operations; each cache operation invokes the CacheWriter. Implement the CacheWriter to perform the corresponding external database operations.
Commit the transaction. At this point, the TransactionWriter is invoked. The TransactionWriter returns a TransactionEvent, which contains all the operations in the transaction. Call PREPARE TRANSACTION within your TransactionWriter code.
After a transaction is successfully committed in GemFire, the TransactionListener is invoked. The TransactionListener calls COMMIT PREPARED to commit the database transaction.
Reward Exchange Rates
Reward Exchange Rates determine the number of points that are earned based on the order amount, as well as the value of the points earned. Different exchange rates can be applied to different websites and different customer groups. If multiple exchange rates from different websites and customer groups apply to the same customer, the following rules of priority apply:
Exchange Rate Priority
When converting currency to points, the amount of points cannot be divided. Any currency remainder is rounded down. For example, if $2.00 converts to 10 points, points will be earned in groups of $2.00. Therefore, a $7.00 order would earn 30 points, and the remaining $1.00 would be rounded down. The monetary amount of the order is defined as the amount which the merchant receives, or the grand total minus shipping, tax, discounts, store credit, and gift cards. The points will be earned the moment when there are no non-invoiced items in the order (all items are either paid or canceled). If an Admin user (the Admin is the password-protected back office of your store where orders, catalog, content, and configurations are managed) does not want to allow customers to earn Reward Points for canceled orders, those points can be manually deducted from the Manage Customers page.
To set up exchange rates:
- Points to Currency
- Currency to Points
For either Direction setting, the amount is represented in the base currency of the website.
When converting points to currency, the amount of points cannot be divided. For example, if 10 points converts to $2.00, points must be redeemed in groups of ten. Therefore, 25 points would redeem for $4.00, with 5 points remaining in the customer’s balance.
Input And Output
Goals
Step 1
IRB
puts 'Hello World'
Expected result:
irb :006 > puts 'Hello World'
Hello World
=> nil
irb :007 >
Step 2
Type this in the file input_and_output.rb:
print "All"
print " "
print "in"
print " "
print "a"
print " "
print "row"
Terminal
ruby input_and_output.rb
Step 3
Type this in the file conditionals_test.rb:
print "How many apples do you have? > "
apple_count = gets.to_i
puts "You have #{apple_count} apples."
Terminal
ruby conditionals_test.rb
Challenge(s)
What do you want to say? > sally sells seashells
You said: sally sells seashells
Next Step:
Go on to Numbers And Arithmetic
Django 2.0 release notes¶
December 2, 2017
Welcome to Django 2.0!
These release notes cover the new features, as well as some backwards incompatible changes you’ll want to be aware of when upgrading from Django 1.11 or earlier. We’ve dropped some features that have reached the end of their deprecation cycle, and we’ve begun the deprecation process for some features.
This release starts Django’s use of a loose form of semantic versioning, but there aren’t any major backwards incompatible changes that might be expected of a 2.0 release. Upgrading should be a similar amount of effort as past feature releases.
See the Upgrading Django to a newer version guide if you’re updating an existing project.
Python compatibility¶
Django 2.0 supports Python 3.4, 3.5, 3.6, and 3.7. We highly recommend and only officially support the latest release of each series.
The Django 1.11.x series is the last to support Python 2.7.
Django 2.0 will be the last release series to support Python 3.4. If you plan a deployment of Python 3.4 beyond the end-of-life for Django 2.0 (April 2019), stick with Django 1.11 LTS (supported until April 2020) instead. Note, however, that the end-of-life for Python 3.4 is March 2019.
Third-party library support for older version of Django¶
Following the release of Django 2.0, we suggest that third-party app authors
drop support for all versions of Django prior to 1.11. At that time, you should
be able to run your package’s tests using
python -Wd so that deprecation
warnings do appear. After making the deprecation warning fixes, your app should
be compatible with Django 2.0.
What’s new in Django 2.0¶
Simplified URL routing syntax¶
The new
django.urls.path() function allows a simpler, more readable URL
routing syntax. For example, this example from previous Django releases:
url(r'^articles/(?P<year>[0-9]{4})/$', views.year_archive),
could be written as:
path('articles/<int:year>/', views.year_archive),
The new syntax supports type coercion of URL parameters. In the example, the
view will receive the
year keyword argument as an integer rather than as
a string. Also, the URLs that will match are slightly less constrained in the
rewritten example. For example, the year 10000 will now match since the year
integers aren’t constrained to be exactly four digits long as they are in the
regular expression.
The
django.conf.urls.url() function from previous versions is now available
as
django.urls.re_path(). The old location remains for backwards
compatibility, without an imminent deprecation. The old
django.conf.urls.include() function is now importable from
django.urls
so you can use
from django.urls import include, path, re_path in your
URLconfs.
The URL dispatcher document is rewritten to feature the new syntax and provide more details.
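Putting the pieces above together, a URLconf can mix the old and new styles while migrating to the new syntax. The following is only an illustrative sketch, not taken from the release itself; the views module and the blog app are hypothetical:
from django.urls import include, path, re_path

from . import views

urlpatterns = [
    # <int:year> converts the captured value to an int before calling the view.
    path('articles/<int:year>/', views.year_archive),
    # re_path() keeps the regular-expression behavior where it is still needed.
    re_path(r'^archive/(?P<year>[0-9]{4})/$', views.year_archive),
    # include() is importable from django.urls as well.
    path('blog/', include('blog.urls')),
]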
Mobile-friendly
contrib.admin¶
The admin is now responsive and supports all major mobile devices. Older browsers may experience varying levels of graceful degradation.
Window expressions¶
The new
Window expression allows
adding an
OVER clause to querysets. You can use window functions and aggregate functions in
the expression.
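For instance, a window function can annotate each row with an aggregate computed over a partition. This is only an illustrative sketch; the Employee model and its fields are hypothetical:
from django.db.models import Avg, F, Window

# Attach each department's average salary to every employee row,
# without collapsing the queryset the way aggregate() would.
employees = Employee.objects.annotate(
    department_avg=Window(
        expression=Avg('salary'),
        partition_by=[F('department')],
    ),
)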
Minor features¶
django.contrib.admin¶
- The new ModelAdmin.autocomplete_fields attribute and ModelAdmin.get_autocomplete_fields() method allow using a Select2 search widget for ForeignKey and ManyToManyField.
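A hedged sketch of how the new attribute might be used; the Author and Book models are hypothetical, and note that the related model's admin must define search_fields for the autocomplete lookups to have something to search:
from django.contrib import admin

from .models import Author, Book  # hypothetical models

@admin.register(Author)
class AuthorAdmin(admin.ModelAdmin):
    search_fields = ['name']  # required by the autocomplete lookups

@admin.register(Book)
class BookAdmin(admin.ModelAdmin):
    autocomplete_fields = ['author']  # renders a Select2 search widget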
django.contrib.auth¶
- The default iteration count for the PBKDF2 password hasher is increased from 36,000 to 100,000.
django.contrib.gis¶
- Added MySQL support for the AsGeoJSON function, GeoHash function, IsValid function, isvalid lookup, and distance lookups.
- Added the Azimuth and LineLocatePoint functions, supported on PostGIS and SpatiaLite.
- Any GEOSGeometry imported from GeoJSON now has its SRID set.
- Added the OSMWidget.default_zoom attribute to customize the map's default zoom level.
- Made metadata readable and editable on rasters through the metadata, info, and metadata attributes.
- Allowed passing driver-specific creation options to GDALRaster objects using papsz_options.
- Allowed creating GDALRaster objects in GDAL's internal virtual filesystem. Rasters can now be created from and converted to binary data in-memory.
- The new GDALBand.color_interp() method returns the color interpretation for the band.
django.contrib.postgres¶
- The new distinct argument for ArrayAgg determines if concatenated values will be distinct (see the sketch after this list).
- The new RandomUUID database function returns a version 4 UUID. It requires use of PostgreSQL's pgcrypto extension which can be activated using the new CryptoExtension migration operation.
- django.contrib.postgres.indexes.GinIndex now supports the fastupdate and gin_pending_list_limit parameters.
- The new GistIndex class allows creating GiST indexes in the database. The new BtreeGistExtension migration operation installs the btree_gist extension to add support for operator classes that aren't built-in.
- inspectdb can now introspect JSONField and various RangeFields (django.contrib.postgres must be in INSTALLED_APPS).
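As a sketch of the new distinct argument mentioned in the first item of this list (the Publisher and Book models are hypothetical):
from django.contrib.postgres.aggregates import ArrayAgg

# One array of unique genres per publisher instead of one entry per book.
publishers = Publisher.objects.annotate(
    genres=ArrayAgg('book__genre', distinct=True),
)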
django.contrib.sitemaps¶
- Added the protocol keyword argument to the GenericSitemap constructor.
Cache¶
- cache.set_many() now returns a list of keys that failed to be inserted. For the built-in backends, failed inserts can only happen on memcached.
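A minimal sketch of checking the new return value; the cache keys below are arbitrary examples:
import logging

from django.core.cache import cache

logger = logging.getLogger(__name__)

failed = cache.set_many({'user:1': 'alice', 'user:2': 'bob'})
if failed:
    # For the built-in backends, only memcached ever reports failures here.
    logger.warning('Keys not cached: %s', failed)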
File Storage¶
- File.open() can be used as a context manager, e.g. with file.open() as f:.
Forms¶
- The new date_attrs and time_attrs arguments for SplitDateTimeWidget and SplitHiddenDateTimeWidget allow specifying different HTML attributes for the DateInput and TimeInput (or hidden) subwidgets.
- The new Form.errors.get_json_data() method returns form errors as a dictionary suitable for including in a JSON response.
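A sketch of using the new method in a view that answers with JSON; ContactForm is a hypothetical form class:
from django.http import JsonResponse

from .forms import ContactForm  # hypothetical form

def contact(request):
    form = ContactForm(request.POST)
    if form.is_valid():
        return JsonResponse({'ok': True})
    # get_json_data() returns a dict of error lists keyed by field name.
    return JsonResponse({'ok': False, 'errors': form.errors.get_json_data()}, status=400)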
Generic Views¶
- The new ContextMixin.extra_context attribute allows adding context in View.as_view().
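For example, extra context can now be supplied directly from the URLconf; the template name and context values below are placeholders:
from django.urls import path
from django.views.generic import TemplateView

urlpatterns = [
    path('about/', TemplateView.as_view(
        template_name='about.html',
        extra_context={'page_title': 'About us'},
    )),
]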
Management Commands¶
- inspectdb now translates MySQL's unsigned integer columns to PositiveIntegerField or PositiveSmallIntegerField.
- The new makemessages --add-location option controls the comment format in PO files.
- loaddata can now read from stdin.
- The new diffsettings --output option allows formatting the output in a unified diff format.
- On Oracle, inspectdb can now introspect AutoField if the column is created as an identity column.
- On MySQL, dbshell now supports client-side TLS certificates.
Migrations¶
- The new squashmigrations --squashed-name option allows naming the squashed migration.
Models¶
- The new StrIndex database function finds the starting index of a string inside another string.
- On Oracle, AutoField and BigAutoField are now created as identity columns.
- The new chunk_size parameter of QuerySet.iterator() controls the number of rows fetched by the Python database client when streaming results from the database. For databases that don't support server-side cursors, it controls the number of results Django fetches from the database adapter.
- QuerySet.earliest(), QuerySet.latest(), and Meta.get_latest_by now allow ordering by several fields.
- Added the ExtractQuarter function to extract the quarter from DateField and DateTimeField, and exposed it through the quarter lookup.
- Added the TruncQuarter function to truncate DateField and DateTimeField to the first day of a quarter.
- Added the db_tablespace parameter to class-based indexes.
- If the database supports a native duration field (Oracle and PostgreSQL), Extract now works with DurationField.
- Added the of argument to QuerySet.select_for_update(), supported on PostgreSQL and Oracle, to lock only rows from specific tables rather than all selected tables. It may be helpful particularly when select_for_update() is used in conjunction with select_related().
- The new field_name parameter of QuerySet.in_bulk() allows fetching results based on any unique model field.
- CursorWrapper.callproc() now takes an optional dictionary of keyword parameters, if the backend supports this feature. Of Django's built-in backends, only Oracle supports it.
- The new connection.execute_wrapper() method allows installing wrappers around execution of database queries.
- The new filter argument for built-in aggregates allows adding different conditionals to multiple aggregations over the same fields or relations (see the sketch after this list).
- Added support for expressions in Meta.ordering.
- The new named parameter of QuerySet.values_list() allows fetching results as named tuples.
- The new FilteredRelation class allows adding an ON clause to querysets.
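A sketch of the new filter argument for aggregates noted in this list; the Book model and its fields are hypothetical:
from django.db.models import Avg, Count, Q

stats = Book.objects.aggregate(
    total=Count('id'),
    recent=Count('id', filter=Q(pub_date__year__gte=2017)),
    recent_avg_price=Avg('price', filter=Q(pub_date__year__gte=2017)),
)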
Pagination¶
- Added Paginator.get_page() to provide the documented pattern of handling invalid page numbers.
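A sketch of the pattern that get_page() wraps up; the Article model and template are hypothetical:
from django.core.paginator import Paginator
from django.shortcuts import render

def article_list(request):
    paginator = Paginator(Article.objects.order_by('-pub_date'), 25)
    # get_page() falls back to page 1 for non-integer input and to the last
    # page for out-of-range numbers instead of raising an exception.
    page = paginator.get_page(request.GET.get('page'))
    return render(request, 'articles/list.html', {'page': page})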
Templates¶
- To increase the usefulness of Engine.get_default() in third-party apps, it now returns the first engine if multiple DjangoTemplates engines are configured in TEMPLATES rather than raising ImproperlyConfigured.
- Custom template tags may now accept keyword-only arguments.
Tests¶
- Added threading support to LiveServerTestCase.
- Added settings that allow customizing the test tablespace parameters for Oracle: DATAFILE_SIZE, DATAFILE_TMP_SIZE, DATAFILE_EXTSIZE, and DATAFILE_TMP_EXTSIZE.
Validators¶
- The new ProhibitNullCharactersValidator disallows the null character in the input of the CharField form field and its subclasses. Null character input was observed from vulnerability scanning tools. Most databases silently discard null characters, but psycopg2 2.7+ raises an exception when trying to save a null character to a char/text field with PostgreSQL.
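The validator can also be applied directly; a minimal sketch:
from django.core.exceptions import ValidationError
from django.core.validators import ProhibitNullCharactersValidator

validator = ProhibitNullCharactersValidator()
try:
    validator('bad\x00value')
except ValidationError:
    print('Null characters are rejected')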
Backwards incompatible changes in 2.0¶
Removed support for bytestrings in some places¶
To support native Python 2 strings, older Django versions had to accept both
bytestrings and unicode strings. Now that Python 2 support is dropped,
bytestrings should only be encountered around input/output boundaries (handling
of binary fields or HTTP streams, for example). You might have to update your
code to limit bytestring usage to a minimum, as Django no longer accepts
bytestrings in certain code paths. Python’s
-b option may help detect
that mistake in your code.
For example,
reverse() now uses
str() instead of
force_text() to
coerce the
args and
kwargs it receives, prior to their placement in
the URL. For bytestrings, this creates a string with an undesired
b prefix
as well as additional quotes (
str(b'foo') is
"b'foo'"). To adapt, call
decode() on the bytestring before passing it to
reverse().
Database backend API¶
This section describes changes that may be needed in third-party database backends.
- The DatabaseOperations.datetime_cast_date_sql(), datetime_cast_time_sql(), datetime_trunc_sql(), datetime_extract_sql(), and date_interval_sql() methods now return only the SQL to perform the operation instead of SQL and a list of parameters.
- Third-party database backends should add a DatabaseWrapper.display_name attribute with the name of the database that your backend works with. Django may use it in various messages, such as in system checks.
- The first argument of SchemaEditor._alter_column_type_sql() is now model rather than table.
- The first argument of SchemaEditor._create_index_name() is now table_name rather than model.
- To enable FOR UPDATE OF support, set DatabaseFeatures.has_select_for_update_of = True. If the database requires that the arguments to OF be columns rather than tables, set DatabaseFeatures.select_for_update_of_column = True.
- To enable support for Window expressions, set DatabaseFeatures.supports_over_clause to True. You may need to customize the DatabaseOperations.window_start_rows_start_end() and/or window_start_range_start_end() methods.
- Third-party database backends should add a DatabaseOperations.cast_char_field_without_max_length attribute with the database data type that will be used in the Cast function for a CharField if the max_length argument isn't provided.
- The first argument of DatabaseCreation._clone_test_db() and get_test_db_clone_settings() is now suffix rather than number (in case you want to rename the signatures in your backend for consistency). django.test also now passes those values as strings rather than as integers.
- Third-party database backends should add a DatabaseIntrospection.get_sequences() method based on the stub in BaseDatabaseIntrospection.
Dropped support for Oracle 11.2¶
The end of upstream support for Oracle 11.2 is Dec. 2020. Django 1.11 will be supported until April 2020 which almost reaches this date. Django 2.0 officially supports Oracle 12.1+.
Default MySQL isolation level is read committed¶
MySQL’s default isolation level, repeatable read, may cause data loss in
typical Django usage. To prevent that and for consistency with other databases,
the default isolation level is now read committed. You can use the
DATABASES setting to use a different isolation level, if needed.
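One way to choose a different level, as I understand the MySQL backend options, is sketched below; verify the exact option name against the databases documentation for your version:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'OPTIONS': {
            # Restore MySQL's own default if your code relies on repeatable read.
            'isolation_level': 'repeatable read',
        },
    },
}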
AbstractUser.last_name
max_length increased to 150¶
A migration for
django.contrib.auth.models.User.last_name is included.
If you have a custom user model inheriting from
AbstractUser, you’ll need
to generate and apply a database migration for your user model.
If you want to preserve the 30 character limit for last names, use a custom form:
from django.contrib.auth.forms import UserChangeForm

class MyUserChangeForm(UserChangeForm):
    last_name = forms.CharField(max_length=30, required=False)
If you wish to keep this restriction in the admin when editing users, set
UserAdmin.form to use this form:
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User

class MyUserAdmin(UserAdmin):
    form = MyUserChangeForm

admin.site.unregister(User)
admin.site.register(User, MyUserAdmin)
QuerySet.reverse() and
last() are prohibited after slicing¶
Calling
QuerySet.reverse() or
last() on a sliced queryset leads to
unexpected results due to the slice being applied after reordering. This is
now prohibited, e.g.:
>>> Model.objects.all()[:2].reverse()
Traceback (most recent call last):
...
TypeError: Cannot reverse a query once a slice has been taken.
Form fields no longer accept optional arguments as positional arguments¶
To help prevent runtime errors due to incorrect ordering of form field arguments, optional arguments of built-in form fields are no longer accepted as positional arguments. For example:
forms.IntegerField(25, 10)
raises an exception and should be replaced with:
forms.IntegerField(max_value=25, min_value=10)
call_command() validates the options it receives¶
call_command() now validates that the argument parser of the command being
called defines all of the options passed to
call_command().
For custom management commands that use options not created using
parser.add_argument(), add a
stealth_options attribute on the command:
class MyCommand(BaseCommand):
    stealth_options = ('option_name', ...)
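With that attribute declared, a call like the following sketch passes validation; my_command and option_name are placeholders rather than real names:
from django.core.management import call_command

# Accepted because option_name is listed in the command's stealth_options.
call_command('my_command', option_name='value')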
Indexes no longer accept positional arguments¶
For example:
models.Index(['headline', '-pub_date'], 'index_name')
raises an exception and should be replaced with:
models.Index(fields=['headline', '-pub_date'], name='index_name')
Foreign key constraints are now enabled on SQLite¶
This will appear as a backwards-incompatible change (
IntegrityError:
FOREIGN KEY constraint failed) if attempting to save an existing model
instance that’s violating a foreign key constraint.
Foreign keys are now created with
DEFERRABLE INITIALLY DEFERRED instead of
DEFERRABLE IMMEDIATE. Thus, tables may need to be rebuilt to recreate
foreign keys with the new definition, particularly if you’re using a pattern
like this:
from django.db import transaction

with transaction.atomic():
    Book.objects.create(author_id=1)
    Author.objects.create(id=1)
If you don’t recreate the foreign key as
DEFERRED, the first
create()
would fail now that foreign key constraints are enforced.
Backup your database first! After upgrading to Django 2.0, you can then rebuild tables using a script similar to this:
from django.apps import apps
from django.db import connection

for app in apps.get_app_configs():
    for model in app.get_models(include_auto_created=True):
        if model._meta.managed and not (model._meta.proxy or model._meta.swapped):
            for base in model.__bases__:
                if hasattr(base, '_meta'):
                    base._meta.local_many_to_many = []
            model._meta.local_many_to_many = []
            with connection.schema_editor() as editor:
                editor._remake_table(model)
This script hasn’t received extensive testing and needs adaption for various cases such as multiple databases. Feel free to contribute improvements.
In addition, because of a table alteration limitation of SQLite, it’s prohibited
to perform
RenameModel and
RenameField operations on models or
fields referenced by other models in a transaction. In order to allow migrations
containing these operations to be applied, you must set the
Migration.atomic attribute to
False.
Miscellaneous¶
- The SessionAuthenticationMiddleware class is removed. It provided no functionality since session authentication is unconditionally enabled in Django 1.10.
- The default HTTP error handlers (handler404, etc.) are now callables instead of dotted Python path strings. Django favors callable references since they provide better performance and debugging experience.
- RedirectView no longer silences NoReverseMatch if the pattern_name doesn't exist.
- When USE_L10N is off, FloatField and DecimalField now respect DECIMAL_SEPARATOR and THOUSAND_SEPARATOR during validation. For example, with the settings:
USE_L10N = False
USE_THOUSAND_SEPARATOR = True
DECIMAL_SEPARATOR = ','
THOUSAND_SEPARATOR = '.'
an input of "1.345" is now converted to 1345 instead of 1.345.
- Subclasses of AbstractBaseUser are no longer required to implement get_short_name() and get_full_name(). (The base implementations that raise NotImplementedError are removed.) django.contrib.admin uses these methods if implemented but doesn't require them. Third-party apps that use these methods may want to adopt a similar approach.
- The FIRST_DAY_OF_WEEK and NUMBER_GROUPING format settings are now kept as integers in JavaScript and JSON i18n view outputs.
- assertNumQueries() now ignores connection configuration queries. Previously, if a test opened a new database connection, those queries could be included as part of the assertNumQueries() count.
- The default size of the Oracle test tablespace is increased from 20M to 50M and the default autoextend size is increased from 10M to 25M.
- To improve performance when streaming large result sets from the database, QuerySet.iterator() now fetches 2000 rows at a time instead of 100. The old behavior can be restored using the chunk_size parameter. For example:
Book.objects.iterator(chunk_size=100)
- Providing unknown package names in the packages argument of the JavaScriptCatalog view now raises ValueError instead of passing silently.
- A model instance's primary key now appears in the default Model.__str__() method, e.g. Question object (1).
- makemigrations now detects changes to the model field limit_choices_to option. Add this to your existing migrations or accept an auto-generated migration for fields that use it.
- Performing queries that require automatic spatial transformations now raises NotImplementedError on MySQL instead of silently using non-transformed geometries.
- django.core.exceptions.DjangoRuntimeWarning is removed. It was only used in the cache backend as an intermediate class in CacheKeyWarning's inheritance of RuntimeWarning.
- Renamed BaseExpression._output_field to output_field. You may need to update custom expressions.
- In older versions, forms and formsets combine their Media with widget Media by concatenating the two. The combining now tries to preserve the relative order of elements in each list. MediaOrderConflictWarning is issued if the order can't be preserved.
- django.contrib.gis.gdal.OGRException is removed. It's been an alias for GDALException since Django 1.8.
- Support for GEOS 3.3.x is dropped.
- The way data is selected for GeometryField is changed to improve performance, and in raw SQL queries, those fields must now be wrapped in connection.ops.select. See the Raw queries note in the GIS tutorial for an example.
Features deprecated in 2.0¶
context argument of
Field.from_db_value() and
Expression.convert_value()¶
The
context argument of
Field.from_db_value() and
Expression.convert_value() is unused as it’s always an empty dictionary.
The signature of both methods is now:
(self, value, expression, connection)
instead of:
(self, value, expression, connection, context)
Support for the old signature in custom fields and expressions remains until Django 3.0.
Miscellaneous¶
- The django.db.backends.postgresql_psycopg2 module is deprecated in favor of django.db.backends.postgresql. It's been an alias since Django 1.9. This only affects code that imports from the module directly. The DATABASES setting can still use 'django.db.backends.postgresql_psycopg2', though you can simplify that by using the 'django.db.backends.postgresql' name added in Django 1.9.
- django.shortcuts.render_to_response() is deprecated in favor of django.shortcuts.render(). render() takes the same arguments except that it also requires a request (a short sketch follows this list).
- The DEFAULT_CONTENT_TYPE setting is deprecated. It doesn't interact well with third-party apps and is obsolete since HTML5 has mostly superseded XHTML.
- HttpRequest.xreadlines() is deprecated in favor of iterating over the request (a short sketch follows this list).
- The field_name keyword argument to QuerySet.earliest() and QuerySet.latest() is deprecated in favor of passing the field names as arguments. Write .earliest('pub_date') instead of .earliest(field_name='pub_date').
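The render_to_response() and xreadlines() changes above can be sketched as follows (the view, template, context, and process() helper are placeholders, not part of Django's API):
# render_to_response() is deprecated:
from django.shortcuts import render_to_response

def my_view(request):
    return render_to_response('index.html', {'foo': 'bar'})

# Use render(), which additionally takes the request:
from django.shortcuts import render

def my_view(request):
    return render(request, 'index.html', {'foo': 'bar'})

# HttpRequest.xreadlines() is deprecated:
for line in request.xreadlines():
    process(line)

# Iterate over the request instead:
for line in request:
    process(line)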
Features removed in 2.0¶
These features have reached the end of their deprecation cycle and are removed in Django 2.0.
See Features deprecated in 1.9 for details on these changes, including how to remove usage of these features.
- The weak argument to django.dispatch.signals.Signal.disconnect() is removed.
- django.db.backends.base.BaseDatabaseOperations.check_aggregate_support() is removed.
- The django.forms.extras package is removed.
- The assignment_tag helper is removed.
- The host argument to SimpleTestCase.assertRedirects() is removed. The compatibility layer which allows absolute URLs to be considered equal to relative ones when the path is identical is also removed.
- Field.rel and Field.remote_field.to are removed.
- The on_delete argument for ForeignKey and OneToOneField is now required in models and migrations. Consider squashing migrations so that you have fewer of them to update.
- django.db.models.fields.add_lazy_relation() is removed.
- When time zone support is enabled, database backends that don't support time zones no longer convert aware datetimes to naive values in UTC when such values are passed as parameters to SQL queries executed outside of the ORM, e.g. with cursor.execute().
- django.contrib.auth.tests.utils.skipIfCustomUser() is removed.
- The GeoManager and GeoQuerySet classes are removed.
- The django.contrib.gis.geoip module is removed.
- The supports_recursion check for template loaders is removed from:
django.template.engine.Engine.find_template()
django.template.loader_tags.ExtendsNode.find_template()
django.template.loaders.base.Loader.supports_recursion()
django.template.loaders.cached.Loader.supports_recursion()
- The load_template and load_template_sources template loader methods are removed.
- The template_dirs argument for template loaders is removed:
django.template.loaders.base.Loader.get_template()
django.template.loaders.cached.Loader.cache_key()
django.template.loaders.cached.Loader.get_template()
django.template.loaders.cached.Loader.get_template_sources()
django.template.loaders.filesystem.Loader.get_template_sources()
- django.template.loaders.base.Loader.__call__() is removed.
- Support for custom error views that don't accept an exception parameter is removed.
- The mime_type attribute of django.utils.feedgenerator.Atom1Feed and django.utils.feedgenerator.RssFeed is removed.
- The app_name argument to include() is removed.
- Support for passing a 3-tuple (including admin.site.urls) as the first argument to include() is removed.
- Support for setting a URL instance namespace without an application namespace is removed.
- Field._get_val_from_obj() is removed.
- django.template.loaders.eggs.Loader is removed.
- The current_app parameter to the contrib.auth function-based views is removed.
- The callable_obj keyword argument to SimpleTestCase.assertRaisesMessage() is removed.
- Support for the allow_tags attribute on ModelAdmin methods is removed.
- The enclosure keyword argument to SyndicationFeed.add_item() is removed.
- The django.template.loader.LoaderOrigin and django.template.base.StringOrigin aliases for django.template.base.Origin are removed.
See Features deprecated in 1.10 for details on these changes.
- The makemigrations --exit option is removed.
- Support for direct assignment to a reverse foreign key or many-to-many relation is removed.
- The get_srid() and set_srid() methods of django.contrib.gis.geos.GEOSGeometry are removed.
- The get_x(), set_x(), get_y(), set_y(), get_z(), and set_z() methods of django.contrib.gis.geos.Point are removed.
- The get_coords() and set_coords() methods of django.contrib.gis.geos.Point are removed.
- The cascaded_union property of django.contrib.gis.geos.MultiPolygon is removed.
- django.utils.functional.allow_lazy() is removed.
- The shell --plain option is removed.
- The django.core.urlresolvers module is removed in favor of its new location, django.urls.
- CommaSeparatedIntegerField is removed, except for support in historical migrations.
- The template Context.has_key() method is removed.
- Support for the django.core.files.storage.Storage.accessed_time(), created_time(), and modified_time() methods is removed.
- Support for query lookups using the model name when Meta.default_related_name is set is removed.
- The MySQL __search lookup is removed.
- The shim for supporting custom related manager classes without a _apply_rel_filters() method is removed.
- Using User.is_authenticated() and User.is_anonymous() as methods rather than properties is no longer supported.
- The Model._meta.virtual_fields attribute is removed.
- The keyword arguments virtual_only in Field.contribute_to_class() and virtual in Model._meta.add_field() are removed.
- The javascript_catalog() and json_catalog() views are removed.
- django.contrib.gis.utils.precision_wkt() is removed.
- In multi-table inheritance, implicit promotion of a OneToOneField to a parent_link is removed.
- Support for Widget._format_value() is removed.
- FileField methods get_directory_name() and get_filename() are removed.
- The mark_for_escaping() function and the classes it uses: EscapeData, EscapeBytes, EscapeText, EscapeString, and EscapeUnicode are removed.
- The escape filter now uses django.utils.html.conditional_escape().
- Manager.use_for_related_fields is removed.
- Model Manager inheritance follows MRO inheritance rules. The requirement to use Meta.manager_inheritance_from_future to opt-in to the behavior is removed.
- Support for old-style middleware using settings.MIDDLEWARE_CLASSES is removed.
Developing.:
- Identify auditing needs
- For each policy: policyList
Partially indexed items in Content Search in Office 365
A Content Search that you run from the Security & Compliance Center in Office 365 automatically includes partially indexed items in the estimated search results when you run a search. Partially indexed items are Exchange mailbox items and documents on SharePoint and OneDrive for Business sites that for some reason weren't completely indexed for search. In Exchange, a partially indexed item typically contains a file—of a file type that can't be indexed—that is attached to an email message. Here are some other reasons why items can't be indexed for search and are returned as partially indexed items when you run a search:
The file type is unrecognized or unsupported for indexing.
Messages have an attached file without a valid handler, such as image files; this is the most common cause of partially indexed email items.
The file type is supported for indexing but an indexing error occurred for a specific file.
Too many files attached to an email message.
A file attached to an email message is too large.
A file is encrypted with non-Microsoft technologies.
A file is password-protected.
Note
Most Office 365 organizations have less than 1% of content by volume and less than 12% by size that is partially indexed. The reason for the difference between volume and size is that larger files have a higher probability of containing content that can't be completely indexed.
For legal investigations, your organization may be required to review partially indexed items. You can also specify whether to include partially indexed items when you export search results to a local computer or when you prepare the results for analysis with Office 365 Advanced eDiscovery. For more information, see Investigating partially indexed items in Office 365 eDiscovery.
File types not indexed
Some file types can't be indexed at all, and there are also file types for which full-text indexing has been disabled, either by default or by an administrator. Unsupported and disabled file types are labeled as unindexed items in Content Searches. As previously stated, partially indexed items can be included in the set of search results when you run a search, export the search results to a local computer, or prepare search results for Advanced eDiscovery.
For a list of supported and disabled file formats, see the following topics:
Exchange - File formats indexed by Exchange Search
Exchange - Get-SearchDocumentFormat
SharePoint - Default crawled file name extensions and parsed file types in SharePoint
Messages and documents with partially indexed file types can be returned in search results
Not every email message with an partially indexed file attachment or every partially indexed SharePoint document is automatically returned as an partially indexed item. That's because other message or document properties, such as the Subject property in email messages and the Title or Author properties for documents are indexed and available to be searched. For example, a keyword search for "financial" will return items with an partially indexed file attachment if that keyword appears in the subject of an email message or in the file name or title of a document. However, if the keyword appears only in the body of the file, the message or document would be returned as a partially indexed item.
Similarly, messages with partially indexed file attachments and documents of an partially indexed file type are included in search results when other message or document properties, which are indexed and searchable, meet the search criteria. Message properties that are indexed for search include sent and received dates, sender and recipient, the file name of an attachment, and text in the message body. Document properties indexed for search include created and modified dates. So even though a message attachment may be an partially indexed item, the message will be included in the regular search results if the value of other message or document properties matches the search criteria.
For a list of email and document properties that you can search for by using the Search feature in the Security & Compliance Center, see Keyword queries and search conditions for Content Search.
Partially indexed items included in the search results
Your organization might be required to identify and perform additional analysis on partially indexed items to determine what they are, what they contain, and whether they're relevant to a specific investigation. As previously explained, the partially indexed items in the content locations that are searched are automatically included with the estimated search results. You have the option to include these partially indexed items when you export search results or prepare the search results for Advanced eDiscovery.
Keep the following in mind about partially indexed items:
When you run a content search, the total number and size of partially indexed Exchange items (returned by the search query) are displayed in search statistics in the details pane, and labeled as Indexed items. Note that statistics about partially indexed items displayed in the details pane don't include partially indexed items in SharePoint or OneDrive.
If the search that you're exporting results from was a search of specific content locations or all content locations in your organization, only the unindexed items from content locations that contain items that match the search criteria will be exported. In other words, if no search results are found in a mailbox or site, then any unindexed items in that mailbox or site won't be exported, even if you choose to include unindexed items in the export (by clicking Only items that have an unrecognized format, are encrypted, or weren't indexed for other reasons under Output options).
If you choose to include all mailbox items in the search results, or if a search query doesn't specify any keywords or only specifies a date range, partially indexed items might not be copied to the PST file that contains the partially indexed items. This is because all items, including any partially indexed items, will be automatically included in the regular search results.
Partially indexed items aren't available to be previewed. You have to export the search results to view partially indexed items returned by the search.
Additionally, when you export search results and include partially indexed items in the export, partially indexed items from SharePoint are exported to a folder named Uncrawlable. When you export partially indexed Exchange items, they are exported differently depending on whether or not the partially indexed items matched the search query and the configuration of the export settings.
The following table shows the export behavior of indexed and partially indexed items and whether or not each is included for the different export configuration settings.
Partially indexed items excluded from the search results
If an item is partially indexed but it doesn't meet the search query criteria, it won't be included as a partially indexed item in the search results. In other words, the item is excluded from the search results. For example, let's say you run a search and don't include any keywords or properties because you want to include all content. But you include a date range condition for the query. If a partially indexed item falls outside of that date range, it won't be included as a partially indexed item. Date ranges are an effective way to exclude partially indexed items from your search results.
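For example, a keyword query such as financial AND received>=2018-01-01 AND received<=2018-12-31 (the keyword and dates here are purely illustrative) limits the results to items received within that range, so partially indexed items outside the range are excluded from the search results and from any export.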
Similarly, if you choose to include partially indexed items when you export the results of a search, partially indexed items that were excluded from the search results won't be exported.
One exception to this rule is when you create a query-based hold that's associated with an eDiscovery case. If you create a query-based hold, all partially indexed items are placed on hold. This includes partially indexed items that don't match the search query criteria and partially indexed items that might fall outside of a date range condition. For more information about creating query-based holds, see Step 4 in eDiscovery cases.
Indexing limits for messages in Content Search
The following table describes the indexing limits that might result in an email message being returned as a partially indexed item in a Content Search in Office 365.
For a list of indexing limits for SharePoint documents, see Search limits for SharePoint Online.
More information about partially indexed items
As previously stated, because message and document properties and their metadata are indexed, a keyword search might return results if that keyword appears in the indexed metadata. However, that same keyword search might not return the same item if the keyword only appears in the content of an item with an unsupported file type. In this case, the item would be returned as a partially indexed item.
If a partially indexed item is included in the search results because it met the search query criteria (and wasn't excluded) then it won't be included as a partially indexed item in the estimated search statistics. Also, it won't be included with partially indexed items when you export search results.
Although a file type is supported for indexing and is indexed, there can be indexing or search errors that will cause a file to be returned as a partially indexed item. For example, searching a very large Excel file might be partially successful (because the first 4 MB are indexed), but then fails because the file size limit is exceeded. In this case, it's possible that the same file is returned with the search results and as a partially indexed item.
Attached files encrypted with Microsoft technologies are indexed and can be searched. Files encrypted with non-Microsoft technologies are partially indexed.
Email messages encrypted with S/MIME are partially indexed. This includes encrypted messages with or without file attachments.
Messages protected using Information Rights Management (IRM) are indexed and will be included in the search results if they match the search query.
See also
Investigating partially indexed items in Office 365 eDiscovery
Analyze .NET Framework memory issues
Find memory leaks and inefficient memory use in .NET Framework code by using the Visual Studio managed memory analyzer. The minimum .NET Framework version of the target code is .NET Framework 4.5. The option will only be available from the dump summary page in the Ultimate version of Visual Studio 2013. If you are using Premium or Professional you will not see the option.
The memory analysis tool analyzes information in dump files with heap data, which contain a copy of the objects in an app's memory. You can collect dump (.dmp) files from the Visual Studio IDE or by using other system tools.
You can analyze a single snapshot to understand the relative impact of the object types on memory use, and to find code in your app that uses memory inefficiently.
You can also compare (diff) two snapshots of an app to find areas in your code that cause the memory use to increase over time.
For a walkthrough of the managed memory analyzer, see Using Visual Studio 2013 to Diagnose .NET Memory Issues in Production on the Visual Studio ALM + Team Foundation Server blog .
Contents
Memory use in .NET Framework apps
Identify a memory issue in an app
Collect memory snapshots
Analyze memory use
Memory use in .NET Framework apps
The .NET Framework is a garbage-collected runtime, so that in most apps, memory use is not a problem. But in long-running applications like web services and applications, and in devices that have a limited amount of memory, the accumulation of objects in memory can impact the performance of the app and the device that it runs on. Excessive memory use can starve the application and the machine of resources if the garbage collector is running too often, or if the operating system is forced to move memory between RAM and disk. In the worst case, an app can crash with an "Out of memory" exception.
The .NET managed heap is a region of virtual memory where reference objects created by an app are stored. The lifetime of objects are managed by the garbage collector (GC). The garbage collector uses references to keep track of objects that occupy blocks of memory. A reference is created when an object is created and assigned to a variable. A single object can have multiple references. For example, additional references to an object can be created by adding the object to a class, collection, or other data structure, or by assigning the object to a second variable. A less obvious way of creating a reference is by one object adding a handler to another object's event. In this case, the second object holds the reference to the first object until the handler is explicitly removed or the second object is destroyed.
For each application, the GC maintains a tree of references that tracks the objects referenced by the application. The reference tree has a set of roots, which includes global and static objects, as well as associated thread stacks and dynamically instantiated objects. An object is rooted if the object has at least one parent object that holds a reference to it. The GC can reclaim the memory of an object only when no other object or variable in the application has a reference to it.
Contents
Identify a memory issue in an app
The most visible symptom of memory issues is the performance of your app, especially if the performance degrades over time. Degradation of the performance of other apps while your app is running might also indicate a memory issue. If you suspect a memory issue, use a tool like Task Manager or Windows Performance Monitor to investigate further. For example, look for growth in the total size of memory that you cannot explain as a possible source of memory leaks:
You might also notice memory spikes that are larger than your knowledge of the code would suggest, which might point to inefficient memory use in a procedure:
Collect memory snapshots
The memory analysis tool analyzes information in dump files that contain heap information. You can create dump files in Visual Studio, or you can use a tool like ProcDump from Windows Sysinternals. See What is a dump, and how do I create one? on the Visual Studio Debugger Team blog.
Note
Most tools can collect dump information with or without complete heap memory data. The Visual Studio memory analyzer requires full heap information.
To collect a dump from Visual Studio
You can create a dump file for a process that was started from a Visual Studio project, or you can attach the debugger to a running process. See Attach to Running Processes with the Visual Studio Debugger.
Stop execution. The debugger stops when you choose Break All on the Debug menu, or at an exception or at a breakpoint.
On the Debug menu, choose Save Dump As. In the Save Dump As dialog box, specify a location and make sure that Minidump with Heap (the default) is selected in the Save as type list.
To compare two memory snapshots
To analyze the growth in memory use of an app, collect two dump files from a single instance of the app.
Contents
Analyze memory use
Filter the list of objects | Analyze memory data from a single snapshot | Compare two memory snapshots
To analyze a dump file for memory use issues:
In Visual Studio, choose File, Open and specify the dump file.
On the Minidump File Summary page, choose Debug Managed Memory.
The memory analyzer starts a debug session to analyze the file and displays the results in the Heap View page:
Contents
Filter the list of objects
By default, the memory analyzer filters the list of objects in a memory snapshot to show only the types and instances that are user code, and to show only those types whose total inclusive size exceeds a threshold percentage of the total heap size. You can change these options in the View Settings list:
You can also filter the type list by entering a string in the Search box. The list displays only those types whose names contain the string.
Contents
Analyze memory data from a single snapshot
Visual Studio starts a new debugging session to analyze the file, and displays the memory data in a Heap View window.
Contents
Object Type table
The top table lists the types of objects that are held in memory.
Count shows the number of instances of the type in the snapshot.
Size (Bytes) is the size of all instances of the type, excluding the size of objects it holds references to. The
Inclusive Size (Bytes) includes the sizes of referenced objects.
You can choose the instances icon in the Object Type column to view a list of the instances of the type.
Instance table
Instance is the memory location of the object, which serves as the object's identifier.
Value shows the actual value of value types. You can hover over the name of a reference type to view its data values in a data tip.
Size (Bytes) is the size of the object, excluding the size of objects it holds references to. The
Inclusive Size (Bytes) includes the sizes of referenced objects.
By default, types and instances are sorted by Inclusive Size (Bytes). Choose a column header in the list to change the sort order.
Paths to Root
For a type selected from the Object Type table, the Paths to Root table shows the unique type hierarchies that lead to root objects for all objects of the type, along with the number of references to the type that is above it in the hierarchy.
For an object selected from the instance of a type, Paths to Root shows a graph of the actual objects that hold a reference to the instance. You can hover over the name of the object to view its data values in a data tip.
Referenced Types / Referenced Objects
For a type selected from the Object Type table, the Referenced Types tab shows the size and number of referenced types held by all objects of the selected type.
For a selected instance of a type, Referenced Objects shows the objects that are held by the selected instance. You can hover over the name to view its data values in a data tip.
Circular references
An object can reference a second object that directly or indirectly holds a reference to the first object. When the memory analyzer encounters this situation, it stops expanding the reference path and adds a [Cycle Detected] annotation to the listing of the first object.
Root types
The memory analyzer adds annotations to root objects that describe the kind of reference that is being held:
Compare two memory snapshots
You can compare two dump files of a process to find objects that might be the cause of memory leaks. The interval between the collection of the first (earlier) and second (later) file should be large enough that the growth of the number of leaked objects is easily apparent. To compare the two files:
Open the second dump file, and then choose Debug Managed Memory on the Minidump File Summary page.
On the memory analysis report page, open the Select baseline list, and then choose Browse to specify the first dump file.
The analyzer adds columns to the top pane of the report that display the difference between the Count, Size, and Inclusive Size of the types to those values in the earlier snapshot.
A Reference Count Diff column is also added to the Paths to Root table.
Contents
See Also
Other Resources
VS ALM TFS Blog: Using Visual Studio 2013 to Diagnose .NET Memory Issues in Production
Channel 9 | Visual Studio TV | Managed Memory Analysis
Channel 9 | Visual Studio Toolbox | Managed Memory Analysis in Visual Studio 2013 | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/dn342825%28v%3Dvs.120%29 | 2019-07-16T05:28:36 | CC-MAIN-2019-30 | 1563195524502.23 | [array(['images/dn342825.mngdmem_resourcemanagerconsistentgrowth%28vs.120%29.png',
'Consistent memory growth in Resource Monitor Consistent memory growth in Resource Monitor'],
dtype=object)
array(['images/dn342825.mngdmem_resourcemanagerspikes%28vs.120%29.png',
'Memory spikes in Resource Manager Memory spikes in Resource Manager'],
dtype=object)
array(['images/dn342825.dbg_mma_objecttypelist%28vs.120%29.png',
'The Object Type list The Object Type list'], dtype=object)
array(['images/dn342825.dbg_mma_instancestable%28vs.120%29.png',
'Instances table Instances table'], dtype=object)
array(['images/dn342825.mngdmem_diffcolumns%28vs.120%29.png',
'Diff columns in the type list Diff columns in the type list'],
dtype=object) ] | docs.microsoft.com |
About Digitally Signing RemoteApp Programs
Applies To: Windows Server 2008 R2.
Important
To connect to a RemoteApp program by using a digitally signed .rdp file, the client must be running at least Remote Desktop Client (RDC) 6.1. (The RDC 6.1 client supports Remote Desktop Protocol 6.1.) RD Session Host server or RD.
Membership in the local Administrators group, or equivalent, on the RD Session Host server that you plan to configure, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at.
To configure the digital certificate to use
On the RD Session Host server, open RemoteApp Manager. To open RemoteApp Manager, click Start, point to Administrative Tools, point to Remote Desktop Services, and then click RemoteApp Manager.
In the.
Note
The Select Certificate dialog box is populated by certificates that are located in the local computer's certificates store or in your personal certificate store. The certificate that you want to use must be located in one of these stores.
Using Group Policy settings to control client behavior when opening a digitally signed .rdp file
You can use Group Policy to configure clients to always recognize RemoteApp programs from a particular publisher as trusted. You can also configure whether clients will block RemoteApp programs and (). | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754499(v=ws.11) | 2019-07-16T04:32:36 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.microsoft.com |
Introduction to Administering Authentication and Access Control
Applies To: Windows Server 2008
Users and groups
You can either import the user object class definitions that are provided with AD LDS, or supply your own user object definitions.
Security principals
The term security principal refers to any object that has a security identifier (SID) and that can be assigned permissions to directory objects. In AD LDS, security principals can reside in AD LDS, on a local computer, or in an Active Directory domain.
Note
You can retrieve an active user's individual and group SIDs by explicitly querying the tokenGroups attribute on the rootDSE. The value returned includes the SIDs of all the groups of which the user is a member.
AD LDS security principals
AD LDS does not include any default security principals. However, AD LDS does provide importable schema extensions that you can use to create users in AD LDS. Users created from these user classes can be used as security principals. In addition, you can make any object class in the AD LDS schema a security principal by adding the
msDS-bindableObject auxiliary class to the schema definition of an object class. Each AD LDS security principal must be assigned an account and password, which AD LDS uses for authentication.
AD LDS bind redirection
AD LDS bind redirection is designed for use primarily with legacy applications that cannot authenticate directly against Active Directory Domain Services (AD DS) but still have to use AD LDS as the application data store. Through bind redirection, AD LDS can accept a bind request from an application and redirect this bind request to AD DS, based on the contents of a proxy object. A proxy object in AD LDS represents a local Windows or an Active Directory security principal, and it can be augmented to store additional data related to that security principal that is specific to the application. Through bind redirection, applications can take advantage of the identity store of AD DS without any additional overhead. For example, if an administrator disables an account in AD DS, that account can no longer be used to bind to AD LDS. In addition, applications retain the flexibility of using AD LDS as an application data store.
Proxy objects for bind redirection
Bind redirection relies on the existence of stub, or proxy, objects in AD LDS, each of which represents a Windows security principal. To implement bind redirection, first create a proxy object schema definition to store data that is specific to your application. In this definition, include the
msDS-bindProxy auxiliary class. The
msds-BindProxy class possesses a single "must contain" attribute,
ObjectSid, which holds the security ID (SID) of the associated Windows security principal. For more information, see Bind to an AD LDS Instance Through a Proxy Object.
Processing bind requests
When a user sends a bind request to AD LDS, AD LDS does the following based on the type of request:
For simple bind requests:
If the object class to which the user requests a bind possesses
msDS-bindableObject as a static auxiliary class, AD LDS processes the bind request directly, generating a security context that is based on the AD LDS user's SID.
If, instead, the object class to which the user requests a bind possesses the
msDS-bindProxy as an auxiliary class, AD LDS redirects the bind request to the host operating system. AD LDS then generates a security context that includes the token that is returned by the host operating system for the Windows security principal in response to the redirection, along with the AD LDS groups in which the principal is a member.
For simple authentication and security layer (SASL) bind requests:
- The bind request goes directly to the
AcceptSecurityContext application programming interface (API) call. When possible, design applications to authenticate directly against AD DS by using Windows security principals. This type of design provides the highest degree of security, because passwords flow directly from the client to AD DS, rather than through AD LDS. AD LDS can then be used simply as an application-specific data store. Proxy objects in AD LDS can hold application data that is specific to each Windows security principal, and an attribute on the AD LDS proxy object can be used to uniquely link each proxy object to a particular Windows security principal. A unique identifier on a Windows security principal, such as a security ID (SID) or a globally unique identifier (GUID), can be used for this linking. If your application cannot authenticate against AD DS, and you need to synchronize directory contents between AD LDS and AD DS, you can rely on proxy objects and bind redirection instead. Although an AD LDS proxy object can support a unique distinguished name, when a user attempts to bind to the proxy object, AD LDS redirects the bind request to the host operating system, rather than processing the request locally.
Password security
Because bind redirection can only be used in conjunction with simple bind requests, any password that is sent to AD LDS as part of a bind redirection is sent in a plaintext, unencrypted format. Therefore, it is strongly recommended that you implement bind redirection only when using an encrypted connection, for example, Secure Sockets Layer (SSL), which is the default setting in AD LDS. For more information, see Appendix A: Configuring LDAP over SSL Requirements for AD LDS.
Additional Considerations
Only simple bind requests can be redirected.
Bind redirection supports any security principals that are supported by the host operating system of the machine on which AD LDS is running.
If both
msDS-bindProxy and msDS-bindableObject are specified as auxiliary classes on a single object, msDS-bindableObject takes precedence.
Storage¶
Sometimes you need to store useful information. Such information is stored as data: representation of information (in a digital form when stored on computers). If you store data on a computer it should persist, even if you switch the device off and on again.
Happily MicroPython on the micro:bit allows you to do this with a very simple file system. Because of memory constraints there is approximately 30k of storage available on the file system.
What is a file system?
It’s a means of storing and organising data in a persistent manner - any data stored in a file system should survive restarts of the device. As the name suggests, data stored on a file system is organised into files.
A computer file is a named digital resource that’s stored on a file system.
Such resources contain useful information as data. A file's name usually ends with an extension that indicates what sort of data it contains. For example, .txt indicates a text file, .jpg a JPEG image and .mp3 sound data encoded as MP3.
Some file systems (such as the one found on your laptop or PC) allow you to organise your files into directories: named containers that group related files and sub-directories together. However, the file system provided by MicroPython is a flat file system. A flat file system does not have directories - all your files are just stored in the same place.
The Python programming language contains easy to use and powerful ways in which to work with a computer’s file system. MicroPython on the micro:bit implements a useful subset of these features to make is easy to read and write files on the device, while also providing consistency with other versions of Python.
Warning
Open Sesame¶
Reading and writing a file on the file system is achieved by the
open
function. Once a file is opened you can do stuff with it until you close it
(analogous with the way we use paper files). It is essential you close a file
so MicroPython knows you’ve finished with it.
The best way to make sure of this is to use the
with statement like this:
with open('story.txt') as my_file: content = my_file.read() print(content)
The
with statement uses the
open function to open a file and assign it
to an object. In the example above, the
open function opens the file called
story.txt (obviously a text file containing a story of some sort).
The object that’s used to represent the file in the Python code is called
my_file. Subsequently, in the code block indented underneath the
with
statement, the
my_file object is used to
read() the content of the
file and assign it to the
content object.
Here’s the important point, the next line containing the
with statement is only
the single line that reads the file. Once the code block associated with the
with statement is closed then Python (and MicroPython) will automatically
close the file for you. This is called context handling and the
open
function creates objects that are context handlers for files.
Put simply, the scope of your interaction with a file is defined by the code
block associated with the
with statement that opens the file.
Confused?
Don’t be. I’m simply saying your code should look like this:
with open('some_file') as some_object: # Do stuff with some_object in this block of code # associated with the with statement. # When the block is finished then MicroPython # automatically closes the file for you.
Just like a paper file, a digital file is opened for two reasons: to read its
content (as demonstrated above) or to write something to the file. The default
mode is to read the file. If you want to write to a file you need to tell the
open function in the following way:
with open('hello.txt', 'w') as my_file: my_file.write("Hello, World!")
Notice the
'w' argument is used to set the
my_file object into write
mode. You could also pass an
'r' argument to set the file object to read
mode, but since this is the default, it’s often left off.
Writing data to the file is done with the (you guessed it)
write
method that takes the string you want to write to the file as an argument. In
the example above, I write the text “Hello, World!” to a file called
“hello.txt”.
Simple!
Note
When you open a file and write (perhaps several times while the file is in an open state) you will be writing OVER the content of the file if it already exists.
If you want to append data to a file you should first read it, store the content somewhere, close it, append your data to the content and then open it to write again with the revised content.
While this is the case in MicroPython, “normal” Python can open files to write in “append” mode. That we can’t do this on the micro:bit is a result of the simple implementation of the file system.
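A minimal sketch of that read-then-rewrite workflow (the file name and the appended text are just examples, and the file must already exist for the first read to succeed):
with open('log.txt') as my_file:
    content = my_file.read()

content = content + 'another line\n'

with open('log.txt', 'w') as my_file:
    my_file.write(content)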
OS SOS¶
As well as reading and writing files, Python can manipulate them. You certainly need to know what files are on the file system and sometimes you need to delete them too.
On a regular computer, it is the role of the operating system (like Windows,
OSX or Linux) to manage this on Python’s behalf. Such functionality is made
available in Python via a module called
os. Since MicroPython is the
operating system we’ve decided to keep the appropriate functions in the
os
module for consistency so you’ll know where to find them when you use “regular”
Python on a device like a laptop or Raspberry Pi.
Essentially, you can do three operations related to the file system: list the files, remove a file and ask for the size of a file.
To list the files on your file system use the
listdir function. It
returns a list of strings indicating the file names of the files on the file
system:
import os my_files = os.listdir()
To delete a file use the
remove function. It takes a string representing
the file name of the file you want to delete as an argument, like this:
import os os.remove('filename.txt')
Finally, sometimes it’s useful to know how big a file is before reading from
it. To achieve this use the
size function. Like the
remove function, it
takes a string representing the file name of the file whose size you want to
know. It returns an integer (whole number) telling you the number of bytes the
file takes up:
import os file_size = os.size('a_big_file.txt')
It’s all very well having a file system, but what if we want to put or get files on or off the device?
Just use the
microfs utility!
File Transfer¶
If you have Python installed on the computer you use to program your BBC
micro:bit then you can use a special utility called
microfs (shortened to
ufs when using it in the command line). Full instructions for installing
and using all the features of microfs can be found
in its documentation.
Nevertheless it’s possible to do most of the things you need with just four simple commands:
$ ufs ls story.txt
The
ls sub-command lists the files on the file system (it’s named after
the common Unix command,
ls, that serves the same function).
$ ufs get story.txt
The
get sub-command gets a file from the connected micro:bit and saves it
into your current location on your computer (it’s named after the
get
command that’s part of the common file transfer protocol [FTP] that serves the
same function).
$ ufs rm story.txt
The
rm sub-command removes the named file from the file system on the
connected micro:bit (it’s named after the common Unix command,
rm, that
serves the same function).
$ ufs put story2.txt
Finally, the
put sub-command puts a file from your computer onto the
connected device (it’s named after the
put command that’s part of FTP that
serves the same function).
Mainly main.py¶
The file system also has an interesting property: if you just flashed the
MicroPython runtime onto the device then when it starts it’s simply waiting
for something to do. However, if you copy a special file called
main.py
onto the file system, upon restarting the device, MicroPython will run the
contents of the
main.py file.
Furthermore, if you copy other Python files onto the file system then you can
import them as you would any other Python module. For example, if you had
a
hello.py file that contained the following simple code:
def say_hello(name="World"): return "Hello, {}!".format(name)
...you could import and use the
say_hello function like this:
from microbit import display from hello import say_hello display.scroll(say_hello())
Of course, it results in the text “Hello, World!” scrolling across the
display. The important point is that such an example is split between two
Python modules and the
import statement is used to share code.
Note
If you have flashed a script onto the device in addition to the MicroPython
runtime, then MicroPython will ignore
main.py and run your embedded
script instead.
To flash just the MicroPython runtime, simply make sure the script you
may have written in your editor has zero characters in it. Once flashed
you’ll be able to copy over a
main.py file. | https://microbit-micropython-hu.readthedocs.io/hu/latest/tutorials/storage.html | 2019-07-16T04:30:07 | CC-MAIN-2019-30 | 1563195524502.23 | [array(['../_images/files.jpg', '../_images/files.jpg'], dtype=object)] | microbit-micropython-hu.readthedocs.io |
Backup as a Service¶
Beta Warning
This service is currently in beta and seeks feedback.
Contents
What is Backup as a Service?¶
On APPUiO we provide a managed backup service based on Restic.
Just create a
backup object in the namespace you’d like to backup.
It’s that easy. We take care of the rest: Regularly run the backup job and
monitor if and how it is running.
Getting started¶
Follow these steps to enable backup in your project:
Prepare an S3 endpoint which holds your backup data. We recommend cloudscale.ch object storage, but any other S3 endpoint should work.
Store the endpoint credentials in a secret:
oc -n mynamespace create secret generic backup-credentials \ --from-literal=username=myaccesskey \ --from-literal=password=mysecretaccesskey
Store an encryption password in a secret:
oc -n mynamespace create secret generic backup-repo \ --from-literal=password=mybackupencryptionpassword
Configure the backup by creating a backup object:
oc -n mynamespace apply -f - <<EOF
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint:
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backup:
    schedule: '0 1 * * *'
    keepJobs: 10
    promURL:
  check:
    schedule: '0 0 * * 0'
    promURL:
  prune:
    schedule: '0 4 * * *'
    retention:
      keepLast: 5
      keepDaily: 14
EOF
For figuring out the crontab syntax, we recommend to get help from crontab.guru.
Hints
- You can always check the state and configuration of your backup by using
oc -n mynamespace describe schedule.
- By default all PVCs are stored in backup. By adding the annotation appuio.ch/backup=false to a PVC object it will get excluded from backup.
Application aware backups¶
It’s possible to define annotations on pods with backup commands. These backup commands should create an application aware backup and stream it to stdout. Since the backupcommand isn’t run in a bash there is no env available. Therefore this might have to be specified in the backupcommand using ‘/bin/bash -c’.
Define an annotation on pod:
<SNIP>
template:
  metadata:
    labels:
      app: postgres
    annotations:
      appuio.ch/backupcommand: '/bin/bash -c "pg_dump -U user -p 5432 -d dbname"'
<SNIP>
With this annotation the operator will trigger that command inside the container and capture the stdout to a backup.
Tested with:
- MariaDB
- MongoDB
But it should work with any command that has the ability to output the backup to stdout.
Data restore¶
There are two ways to restore your data once you need it.
Automatic restore¶
This kind of restore is managed via CRDs. These CRDs support two targets for restores:
- S3 as tar.gz
- To a new PVC (mostly untested though → permissions might need some more investigation)
Example of a restore to S3 CRD:
apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-test
spec:
  restoreMethod:
    s3:
      endpoint:
      bucket: restoremini
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backend:
    s3:
      endpoint:
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
    repoPasswordSecretRef:
      name: backup-repo
      key: password
The S3 target is intended as some sort of self service download for a specific backup state. The PVC restore is intended as a form of disaster recovery. Future use could also include automated complete disaster recoveries to other namespaces/clusters as way to verify the backups.
Manual restore¶
Restoring data currently has to be done manually from outside the cluster. You need Restic installed.
Configure Restic to be able to access the S3 backend:
export RESTIC_REPOSITORY=s3:
export RESTIC_PASSWORD=mybackupencryptionpassword
export AWS_ACCESS_KEY_ID=myaccesskey
export AWS_SECRET_ACCESS_KEY=mysecretaccesskey
List snapshots:
restic snapshots
Mount the snapshot:
restic mount ~/mnt
Copy the data to the volume on the cluster, e.g. using the oc client:
oc rsync ~/mnt/hosts/tobru-baas-test/latest/data/pvcname/ podname:/tmp/restore
oc cp ~/mnt/hosts/tobru-baas-test/latest/data/pvcname/mylostfile.txt podname:/tmp
Please refer to the Restic documentation for the various restore possibilities.
How it works¶
- Only supports data from PVCs with access mode ReadWriteMany at the moment
- Backups are not actively monitored / alerted yet | https://appuio-community-documentation.readthedocs.io/en/latest/baas.html | 2019-07-16T04:20:38 | CC-MAIN-2019-30 | 1563195524502.23 | [] | appuio-community-documentation.readthedocs.io |
This page provides information on the simulations that can be set up through the Phoenix FD toolbar.
The Quick Simulation Setup buttons on the Phoenix toolbar allow for quick setup of many commonly used fluid simulations. Pressing any of the buttons will convert the currently selected objects into emitters for the simulation and will additionally create other helper objects and set up V-Ray materials depending on the type of simulation.
The Fire/Smoke Quick Setups create a FireSmokeSim simulator based on the selected object(s), and sometimes also create additional helpers and components.
Sets up a simulation where the selected object(s) emit fuel which ignites and produces smoke on burning. Drag particles are emitted and shaded as sparks using a Particle Shader, a Particle Shader component with its Mode set to Point.
Directly releases temperature and smoke from the selected object(s). Each object would also emit a different smoke color. Note that the discharge is animated in time.
Sets up a simulation where the selected object(s) emit fuel which ignites and explodes with high energy, gradually burning out and producing smoke. Note that the discharge is animated in time.
Sets up a simulation where the selected object(s) emit dense, heavy smoke.
The object(s) emit smoke with sub-zero temperature in Celsius, which floats down and creates a Dry Ice effect.
The selected object(s) emit temperature that drives a fine, thin smoke. Drag particles are emitted and shaded using a Particle Shader, a Particle Shader component with its Mode set to Point. A Discharge Modifier with its Modify Particles by parameter set to Normal Z is used so particles are only emitted upwards. Finally, a PHXTurbulence helper is added to add more interesting smoke behavior.
The object(s) emit temperature and form a smooth flame shaded with a specific fire color gradient.
The volume of the selected object(s) is filled with separate clouds which are animated to gradually move. The preset works by imprinting smoke in the simulator from a noise texture using a Volume Brush mode Source. The Brush Effect is pretty low, so the smoke would gradually appear over time. The noise texture is animated and it also offsets with time, so the cloud shapes would evolve over time and move. There is an additional Turbulence force added to the scene to break the smoke up, and also a Plain Force blowing against the direction of movement of the clouds. In order for the smoke not to fill up the entire simulator over time, Smoke Dissipation is enabled under the Dynamics rollout and it's a tug of war between this option and the Brush Effect of the Source during the simulation.
The Liquid Quick Setups create a LiquidSim simulator based on the selected object(s) in addition to modifiers and shaders as needed.
The object(s) emit water with a specific surface tension.
The object(s) emit milk and the liquid is shaded using the Milk preset of the VRayFastSSS2.
The object(s) emit beer with foam particles with high bubble-to-bubble interaction forces. This is done through a Particle Shader, a Particle Shader component with its Mode set to Bubbles. A Discharge Modifier with its
The object(s) emit coffee with foam particles. The foam is shaded using a Particle Shader, a Particle Shader component with its Mode set to Cellular. A Discharge Modifier with its Modify Particles by parameter set to Normal Z is used so particles are only emitted downwards.
The object(s) emit viscous honey. Note that the discharge is animated in time towards zero to show the specific coiling when the flow thins out.
The object(s) emit viscous chocolate that wets and sticks to any obstacle objects. A Discharge Modifier with its Modify Particles by parameter set to Normal Z is used so particles are only emitted downwards. Note that the discharge is animated in time towards zero.
Emits a short burst of blood from the emitter object(s). Note that the discharge is animated in time towards zero.
Emits a slightly viscous paint with high surface tension from the selected object(s). Each object would emit a different color of paint so that colors can be mixed. The scene uses the PhoenixFDGridTex to transfer the RGB color of the simulation onto the liquid mesh's material for rendering.
Each selected object emits a different color and weight of ink composed of drag particles. The particles are shaded using a Particle Shader, a Particle Shader component with its Mode set to Point. A Discharge Modifier with its Modify Particles by parameter set to Normal Z is used so particles are only emitted downwards. Note that the discharge is animated in time towards zero.
The object(s) emit a large amount of water with splash, foam and mist particles. Three different Particle Shader components are used with their Modes set to Bubbles, Splashes, and Fog respectively. A Discharge Modifier with its Modify Particles by parameter set to Normal Z is used so particles are only emitted downwards.
The object(s) are placed in a fish tank container, surrounded by an infinite animated ocean surface. Foam and splash particles are emitted and are shaded with two different Particle Shaders with their Modes set to Bubbles and Splashes respectively. The objects can be animated to sail onto the ocean, emerge, submerge or splash into the tank.
The following QuickStart guides provide step-by-step tutorials and assets to help you get started with V-Ray for Nuke. In addition, these guides take an in-depth look at using V-Ray geometry formats, V-Ray lights, and advanced rendering techniques.
For advanced guides on production workflows, see the Production Workflows section.
An introductory guide to V-Ray for Nuke. Covers setting up and rendering a simple scene using V-Ray lights, materials, and render elements. Includes how to set up a comparison scene with the Nuke Scanline renderer and the V-Ray renderer.
A detailed tutorial on using and loading V-Ray geometry formats. Also provides a comparison between V-Ray file formats and the ones provided within Nuke.
A guide to using V-Ray materials, textures, and render elements. Covers how to use the VRayMaterialPreview node to develop the look of a shader without setting up an entire scene.
A detailed guide to the various lights available in V-Ray.
A detailed tutorial on rendering that builds on the previous QuickStart guides. Includes working with more Render Elements, the VRayExtraTex node, improving render quality, and rebuilding a comp.
Security.
Additional Resources
For additional technical best practices and developer-centric information, see the following information.
- The Magento Security blog investigates and provides insights to security issues, best practices, and solutions for all of your security questions.
- Try out the free Magento Security Scan Tool! Monitor your sites for security risks, update malware patches, and detect unauthorized access with this tool from Magento Commerce.
- Check all available Developer Tools through the Admin. These features can help test, verify, and prepare your site and Admin for workloads and traffic.
- The Magento Community has limitless best practices, recommendations, and tutorials to help get you started with Magento, maintaining your catalogs, and much more. Check out the best Community Resources.
Acknowledgments
Parts of this article were inspired by real-world solutions that were shared by community members. The resulting article incorporates content from the community, with input from our team.
- (@dracony_gimp) for his security presentation, Being Hacked is Not Fun.
Willem de Groot for providing a sample Nginx configuration.
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.magento.com |
Here you can add new users to any domain handled by vHosts; users are added to the database immediately and are able to log in. NOTE: You cannot bestow admin status on these users in this section.
This enables you to change the password of any user in the database. Although changes will take effect immediately, users currently logged in will not know the password has been changed until they try to log in again.
This removes the user or users (comma separated) from the database. The deleted users will be kicked from the server once submit is clicked.
This section allows admins to get information about a specific user including current connections as well as offline and online messages awaiting delivery.
This will display all registered users for the selected domain up to the number specified. | https://docs.tigase.net/tigase-server/7.1.1/Administration_Guide/webhelp/_users.html | 2019-07-16T04:06:28 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.tigase.net |
Pivotal Greenplum 5.12.0 Release Notes
A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 5.x documentation.
Pivotal Greenplum 5.12.0 Release Notes
Updated: December, 2018
- Welcome to Pivotal Greenplum 5.12.0
- New Features
- Changed Features
- Experimental Features
- Differences Compared to Open Source Greenplum Database
- Supported Platforms
- Pivotal Greenplum Tools and Extensions Compatibility
- Hadoop Distribution Compatibility
- Upgrading to Greenplum Database 5.12.0
- Migrating Data to Pivotal Greenplum 5.x
- Pivotal Greenplum on DCA Systems
- Resolved Issues
- Known Issues and Limitations
- Update for gp_toolkit.gp_bloat_expected_pages Issue
- Update for gp_toolkit.gp_bloat_diag Issue
Welcome to Pivotal Greenplum 5.12.0
Pivotal Greenplum 5.12.0 is a minor release that includes feature enhancements and feature changes, and resolves some issues.
New Features
Greenplum Database 5.12.0 includes the following new features and enhancements:
gpcopy Enhancements
- The utility can migrate data between Greenplum Database installations with differing numbers of segments instances.
In previous releases, the gpcopy utility supported copying only between source and destination Greenplum Database systems with the same number of segments.
- The utility supports the new option --on-segment-threshold. The option specifies the number of rows that determines when gpcopy copies a table using the Greenplum Database source and destination master instead of the source and destination segment instances. For smaller tables, copying tables using the Greenplum Database master is more efficient than using segment instances.
The default value is 10000 rows. If a table contains 10000 rows or less, the table is copied using the Greenplum Database master. The value -1 disables copying tables using the master. All tables are copied using the segment instances.
For more information about the utility, see gpcopy, and for more information about migrating data, see Migrating Data.
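A minimal example of invoking the utility with the new option is shown below. The host names, ports, and user are placeholders; see the gpcopy reference for the complete option syntax.
gpcopy --full --source-host smdw.example.com --source-port 5432 --source-user gpadmin --dest-host dmdw.example.com --dest-port 5432 --dest-user gpadmin --on-segment-threshold 20000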
PXF Enhancements
- PXF now bundles several JAR files, and removes its dependency on a Hadoop client installation. The PXF installation now includes Hadoop and supporting libraries, as well as a PostgreSQL JDBC JAR file.
The PXF package name has changed to org.greenplum.pxf from org.apache.hawq.pxf. The name change may impact Greenplum Database installations with custom PXF profile definitions. Refer to Upgrade PXF for the PXF upgrade procedure.
If you developed a custom PXF connector with a previously released version of the PXF SDK, you must update the name of each imported PXF API class and interface to use the org.greenplum.pxf prefix.
Enhancements to the Pivotal Greenplum-Kafka Integration (Experimental)
The Pivotal Greenplum-Kafka Integration (an experimental feature) now supports:
- New Kafka key formats - The Connector now supports parsing and loading binary and delimited format Kafka message key and value data.
- Custom data formatters - The Connector now supports custom formatters for Kafka message data. You register a custom formatter user-defined function and specify the function name and arguments in the Greenplum-Kafka Integration configuration file.
- Verbose mode - The Connector output includes debug messages when enabled via the --verbose option.
- Version number - When specified via the --version option, the Connector displays its version number to stdout.
- Data load resume mode - The Greenplum-Kafka Integration exposes new load options. When the Kafka message offset recorded by the Connector does not match that of Kafka, you can now specify that the Connector resume the load from the earliest known offset, or resume and load only new messages.
Refer to the gpkafka load reference page for more information.
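For example, a load operation with verbose output enabled looks like the following, where the YAML load configuration file name is a placeholder:
gpkafka load --verbose ./kafka2greenplum.yaml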
Changed Features
Greenplum Database 5.12.0 includes these changed features.
- For clients connecting to Greenplum Database with TCP using SSL, Greenplum Database libpq supports TLS 1.2.
In previous releases, libpq supports TLS 1.0.
- The Pivotal Greenplum PL/R language extension package is updated to 2.3.3. Version 2.3.3 adds conversion functions from the R SEXP datatype to Greenplum Database integer and float datatypes. The functions can significantly increase the performance of data conversion.
For information about Greenplum PL/R package compatibility, See Pivotal Greenplum Tools and Extensions Compatibility.
- The Pivotal Greenplum PL/Container package is updated to 1.4.0. Version 1.4.0 includes these enhancements.
- Adds support for these string quoting functions (a usage sketch follows this list).
- plpy.quote_literal(string) - Returns the string quoted to be used as a string literal in an SQL statement string. Returns null on null input.
- plpy.quote_nullable(string) - Returns the string quoted to be used as a string literal in an SQL statement string. Returns NULL on null input.
- plpy.quote_ident(string) - Returns the string suitably quoted to be used as an identifier in an SQL statement string. Quotes are added only if necessary (for example, if the string contains non-identifier characters or would be case-folded).
- When returning text from a PL/Python function, PL/Container converts a Python unicode object to text in the database encoding. If the conversion cannot be performed, an error is returned.
If an error of level ERROR or FATAL is raised in a nested Python function call, the message includes the list of enclosing functions.
For information about Greenplum PL/Container package compatibility, See Pivotal Greenplum Tools and Extensions Compatibility.
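As a usage sketch of the quoting functions listed above, the following PL/Container function embeds a user-supplied value in a dynamic SQL string. The runtime id plc_python_shared is an assumption; substitute a Python runtime configured for PL/Container on your system.
CREATE OR REPLACE FUNCTION pyquote_demo(val text) RETURNS text AS $$
# container: plc_python_shared
return "SELECT * FROM mytable WHERE name = " + plpy.quote_nullable(val)
$$ LANGUAGE plcontainer;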
- The Pivotal Greenplum-Kafka Integration (Experimental) includes the following changes:
- The Connector no longer displays debug messages to stdout by default. You must specify the --verbose option to retain this behaviour.
- When you specify a KEY or a VALUE block in a gpkafka Version 2 configuration file, you must now also provide the associated COLUMNS block.
- The Connector now uses a new format for naming the internal tables used by the gpkafka utilities.
Differences Compared to Open Source Greenplum Database
The following features are included in Pivotal Greenplum but are not provided in the open source version of Greenplum Database:
- Integration with the Pivotal Greenplum-Kafka Integration (experimental). The Connector provides high speed, parallel data transfer from a Kafka cluster to a Pivotal Greenplum Database cluster for batch and streaming ETL operations. Refer to the Pivotal Greenplum-Kafka Integration (Experimental) documentation for more information about this feature.
- Data Direct ODBC/JDBC Drivers
- gpcopy utility for copying or migrating objects between Greenplum systems.
- The PXF Apache Ignite connector.
Supported Platforms
3The PostGIS extension package version 2.1.5+pivotal.1 is compatible only with Greenplum Database 5.5.0 and later.
4For information about the Python package, including the modules provided, see the Python Data Science Module Package.
5For information about the R package, including the libraries provided, see the R Data Science Library Package.
Upgrading to Greenplum Database 5.12.0
The upgrade path supported for this release is Greenplum Database 5.x to Greenplum Database 5.12.0.
An upgrade from 5.x to 5.12.0 involves upgrading the Greenplum Database software binaries. Run the installer for 5.12.0 on the Greenplum Database master host. When prompted, choose an installation location in the same base directory as your current installation. For example:
/usr/local/greenplum-db-5.12.0
If you install Greenplum Database with the rpm (as root), the installation directory is /usr/local/greenplum-db-5.12.0.
- Edit the environment of the Greenplum Database superuser (gpadmin) and make sure you are sourcing the greenplum_path.sh file for the new installation. If your profile files source a symbolic link (/usr/local/greenplum-db) instead, update the link to point to the newly installed version. For example:
$ rm /usr/local/greenplum-db
$ ln -s /usr/local/greenplum-db-5.12.0 /usr/local/greenplum-db
- Source the environment file you just edited. For example:
$ source ~/.bashrc
- Run the gpseginstall utility to install the 5.12.0 binaries on all the hosts specified in the hostfile file.
Pivotal Greenplum on DCA Systems
On supported Dell EMC DCA systems, you can install Pivotal Greenplum 5.12.0, or you can upgrade from Pivotal Greenplum 5.x to 5.12.0.
Only Pivotal Greenplum Database is supported on DCA systems. Open source versions of Greenplum Database are not supported.
- Installing the Pivotal Greenplum 5.12.0 Software Binaries on DCA Systems
- Upgrading from 5.x to 5.12.0 on DCA Systems
For information about installing Pivotal Greenplum on non-DCA systems, see the Greenplum Database Installation Guide.
Prerequisites
- Ensure your DCA system supports Pivotal Greenplum 5.12.0. See Supported Platforms.
- Ensure Greenplum Database 4.3.x is not installed on your system.
Installing Pivotal Greenplum 5.12.0 on a DCA system with an existing Greenplum Database 4.3.x installation is not supported. For information about uninstalling Greenplum Database software, see your Dell EMC DCA documentation.
Installing Pivotal Greenplum 5.12.0
- Download or copy the Greenplum Database DCA installer file greenplum-db-appliance-5.12.0-RHEL6-x86_64.bin to the Greenplum Database master host.
- As root, run the DCA installer for 5.12.0 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster.
# ./greenplum-db-appliance-5.12.0-RHEL6-x86_64.bin hostfile
Upgrading from 5.x to 5.12.0 on DCA Systems
Upgrading Pivotal Greenplum from 5.x to 5.12.0 on DCA systems involves upgrading the Greenplum Database software binaries. As root, run the DCA installer for 5.12.0 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster. If necessary, copy hostfile to the directory containing the installer before running the installer.
This example command runs the installer for Greenplum Database 5.12.0 for Red Hat Enterprise Linux 6.x.
# ./greenplum-db-appliance-5.12.0-RHEL6-x86_64.bin hostfile
Resolved Issues
The listed issues are resolved in Pivotal Greenplum Database 5.12.0.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Pivotal Greenplum page on Pivotal Network or on the Pivotal Greenplum Database documentation site at Release Notes.
- 29612 - PXF
- PXF failed to read from a Hive table secured with Kerberos due to using an incorrect username.
- This issue has been resolved. PXF now uses the correct username to access a Hive system secured with Kerberos.
- 29601 - gprecoverseg
- In some cases, the gprecoverseg utility generated a Greenplum Database PANIC during a segment instance recovery operation. The PANIC occurred when the utility generated a SIGSEGV when it did not manage shared memory correctly while a segment instance was being stopped.
- This issue has been resolved. The utility has improved the shared memory error handling in the specified situation.
- 29565 - PXF
- PXF did not correctly read from Hive tables that included one or more header rows.
- This issue has been resolved. PXF now skips the appropriate number of header rows when reading from a Hive table.
- 29550 - DML
- In some cases, Greenplum Database did not properly insert data into an append-optimized table when a VACUUM operation ran concurrently with the INSERT operation. Greenplum Database did not correctly manage table files on disk that were being vacuumed. This issue had the potential to cause data loss on the impacted data blocks.
- This issue has been resolved. Greenplum Database has improved how append-optimized table disk files are managed in the specified situation.
- 29544 - gpbackup/ gprestore
- When performing some backup operations, the gpbackup utility failed and returned the error Dependency resolution failed. The error occurred when the utility did not handle dependencies for some database objects correctly.
- This issue has been resolved. The utility has improved the handling of dependent database objects.
- 29526 - Query Planner/ Query Optimizer
- For some queries that contain a nested JOIN and the inner JOIN uses a SELECT statement, Greenplum Database returned an error. The error was returned when Greenplum Database did not correctly manage the processing of the inner and outer joins.
- This issue has been resolved. Greenplum Database has improved the management of joins for the specified type of queries.
- 29493 - Query Execution
- In some cases when executing a query, Greenplum Database returned an invalid memory alloc request error. The error was generated by Greenplum Database while monitoring the memory used during query execution when the query executed a large number of functions. The number of functions caused Greenplum Database to consume a large amount of memory during query execution.
- This issue has been resolved. Greenplum Database has enhanced how it monitors query execution memory to consume less memory.
- 29474 - PXF
- PXF was not able to read external data from Azure Data Lake.
- This issue has been resolved. PXF has been updated to allow users to access Azure Data Lake with the appropriate libraries and configuration.
- 29424 - Dispatch
- Greenplum Database performance was poor for some DDL commands such as creating a partitioned table with a large number of partitions. When executing the DDL command, Greenplum Database transfers a large amount of data to segment instances. The amount of data being transferred caused performance issues.
- This issue has been resolved. Greenplum Database has reduced the amount of data being transferred in the specified situation.
- 29403 - gprecoverseg
- The gprecoverseg -r <host-name> command might fail if a host system IP address is changed and Greenplum Database system is not restarted. The command completes after the system is restarted.
- This issue has been resolved. The Greenplum Database handling of DNS changes has been improved and the gprecoverseg command completes in the specified situation without a system restart.
- 29394 - Query Planner
- For some queries, the Greenplum Database legacy planner did not generate some comparisons correctly when the comparisons were in predicates that were inferred from query predicates. The generated comparisons did not use comparison operators from the correct operator family. In some cases, this caused the query to return incorrect results.
- This issue has been resolved. The legacy planner has improved how comparisons are generated in the specified situation.
- 160999837 - gpbackup/ gprestore
- The gpbackup utility incorrectly backed up the table definition of external web tables when performing a backup operation. This caused gprestore to fail when attempting to restore the table definition.
- This issue has been resolved. Now gpbackup correctly backs up the table definition of external web tables.
- 160941596 - gpbackup/ gprestore
- When the gprestore utility performed a restore operation, datatypes that are defined with the storage attribute set to external were incorrectly restored with the storage attribute set to extended.
- This issue has been resolved. Now the utility correctly restores datatypes that are defined with the storage attribute set to external.
- 160932887 - ANALYZE
- For append-optimized partitioned tables, performing an ANALYZE operation on a leaf partition of the partitioned table did not update the root partition statistics. Statistics for empty leaf partitions were not set correctly.
- This issue has been resolved. Now the statistics for empty leaf partitions of append-optimized partitioned tables are set correctly.
- 160864241 - gpbackup/ gprestore
- The gprestore utility failed if the restore operation restored to a database whose name contains upper and lower case characters or special characters.
- This issue has been resolved. Now the utility can perform a restore operation to a database with a valid name that contains upper and lower case characters and special characters.
- 160195564 - Metrics collector extension
- If the Command Center agent is restarted while a query is executing in a Greenplum Database session, the metrics collector did not send the database name and role name for the next query executed in that session.
- This issue is resolved.
- 159644436 - Metrics collector extension
- When the Greenplum Command Center agents stopped, the metrics collector extension continued to send metrics.
- This issue is resolved. The metrics collector extension now stays in the disabled state when the Command Center agents are not running.
- 154886918 - Query Optimizer
- When generating a query plan that performed a nested loop join, GPORCA always performed a broadcast motion on the inner side of the join. When the tables being joined are co-located, the broadcast motion is unnecessary and created a suboptimal query plan.
- This issue has been resolved. GPORCA has been improved to avoid unnecessary broadcast motions in the specified situation.
Setup
Maven is a tool used to automate and simplify the development cycle of any Java-based project. The XAP plugin for Maven utilizes Maven to simplify the development cycle of XAP-based applications. You can use this plugin to easily create, compile, package, run unit tests, execute and deploy Processing Units.
You don’t need to be an experienced Maven user to start working with this plugin. This section provides you with everything you need to know in order to start developing Processing Units with the Maven plugin. Experienced Maven users can use the Maven plugin to embed Processing Unit development with their existing development environment.
The XAP Maven plugin has been tested with Maven 3.0. For further information about maven see: apache.org; What is Maven?
Prior to Installation
In order to use the XAP Maven plugin, Maven needs to be installed on the machine. If a Maven installation already exists on the machine, it can be used. If not, XAP comes with a ready-to-use distribution of Maven 3.0, located under:
<XAP HOME>/tools/maven/apache-maven-3.2.5.
All you need to do is add the Maven
bin directory to the system
PATH variable, and you are ready to go. To test whether the Maven command is accessible, open a command line window, type
mvn -version, and press Enter.
The following message should be displayed:
>mvn -version Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 15:51:28+0200) Java version: 1.8.0_66, vendor: Oracle Corporation Java home: /usr/lib/jvm/java-8-oracle/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.13.0-76-generic", arch: "amd64", family: "unix"
First uses of Maven require an internet connection so that the local repository can be populated with the required libraries. Once all required libraries are in the local repository, an internet connection is no longer mandatory.
Maven uses repositories: a local repository where all required dependencies (artifacts) are stored for Maven’s use, and remote repositories from which Maven downloads required dependencies that are missing in the local repository. If a dependency is missing from the local repository during execution, Maven automatically downloads the missing dependencies from the remote repositories. The download might take a few minutes (progress messages are printed to the console). When the download is finished, Maven returns to its original tasks.
Installation
To install the XAP Maven plugin:
Run the
installmavenrep script from the
<XAP Home>\tools\maven directory:
D:<XAP Home>\tools\maven>installmavenrep.bat
This installs the XAP libraries and the XAP Maven plugin into the local Maven repository. Once the installation is finished, the Maven plugin is ready to be used.
<maven-repository-dir>\org\apache\maven\plugins\xap-maven-plugin
Location of Libraries and Local Repository
Library Location:
- XAP libraries are installed under:
<maven-repository-dir>/com/gigaspaces
Dependencies
<dependency> <artifactId>gs-runtime</artifactId> <groupId>com.gigaspaces</groupId> <version>11.0.0-14800-RELEASE</version> </dependency> <dependency> <artifactId>gs-openspaces</artifactId> <groupId>com.gigaspaces</groupId> <version>11.0.0-14800-RELEASE</version> </dependency>
Local Repository Location
By default, Maven creates the local repository under the your home directory:
<USER_HOME>\.m2\repository. For example, on Windows XP, the local repository is created in
C:\Documents and Settings\<username>\.m2\repository. However, the location of the repository can be changed by editing the
settings.xml file under
<Maven Root>\conf.
Public Repository Location
You can install the XAP artifacts using a public repository:
<repository> <id>org.openspaces</id> <url></url> </repository>
Using Available Project Templates
You may view list of available project templates and their description using the following command:
mvn xap:create
The result is a list of available template names and descriptions.
Use the
-Dtemplate=<template> argument to specify a project template. Example:
mvn xap:create -Dtemplate=persistent-event-processing
Creating Processing Unit Project
The XAP Maven plugin can create Processing Unit projects. It generates the resources and the appropriate directory structure, making it easy to immediately start working on the Processing Units. Projects can be created in any directory. Before creating the project change to the directory where the project should be created. To create a Processing Unit project, use the following command-line:
mvn xap:create -DgroupId=<group-id> -DartifactId=<artifact-id> -Dtemplate=<project-template>
The project is generated in the current directory (
my-app directory).
Executing
xap:create without specifying a template shows a list of available templates and their description.
To start working with the project (compiling, packaging etc…) you should change directory to the directory of the project.
Processing Unit Project Structure
Basically, a Processing Unit project structure is what Maven users call a multi-module project. It consists of a main (top-level) project that contains sub-projects called modules. A Processing Unit is implemented as a module of the main project, thus a main project might consist of many Processing Units.
The project, created by the
event-processing template, consists of a main project and three modules (sub-projects):
- feeder – a Processing Unit that writes data into the space.
- processor – a Processing Unit that takes data from the space, processes it and writes the results back to the space.
- common – a module that contains resources shared by both the feeder and the processor.
When the project is packaged, the common module's JAR is placed in the lib directory of the feeder's and processor's distributables.
The main project and each of the modules contain a project-descriptor file called
pom.xml; which contains information about the project’s properties, dependencies, build configuration, and so on. A module is considered a Processing Unit module if its
pom.xml file contains the property
<gsType>PU</gsType>. In this case, only the feeder and the processor are considered Processing Unit modules.
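As a minimal sketch (the rest of the pom.xml content is omitted here), the property is declared in the module's properties section:
<project>
...
<properties>
<gsType>PU</gsType>
</properties>
...
</project>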
Compiling the Processing Unit Project
In order to compile the Processing Unit project, use the following command line from the main project’s directory.
mvn compile
This compiles each module and puts the output files under the modules’ target directory.
Running Processing Unit Modules
Sometimes, during development, the developer might want to run the Processing Unit module to check its functionality. The XAP Maven plugin allows you to run Processing Unit modules without the need to package them as Processing Unit distributables first. This feature saves time by skipping build phases that are not required for this task.
Make sure you are in the directory of the project.
To run Processing Unit modules, use the following command-line (found in the
artifactId folder):
mvn xap:run -Dcluster=<"cluster-properties"> -Dgroups=<groups> -Dlocators=<locators> -Dproperties=<"context-level-properties-location"> -Dmodule=<module-name>
Example:
mvn compile xap:run -Dcluster="schema=partitioned total_members=1,1 id=1" -Dproperties="embed://prop1=value1" -Dmodule=feeder
Determining Module Execution
- If the current directory is the base directory of a module, only this module is executed.
- If the current directory is the main project directory and the module argument is not set, all modules are executed one by one.
- If the current directory is the main project directory and the module argument is set, only the specified module is executed.
Overriding Space/Cluster Configuration
If you need to override the configuration of the space or cluster when running the processing units through the XAP plugin and you want to do it by replacing the original configuration files, you can do it by placing the required file in the project’s root directory.
Examples: To change the logging configuration place the new _gslogging.properties file in the config directory (you may need to create this directory) under the project’s root directory.
To change the security permissions place the new policy.all file in the policy directory (you may need to create this directory) under the project’s root directory.
Those changes apply only when deploying the processing units.
Packaging Processing Units
In order to deploy Processing Units, you need to package them in a distributable form. The XAP Maven plugin allows you to package two types of distributable supported by XAP: a single JAR archive and an open directory structure.
Make sure you are in the directory of the project. To package the Processing Units, use the following command-line from the main project directory:
mvn package
The Processing Units’ distributable bundles are generated for each module, under the directory
target. For example, the distributables of a module named
feeder are generated under
<proj-dir>\feeder\target.
The single JAR distributable is
feeder.jar; the open directory structure distributable is created under the directory
feeder.
Suppressing Unit Test Execution While Packaging
If not specified explicitly, unit tests are executed when packaging the Processing Units.
To suppress the execution of unit tests, add one of the following arguments to the command line:
skipTests or
maven.test.skip:
For example:
>mvn package -DskipTests .. or .. >mvn package -Dmaven.test.skip
Running Processing Units
After packaging the Processing Units, you might want to test the validity of the assemblies. The XAP Maven plugin makes it possible to run the Processing Units as standalone modules. The Maven plugin includes all the assembly dependencies in the execution classpath, making sure that the Processing Unit finds all the required resources. Managing to run the Processing Unit as a module while failing to run it as a standalone module might imply that a problem exists with the assembly definitions.
Make sure you are in the directory of the project. To run Processing Units as standalone modules, use the following command-line:
mvn xap:run-standalone -Dcluster=<"cluster-properties"> -Dgroups=<groups> -Dlocators=<locators> -Dproperties=<"context-level-properties-location"> -Dmodule=<module-name>
Example:
mvn xap:run-standalone -Dcluster="schema=partitioned total_members=1,1 id=1" -Dproperties="embed://prop1=value1" -Dmodule=feeder
Determining Processing Unit Execution
- If the current directory is a Processing Unit module’s base directory, only this Processing Unit is executed.
- If the current directory is the main project directory and the pu-name argument is not set, all Processing Units are executed one by one.
- If the current directory is the main project directory and the pu-name argument is set, only the specified Processing Unit is executed.
Overriding Space/Cluster Configuration
Overriding the space and cluster configuration is explained in Running Processing Unit Modules.
Deploying Processing Units
Processing Units usually run in the Service Grid. In order to deploy a Processing Unit, you first need to package it (see Packaging Processing Units).
XAP supports two forms of Processing Unit distributable: A single JAR archive and an open directory structure. The XAP Maven plugin allows you to deploy Processing Units simply – packaged as JAR archives – into the Service Grid.
When deploying Processing Units, make sure that the Grid Service Manager (GSM) and the Grid Service Container (GSC) are running.
Make sure you are in the directory of the project. Once your Processing Units are packaged, use the following command-line to deploy them to the Service Grid:
mvn xap:deploy -Dsla=<sla> -Dcluster=<cluster> -Dgroups=<groups> -Dlocators=<locators> -Dtimeout=<timeout> -Dproperties=<"prop1=val1 prop2=val2..."> -Doverride-name=<override-name> -Dmax-instances-per-vm=<max-instances-per-vm> -Dmax-instances-per-machine=<max-instances-per-machine> -Dmodule=<module-name>
If the current directory is a Processing Unit module’s base directory, only this processing unit is deployed.
If the current directory is the main project directory and the
module argument is not set, Maven deploys the Processing Unit in the order described below.
If the current directory is the main project directory and the
module argument is set, only the specified Processing Unit is deployed.
Undeploying Processing Units
The XAP Maven plugin makes it simple to undeploy Processing Units from the Service Grid. Make sure you are in the directory of the project. To undeploy a Processing Unit from the Service Grid, use the following command-line:
mvn xap:undeploy -Dgroups=<groups> -Dlocators=<locators> -Dtimeout=<timeout> -Dmodule=<module-name>
- If the current directory is a Processing Unit module’s base directory, only this Processing Unit is undeployed.
- If the current directory is the main project directory and the module argument is not set, Maven undeploys the Processing Unit in the order described below.
- If the current directory is the main project directory and the module argument is set, only the specified Processing Unit is undeployed.
Controlling Order of Deployment/Undeployment
Deployment
A Processing Unit might have a dependency on another Processing Unit (this dependency is defined in the Processing Unit
pom.xml file). It is important to deploy these Processing Units in the right order to prevent errors.
- The independent Processing Unit should be deployed first, and the dependent Processing Unit should be deployed second.
- The Maven plugin identifies these dependencies and deploys the Processing Units in the right order.
- If there is no dependency between the Processing Units, they are deployed in the same order in which the modules are declared in the main project
pom.xml file.
Undeployment
Undeployment of Processing Units takes place in a reverse order: the dependent Processing Unit is undeployed first and the independent second.
Adding Dependencies to Modules
A dependency is a library (usually a JAR archive containing class libraries) required by the Processing Unit for compilation, execution, etc.
For example, if the Processing Unit’s code uses a class from an external archive, this archive needs to be added as a dependency of the Processing Unit.
Adding dependencies is done in the typical Maven way, which is editing the module's
pom.xml file.
For example, to add
commons-logging version 1.1.1 as a dependency to the processor Processing Unit, add the following XML snippet to the
<dependencies> section of the
pom.xml file:
<project> ... <dependencies> ... <!--The added snippet--> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1.1</version> <scope>compile</scope> </dependency> ... </dependencies> ... </project>
Private Dependencies
Private dependencies are Processing Unit dependencies that are not shared with other Processing Units. Processing Unit distributions hold private dependencies in the
lib directory. To add private dependency, add it to the Processing Unit module
pom.xml file. For example, to add the
commons-logging version 1.1.1 as a private dependency of the processor Processing Unit, add the XML snippet above to the processor module’s
pom.xml file. When the Processing Unit is packaged, the
commons-logging archive is located under the
lib directory of the processor distributable.
Shared Dependencies
Shared dependencies are Processing Unit dependencies that are shared with other Processing Units. To add shared dependencies, add the dependencies to the common module
pom.xml file. For example, to add the
commons-logging version 1.1.1 as a shared dependency of the processor and the feeder Processing Units, add the XML snippet above to the common module’s
pom.xml file. When the Processing Units are packaged, the
commons-logging archive is located under the
lib directory of the processor and the feeder distributables.
Importing Processing Unit Projects to Eclipse IDE
It is possible to import a Processing Unit project into the Eclipse environment. Imported projects have built-in launch targets, allowing you to run the processor and the feeder using Eclipse run (or debug) targets.
1. Generate Eclipse Project
Execute the following command from the project root directory:
mvn eclipse:eclipse
This generates a
.project file under each module’s base directory.
2. Import Generated Projects to Eclipse Environment
- Select File > Import > General > Existing Projects into Workspace.
- In the Import dialog, keep the Select root directory option selected, and click Browse.
- Select the base directory of the project you want to import and click Finish.
This imports the three modules to Eclipse, each as a separate project.
3. Define M2_REPO Variable
Imported projects use a variable called
M2_REPO to point to the location of Maven’s local repository. If this is a fresh Eclipse installation, the
M2_REPO variable needs to be defined:
- Select Window > Preferences.
- In the Preferences dialog, select Java > Build Path > Classpath Variables, and click New.
- In the New Variable Entry dialog, type
M2_REPO in the Name field.
- Press Folder and select the directory of Maven’s local repository.
- Click OK to close all dialogs.
4. Convert Generated Projects To Maven Projects
Do the following for each project:
- Right click on the project.
- Select Configure > Convert to Maven Project.
Importing Processing Unit Projects to IntelliJ IDE
It is possible to import a Processing Unit project into the IntelliJ environment. Imported projects have built-in launch targets, allowing you to run the processor and the feeder using IntelliJ run (or debug) targets.
1. Import Generated Projects to IntelliJ Environment
- Select File > New > Project from Existing Sources….
- Browse to the folder where you created the project and from the root directory choose the file
pom.xml.
- Don’t change the default settings of this page and click Next.
- Enable the IDE profile and disable the Default profile then click Next.
- Click Next.
- Select project SDK and click Next.
- Enter Project name and location then click Finish.
This imports the modules to IntelliJ.
2. Running the example
- Execute the following command from the project root directory:
mvn xap:intellij
- If the example uses persistency, first run the Mirror processing unit from IntelliJ: select Run > Edit Configurations…, under Application click Mirror, press OK, then click Run > Run Mirror.
- Run the Processor and then the Feeder processing units (whether or not you are running with persistency): select Run > Edit Configurations…, under Application click Processor, press OK, then click Run > Run Processor.
- Select Run > Edit Configurations…, under Application click Feeder, press OK, then click Run > Run Feeder.
Viewing Persistent Data
When running a Processing Unit that uses persistency, e.g when using the persistent-event-processing template, one would like to view the persisted data. XAP Maven Plugin makes it easy to start the HSQLDB viewer to immediately view persisted data.
The HSQLDB viewer is for monitoring HSQLDB databases only.
To start the HSQLDB viewer use the following command-line:
mvn xap:hsql-ui -Ddriver=<driver-class> -Durl=<url> -Duser=<user> -Dpassword=<password> -Dhelp | https://docs.gigaspaces.com/xap/11.0/dev-java/installation-maven.html | 2019-07-16T04:28:13 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.gigaspaces.com |
A readContentBlob operation will return information about the ContentBlob identified, provided the authenticated user has access to it.
The readContentBlob operation may return
- a JSON object representing the ContentBlob.
- a csv file, if text/csv is specified in the Accept header and if the content blob can be converted into a csv or is already a csv | https://docs.seek4science.org/tech/api/descriptions/readContentBlob.html | 2019-07-16T04:46:03 | CC-MAIN-2019-30 | 1563195524502.23 | [] | docs.seek4science.org |
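For illustration, the operation can be called with curl as sketched below; the host and IDs are placeholders, and the route shown assumes the content blob is addressed under its owning asset (here a DataFile):
curl -H "Accept: application/json" https://seek.example.org/data_files/1/content_blobs/2
curl -H "Accept: text/csv" https://seek.example.org/data_files/1/content_blobs/2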
Agent auto assignment using time-based criteria
Time-based methods, such as schedules and priority assignment, help you auto assign agents based on configuration settings and optional properties. The calculated ratings are used to determine the best agent to perform the task. Any combination of time-based methods can be enabled in the application configuration screen. When a task is created, the schedule of the agent and the task to be performed are combined with rating-based criteria to auto-assign an agent.
Agent auto assignment using schedules: Agents can be auto assigned based on the agent or the task schedule.
Agent auto assignment using priority assignment: The priority assignment feature enables you to configure auto assignment so that agents can be assigned to perform tasks or provide services on a continual, 24x7x365 basis. Priority assignment is triggered when the priority of a task matches the priority set in the application configuration page.
Activate multifactor authenticator
Administrators can activate the Integration - Multifactor Authentication plugin, which is not active by default.
Before you begin: Role required: admin
datatypesbnf.conf
The following are the spec and example files for datatypesbnf.conf.
datatypesbnf.conf.spec
# Version 6.2.2 # #
This documentation applies to the following versions of Splunk® Enterprise: 6.2.2
Alternate Media Storage
Magento Open Source gives you the option to store media files in a database on a database server, or on a Content Delivery Network (CDN), as opposed to storing them on the file system of the web server. The advantage of using alternate storage is that it minimizes the effort required to synchronize media when multiple instances of the system are deployed on different servers that need access to the same images, CSS files, and other media files.
This version of the manual was generated from commit “16e6bbf4534bda01185c4dc08ab79abd5879f027”.
validictory 1.0.1
Overview
validictory is a general purpose Python data validator that allows validation of arbitrary Python data structures.
Schema format is based on the JSON Schema proposal, so combined with
json the
library is also useful as a validator for JSON data.
Contains code derived from jsonschema by Ian Lewis and Yusuke Muraoka.
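A minimal usage sketch (the data and schema here are illustrative):
import validictory

data = {"name": "example", "count": 3}
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "count": {"type": "integer"},
    },
}

# Raises an exception describing the failure if data does not match schema
validictory.validate(data, schema)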
Obtaining validictory
Source is available from GitHub.
The latest release is always available on PyPI and can be installed via pip.
Documentation lives at ReadTheDocs. | http://validictory.readthedocs.io/en/latest/ | 2017-04-23T05:21:40 | CC-MAIN-2017-17 | 1492917118477.15 | [] | validictory.readthedocs.io |
May 25, 2016
This document is intended for IT architects who want upgrade from XenDesktop 7.1 to XenDesktop 7.5 and add an on-premises enterprise cloud to the data center using Citrix CloudPlatform.
This document discusses the environment created in the Citrix Solutions Lab to successfully deploy and support 5000 virtual desktop users in a multibranch environment.
The objective of this document is to describe how to build a modular environment to deliver desktops and applications to local, remote, and mobile users, supporting both XenDesktop and XenMobile. The design of this environment focused not on maximizing the number of users or on maximizing performance, but rather on assessing the number of users that could be supported on a pre-defined set of hardware while still maintaining a positive user experience and providing mobile support to XenMobile users.
This guide will walk you through an example of how to use Citrix Cloud and local Virtual Desktop Agents to create an on-premises XenDesktop deployment while leveraging the broker in the cloud.
This document is intended to aid IT architects and administrators who have an existing XenDesktop deployment and are looking to add other key components of Citrix Workspace Suite. It includes an overview of the architecture and introductory implementation guidance. | http://docs.citrix.com/en-us/categories/solution_content/reference-architectures.html | 2017-04-23T05:26:20 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.citrix.com |
Solace Node.js API
The Solace Node.js messaging API allows applications to send and receive direct messages with a Solace message router.
Message Exchange Patterns
The Node.js API supports all common message exchange patterns.
- Publish / Subscribe
- Point to Point
- Request / Reply
Features
The Node.js API supports:
- Connection management to Solace message routers
- Addition and removal of topic subscriptions
- Sending and receiving Direct messages
- Structured data types that allow interoperability between various architectures and programming languages
- Request/reply messaging support
The Node.js API does not support:
- Sending and receiving Guaranteed messages
- Session Transactions and XA Transactions
- SolCache Client API support
- Queue browsing
- Topic dispatch | http://docs.solace.com/Solace-Messaging-APIs/node-js-home.htm | 2017-04-23T05:26:27 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.solace.com |
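The following is a minimal sketch of connecting to a Solace message router and publishing a direct message with the Node.js (solclientjs) API; the URL, message VPN, credentials, and topic are placeholders, and error handling is omitted for brevity:
var solace = require('solclientjs');

var factoryProps = new solace.SolclientFactoryProperties();
factoryProps.profile = solace.SolclientFactoryProfiles.version10;
solace.SolclientFactory.init(factoryProps);

var session = solace.SolclientFactory.createSession({
    url: 'ws://router.example.com:80',
    vpnName: 'default',
    userName: 'client-user',
    password: 'client-password'
});

session.on(solace.SessionEventCode.UP_NOTICE, function () {
    // Publish a direct message once the session is up
    var message = solace.SolclientFactory.createMessage();
    message.setDestination(solace.SolclientFactory.createTopicDestination('tutorial/topic'));
    message.setBinaryAttachment('Hello from Node.js');
    message.setDeliveryMode(solace.MessageDeliveryModeType.DIRECT);
    session.send(message);
});

session.connect();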
Control and help secure email, documents, and sensitive data that you share outside your company walls. From easy classification to embedded labels and permissions, enhance data protection at all times with Azure Information Protection—no matter where it’s stored or who it’s shared with.
Watch the Azure Information Protection sessions from Microsoft Ignite 2016
Learn more about Azure Information Protection
Quick start tutorial
Frequently asked questions
Deployment roadmap
Installing the client | https://docs.microsoft.com/en-us/information-protection/ | 2017-04-23T05:34:23 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.microsoft.com |
Using Ensemble as an ESB
Configuring ESB Services and Operations
This chapter describes how to configure the services and operations in the ESB production and how to use the services provided by them. This chapter contains the following sections:
Configuring Pass-through Business Services
Configuring Pass-through Business Operations
Using SAML Validation in Pass-through Services
Suppressing Persistent Messages in Pass-through Services and Operations
Using Other Business Services, Processes, and Operations
Tracking Performance Statistics of Pass-through Services and Operations
Configuring Pass-through Business Services
To add a pass-through business service, select the + sign for adding a new service in the Production Configuration page, then:
Select the class based on the protocol used and whether SAML security is being used. Choose a class from the following:
EnsLib.HTTP.GenericService
EnsLib.REST.GenericService
EnsLib.REST.SAMLGenericService
EnsLib.SOAP.GenericService
EnsLib.SOAP.SAMLGenericService
Decide if the pass-through service is to be called through the CSP port or a special port. For live productions, you should use the CSP port in conjunction with a robust web server software so that you have a secure, fully configurable system that can handle heavy loads. The web server installed with Ensemble is a limited system intended for use on development systems, but not on fully-loaded live systems. The special port is a light-weight listener that provides minimal configuration and security. Although it is possible to configure a service to accept calls on both ports, we do not recommend this configuration.
If your pass-through service is to be called on the CSP port:
Leave the
Port
field blank.
Select the
Enable Standard Requests
check box in the
Connection Settings
field on the Settings tab.
On the
Additional Settings
section, set the
Pool Size
to 0. This suppresses the pass-through service from listening on the special port. If you omit this step and leave the
Port
field blank, Ensemble displays an error message.
If your pass-through service is to be called from a special port:
Specify a port number.
Clear the
Enable Standard Requests
check box in the
Connection Settings
field on the Settings tab.
Set the
Target
to point to the pass-through operation.
To optimize performance for pass-through services that use the CSP port, you can configure the pass-through business service to keep the connection open between calls. You can do this by checking the
Keep Standard Request Partition
checkbox.
See
Using SAML Validation in Pass-through Services
for information on using SAML validation on SOAP and REST pass-through services.
To use the CSP port to access the pass-through business service, you also need to define a web application. See
Configuring a Web Application
for details.
Configuring Pass-through Business Operations
To add a pass-through business operation, select the + sign for adding a new operation in the Production Configuration page, then:
Select the class based on the protocol used and whether you want to suppress storing the messages in the database. See
Suppressing Persistent Messages in Pass-through Services and Operations
for more information. Choose a class from the following:
EnsLib.HTTP.GenericOperation
EnsLib.HTTP.GenericOperationInProc
suppresses storing messages.
EnsLib.REST.GenericOperation
EnsLib.REST.GenericOperationInProc
suppresses storing messages.
EnsLib.SOAP.GenericOperation
Configure the pass-through operation
HTTP Server
,
HTTP Port
, and
URL
settings. You can either configure these settings directly or use the external service registry to configure them. To set them via the external service registry, set the value of the
External Registry ID
property. See
Using the External Service Registry to Configure ESB Hosts
for details on setting
External Registry ID
. After you apply the
External Registry ID
setting, Ensemble reads the current values from the registry and uses them to set the other properties and marks them as read-only.
You can either explicitly set the URL or set it to be derived from the incoming URL sent to the generic service. You can do this either through the service registry or directly through the URL property. To derive it from the incoming URL, set either the URL segment of the external service registry
Endpoint
field or the
URL
property as follows:
empty string: Use the URL from the GenericMessage, which is typically the URL passed into the generic service. Typically, you use this if the pass-through service is called from a special port and the URL does not contain the web application name and the service component name.
| (vertical bar): Remove the web application name and configuration name from the URL value in the GenericMessage and use the remainder as the URL. Typically, you use this when the GenericService is called using the standard CSP port. The web application name and the configuration name of the business service are needed to route the call to the GenericService but are not needed by the external server.
The URL is compared to web application name and the configuration name with a case-sensitive compare. All alphabetic characters in the web application name must be in lower case and the segment of the URL corresponding to the configuration name must match the case in the configuration name as defined in the production.
^ (circumflex): Remove the web application name from the URL value in the GenericMessage and use the remainder as the URL. Typically, this is only used if the GenericService is called using the standard CSP port and the pass-through operation component has a name that is identical to the first part of the URL expected by the external server. All alphabetic characters in the web application name in the URL must be in lower case.
If the
URL
property specifies a string that is not empty and does not contain either a vertical bar or circumflex, then the incoming URL is not used to generate the outgoing URL.
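For example, if the web application is named /esbdemo and the pass-through service component is named PassHTTP (names chosen here only for illustration), an incoming request to /esbdemo/PassHTTP/api/orders is forwarded with the URL /api/orders when the setting is | (vertical bar), and with /PassHTTP/api/orders when the setting is ^ (circumflex).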
Although the pass-through operations typically do not change the contents of the pass-through message, you can specify that the
EnsLib.HTTP.GenericOperation
and
EnsLib.REST.GenericOperation
operations perform character set translation. For example, you could have the operation perform character translation so that accented characters are displayed correctly in a web browser. To set the HTTP or REST generic operation to perform character set translation, clear the
Read Raw Mode
check box in
Additional Settings
.
Using SAML Validation in Pass-through Services
Pass-through services validates the SAML token but does not check it to see if it provides access to the resources on the external server. The external server must check if the SAML token permits the requested access. The
Validation
field controls the level of SAML token validation done by the pass-through business service. The following flags specify the kind of validation that is done:
t
Must contain an Authorization header SAML token with key 'access_token='
a
Token must contain an Assertion.
r
Requires Assertions to contain NotBefore/NotOnOrAfter time conditions
v
Verifies Assertion signatures using a Trusted X.509 certificate and, if present, NotBefore/NotOnOrAfter conditions.
o
Validates other signed nodes such as TimeStamp.
By default, validation has a value of 1, which is equivalent to specifying tarvo. If you specify a value of 0, the pass-through service performs no validation.
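For example, a Validation value of tar requires that the request carry a SAML token in the Authorization header with an Assertion that includes NotBefore/NotOnOrAfter time conditions, but performs no signature verification; the default value of 1 (tarvo) additionally verifies Assertion signatures against a trusted X.509 certificate and validates other signed nodes such as TimeStamp.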
When checking the NotBefore and NotOnOrAfter time conditions, the default clock skew allowance is 90 seconds. To change the skew allowance, set the global
^Ens.Config("SAML","ClockSkew")
. To set the default clock skew for all components to 180 seconds, enter the following command:
Set ^Ens.Config("SAML","ClockSkew")=180
To change the skew allowance for a specific component to 180 seconds, enter the following command:
Set ^Ens.Config("SAML","ClockSkew",
component-name
)=180
Suppressing Persistent Messages in Pass-through Services and Operations
To obtain maximum efficiency using pass-through services and operations, you can suppress the use of persistent messages. The pass-through service sends the call directly to the pass-through operation without creating a persistent message. The operation sends the reply message to the service in the same way. This allows you to achieve high throughput levels and eliminates the need to purge messages, but has the following limitations:
No persisted record of the pass-through call is maintained. You cannot view the message in the message viewer or view a message trace. This makes it challenging to troubleshoot problems.
No retry mechanism is available. If the first attempt to contact the server fails, the failure is returned to the application calling the pass-through service.
This mode is available only if a pass-through service targets a pass-through operation directly. If the message passes through any other production component, such as a business process router, then persisted messages are used throughout the process.
To suppress the use of persistent messages for a pass-through service and pass-through operation pair, choose one of the specialized classes for the pass-through operation:
EnsLib.HTTP.GenericOperationInProc
EnsLib.REST.GenericOperationInProc
EnsLib.SOAP.GenericOperationInProc
In addition, to suppress persistent messages, you must clear the
Persist Messages Sent InProc
checkbox in the pass-through business service configuration.
If the pass-through service is using the standard CSP port, you can further improve efficiency by configuring the service to keep the TCP connection open between calls. To do this, in the pass-through service configuration, check the
Keep Standard Request Partition
checkbox.
Using Other Business Services, Processes, and Operations
Although the simplest ESB systems can consist of a production with only pass-through services and operations, some requirements can only be met with more complex production components. For example, if your ESB must do any of the following, you need other kinds of business services and operations:
Route calls from a single incoming service to multiple external services depending on the content of the call.
Modify the parameters or protocol used for a service.
Expose a single service that is implemented by combining the capabilities of two or more external services.
Provide the service directly on the ESB.
Provide security other than one based on SAML tokens.
For more information on using other business services, processes, and operations to handle REST and SOAP service requests, see
Creating Web Services and Web Clients with Ensemble
.
Tracking Performance Statistics of Pass-through Services and Operations
To ensure that your ESB system continues to meet the needs of your users, it is important to monitor and track its performance. By performing this monitoring regularly, you can manage any increase in workload by adding resources to handle the increased load. If your pass-through services and operations are using persistent messages, you can use Ensemble’s monitoring facilities to track and report on performance, but the Activity Volume monitoring provides a mechanism to produce summary performance statistics that are useful for the following:
Tracking overall system performance.
Ensuring that the system is meeting Service-Level Agreements (SLAs).
Identifying potential problems.
If you have suppressed storing persistent messages (see
Suppressing Persistent Messages in Pass-through Services and Operations
), these summary statistics are your main tool for tracking performance.
The summary statistics consists of the following information for each pass-through service and operation:
Total number of messages that completed during a fixed time interval (10 seconds)
Total elapsed time to complete processing these messages
As generally true for monitoring performance, it is important to monitor and record performance statistics on a regular basis. This allows you to detect trends and compare current performance to a baseline. This can be useful in determining whether the cause of performance problems is increased load, network issues, problems with the servers providing the services, or ESB performance issues.
The summary statistics mechanism for pass-through services and operations consists of the following components:
Configurations and Global settings to enable pass-through services and operations to generate the statistics.
Daemons to get the statistics from the pass-through services and operations and send them to a daemon to store the statistics.
User interface to monitor and explore the statistics.
For details, see
Monitoring Activity Volume
in
Monitoring Ensemble
.
© 1997-2017, InterSystems Corp.
[Back]
[Top of Page]
Build:
Caché v2017.1 (792)
Last updated:
2017-03-20 19:02:32
Source:
EESB.xml | http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EESB_servop | 2017-04-23T05:29:00 | CC-MAIN-2017-17 | 1492917118477.15 | [] | docs.intersystems.com |
JApplicationHelper::getPath
From Joomla! Documentation
Revision as of 14:37,Path
Description
Get a path.
Description:JApplicationHelper::getPath [Edit Descripton]
public static function getPath ( $varname $user_option=null )
- Returns string The requested path
- Defined on line 152 of libraries/joomla/application/helper.php
- Since
- Referenced by
See also
JApplicationHelper::getPath source code on BitBucket
Class JApplicationHelper
Subpackage Application
- Other versions of JApplicationHelper::getPath
SeeAlso:JApplicationHelper::getPath [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JApplicationHelper::getPath&oldid=89432 | 2016-04-29T03:39:39 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
Revision history of "JSimpleXML::getParserParser/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JSimpleXML::getParser== ===Description=== Get the parser. {{Description:JSimpleXML::getParser}} <span class="editsection" style="font-s..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JSimpleXML::getParser/1.6&action=history | 2016-04-29T01:54:34 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
stack.sh is an opinionated OpenStack developer installation. It
installs and configures various combinations of Cinder, Glance,
Heat, Horizon, Keystone, Nova, Neutron, and Swift
This script's options can be changed by setting appropriate environment
variables. You can configure things like which git repositories to use,
services to enable, OS images to use, etc. Default values are located in the
stackrc file. If you are crafty you can run the script on multiple nodes
using shared settings for common resources (eg., mysql or rabbitmq) and build
a multi-node developer install.
To keep this script simple we assume you are running on a recent Ubuntu
(14.04 Trusty or newer), Fedora (F20 or newer), or CentOS/RHEL
(7 or newer) machine. (It may work on other platforms but support for those
platforms is left to those who added them to DevStack.) It should work in
a VM or physical server. Additionally, we maintain a list of deb and
rpm dependencies and other configuration files in this repo.
Learn more and get the most recent version at
Print the commands being run so that we can see the command that triggers
an error. It is also useful for following along as the install occurs.
Make sure custom grep options don't get in the way
Make sure umask is sane
Not all distros have sbin in PATH for regular users.
Keep track of the DevStack directory
Check for uninitialized variables, a big cause of bugs
Set start of devstack timestamp
Clean up last environment var cache
stack.sh keeps the list of deb and rpm dependencies, config
templates and other useful files in the files subdirectory
stack.sh keeps function libraries here
Make sure $TOP_DIR/inc directory is present
stack.sh keeps project libraries here
Make sure $TOP_DIR/lib directory is present
Check if run in POSIX shell
OpenStack is designed to be run as a non-root user; Horizon will fail to run
as root since Apache will not serve content from root user).
stack.sh must not be run as root. It aborts and suggests one course of
action to create a suitable user account.
OpenStack is designed to run at a system level, with system level
installation of python packages. It does not support running under a
virtual env, and will fail in really odd ways if you do this. Make
this explicit as it has come up on the mailing list.
Provide a safety switch for devstack. If you do a lot of devstack,
on a lot of different environments, you sometimes run it on the
wrong box. This makes there be a way to prevent that.
Initialize variables:
Import common functions
Import config functions
Import 'public' stack.sh functions
Determine what system we are running on. This provides os_VENDOR,
os_RELEASE, os_PACKAGE, os_CODENAME
and DISTRO
Check for a localrc section embedded in local.conf and extract if
localrc does not already exist
Phase: local
stack.sh is customizable by setting environment variables. Override a
default setting via export:
export DATABASE_PASSWORD=anothersecret
./stack.sh
or by setting the variable on the command line:
DATABASE_PASSWORD=simple ./stack.sh
Persistent variables can be placed in a local.conf file:
[[local|localrc]]
DATABASE_PASSWORD=anothersecret
DATABASE_USER=hellaroot
We try to have sensible defaults, so you should be able to run ./stack.sh
in most cases. local.conf is not distributed with DevStack and will never
be overwritten by a DevStack update.
DevStack distributes stackrc which contains locations for the OpenStack
repositories, branches to configure, and other configuration defaults.
stackrc sources the localrc section of local.conf to allow you to
safely override those settings.
Warn users who aren't on an explicitly supported distro, but allow them to
override check and attempt installation with FORCE=yes ./stack
Check to see if we are already running DevStack
Note that this may fail if USE_SCREEN=False
Make sure the proxy config is visible to sub-processes
Remove services which were negated in ENABLED_SERVICES
using the "-" prefix (e.g., "-rabbit") instead of
calling disable_service().
We're not as root so make sure sudo is available
UEC images /etc/sudoers does not have a #includedir, add one
Conditionally setup detailed logging for sudo
Set up DevStack sudoers
Some binaries might be under /sbin or /usr/sbin, so make sure sudo will
see them by forcing PATH
For Debian/Ubuntu make apt attempt to retry network ops on it's own
Some distros need to add repos beyond the defaults provided by the vendor
to pick up required packages.
NOTE: We always remove and install latest -- some environments
use snapshot images, and if EPEL version updates they break
unless we update them to latest version.
This trick installs the latest epel-release from a bootstrap
repo, then removes itself (as epel-release installed the
"real" repo).
You would think that rather than this, you could use
$releasever directly in .repo file we create below. However
RHEL gives a $releasever of "6Server" which breaks the path;
see
Enable a bootstrap repo. It is removed after finishing
the epel-release installation.
... and also optional to be enabled
install the lastest RDO
Destination path for installation DEST
Create the destination directory and ensure it is writable by the user
and read/executable by everybody for daemons (e.g. apache run for horizon)
Destination path for devstack logs
Destination path for service data
Configure proper hostname
Certain services such as rabbitmq require that the local hostname resolves
correctly. Make sure it exists in /etc/hosts so that is always true.
If you have all the repos installed above already setup (e.g. a CI
situation where they are on your image) you may choose to skip this
to speed things up
Set up logging level
Draw a spinner so the user knows something is happening
Echo text to the log file, summary log file and stdout
echo_summary "something to say"
Echo text only to stdout, no log files
echo_nolog "something not for the logs"
Set up logging for stack.sh
Set LOGFILE to turn on logging
Append '.xxxxxxxx' to the given name to maintain history
where 'xxxxxxxx' is a representation of the date the file was created
Clean up old log files. Append '.*' to the user-specified
LOGFILE to match the date in the search template.
Redirect output according to config
Set fd 3 to a copy of stdout. So we can set fd 1 without losing
stdout later.
Set fd 1 and 2 to write the log file
Set fd 6 to summary log file
Set fd 1 and 2 to primary logfile
Set fd 6 to summary logfile and stdout
Specified logfile name always links to the most recent log
Set up output redirection without log files
Set fd 3 to a copy of stdout. So we can set fd 1 without losing
stdout later..
This is deprecated....logs go in LOGDIR, only symlinks will be here now.
We make sure the directory is created.
We cleanup the old logs
Basic test for $DEST path permissions (fatal on error unless skipped)
Kill background processes on exit
Only do the kill when we're logging through a process substitution,
which currently is only to verbose logfile
Kill the last spinner process
Exit on any errors so that errors don't compound
Begin trapping error exit codes
Print the kernel version
Reset the bundle of CA certificates
Import common services (database, message queue) configuration
Service to enable with SSL if USE_SSL is True
Clone all external plugins
Plugin Phase 0: override_defaults - allow plugins to override
defaults before other services are run
Import Apache functions
Import TLS functions
Source project function libraries
Phase: source
Do all interactive config up front before the logging spew begins
Generic helper to configure passwords
If the password is not defined yet, proceed to prompt user for a password.
If there is no localrc file, create one
Presumably if we got this far it can only be that our
localrc is missing the required password. Prompt user for a
password and write to localrc.
restore previous xtrace value
To select between database backends, add the following to local.conf:
disable_service mysql
enable_service postgresql
The available database backends are listed in DATABASE_BACKENDS after
lib/database is sourced. mysql is the default.
Rabbit connection info
In multi node DevStack, second node needs RABBIT_USERID, but rabbit
isn't enabled.
Services authenticate to Identity with servicename/SERVICE_PASSWORD
Horizon currently truncates usernames and passwords at 20 characters
Keystone can now optionally install OpenLDAP by enabling the ldap
service in local.conf (e.g. enable_service ldap).
To clean out the Keystone contents in OpenLDAP set KEYSTONE_CLEAR_LDAP
to yes (e.g. KEYSTONE_CLEAR_LDAP=yes) in local.conf. To enable the
Keystone Identity Driver (keystone.identity.backends.ldap.Identity)
set KEYSTONE_IDENTITY_BACKEND to ldap (e.g.
KEYSTONE_IDENTITY_BACKEND=ldap) in local.conf.
Only request LDAP password if the service is enabled
We only ask for Swift Hash if we have enabled swift service.
SWIFT_HASH is a random unique string for a swift cluster that
can never change.
Save configuration values
OpenStack uses a fair number of other projects.
Bring down global requirements before any use of pip_install. This is
necessary to ensure that the constraints file is in place before we
attempt to apply any constraints to pip installs.
Install package requirements
Source it so the entire environment is available
Configure an appropriate Python environment
Install subunit for the subunit output stream
Install Python packages into a virtualenv so that we can track them
Do the ugly hacks for broken packages and distros
Install required infra support libraries
Phase: pre-install
NOTE(sdague): dlm install is conditional on one being enabled by configuration
Install Oslo libraries
Install client libraries
Install middleware
swift3 middleware to provide S3 emulation to Swift
Replace the nova-objectstore port by the swift port
Image catalog service
Block volume service
Network service
Compute service
django openstack_auth
dashboard
Add name to /etc/hosts.
Don't be naive and add to existing line!
Phase: install
Install the OpenStack client, needed for most setup commands
Configure the master host to receive
Set rsyslog to send to remote host
If certificates were used and written to the SSL bundle file then these
should be exported so clients can validate their connections.
Create a new named screen to run processes in
Set a reasonable status bar
Clear screenrc file
Initialize the directory for service status check
A better kind of sysstat, with the top process per time slice
Rather than just export these, we write them out to a
intermediate userrc file that can also be used to debug if
something goes wrong between here and running
tools/create_userrc.sh (this script relies on services other
than keystone being available, so we can't call it right now)
Use this for debugging issues before files in accrc are created
Set up password auth credentials now that Keystone is bootstrapped
Write a clouds.yaml file
Run init_neutron only on the node hosting the Neutron API server
Some Neutron plugins require network controllers which are not
a part of the OpenStack project. Configure and start them.
Delete traces of nova networks from prior runs
Do not kill any dnsmasq instance spawned by NetworkManager
Force IP forwarding on, just in case
Additional Nova configuration that is dependent on other services
Phase: post-config
Apply configuration from local.conf if it exists for layer 2 services
Phase: post-config
Only run the services specified in ENABLED_SERVICES
Launch Swift Services
Launch the Glance services
Upload an image to Glance.
The default image is CirrOS, a small testing image which lets you login as root
CirrOS has a cloud-init analog supporting login via keypair and sending
scripts as userdata.
See for more on cloud-init
Option to upload legacy ami-tty, which works with xenserver
Create a randomized default value for the keymgr's fixed_key
Launch the nova-api and wait for it to answer before continuing
Create a small network
Create some floating ips
Create a second pool
Once neutron agents are started setup initial network elements
Configure and launch Heat engine, api and metadata
Initialize heat
Creates source able script files for easier user switching.
This step also creates certificates for tenants and users,
which is helpful in image bundle steps.
Save some values we generated for later use
Apply configuration from local.conf if it exists for layer 2 services
Phase: extra
Phase: extra
Apply late configuration from local.conf if it exists for layer 2 services
Phase: post-extra
Run local.sh if it exists to perform user-managed tasks
Check the status of running services
ensure that all the libraries we think we installed from git,
actually were.
Prepare bash completion for OSC
If cinder is configured, set global_filter for PV devices
Force all output to stdout and logs now
Force all output to stdout now
Dump out the time totals
If you installed Horizon on this server you should be able
to access the site using your browser.
If Keystone is present you can point nova cli to this server
Warn that a deprecated feature was used
Indicate how long this took to run (bash maintained variable SECONDS)
Restore/close logging file descriptors | http://docs.openstack.org/developer/devstack/stack.sh.html | 2016-04-29T01:52:31 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.openstack.org |
Difference between revisions of "JRegistryFormat"
From Joomla! Documentation
Revision as of 20:23,
Description
Description:JRegistryFormat [Edit Descripton]
Methods
- Defined in libraries/joomla/registry/format.php
- Extended by
Importing
jimport( 'joomla.registry.format' );
See also
JRegistryFormat source code on BitBucket
Subpackage Registry
- Other versions of JRegistryFormat
SeeAlso:JRegistryFormat [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JRegistryFormat&diff=next&oldid=72676 | 2016-04-29T03:13:57 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
Revision history of "JRequest::getMethod/1.5"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 13:22, 20 June 2013 JoomlaWikiBot (Talk | contribs) deleted page JRequest::getMethod/1.5 (cleaning up content namespace and removing duplicated API references) | https://docs.joomla.org/index.php?title=JRequest::getMethod/1.5&action=history | 2016-04-29T03:28:50 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
.
Note
Weak references to an object are cleared before the object’s __del__() is called, to ensure that the weak reference callback (if any) finds the object still alive.-in types such as list and dict do not directly support weak references but can add support through subclassing:
class Dict(dict): pass obj = Dict(red=1, green=2, blue=3) # this object is weak referenceable
CPython implementation detail: Other built-in types such as tuple and long do not support weak references even when subclassed..
See also
Weak.iteritems():] | https://docs.python.org/2.6/library/weakref.html | 2016-04-29T01:57:56 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.python.org |
Information for "Screen.weblinks.categories.edit.15" Basic information Display titleScreen.weblinks.categories.edit.15 Redirects toHelp15:Screen.weblinks.categories.edit.15 (info) Default sort keyScreen.weblinks.categories.edit.15 Page length (in bytes)55 Page ID6619:22, 1 February 2010 Latest editorChris Davenport (Talk | contribs) Date of latest edit19:22, 1 February 2010 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Screen.weblinks.categories.edit.15&action=info | 2016-04-29T02:34:43 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
How do you copy a site from localhost to a remote host? From Joomla! Documentation Revision as of 10:14, 28 June 2011 by Mvangeest (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Redirect page Copying a website from localhost to a remote host Retrieved from ‘’ Categories: FAQUpgrading and Migrating FAQAdministration FAQVersion 1.5 FAQ | https://docs.joomla.org/index.php?title=How_do_you_copy_a_site_from_localhost_to_a_remote_host%3F&oldid=60005 | 2016-04-29T03:25:05 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
37.1.
ic — Access to the Mac OS X Internet Config¶
This module provides access to various internet-related preferences set through System Preferences or the Finder.
Note
This module has been removed in Python 3.x.:
- class
ic.
IC([signature[, ic]])¶
Create an Internet Config object. The signature is a 4-character creator code of the current application (default
'Pyth') which may influence some of ICs settings. The optional ic argument is a low-level
icglue.icinstancecreated beforehand, this may be useful if you want to get preferences from a different config file, etc.
ic.
launchurl(url[, hint])¶
ic.
parseurl(data[, start[, end[, hint]]])¶
ic.
mapfile(file)¶
ic.
maptypecreator(type, creator[, filename])¶
ic.
settypecreator(file)¶
These functions are “shortcuts” to the methods of the same name, described below.
37.1.1. IC Objects¶:
IC.
launchurl(url[, hint])¶
Parse the given URL, launch the correct application and pass it the URL. The optional hint can be a scheme name such as
'mailto:', in which case incomplete URLs are completed with this scheme. If hint is not provided, incomplete URLs are invalid.
IC.
parseurl(data[, start[, end[, hint]]])¶
Find a URL somewhere in data and return start position, end position and the URL. The optional start and end can be used to limit the search, so for instance if a user clicks in a long text field you can pass the whole text field and the click-position in start and this routine will return the whole URL in which the user clicked. As above, hint is an optional scheme used to complete incomplete URLs.
IC.
mapfile(file)¶.
IC.
maptypecreator(type, creator[, filename])¶
Return the mapping entry for files with given 4-character type and creator codes. The optional filename may be specified to further help finding the correct entry (if the creator code is
'????', for instance).
The mapping entry is returned in the same format as for mapfile. | https://docs.python.org/2.7/library/ic.html | 2016-04-29T01:56:43 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.python.org |
Changes related to "Security Checklist 4 - Joomla Setup"
← Security Checklist 4 - Joomla Setup
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/Special:RecentChangesLinked/Security_Checklist_4_-_Joomla_Setup | 2016-04-29T03:16:57 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
Information for "Documentation Translation" Basic information Display titleCategory:Documentation Translation Default sort keyDocumentation Translation Page length (in bytes)282 Page ID30848 Page content languageEnglish (en) Page content modelwikitext Indexing by robotsAllowed Number of redirects to this page0 Category information Number of pages24 Number of subcategories0 Number of files0 Page protection EditAllow all users MoveAllow all users Edit history Page creatorTom Hutchison (Talk | contribs) Date of page creation08:59, 20 February 2014 Latest editorTom Hutchison (Talk | contribs) Date of latest edit10:04, 16 July 2014 Total number of edits5 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Category:Documentation_Translation&action=info | 2016-04-29T02:22:37 | CC-MAIN-2016-18 | 1461860110356.23 | [] | docs.joomla.org |
On RDS hosts with Windows Server 2008 R2 or a later operating system, you can upgrade the View Agent software and edit pool settings so that the RDS host can provide remote desktops and remote Windows-based applications.
About this task
With VMware Horizon 6.0 and later releases, you can use Microsoft RDS hosts to provide remote applications, in addition to remote desktops. With this added functionality, the previously hidden server farm name is displayed in View Administrator.
Prerequisites
Verify that at least one View Connection Server instance in the replicated group has been upgraded. View Connection Server must be upgraded first so that the secure JMS pairing mechanism can work with View Agent.
Verify that the RDS host currently hosting remote desktops is running Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2. Windows Server 2008 (Terminal Services) was supported for earlier versions of View but is not a supported operating system for this release. If you do not have a supported Windows Server operating system, you must do a fresh installation rather than an upgrade. For a list of supported operating systems, see Supported Operating Systems for View Agent.
Verify that the RDS Host role is installed in the operating system. See the procedure called "Install Remote Desktop Services on Windows Server 2008 R2" in the Setting Up Desktop and Application Pools in View document.
Familiarize yourself with the procedure for running the View Agent installer. See the procedure called "Install View Agent on a Remote Desktop Services Host," in Setting Up Desktop and Application Pools in View, available by clicking the Help button in View Administrator.
Verify that you have a domain user account with administrative privileges on the hosts that you will use to run the installer and perform the upgrade.
Procedure
- In View Administrator, edit the desktop pool settings for the pool to disable the pool.
Go to Edit., select the pool, and click
- On the RDS host, download and run the installer for the new version of View Agent.
You can download the installer from the VMware Web site.
- In View Administrator, edit the farm settings and set the default display protocol to PCoIP.
Go to Edit., select the farm, and click
You can also use a setting that allows the end user to choose the protocol. To use remote applications, the protocol must be PCoIP.
- In View Administrator, edit the desktop pool settings for the pool to enable the pool.
Results
This host can now provide remote applications in addition to remote desktops. In View Administrator, if you go to RDS Desktop Pool. If you go to , you see a farm ID in the list that corresponds to the pool ID., you see that the type of pool is
What to do next
Upgrade the clients. See Upgrade the Client Application. | https://docs.vmware.com/en/VMware-Horizon-6/6.1/com.vmware.horizon-view.upgrade.doc/GUID-D1D8258E-4E0A-4FFE-AD20-520D69CAF78E.html | 2018-07-16T05:15:14 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
To Set Environment, User, and Role Access
To Create an Environment
Before you can give yourself MQ access permissions, you need an environment to associate permissions with.
Log into Anypoint Platform.
Click Access Management from the left navigation bar or click Access Management from the main Anypoint Platform screen.
Click Environments from the left Access Management choices, and click Add environment:
In the Add Environment screen, provide a name for your environment, and click either Production or Sandbox. You may want to create separate environments for each. A sandbox is used to test applications, whereas production is the public view. For this tutorial, you can choose Production.
To Give Users MQ Access Permissions
First give yourself MQ access permissions and then give others in your organization access.
Assign user permissions to yourself for use with MQ. These permissions let you create client applications, and destinations by creating queues and exchanges. You can use this same information to assign user permissions for others in your organization.
To assign user permissions:
In Anypoint Platform, click Access Management and Users.
Click a user name value:.
To Create an Admin Role
You can create a role that you can apply to other users in your organization. While you don’t need roles to complete this tutorial, when you use MQ as an administrator or developer, you should create roles for all those in your organization who use MQ.
Creating a role lets you assign access rights to users in your organization, such as for administrators, developers, or for those who only view information but don’t change it.
In Anypoint Platform, click Access Management and Roles.
Click Add Role.
Name the role
Admin Roleand click the Add role button.
Click the Role name and click MQ.
Specify the Production Environment, and set all the Permissions by clicking Select All:
Click the blue plus button to save your changes. Anypoint Platform displays your settings.
The settings are:
Clear destinations: Same privileges as View destinations, plus can purge messages.
Manage clients: Same privileges as View clients, plus can create Client Apps.
Manage destinations: Same privileges as View destinations and Clear destinations, plus can create new queues and message exchanges, edit existing queues and message exchange settings, access Message Sender and Browser pages, and can also delete.
View destinations: Can view all the destinations with their settings (ID, Type, Default TTL and Default Lock TTL), "In Queue" messages, and "In Flight" messages.
View clients: Can view all the Client Apps and their Client App IDs and Client Secrets.
To Add Additional Roles For MQ Access
After you create an admin role, you can optionally create other roles for users in your organization who need Anypoint MQ access in Anypoint Platform.
From the left navigation bar or the main screen, click Access Management.
Click Roles and Add role:
Type the role name and description, and click Add role:
In the list of roles, click the name of role you just created:, or you can delete the role.
You can also delete a role from the Roles list page by clicking the checkbox for an entry, and then clicking Delete role.
To Switch Environments
In MQ, click Production:
.
In Switch Environment, click the name of another environment, such as Sandbox (if you previously created a Sandbox environment) and click Switch.
To change your default environment, click Open Your Profile To Change The Default Environment. Set the Default Environment to a different environment. | https://docs.mulesoft.com/anypoint-mq/mq-access-management | 2018-07-16T04:51:27 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.mulesoft.com |
This document explains how to patch a system for SQL injection vulnerability in the SQL Server Transport using hotfix release 2.2.4.
Detailed information about the vulnerability, its impact, available mitigation steps and patching instructions can be found in the security advisory.
Updating the NuGet package
This vulnerability can be fixed by upgrading the SQL Server Transport package that is being used. The package can be updated by issuing the following command in the Package Manager Console within Visual Studio:
Update-Package NServiceBus.SqlServer -Version 2.2.4
After the package has been updated, all affected endpoints must be rebuilt and redeployed.
Patching a deployed system
This vulnerability can also be fixed by updating the SQL Server Transport DLL without the need to rebuild and redeploy an affected endpoint by following these steps:
- Update the NuGet package
- For each affected endpoint:
- Stop the endpoint.
- Copy the
NServiceBus.file from the updated NuGet package to directory where binaries of the endpoint are stored. Make sure that updated version of the .dll overwrites the previous one.
Transport. SqlServer. dll
- Restart the endpoint. | https://docs.particular.net/transports/upgrades/sqlserver-2.x-2.2.4 | 2018-07-16T04:41:51 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.particular.net |
f:comment
Everything inside an f:comment ViewHelper will be removed from the output, and ViewHelpers within an f:comment block will not be executed.
This ViewHelper is most useful when debugging a website: for example, you can temporarily wrap an f:for loop with an f:comment ViewHelper to stop it from being executed.
Example
<f:comment>
  <p>This text will not be displayed</p>
  <p>My name is: {address.firstName}</p>
</f:comment>
<p>{address.firstName} lives in city xyz</p>
<![CDATA[<p>This text was made by {address.firstName} and will be displayed but not processed.</p>]]>
Output
<p>Stefan lives in city xyz</p>
<p>This text was made by {address.firstName} and will be displayed but not processed.</p>
Tip
The last row in this example is a special case. By using CDATA notation, you can stop variable placeholders from being processed: the text will be output, but the placeholder {address.firstName} will not be replaced. This is most common when working with JavaScript in your template, where you may want to output a string containing curly braces.
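As a further illustration of that JavaScript case (the script body and option names below are invented, not taken from the original manual), wrapping inline JavaScript in CDATA keeps Fluid from treating the braces as placeholders:

<![CDATA[
<script type="text/javascript">
    // Fluid will not try to parse these curly braces as a variable placeholder
    var settings = {animate: true, speed: 300};
</script>
]]>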
A Shader declares Material properties in a Properties block. If you want to access some of those properties in a shader program, you need to declare a Cg/HLSL variable with the same name and a matching type. An example is provided in Shader Tutorial: Vertex and Fragment Programs.
For example these shader properties:
_MyColor ("Some Color", Color) = (1,1,1,1)
_MyVector ("Some Vector", Vector) = (0,0,0,0)
_MyFloat ("My float", Float) = 0.5
_MyTexture ("Texture", 2D) = "white" {}
_MyCubemap ("Cubemap", CUBE) = "" {}
would be declared for access in Cg/HLSL code as:
fixed4 _MyColor; // low precision type is usually enough for colors
float4 _MyVector;
float _MyFloat;
sampler2D _MyTexture;
samplerCUBE _MyCubemap;
Cg/HLSL can also accept uniform keyword, but it is not necessary:
uniform float4 _MyColor;
Property types in ShaderLab map to Cg/HLSL variable types this way: Color and Vector properties map to float4, half4 or fixed4 variables; Range and Float properties map to float, half or fixed variables; and texture properties map to sampler2D for regular (2D) textures and samplerCUBE for cubemaps, as in the example above.
Shader property values are found and provided to shaders from these places: per-instance data set on the renderer (for example via a MaterialPropertyBlock), the Material that is set on the object, and global shader properties.
The order of precedence is: per-instance data overrides everything; then Material data is used; and finally, if the shader property does not exist in those two places, the global property value is used. If there is no shader property value defined anywhere, then a "default" value (zero for floats, black for colors, an empty white texture for textures) will be provided.
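Per-instance data of this kind is typically supplied from a C# script with a MaterialPropertyBlock. The following is only an illustrative sketch (it reuses the _MyColor property from the example above; the component and variable names are made up):

using UnityEngine;

public class PerInstanceColor : MonoBehaviour
{
    void Start()
    {
        var rend = GetComponent<Renderer>();

        // Per-instance data: overrides the value stored in the shared Material
        // for this renderer only, without creating a new Material instance.
        var block = new MaterialPropertyBlock();
        block.SetColor("_MyColor", Color.red);
        rend.SetPropertyBlock(block);
    }
}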
Materials can contain both serialized and runtime-set property values.
Serialized data is all the properties defined in the shader's Properties block. Typically these are values that need to be stored in the material and are tweakable by the user in the Material Inspector.
A material can also have some properties that are used by the shader but not declared in the shader's Properties block. Typically this is for properties that are set from script code at runtime, e.g. via Material.SetColor. Note that matrices and arrays can only exist as non-serialized runtime properties (since there is no way to define them in a Properties block).
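A minimal C# sketch of setting such runtime properties from script (the property names _MyColor and _MyVector come from the example at the top of this page; _MyMatrix is assumed to be an undeclared, runtime-only property):

using UnityEngine;

public class RuntimeMaterialProperties : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;

        // Properties declared in the Properties block (serialized in the material)
        mat.SetColor("_MyColor", new Color(0.2f, 0.6f, 1f));
        mat.SetVector("_MyVector", new Vector4(1f, 0f, 0f, 0f));

        // A matrix can only be a non-serialized runtime property,
        // since matrices cannot be declared in a Properties block.
        mat.SetMatrix("_MyMatrix", Matrix4x4.identity);
    }
}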
For each texture that is setup as a shader/material property, Unity also sets up some extra information in additional vector properties.
Materials often have Tiling and Offset fields for their texture properties. This information is passed into shaders in a float4 {TextureName}_ST property:
x contains the X tiling value
y contains the Y tiling value
z contains the X offset value
w contains the Y offset value
For example, if a shader contains a texture named _MainTex, the tiling information will be in a _MainTex_ST vector.
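As an illustrative, hand-written vertex-shader fragment (not taken from the manual), the tiling and offset stored in _MainTex_ST can be applied to the mesh UVs like this:

#include "UnityCG.cginc"

sampler2D _MainTex;
float4 _MainTex_ST; // xy = tiling, zw = offset

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    // Apply the material's tiling and offset manually; this is equivalent
    // to the TRANSFORM_TEX(v.texcoord, _MainTex) helper from UnityCG.cginc.
    o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
    return o;
}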
{TextureName}_TexelSize - a float4 property that contains texture size information:
x contains 1.0/width
y contains 1.0/height
z contains width
w contains height
{TextureName}_HDR - a float4 property with information on how to decode a potentially HDR (e.g. RGBM-encoded) texture, depending on the color space used. See the DecodeHDR function in the UnityCG.cginc shader include file.
When using Linear color space, all material color properties are supplied as sRGB colors, but are converted into linear values when passed into shaders.
For example, if your Properties shader block contains a Color property called "MyColor", then the corresponding "MyColor" HLSL variable will get the linear color value.
For properties that are marked as Float or Vector type, no color space conversions are done by default; it is assumed that they contain non-color data. It is possible to add a [Gamma] attribute to float/vector properties to indicate that they are specified in sRGB space, just like colors (see Properties).
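For illustration only (the property names are invented), such an attribute is written directly in the Properties block:

Properties
{
    // Treated as sRGB and converted to linear, like a color
    [Gamma] _TintVector ("Tint (sRGB)", Vector) = (1,1,1,1)
    // Left untouched: assumed to hold non-color data
    _DirectionVector ("Direction", Vector) = (0,1,0,0)
}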
You're almost done! Now it's time to publish your website so that your visitors can see it.
Copy your website to the Production environment
Open the Workflow page.
Drag the code that you want your visitors to see from either the Dev or Stage environments into the Prod environment.
After you copy your code to the Production environment, repeat the process with the copied code's related files and databases.
Your website is published! Acquia Cloud displays your website to all of your visitors.
Protect your Production environment
Setting your Production environment to Production mode protects you from accidentally copying databases and files to your Prod environment. To enable Production mode:
Sign in to Acquia, select your site, and open the Workflow page.
Open the menu for the Prod environment, and then select Switch to Production mode.
Click Change Mode.
Add a domain name to your Acquia Cloud website
After you create or import your website into Acquia Cloud, the websites in each of your environments are provided a default domain name by Acquia Cloud. You can use these default domain names until you configure a custom domain name for the website, such as.
Your DNS provider guides you through the custom domain name creation process and must point your domain name to the IP address of your Acquia Cloud website. The IP addresses for each of your website's environments are on the Domains page.
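As a purely hypothetical sketch (the domain, IP address, and TTL below are placeholders, not values supplied by Acquia), the change your DNS provider makes usually amounts to records like these in your zone file:

; Point the bare domain and www at the Prod environment's IP address
example.com.      3600  IN  A      203.0.113.10
www.example.com.  3600  IN  CNAME  example.com.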
To add your website's DNS name to Acquia Cloud after your provider has made its change, complete the following steps:
On the Domains page, in the environment that you want to add a domain, click Add domain.
Enter the domain name without the protocol header in the text box. For example, use mysite.com, but not.
Click Add domain.
If you want to make your website available at both example.com and, add both domain names.
Additional information
For more information about any of the steps or procedures you used during this process, see the Acquia Cloud Help.
If you need help or require assistance, visit the Acquia subscriber forums to join discussions with your peers and get help on a broad range of technical topics. | https://docs.acquia.com/cloud/getting-started/publish | 2016-07-23T11:05:27 | CC-MAIN-2016-30 | 1469257822172.7 | [] | docs.acquia.com |
TurboGears2's authentication layer is built on repoze.who. By default TurboGears2 configures it to log users in using a form and to retrieve the user information through the user_name field of the User class. This is made possible by the authenticator plugin that TurboGears2 uses by default, which asks base_config.sa_auth.authmetadata to authenticate the user against the given login and password.

The authentication layer's IdentityApplicationWrapper provides the ability to load related data (e.g., real name, email) so that it can be easily used in the application. Such functionality is provided by the so-called ApplicationAuthMetadata in your app_cfg.py.

When the IdentityApplicationWrapper retrieves the user identity and its metadata, it makes them available inside the request as request.identity.

For example, to check whether the user has been authenticated you may use:

    # ...
    from tg import request
    # ...
    if request.identity:
        flash('You are authenticated!')

request.identity will be None if the user has not been authenticated.

The whole repoze.who authentication information is also available in the WSGI environment under the repoze.who.identity key, which can be accessed using the code below:

    from tg import request

    # The authenticated user's data kept by repoze.who:
    who_identity = request.environ.get('repoze.who.identity')

The username will be available in identity['repoze.who.userid'] (or request.identity['repoze.who.userid'], depending on how you accessed the identity).

By default, TurboGears 2.3.8 configures repoze.who to use the form-based login machinery in tg.configuration.auth.fastform, driven by the config.app_cfg.base_config.sa_auth options in your project. Identifiers, Authenticators and Challengers can be overridden by providing a different list for each of them, as in:

    base_config.sa_auth['identifiers'] = [('myidentifier', myidentifier)]

You don't have to use repoze.who directly either, unless you decide not to use it the way TurboGears configures it.
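As a hedged sketch of what the authmetadata object mentioned above can look like in app_cfg.py (the import path, the dbsession/user_class attributes and the validate_password method follow the common quickstart template, but treat them as assumptions rather than a verbatim recipe):

    # app_cfg.py -- illustrative sketch of a custom authmetadata object
    from tg.configuration.auth import TGAuthMetadata

    class ApplicationAuthMetadata(TGAuthMetadata):
        def __init__(self, sa_auth):
            self.sa_auth = sa_auth

        def authenticate(self, environ, identity):
            # Return the user name if login/password are valid, otherwise None.
            user = self.sa_auth.dbsession.query(self.sa_auth.user_class).filter_by(
                user_name=identity['login']).first()
            if user and user.validate_password(identity['password']):
                return identity['login']

        def get_user(self, identity, userid):
            return self.sa_auth.dbsession.query(self.sa_auth.user_class).filter_by(
                user_name=userid).first()

        def get_groups(self, identity, userid):
            return [g.group_name for g in identity['user'].groups]

        def get_permissions(self, identity, userid):
            return [p.permission_name for p in identity['user'].permissions]

    base_config.sa_auth.authmetadata = ApplicationAuthMetadata(base_config.sa_auth)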
JBoss is a free, J2EE 1.4 certified application server. This chapter will show you how to download and install JBoss 4.0. You will learn about the directory structure and understand what the key services and configuration files are.
Before installing and running the server, you need to check your system to make sure you have a working Java 1.4 or 1.5 installation. Java 1.5 is required to use the new simplified EJB3 technologies. The simplest way to check on your Java environment is to execute the java -version command to ensure that the java executable is in your path and that you are using an appropriate version:
[tmp]$ java -version
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_02-56)
Java HotSpot(TM) Client VM (build 1.5.0_02-36, mixed mode, sharing)
The most recent release of JBoss is available from the JBoss downloads page. After you have downloaded the version you want to install, use the JDK jar tool (or any other ZIP extraction tool) to extract the jboss-4.0.4.zip archive contents into a location of your choice. It does not matter where on your system you install JBoss. Note, however, that installing JBoss into a directory that has a name that contains spaces causes problems in some situations with Sun-based VMs. This is caused by bugs with file URLs not correctly escaping the spaces in the resulting URL. The jboss-4.0.4.tgz archive is a gzipped tar file that requires a gnutar-compatible tar program that can handle the long pathnames in the archive. The default tar binaries on Solaris and OS X do not currently support the long pathnames.
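For example (the paths are illustrative; choose any download location and install directory), extraction with the JDK jar tool or a ZIP utility looks like this:

[tmp]$ cd /opt
[opt]$ jar xf /tmp/jboss-4.0.4.zip
[opt]$ unzip -q /tmp/jboss-4.0.4.zip    # alternative: any other ZIP extraction tool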
JBoss also provides a GUI installer that can simplify the installation process. In addition to the basic installation, the installer allows you to select which services are installed and to secure the JBoss management applications. Using a custom JBoss install created by the installer can greatly simplify the installation and configuration of JBoss.
The installer can be run directly from a web browser using Java Web Start or can be downloaded as an executable JAR file named jboss-4.0.4-installer.jar. On many operating systems, you can run executable JARs by double-clicking them. If your system doesn't support that, you can run the installer directly from the command line:
[tmp]$ java -jar jboss-4.0.4-installer.jar
When you launch the installer, you will be able to select the starting server configuration set, as shown in Figure 1.1, “The JBoss AS installer configuration set selection screen”.
The starting configuration determines which sets of packages are available for installation. The following table describes each of the configuration sets.
After selecting the configuration set, you have the option to further customize the services installed, eliminating unneeded options. When choosing configuration sets, be aware that you cannot add packages not in the configuration set. If you wanted a simple web container (the tomcat configuration) that also had JMS support (the jms configuration), it would be necessary to go to a larger configuration, such as the default configuration, and remove the unwanted packages. Figure 1.2, "The JBoss installer package selection screen" shows the package selection screen.
The following screen (Figure 1.3, “The JBoss installer configuration name screen”) allows for the customization of the server configuration name. Unless you need to create multiple configurations, you should use a configuration name of default. Use of any other configuration requires you to start JBoss with the -c option to specify the configuration JBoss should use.
The installer will then guide you through a few install customization screens. When installing from the tar/zip archive, all JBoss services are left in a developer-friendly state, requiring no authentication to access most JBoss services, including administrative services. The installer gives you a chance to secure those services on the security screen, shown in Figure 1.4, "The JBoss installer security configuration screen". It is recommended that you click to enable security for all services and change the password from the default admin/admin values.
When you install from the installer, you get a smaller install image that is more tuned for your environment. However, the directory structure will be slightly different than when using the tar/zip archive. The examples in the book need to make use of many different configurations and will assume the complete install. Although use of the installer is recommended for normal JBoss use, you'll need to download the complete image to work through all the examples.
Installing the JBoss distribution creates a jboss-4.0.4 directory that contains server start scripts, JARs, server configuration sets and working directories. You need to know your way around the distribution layout to locate JARs for compiling code, updating configurations, deploying your code, etc. Figure 1.5, "The JBoss AS directory structure" illustrates the installation directory of the JBoss server.
Throughout this book we refer to the top-level jboss-4.0.4 directory as the JBOSS_DIST directory. Table 1.2, "The JBoss top-level directory structure" shows the top-level directories and their function.
Table 1.3, "The JBoss server configuration directory structure" shows the directories inside a server configuration and their function. Figure 1.6, "An expanded view of the default server configuration file set conf and deploy directories" below shows the contents of the default configuration file set.
Figure 1.6. An expanded view of the default server configuration file set conf and deploy directories
The conf/jboss-service.xml file defines the core services configurations. The complete DTD and syntax of this file is described, along with the details on integrating custom services, in Section 2.4.2, "JBoss MBean Services".
The jndi.properties file specifies the JNDI InitialContext properties that are used within the JBoss server when an InitialContext is created using the no-arg constructor.
This file configures the Apache log4j framework category priorities and appenders used by the JBoss server code.
This file contains sample server side authentication configurations that are applicable when using JAAS based security. See Chapter 8, Security on JBoss for additional details on the JBoss security framework and the format of this file.
The props directory contains the users and roles property files for the jmx-console.
This file provides the default configuration for the legacy EJB 1.1 CMP engine.
This file provides the default container configurations. Use of this file is covered in Chapter 5, EJBs on JBoss
This file provides a default configuration file for the JBoss CMP engine. See Chapter 11, The CMP Engine for the details of this descriptor. The Hypersonic 1.7.1 embedded database service configuration file sets up the embedded database and related connection factories. The format of JCA datasource files is discussed in Section 7.3, "Configuring JDBC DataSources".
http-invoker.sar contains the detached invoker that supports RMI over HTTP. It also contains the proxy bindings for accessing JNDI over HTTP. This will be discussed in Section 2.6.2.5, "The HttpInvoker - RMI/HTTP Transport". The JCA layer is discussed in Chapter 7, Connectors on JBoss.
The jbossweb-tomcat55.sar directory provides the Tomcat 5.5 servlet engine. The SAR is unpacked rather than deployed as a JAR archive so that the Tomcat configuration files can be easily edited. This service is discussed in Chapter 9, Web Applications. Configuration of JMS destinations is discussed in Chapter 6, Messaging on JBoss.
jbossmq-httpil.sar provides a JMS invocation layer that allows the use of JMS over HTTP.
The jbossmq-service.xml file configures the core JBossMQ JMS service; JMS services are discussed in Chapter 6, Messaging on JBoss. The JMX Console web application provides a view into the MBean server and is discussed in Section 2.3.1, "Inspecting the Server - the JMX Console Web Application". The JMX invoker service is discussed in Section 2.3.4, "Connecting to JMX Using Any Protocol". The properties service is discussed further in Section 10.3, "System Properties Management".
The scheduler-service.xml and schedule-manager-service.xml files are MBean service descriptors that provide a scheduling type of service. This is discussed further in Section 10.7, “Scheduling Tasks”.
The sqlexception-service.xml file is an MBean service descriptor for the handling of vendor specific SQLExceptions. Its usage is discussed in Section 11.11, "Entity Commands and Primary Key Generation". To start the server on a Unix system, run the run.sh script from the bin directory; none of the default ports are in the privileged range, so there is no need to su to root:
[bin]$ sh run.sh
=========================================================================
  JBoss Bootstrap Environment
  JBOSS_HOME: /tmp/jboss-4.0.3
  JAVA: /System/Library/Frameworks/JavaVM.framework/Home//bin/java
  JAVA_OPTS: -server -Xms128m -Xmx128m -Dprogram.name=run.sh
  CLASSPATH: /tmp/jboss-4.0.3/bin/run.jar:/System/Library/Frameworks/JavaVM.framework/Home/lib/tools.jar
=========================================================================
15:19:42,557 INFO [Server] Starting JBoss (MX MicroKernel)...
15:19:42,564 INFO [Server] Release ID: JBoss 4.0.3RC2 (build: CVSTag=Branch_4_0 date=200508131954)
15:19:42,570 INFO [Server] Home URL: file:/private/tmp/jboss-4.0.3/
15:19:42,573 INFO [Server] Library URL: file:/private/tmp/jboss-4.0.3/lib/
15:19:42,604 INFO [Server] Patch URL: null
15:19:42,608 INFO [Server] Server Name: default
15:19:42,627 INFO [Server] Server Home Dir: /private/tmp/jboss-4.0.3/server/default
15:19:42,629 INFO [Server] Server Home URL: file:/private/tmp/jboss-4.0.3/server/default/
15:19:42,634 INFO [Server] Server Data Dir: /private/tmp/jboss-4.0.3/server/default/data
15:19:42,636 INFO [Server] Server Temp Dir: /private/tmp/jboss-4.0.3/server/default/tmp
15:19:42,638 INFO [Server] Server Config URL: file:/private/tmp/jboss-4.0.3/server/default/conf/
15:19:42,640 INFO [Server] Server Library URL: file:/private/tmp/jboss-4.0.3/server/default/lib/
15:19:42,642 INFO [Server] Root Deployment Filename: jboss-service.xml
15:19:42,657 INFO [Server] Starting General Purpose Architecture (GPA)...
15:19:43,960 INFO [ServerInfo] Java version: 1.4.2_05,Apple Computer, Inc.
15:19:43,963 INFO [ServerInfo] Java VM: Java HotSpot(TM) Client VM 1.4.2-38,"Apple Computer, Inc."
15:19:43,970 INFO [ServerInfo] OS-System: Mac OS X 10.3.8,ppc
15:19:45,243 INFO [Server] Core system initialized
...
15:20:42,584 INFO [Server] JBoss (MX MicroKernel) [4.0.3RC2 (build: CVSTag=Branch_4_0 date=200508131954)] Started in 58s:659ms
To start the server with an alternate configuration file set, pass its name to run.sh with the -c option:
[bin]$ ./run.sh -c minimal
...
15:02:02,939 INFO [Server] JBoss (MX MicroKernel) [4.0.3RC2 (build: CVSTag=Branch_4_0 date=200508311658)] Started in 6s:809ms
Confirming fast performance running Apache and MongoDB (even a replicaSet secondary, with a distant primary over the WAN) on the same box, communicating via a unix domain socket.
$Mongo = new \MongoClient(
'mongodb:///tmp/mongodb-27017.sock'
, array(
'replicaSet' => 'rs1'
, 'timeout' => 300000
)
);
$Mdb = $Mongo->DB; // create or open DB
$Mdb->setReadPreference( \MongoClient::RP_NEAREST );
"Postgres core developer Bruce Momjian has blogged about this topic. Momjian states, "Unix-domain socket communication is measurably faster." He measured query network performance showing that the local domain socket was 33% faster than using the TCP/IP stack.". | http://docs.php.net/manual/en/mongo.connecting.uds.php | 2016-08-31T14:23:42 | CC-MAIN-2016-36 | 1471982290634.12 | [] | docs.php.net |
Returns a true division of the inputs, element-wise.
Instead of the Python traditional ‘floor division’, this returns a true division. True division adjusts the output type to present the best answer, regardless of input types.
Notes
The floor division operator // was added in Python 2.2, making // and / equivalent operators. The default floor division operation of / can be replaced by true division with from __future__ import division.
In Python 3.0, // is the floor division operator and / the true division operator. The true_divide(x1, x2) function is equivalent to true division in Python.
Examples
>>> x = np.arange(5) >>> np.true_divide(x, 4) array([ 0. , 0.25, 0.5 , 0.75, 1. ])
>>> x/4 array([0, 0, 0, 0, 1]) >>> x//4 array([0, 0, 0, 0, 1])
>>> from __future__ import division >>> x/4 array([ 0. , 0.25, 0.5 , 0.75, 1. ]) >>> x//4 array([0, 0, 0, 0, 1]) | http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.true_divide.html | 2016-08-31T14:18:18 | CC-MAIN-2016-36 | 1471982290634.12 | [] | docs.scipy.org |
About BlackBerry Bridge
If you're running BlackBerry Device Software 5.0 or later on your BlackBerry smartphone, you can connect your BlackBerry PlayBook tablet to your smartphone to access your smartphone's email, calendars, BlackBerry Messenger, files, and other data directly from your tablet. Once connected, you can also use your smartphone as a wireless remote control for your tablet.
Your tablet connects to your smartphone using Bluetooth technology. To connect, you need BlackBerry Bridge installed on your smartphone. You can download it from the BlackBerry App World storefront.
When your tablet and smartphone are connected, the BlackBerry Bridge icons appear in the BlackBerry Bridge folder on your tablet home screen.
Note: You can configure Splunk to recognize a timestamp anywhere in an event by adding TIME_PREFIX = and MAX_TIMESTAMP_LOOKAHEAD = keys to a [<spec>] stanza in props.conf. Set a value for MAX_TIMESTAMP_LOOKAHEAD = to tell Splunk how far into an event to look for the timestamp. Set a value for TIME_PREFIX = to tell Splunk what pattern of characters to look for to indicate the beginning of the timestamp.
Example:
If an event looks like:
1989/12/31 16:00:00 ed May 23 15:40:21 2007 ERROR UserManager - Exception thrown Ignoring unsupported search for eventtype: /doc sourcetype="access_combined" NOT eventtypetag=bot
To identify the timestamp: May 23 15:40:21 2007
Configure
props.conf:
[source::/Applications/splunk/var/spool/splunk] TIME_PREFIX = \d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} \w+\s MAX_TIMESTAMP_LOOKAHEAD = 44
Note: Optimize the speed of timestamp extraction by setting the value of
MAX_TIMESTAMP_LOOKAHEAD = to look only as far into an event as needed for the timestamp you want to extract. In this example
MAX_TIMESTAMP_LOOKAHEAD = is optimized to look 44 characters into the event .. | http://docs.splunk.com/Documentation/Splunk/3.4.1/Admin/ConfigurePositionalTimestampExtraction | 2012-05-25T22:52:17 | crawl-003 | crawl-003-011 | [] | docs.splunk.com |
to the 2006 LCOG Needs and Issues Inventory Page.
This process will help state and federal agencies understand
the importance of your particular area’s or agency’s
projects.
Background
The
Needs and Issues (N&I) Process is the standardized process
for government agencies, service districts, and non-profit organizations
to submit a notification of upcoming project needs. The State of
Oregon, federal agencies, and some non-profit agencies use the
N&I information to prioritize grant funding opportunities.
This year the process has been tailored to our region by the Lane
Council of Governments.
Action
Required
Submit up to five projects
for review to the Lane Economic Committee (LEC) using the online Needs &
Issues Application. Please review the evaluation
criteria ( PDF)*
before filling in the form. Detail each project’s
description, expected outcome, impacted service area or population, anticipated
timeline, economic development impact, community impact, costs and funding
sources to date.
Please
also complete your own Local Priority
List online. This list should be
reviewed and approved by your respective local elected officials.
Upon completion of the application and the Local Priority List,
your projects will be reviewed by the LEC. Please
complete and submit both the application and local priority list
by July 1, 2006.
Alternate Submission Method
For those that prefer another method of submitting the Needs
& Issues Application, a Word
version of document ( DOC)
is available for download. The Local
Priority List ( DOC)
is also available in Word. Simply
fill out the form using Microsoft Word and send the completed document
to
[email protected].
Printed versions can be mailed to: Lane Council of Governments; Needs
and Issues Applications; 99 E. Broadway, Suite 400; Eugene, OR 97401-3111.
Should you have any questions about this application, please feel
free to contact:
Fiona Gwozdz or Colin Crocker
LCOG Needs and Issues Interns
(541) 682-6573
[email protected]
Steve Dignam
(541) 682-7450
[email protected]
Back
to Planning Services Page
*PDF
files require Acrobat
Reader.
© 1999-2008 Lane Council of Governments
Disclaimer | Contact
Webmaster | http://docs.lcog.org/ni/default.htm | 2012-05-25T22:57:47 | crawl-003 | crawl-003-011 | [] | docs.lcog.org |
Sample Data Specifications for 1.6
From Joomla! Documentation
Sample Data: Outline of Structure
Goals:
- To introduce users to Joomla and the Joomla community
- To provide an introduction to Joomla features
- To provide useful information for upgrading users
- To provide a basis for testing and troubleshooting
Audiences:
- New users with no prior experience/probably one click installs
- Experienced web developers using or testing Joomla! for the first time
- 1.5 or 1.0 users moving into 1.6
- People wanting a fresh installation for trouble shooting or template design
Menu Structure Top menu&
- User types -(Content page for each)
- Beginners
- Upgraders
- Experienced web masters or designers
(submenus for each?)
Main Menu, overall navigation, show on all pages
- Home (should use only global parameters )(possibly use featured content, now that it isn't front page it should be less confusing)
- About Joomla
- FAQ (nested category structure)
- Links (to j.org and selected sites)
- Feeds
- Using Joomla
- The front end For site visitors For content creators and managers
- The administrator Global configuration, what you need to know
- Layouts
- Security
- Performance
- Help system and documentation
- ACL
- Parameters Global Parameters Menu Parameters Item parameters
- Trouble shooting and getting help
- Learn More
- Extensions
- Components
Content page describing what a component is
- Submenu:Each Core Component description page
- Submenu:
Each core view
- Extending the core,
- adding new components
- Modules: Content page describing what modules are and how to place them on a page
- Submenu: Each module (show on a content page using loadposition. Describe the function of the module.)
- The Module Manager
- Extending the core, adding new modules
- Templates Content page on what a template is
- Submenu: Each template showing front page
- Description of each template (e.g. Template parameters, something special)
- Typography for each template
- The Template Manager (changing the default etc) ***Extending the core, adding new templates or modifying core templates creating your own templates
- Languages Content Page about languages with links to language pack source ***submenu:
- The Language Manager
- Extending the core,
- adding new languages,
- modifying core language files
- Extending the core, creating a new translation
- Plugins
Content Page about plugins
- Submenu: (each page should have a brief description, list of core plugins in that category and state whether they are on or off by default)
Content plugins {show examples of how to use in content)
- System Plugins
- Editor Plugins
- Extended Editor Plugins
- User plugins
- Authentication Plugins
- The Plugin Manager
- Extending the core, adding new plugins
- User menu:
- Your details
- submit content
- submit weblink
- Administrator
Modules:
Main menu, top menu--all pages login home page and a few others Others as needed plus the display in the modules part of the menus
Content Pages:
Welcome What can you do with Joomla? A framework and a CMSlanding page for each group of users Joomla Community Free Software License | http://docs.joomla.org/Sample_Data_Specifications_for_1.6 | 2012-05-25T22:38:32 | crawl-003 | crawl-003-011 | [] | docs.joomla.org |
Provides indexed access to the record buffers in the internal cache.
property Buffers [Index: Integer]: TRecordBuffer;
__property TRecordBuffer Buffers[int Index];
All TDataSet descendants except for unidirectional datasets maintain an internal cache of records from the underlying database table. For example, if the dataset is used to populate a data-aware grid, the cache includes a record for each row in the grid.
For datasets that cache records, each entry in Buffers points to a record buffer in the internal cache. The BufferCount property specifies the total number of record buffers in the cache. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DB_TDataSet_Buffers.html | 2012-05-25T22:38:31 | crawl-003 | crawl-003-011 | [] | docs.embarcadero.com |
destructor Destroy; override;
Do not call Destroy directly in an application. Usually destruction of datasets is handled automatically by Delphi. If an application creates its own instances of a dataset, however, and does not assign an Owner that is responsible for freeing the dataset, then the application should call Free, which checks that the dataset reference is not nil before calling Destroy. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DB_TDataSet_Destroy.html | 2012-05-25T23:02:52 | crawl-003 | crawl-003-011 | [] | docs.embarcadero.com |
Implementations of this interface are used for getting a transaction propagation context at the client-side. We need a specific implementation of this interface for each kind of DTM we are going to interoperate with. (So we may have 20 new classes if we are going to interoperate with 20 different kinds of distributed transaction managers.) The reason for having the methods in this interface return Object is that we do not really know what kind of transaction propagation context is returned.
TransactionPropagationContextImporter
public Object getTransactionPropagationContext()
nullif the invoking thread is not associated with a transaction.
public Object getTransactionPropagationContext(Transaction tx)
nullif the argument is
nullor of a type unknown to this factory. | http://docs.jboss.org/jbossas/javadoc/4.0.1-sp1/transaction/org/jboss/tm/TransactionPropagationContextFactory.html | 2012-05-25T23:05:58 | crawl-003 | crawl-003-011 | [] | docs.jboss.org |
In addition to tag methods, the HTMLParser class provides some additional methods and instance variables for use within tag methods.
<PRE>element. The default value is false. This affects the operation of handle_data() and save_end().
<A>tag with the same names. The default implementation maintains a list of hyperlinks (defined by the
HREFattribute for
<A>tags) within the document. The list of hyperlinks is available as the data attribute anchorlist.
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://docs.python.org/release/2.3.2/lib/html-parser-objects.html#l2h-3984 | 2012-05-25T16:07:29 | crawl-003 | crawl-003-011 | [] | docs.python.org |
To forward spam messages, enter an e-mail address in the form of [email protected] . If the address is located on the same domain, you can omit the domain and only enter the User ID.
Important: If you have chosen the Forward To option, be aware of the Default Max Mailbox Size limit set in the Domain Properties. If you receive a large quantity of spam, this limit could be exceeded for the mailbox that stores spam. Make sure that you delete messages from this mailbox on a regular basis. You may also want to set up a Full Mailbox Notify Address for e-mail to be sent to when a mailbox is almost full. For more information, see Setting Domain Properties.
If you want the spam to be sent to a mailbox, place a hyphen between the user and sub-mailbox name, such as root- [email protected]. If the account is located on the same mail domain, you can omit the domain and enter root-spam.
Important: If you enter an address with a sub-mailbox that does not exist, the sub-mailbox is created only if Create is selected in the Sub-Mailbox Creation options of the Domain Properties. For more information, see Setting Domain Properties. | http://docs.ipswitch.com/_Messaging/IMailServer/v11.01/Help/Admin/forward_to_example.htm | 2012-05-26T00:13:31 | crawl-003 | crawl-003-011 | [] | docs.ipswitch.com |
Availability: Tk.
The turtle module provides turtle graphics primitives, in both an object-oriented and procedure-oriented ways. Because it uses Tkinter for the underlying graphics, it needs a version of python installed with Tk support.
The procedural interface uses a pen and a canvas which are automagically created when any of the functions are called.
The turtle module defines the following functions:
fill(1)before drawing a path you want to fill, and call
fill(0)when you finish to draw the path.
If extent is not a full circle, one endpoint of the arc is the current pen position. The arc is drawn in a counter clockwise direction if radius is positive, otherwise in a clockwise direction. In the process, the direction of the turtle is changed by the amount of the extent.
This module also does
from math import *, so see the
documentation for the math module for additional constants
and functions useful for turtle graphics.
For examples, see the code of the demo() function.
This module defines the following classes: | http://docs.python.org/release/2.4.4/lib/module-turtle.html | 2012-05-25T23:13:14 | crawl-003 | crawl-003-011 | [] | docs.python.org |
Salesforce is disabling TLS 1.0: see here
CodeScan connects to Salesforce to download the metadata (
ant download) and to run the unit tests. When TLS 1.0 is disabled and your version of Java does not support TLS 1.1, these functionality will fail with an error such as
TLS 1.0 has been disabled in this organization. Please use TLS 1.1 or higher when connecting to Salesforce using https..
Depending what version of Java you are running CodeScan/Ant analysis with you may have to make some changes:
Java 8
Works without any modifications. If you do experience any problems please contact support.
Java 7
When using ant, please add the following to your ANT_OPTS environment variable:
ANT_OPTS="-Dhttps.protocols=TLSv1.1,TLSv1.2".
Java 6
Java 6 is no longer supported by CodeScan. If you are running an older version of CodeScan that still supports Java 6:
Java 6 (1.6) update 111 and higher: Set environment variable
ANT_OPTS="-Dhttps.protocols=TLSv1.1,TLSv1.2" when running the ant command
Java 6 (1.6) before update 111: Does not support TLS 1.0. You can continue to use CodeScan for static analysis only
Please sign in to leave a comment. | https://docs.codescan.io/hc/en-us/articles/360012126771-TLS-1-0-has-been-disabled-in-this-organization | 2019-11-12T04:28:57 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.codescan.io |
0100 Marco Setiadi
General information
- Submitterʼs name
- Marco Setiadi
- Submitted on behalf of
- Individual
About the submission
- Please select the industry your submission is in relation to. If required, you may select multiple industries.
-.
Recruitment professionals travel from all over the world to work in Australia with life time’s worth of savings. They risk everything by leaving their secure jobs, comfort of their home and families to travel to the other side of the world and give 100% in order to build a career in Recruitment, knowing that if you do not work hard then you will have no one to fall back on and support you - no family or home. This is a huge reason to the success of overseas Recruiters, they risk it all to develop a career which can support a life in Australia for themselves and their future families.
As a Recruitment Consultant we work with Australian candidates & Australian Businesses to find the perfect fit they have been unable to find without our support. Removing Recruitment from the list would severely affect the number of Australians finding employment and Australian Businesses finding local talent.
Australia's own Recruiters have the safety net of families and secure homes if it doesn't work out, they do not find it imperative to be successful in the same way as someone who has risked it all and moved across the world with a lifetime worth of savings and a desire to succeed. When we travel from overseas we MUST be successful, there is no lifeline. Hence why overseas Recruiters support heavily to the industry and the growth of the economy in the way they do.
Overseas Recruiters spend all of their money in Australia hugely contributing to the growth of the Economy. Family and friends then come to Australia from overseas to visit the country for long periods of time, in doing so they bring large amounts of savings with them which is again spent in Australia supporting the Economy. Whenever we do return home for a holiday to our home countries we speak about how amazing Australia is - word of mouth really is the greatest promotion tool.’ [39941|100766] | https://docs.employment.gov.au/0100-marco-setiadi | 2019-11-12T02:50:52 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.employment.gov.au |
A compound statement is a set of statements enclosed within a begin and end statement pair. Except for the begin, each statement must be terminated with a semi-colon (;). Execution of a begin statement initiates a new programming block which can contain its own set of declarations followed by one or more statements. Except for the most trivial procedures and functions, all will be defined within an outer compound statement block.
Any needed declarations must be made before any other statements. Variables are declared first, followed by any cursor declarations, and lastly exception handler declarations. Specifics are provided below in the Declarations section.
In standard SQL, a programming block can be specified to be atomic which means that either all of the database modifications made within the block succeed or none do. An atomic block cannot contain a commit or rollback statement (nor can it contain a start transaction) as that would violate its operational purpose. If the outermost block (i.e., begin statement) specifies atomic then any error that occurs will automatically cause all prior changes made by the stored procedure to be rolled back. If no errors occur, then the changes can be committed by the program or procedure that executed the call to the procedure.
If you want your stored procedure to commit its own changes (i.e., a transactional stored procedure) then under standard SQL it needs to issue its own start transaction and commit statements. This technique, however, is not particularly useful when one wants to be assured that the stored procedure is guaranteed to be deadlock free.
In order to guarantee that a stored procedure is deadlock free, it needs to ensure that all needed locks are acquired at the beginning of the procedure. RDM SQL has extended the start transaction statement so that the tables that need to be locked can be explicitly specified. However, a better solution that does not require the user to have explicit control over the needed locks is to use an alternative to begin atomic called begin transaction. This will start a transaction and automatically acquire all of the locks needed by the SQL statements contained in the stored procedure. The associated (i.e., final) end statement will automatically commit those changes.
The above stored procedure text is stored in file
begin_trans.sql which is compiled and executed as shown in the following
rdm-sql script.
The above sequence shows that the execution of the
begin_trans stored procedure was executed as a complete transaction. Note that the
acctmgr row with
mgrid "SLY" remained in the table after the rollback was issued.
However, in the following sequence, a start transaction is executed before the call to begin_trans to add "CLINT" to the
acctmgr table so that the changes made by the procedure are part of the outer transaction so that the rollback removes CLINT from the table. | https://docs.raima.com/rdm/14_1/sqlpl_compoundstmt.html | 2019-11-12T04:19:57 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.raima.com |
SQL Server Guides
The following guides are available. They discuss general concepts and apply to all versions of SQL Server, unless stated otherwise in the respective guide.
Always On Availability Groups Troubleshooting and Monitoring Guide
Index design Guide
Memory Management Architecture Guide
Pages and Extents Architecture Guide
Post-migration Validation and Optimization Guide
Query Processing Architecture Guide
SQL Server Transaction Locking and Row Versioning Guide
SQL Server Transaction Log Architecture and Management Guide
Thread and Task Architecture Guide | https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-guides | 2017-12-11T02:10:58 | CC-MAIN-2017-51 | 1512948512054.0 | [array(['../includes/media/yes.png', 'yes'], dtype=object)
array(['../includes/media/yes.png', 'yes'], dtype=object)
array(['../includes/media/yes.png', 'yes'], dtype=object)
array(['../includes/media/yes.png', 'yes'], dtype=object)] | docs.microsoft.com |
Pivot Painter Tool 2.0
The Pivot Painter 2.0 MAXScript stores the pivot and rotational information in the model's textures. Those textures can then be referenced inside of Unreal's shader system to create interactive effects.
The motion shown in the sample video is procedurally generated in real-time using vertex shaders. The Pivot Painter material function forms motion inheritance information for each of the model's leaves and branches. Each element is animated using its individual pivot point, direction vector, bounds size and inherited motion. The results are fluid and realistic.
Creating these types of materials has been made much simpler with the addition of the Pivot Painter 2's Material Functions . Sample content, like that provided in Content Examples also helps by showing how an animation, like the one featured above, can be generated. Retrieving sub-object pivot points is now simply a matter of processing a mesh in 3D Studio Max with the Pivot Painter script, importing the files and creating a material using the available Pivot Painter functions. The sample foliage motion material function provides support for hierarchies up to 4 levels deep and 30,000 model elements.
Creating motion this way has its benefits. A model processed using this technique uses only one additional UV channel more than a standard Static Mesh, but its animations are far less expensive than skeletal animations because they are calculated in real time. Vertex shader instruction counts are generally less of a performance concern on the graphical side than pixel instruction counts, due to the number of vertices on a model generally being significantly lower than the number of pixels that the model draws.
If you'd like to explore some of the examples demonstrated in this page, you can download the Content Examples project from the Epic Games Launcher and open the PivotPainter2 map.
What's New for Pivot Painter 2.0
With Pivot Painter 2.0's release, you'll notice that there have been some improvements and changes to the MAXScript. While some options have been removed (Hierarchy Painter) or renamed (Per Object Painter to Vertex Alpha Painter), this is an improvement to the overall workflow to simplify the process of creating these types of detailed assets. This is all in service of expanding the capabilities of Pivot Painter 2 to get even better results than before, ultimately giving you the widest range of options when developing your own content! Read on below to read about the additional improvements that have been added.
Workflow Improvements
The rigging pre-processing step is now done through 3DS Max's standard Link tool. Simply model your tree as separate, logical elements like leaves and branches. Ensure that their pivots are ideally located and oriented (X-axis down the object's length) and then link them together as one would in a traditional rig.
Ultimately, this makes producing complex foliage far simpler. You can rig one branch, duplicate it, and then place it. A re-parented branch will retain its children's hierarchical arrangement as well.
Once your tree is rigged and modeled, simply select any element within the tree and under the Render Options sections, press the Process The Selected Object Hierarchy button. The script will automatically traverse the selected element's hierarchy to find its root and then go back up the chain to collect and process all of its children, finally, rendering the textures using whatever data was chosen from the available Render Options.
This method supports both individual and linked elements at the same time. In the case of combined grass and tree models, you would select every blade at once and then an element from the tree before pressing the Process Selected Object Hierarchy button.
Texture Coordinate Control
You now have control over which UV set is written to, enabling you to combine this system with others like the Vertex Animation tools . Also, the preferences chosen in Pivot Painter 2.0 are noted in the final output texture name as a helpful reminder.
The automated naming convention is as follows:
[MeshName]_rgb_[Current Texture RGB Choice]_a_[Alpha Choice]_[UV Channel]
An example final output would look something like this.
ExampleMesh_rgb_PivotPos_a_ParentIndex_UV_2
Extensibility
The processing and rendering code has been thoroughly abstracted, resulting in a minimal amount of effort needed to add new rendering options in the future.
New Rendering Options
As part of the improvements with the MAXScript, there is a new bit-shifting algorithm used behind the scenes that stores integers as float data. This enables the hierarchy depth and max object count to increase from 3,000 to 30,000, which is vital for representing complex foliage.
This particular tree asset contains 14,431 sub-models.
16-bit RGB:
Pivot Point Location
Origin Positon
Origin Extents
8-bit RGB:
Object Basis Vectors (one vector at a time)
16-bit Alpha:
Parent Index (Int as float)
Number of Steps From Root
Random 0-1 Value Per Element
Bounding Box Diameter
Selection Order (Int as float)
Normalized 0-1 Hierarchy Position
Object X,Y, Z Bound Lengths From Pivot Location
Parent Index (Float - Up to 2048)
8-bit Alpha:
Normalized 0-1 Position in the Hierarchy
Random 0-1 Value Per Element
Object X,Y, Z Bounds Length (Up To 2048)
Recreating Bounding Boxes
When using 3DS Max, the model's bounding boxes are expanded as its sub-object geometry shifts during the modeling process. The object's bounding box is left no longer being oriented or aligned to the mesh during this process. The same thing can happen when you alter the meshes' pivot transform. To counteract this, the Recreate Bounding Boxes section will replace the bounding box of the selected meshes with one that is properly aligned and oriented to the meshes pivot point, which is very useful for the scripts other data gathering features (Vertex Alpha Painter and Bound information).
On the left, the model's pivot transform has been altered but the bounding box is not aligned and oriented properly. After using the Process Selected Objects button under the Recreate Bounding Boxes section, the rightmost model's bounding box has been properly aligned and oriented.
Merge Selected Model's Normals
The Merge Selected Model's Normals feature averages out normals on multiple models where open edge vertices happen to lay on top of each other. This option will fix the normal seam issues that occur when a single model is broken into multiple pieces specifically for use with Pivot Painter.
3DS Max Version and Script Information
This tool has been tested with 3DS Max 2015 and 2016 currently. Other versions of 3DS Max have not been specifically tested, so be aware that you may run into issues when using these versions of 3DS Max.
To install the MAXScript, just drag-and-drop it from its location in [UE4Directory]/Engine/Extras/3dsMaxScripts/PivotPainter2.ms right into the 3DS Max viewport and the script will launch itself.
If you find that you are using this script a lot, you can always add it to one of the tool bars or quad menus. If you are unfamiliar how to do this the Autodesk site has a very detailed walk through that will explain the process.
3DS Max Unit Setup
Before you begin to use the tool you will need to first ensure that the units of measurement 3DS Max uses is set up to correctly match the units of measurement that UE4 uses. This way, you can ensure that the data the tool exports from 3DS Max will work in the same manner inside UE4. Since UE4 uses centimeters for its default unit of measurement, you will need to ensure that 3DS Max uses this as well. To change this setting in 3DS Max, do the following:
Open up 3DS Max and from the Main Toolbar select Customize > Unit Setup.
Next, click on the System Unit Setup button and under the System Unit Scale section. Use the drop-down, changing the setting from inches to centimeters. Then click the OK button.
Finally, change the Display Unit Scale to Generic Units and then press the OK button.
Importing Assets
When importing your assets, there are a few things you should be aware of to get the best results. Follow the setups for your Static Meshes* and textures** below.
Static Meshes
In the Import Options window, make sure to uncheck Skeletal Mesh and to enable the option for Combined Meshes.
It is recommended to do a "full" reimport (overwrite the previous model) whenever updating, instead of using the Reimport option. This is the safest method to prevent any material issues.
Textures
Once you import your generated textures created with Pivot Painter 2.0, make sure to open the Texture asset and set the following:
Writing Shaders
All of the textures rendered from Pivot Painter provide us with the tools to create simple mathematical representations of our trees' sub-object (branches and leaves). With that information, we can then start to approximate each sub-asset's reaction to a wind source enabling us to create very detailed elemental hierarchies to produce naturalistic motion.
If you desire to write your own shader, you will need to know that some of the data types need to be unpacked. These unpacking functions use the following naming convention:
ms_PivotPainter2_*
The PivotPainter2FolaigeShader Material Function, as it stands, can operate as a useful Material in itself, if it fits your needs, or it can act as the scaffolding for your own version of a foliage shader. For additional information about the Material Functions available for Pivot Painter, visit the Pivot Painter Material Functions reference page.
Pivot Painter 2 Foliage Shader
To implement Pivot Painter 2.0 animation within a Material is now fairly simple, with a lot of the back-end work already set up for you! Simply insert the PivotPainter2FoliageShader into your Material's code before making the final connection to the Material Attributes input pin. You'll also want to make sure to disable Tangent Space Normals in the Material Details panel.
This Material Function assumes that you've processed the asset with Pivot Painter's default UV and texture settings.
Once you create a Material Instance, you will have access to the available Wind Settings that enable you to control the sub-object hierarchy. In order to make use of these, you will first need to enable the Wind Setting for the level(s) that you want to affect.
When a Wind Setting is enabled, the available options for that hierarchy depth will be visible and editable.
You'll now have access to the Shared Wind Settings that allows you to assign the rendered textures from Pivot Painter.
With the correct textures assigned, you can now open the Wind Settings groups and enable the desired features. Each Wind Setting group controls the meshes at a specific hierarchical depth. In the example of a tree, Wind Setting 1 would control the trunk, the next group, the branches and so on.
Wind Turbulence and Gusts
This should enable you to produce fairly nice foliage animations. If they aren't tailored to your liking, you can create a new wind turbulence and gust magnitude texture. To do this, the RGB values of a vector are used to offset the wind vector and the alpha is then used to control the strength of the wind. These two channel sets are each sampled separately in the shader.
Use a texture to control vector offsets and strength of the wind for variety.
Optimizations
It is helpful to understand how the material works when attempting to optimize the results. The PivotPainter2FoliageShader Material Function has been designed in a generic fashion. It executes the same wind reaction code 4 times. Each time another set of settings is used it is done so on a hierarchy level that is one step deeper. You can see how this works by opening the PivotPainter2FoliageShader Material Function.
Should you open the PivotPainter2FolaigeShader Material Function, you will find this network of shader functionality for the Wind Hierarchy Depths that is exposed when you create a Material Instance.
Now, you can use the sections below to understand optimizations techniques you can use.
Material Instances
When all the shader features are enabled you can produce a fairly expensive Material. The shader can be optimized by disabling the Wind Settings groups (or hierarchy levels of a tree) that do not need to animate. For instance, you could simply enable Wind Setting 4 to animate the model's leaves on their own. Grouping wind reaction settings by hierarchy depth mean that the model's elements should be grouped in the same fashion. All leaves should be "X" number of steps away from the root object. All branches that behave similarly should be grouped together as well. For this reason, it is recommended to set up a "parent" or "master" Material Instance for assets like trees, where you can easily have multiple hierarchies.
The base Material is instanced to create your "master" Material Instance, which is used to define all of your wind settings all in one place. Then use additional Material Instances to define the pixel shader components of your hierarchy, like the base color textures for trunk, branches, and leaves. This way, you can also disable and optimize any properties not needed for that hierarchy depth.
As an example, the leaf Material Instance needs to simulate the trunk, both sets of branches, and the leaves themselves so that they properly move with the rest of the tree. The trunk, on the other hand, only needs to simulate the trunk animation, which means that you can disable the other hierarchies for the leaves and branches because they are not needed.
Bend Normals
Another optimization to consider, especially if you're writing your own shader or plan to edit this one is, the method that we use to bend normals. The PivotPainter2FoliageShader Material Function performs actual rotations to the surface Normals when the user chooses to update the normals within the shader. This could be done more cheaply (with the risk of artifacts) using custom UVs and the BlendAngleCorrectedNormals Material Function.
Additional Use Example
Using the positional information and the hierarchical depth, you can create your own procedural growth or building animations like the ones below. You can also take a look at the Content Examples map PivotPainter2 for an example of how to set these types of examples up.
These examples can be found in the Content Examples project available from the Learn tab of the Epic Games Launcher. Open up the PivotPainter2.umap to explore these and other examples.
Other Notes
The Wind Actor is now deprecated in UE4 and can be replaced using Material Parameter Collections and Blueprints. You can create a Blueprint that updates the Material Parameter Collection with a float 4 Wind Actor Parameter. That Material Parameter Collection can then be referenced within a given foliage material in place of the Wind Actor.
Troubleshooting
If the models seem to be animating poorly, attempt the following solutions:
Reimport the assets.
Check the model and texture settings.
Make sure that the Materials have Tangent Space Normals disabled.
Non-uniformly scaled meshes within 3DS Max will return incorrect transform values and will result in broken results. If this appears to be the case attempt a "Reset XForm" operation. It's always safer to non-uniformly scaled meshes via the sub-objects rather than at the object-level. It's recommended to do this before you start duplicating, parenting and placing model elements.
Some mesh warping appears due to optimizations are done within the shader. To properly calculate wind's effect on a branch, we would need to calculate the wind's effect on each leaf's pivot and its vector before executing each mesh rotation. Every rotation and offset operation is expensive, so we perform the mesh rotation for each element within its local space (before the other rotations are factored in). The resulting mesh offset from the rotation is then added to the other transformations. The result is less accurate but far cheaper. Sometimes the reduced accuracy will cause the foliage scale to alter a little based on the combination of offsets. So if this occurs you can try the following:
Reduce the intensity of the wind simulation.
Animate fewer hierarchical levels.
Rotate the mesh so that the coincidence occurs less frequently. | https://docs.unrealengine.com/latest/INT/Engine/Content/Tools/PivotPainter/PivotPainter2/index.html | 2017-12-11T02:09:22 | CC-MAIN-2017-51 | 1512948512054.0 | [array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/CE_PivotPainter2Map.jpg',
'CE_PivotPainter2Map.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/ScriptMotion_UI.jpg',
'ScriptMotion_UI.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/BoundingBox.jpg',
'BoundingBox.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/PivotPainter2FoliageShader.jpg',
'PivotPainter2FoliageShader.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/EnableWindSettings.jpg',
'EnableWindSettings.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/Parameter.jpg',
'Parameter.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/WindParameters.jpg',
'WindParameters.png'], dtype=object)
array(['./../../../../../../images/Engine/Content/Tools/PivotPainter/PivotPainter2/InstanceParents.jpg',
'InstanceParents.png'], dtype=object) ] | docs.unrealengine.com |
About multi-echo fMRI
What is multi-echo fMRI?
Most echo-planar image (EPI) sequences collect a single brain image following a radio frequency (RF) pulse, at a rate known as the repetition time (TR). This typical approach is known as single-echo fMRI. In contrast, multi-echo (ME) fMRI refers to collecting data at multiple echo times, resulting in multiple volumes with varying levels of contrast acquired per RF pulse.
The physics of multi-echo fMRI
Multi-echo fMRI data is obtained by acquiring multiple echo times (commonly called TEs) for each MRI volume during data collection. While fMRI signal contains important neural information (termed the blood oxygen-level dependent, or BOLD signal, it also contains “noise” (termed non-BOLD signal) caused by things like participant motion and changes in breathing. Because the BOLD signal is known to decay at a set rate, collecting multiple echos allows us to assess non-BOLD.
The image below shows the basic relationship between echo times and the image acquired at 3T (top, A) and 7T (bottom, B). Note that the earliest echo time is the brightest, as the signal has only had a limited amount of time to decay. In addition, the latter echo times show areas in which is the signal has decayed completely (‘drop out’) due to inhomogeneity in the magnetic field. By using the information across multiple echoes these images can be combined in an optimal manner to take advantage of the signal in the earlier echoes (see Optimal combination).
Adapted from Kundu et al. (2017).
In order to classify the relationship between the signal and the echo time we can consider a
single voxel at two timepoints (x and y) and the measured signal measured at three different echo times -
.
Adapted from Kundu et al. (2017).
For the left column, we are observing a change that we term
- that is a change
in the intercept or raw signal intensity.
A common example of this is participant movement, in which the voxel (which is at a static
location within the scanner) now contains different tissue or even an area outside of the brain.
As we have collected three separate echoes, we can compare the change in signal at each echo time,
.
For
we see that this produces a decaying curve.
If we compare this to the original signal, as in
we see that there is no echo time dependence, as the final plot is a flat line.
In the right column, we consider changes that are related to brain activity.
For example, imagine that the two brain states here (x and y) are a baseline and task activated state respectively.
This effect is a change in in
which is equivalent
to the inverse of
.
We typically observe this change in signal amplitude occurring over volumes with
the hemodynamic response, while here we are examining the change in signal over echo times.
Again we can plot the difference in the signal between these two states as a function of echo time,
finding that the signal rises and falls.
If we compare this curve to the original signal we find
that the magnitude of the changes is dependent on the echo time.
For a more comprehensive review of these topics and others, see Kundu et al. (2017).
Why use multi-echo?
There are many potential reasons an investigator would be interested in using multi-echo EPI (ME-EPI). Among these are the different levels of analysis ME-EPI enables. Specifically, by collecting multi-echo data, researchers are able to:
Compare results across different echoes: currently, field standards are largely set using single-echo EPI. Because multi-echo is composed of multiple single-echo time series, each of these can be analyzed separately and compared to one another.
Combine the results by weighted averaging: Rather than analyzing single-echo time series separately, we can combine them into an “optimally combined time series”. For more information on this combination, see Optimal combination. Optimally combined data exhibits higher SNR and improves statistical power of analyses in regions traditionally affected by drop-out.
Denoise the data based on information contained in the echoes: Collecting multi-echo data allows access to unique denoising methods. ICA-based denoising methods like ICA-AROMA (Pruim et al. (2015)) have been shown to significantly improve the quality of cleaned signal. These methods, however, have comparably limited information, as they are designed to work with single-echo EPI.
tedana is an ICA-based denoising pipeline built especially for
multi-echo data. Collecting multi-echo EPI allows us to leverage all of the information available for single-echo datasets,
as well as additional information only available when looking at signal decay across multiple TEs.
We can use this information to denoise the optimally combined time series.
Considerations for ME-fMRI
Multi-echo fMRI acquisition sequences and analysis methods are rapidly maturing. Someone who has access to a multi-echo fMRI sequence should seriously consider using it.
Costs and benefits of multi-echo fMRI
The following are a few points to consider when deciding whether or not to collect multi-echo data.
Possible increase in TR
The one difference with multi-echo is a slight time cost. For multi-echo fMRI, the shortest echo time (TE) is essentially free since it is collected in the gap between the RF pulse and the single-echo acquisition. The second echo tends to roughly match the single-echo TE. Additional echoes require more time. For example, on a 3T MRI, if the T2* weighted TE is 30ms for single echo fMRI, a multi-echo sequence may have TEs of 15.4, 29.7, and 44.0ms. In this example, the extra 14ms of acquisition time per RF pulse is the cost of multi-echo fMRI.
One way to think about this cost is in comparison to single-echo fMRI. If a multi-echo sequence has identical spatial resolution and acceleration as a single-echo sequence, then a rough rule of thumb is that the multi-echo sequence will have 10% fewer slices or 10% longer TR. Instead of compromising on slice coverage or TR, one can increase acceleration. If one increases acceleration, it is worth doing an empirical comparison to make sure there isn’t a non-trivial loss in SNR or an increase of artifacts.
Weighted averaging may lead to an increase in SNR
Multiple studies have shown that a weighted average of the echoes to optimize
T2* weighting, sometimes called “optimally combined,” gives a reliable, modest
boost in data quality.
The optimal combination of echoes can currently be calculated in several
software packages including AFNI, fMRIPrep, and tedana.
In tedana, the weighted average can be calculated with
tedana.workflows.t2smap_workflow().
If no other acquisition compromises are necessary to acquire multi-echo data,
this boost is worthwhile.
Consider the life of the dataset
If other compromises are necessary, consider the life of the data set. If data is being acquired for a discrete study that will be acquired, analyzed, and published in a year or two, it might not be worth making compromises to acquire multi-echo data. If a data set is expected to be used for future analyses in later years, it is likely that more powerful approaches to multi-echo denoising will sufficiently mature and add even more value to a data set.
Other multi-echo denoising methods, such as MEICA, the predecessor to tedana, have shown the potential for much greater data quality improvements, as well as the ability to more accurately separate visually similar signal vs noise, such as scanner based drifts vs slow changes in BOLD signal. More powerful methods and associated algorithms are still being actively developed. Users need to have the time and knowledge to look at the denoising output from every run to make sure denoising worked as intended.
You may recover signal in areas affected by dropout
Typical single echo fMRI uses an echo time that is appropriate for signal
across most of the brain.
While this is effective, it also leads to drop out in regions with low
values.
This can lead to low or even no signal at all in some areas.
If your research question could benefit from having improved signal
characteristics in regions such as the orbitofrontal cortex, ventral temporal
cortex or the ventral striatum then multi-echo fMRI may be beneficial.
Consider the cost of added quality control
The developers of
tedana strongly support always examining data for quality
concerns, whether or not multi-echo fMRI is used.
Multi-echo data and denoising are no exception.
For this purpose,
tedana currently produces basic diagnostic images by
default, which can be inspected in order to determine the quality of denoising.
See Outputs of tedana for more information on these outputs.
Acquiring multi-echo data
Available multi-echo fMRI sequences
We have attempted to compile some basic multi-echo fMRI protocols in an OSF project. The parameter choices in these protocols run and seem reasonable, but they have not been optimized for a specific situation. They are a good starting point for someone designing a study, but should not be considered canonical. If you would like to use one of them, please customize it for your own purposes and make sure to run pilot scans to test your choices.
Siemens
For Siemens users, there are two options for Works In Progress (WIPs) Sequences.
- The Center for Magnetic Resonance Research at the University of Minnesotaprovides a custom MR sequence that allows users to collect multiple echoes(termed Contrasts). The sequence and documentation can be found here.By default the number of contrasts is 1, yielding a single-echo sequence.In order to collect multiple echoes, increase number of Contrasts on theSequence Tab, Part 1 on the MR console.
- The Martinos Center at Harvard also has a MR sequence available, with thedetails available here. The number of echoes can be specified on theSequence, Special tab in this sequence.
GE
For GE users, there are currently two sharable pulse sequences:
Multi-echo EPI (MEPI) – Software releases: DV24, MP24 and DV25 (with offline recon)
- Hyperband Multi-echo EPI (HyperMEPI) - Software releases: DV26, MP26, DV27, RX27(here hyperband can be deactivated to do simple Multi-echo EPI – online recon)
Please reach out to the GE Research Operation team or each pulse sequence’s author to begin the process of obtaining this software. More information can be found on the GE Collaboration Portal
Once logged in, go to Groups > GE Works-in-Progress you can find the description of the current ATSM (i.e. prototypes).
Philips
For Philips users, sequences can be defined using product software.
Multi-echo EPI (ME-EPI) can be acquired using the product software and can be combined with SENSE parallel imaging and MultiBand. The combination with MultiBand requires a SW release >R5.1 and MultiBand functionality to be present. No default ME-EPI are provided, but existing single-echo EPI sequences from the BOLD fMRI folder can be modified into multi-echo sequences by increasing the number of echoes. As a starting point to develop a 3 echo EPI protocol start by opening the default fMRI protocol and modify the following: increase number of echoes to 3 on the Contrast tab, set SENSE = 3, MB-SENSE = 3, set to 3mm isotropic voxels and adjust TEs to your preference.
Other available multi-echo MRI sequences
In addition to ME-fMRI, other MR sequences benefit from acquiring multiple echoes, including T1-weighted imaging (MEMPRAGE) and susceptibility weighted imaging. While most of these kinds of sequences fall outside the purview of this documentation, quantitative T2* mapping is relevant since a baseline T2* map is used in several processing steps including Optimal combination. While the T2* map estimated directly from fMRI time series is noisy, no current study quantifies the benefit to optimal combination or tedana denoising if a higher quality T2* map is used. Some benefit is likely, so, if a T2* map is independently calculated, it can be used as an input to many functions in the tedana workflow.
Warning
While tedana allows the input of a T2* map from any source, and a more accurate T2* map should lead to better results, this hasn’t been systematically evaluated yet.
There are many ways to calculate T2* maps, with some using multi-echo acquisitions. We are not presenting an expansive review of this literature here, but Cohen-Adad et al. (2012) and Ruuth et al. (2019) are good places to start learning more about this topic.
Acquisition parameter recommendations
There is no empirically tested best parameter set for multi-echo fMRI acquisition. The guidelines for optimizing parameters are similar to single-echo fMRI. For multi-echo fMRI, the same factors that may guide priorities for single echo fMRI sequences are also relevant. Choose sequence parameters that meet the priorities of a study with regards to spatial resolution, spatial coverage, sample rate, signal-to-noise ratio, signal drop-out, distortion, and artifacts.
A minimum of 3 echoes is required for running the current implementation fo TE-dependent denoising in
tedana.
It may be useful to have at least one echo that is earlier and one echo that is later than the
TE one would use for single-echo T2* weighted fMRI.
Note
This is in contrast to the dual echo denoising method which uses a very early (~5ms) first echo in order to clean data. For more information on this method, see Bright and Murphy (2013).
More than 3 echoes may be useful, because that would allow for more accurate estimates of BOLD and non-BOLD weighted fluctuations, but more echoes have an additional time cost, which would result in either less spatiotemporal coverage or more acceleration. Where the benefits of more echoes balance out the additional costs is an open research question.
We are not recommending specific parameter options at this time. There are multiple ways to balance the slight time cost from the added echoes that have resulted in research publications. We suggest new multi-echo fMRI users examine the Publications using multi-echo fMRI that use multi-echo fMRI to identify studies with similar acquisition priorities, and use the parameters from those studies as a starting point. More complete recommendations and guidelines are discussed in the appendix of Dipasquale et al. (2017).
Note
In order to increase the number of contrasts (“echoes”) you may need to first increase the TR, shorten the first TE and/or enable in-plane acceleration. For typically used parameters see the ME-fMRI parameters section below.
ME-fMRI parameters
The following section highlights a selection of parameters collected from published papers that have used multi-echo fMRI. You can see the spreadsheet of publications at Publications using multi-echo fMRI.
The following plots reflect the average values for studies conducted at 3 Tesla.
(Source code, png, hires.png, pdf)
Processing multi-echo fMRI
Most multi-echo denoising methods, including
tedana,
must be called in the context of a larger ME-EPI preprocessing pipeline.
Two common pipelines which support ME-EPI processing include fMRIPrep and afni_proc.py.
Users can also construct their own preprocessing pipeline for ME-EPI data from which to call the multi-echo denoising method of their choice. There are several general principles to keep in mind when constructing ME-EPI processing pipelines.
In general, we recommend the following:
1. Estimate motion correction parameters from one echo and apply those parameters to all echoes
When preparing ME-EPI data for multi-echo denoising with a tool like
tedana,
it is important not to do anything that mean shifts the data or otherwise separately
scales the voxelwise values at each echo.
For example, head-motion correction parameters should not be calculated and applied at an individual echo level (see above). Instead, we recommend that researchers apply the same transforms to all echoes in an ME-EPI series. That is, that they calculate head motion correction parameters from one echo and apply the resulting transformation to all echoes.
2. Perform slice timing correction and motion correction before multi-echo denoising
Similarly to single-echo EPI data, slice time correction allows us to assume that voxels across
slices represent roughly simultaneous events.
If the TR is slow enough to necessitate slice-timing (i.e., TR >= 1 sec., as a rule of thumb), then
slice-timing correction should be done before
tedana.
This is because slice timing differences may impact echo-dependent estimates.
The slice time is generally defined as the excitation pulse time for each slice. For single-echo EPI data, that excitation time would be the same regardless of the echo time, and the same is true when one is collecting multiple echoes after a single excitation pulse. Therefore, we suggest using the same slice timing for all echoes in an ME-EPI series.
3. Perform distortion correction, spatial normalization, smoothing, and any rescaling or filtering after denoising
Any step that will alter the relationship of signal magnitudes between echoes should occur after denoising and combining
of the echoes. For example, if echo is separately scaled by its mean signal over time, then resulting intensity gradients
and the subsequent calculation of voxelwise T2* values will be distorted or incorrect. See the description of
tedana’s approach for more details on how T2* values are calculated. An agressive temporal filter
(i.e. a 0.1Hz low pass filter) or spatial smoothing could similarly distort the relationship between the echoes at each
time point.
Note
We are assuming that spatial normalization and distortion correction, particularly non-linear normalization methods with higher order interpolation functions, are likely to distort the relationship between echoes while rigid body motion correction would linearly alter each echo in a similar manner. This assumption has not yet been empirically tested and an affine normalzation with bilinear interpolation may not distort the relationship between echoes. Additionally, there are benefits to applying only one spatial transform to data rather than applying one spatial transform for motion correction and a later transform for normalization and distortion correction. Our advice against doing normalization and distortion correction is a conservative choice and we encourage additional research to better understand how these steps can be applied before denoising.
General Resources
Journal articles describing multi-echo methods
- Publications using multi-echo fMRI catalogues papers using multi-echo fMRI,with information about acquisition parameters.
- Posse, NeuroImage 2012Includes an historical overview of multi-echo acquisition and research
- Kundu et al, NeuroImage 2017A review of multi-echo denoising with a focus on the MEICA algorithm
- Olafsson et al, NeuroImage 2015The appendix includes a good explanation of the math underlying MEICA denoising
- Dipasquale et al, PLoS One 2017The appendix includes some recommendations for multi-echo acquisition
Videos
An educational session from OHBM 2017 by Dr. Prantik Kundu about multi-echo denoising
A series of lectures from the OHBM 2017 multi-echo session on multiple facets of multi-echo data analysis
- Multi-echo fMRI lecture from the 2018 NIH FMRI Summer Course by Javier Gonzalez-Castillo
An NIMH Center for Multimodal Neuroimaging video by the Section on Functional Imaging Methods
Multi-echo preprocessing software
tedana requires data that has already been preprocessed for head motion, alignment, etc.
AFNI can process multi-echo data natively as well as apply tedana denoising through the use of afni_proc.py. To see various implementations, start with Example 12 in the afni_proc.py help
fmriprep can also process multi-echo data, but is currently limited to using the optimally combined timeseries. For more details, see the fmriprep workflows page and [tedana] How do I use tedana with fMRIPrepped data?.
Currently, SPM and FSL do not natively support multi-echo fMRI data processing.
Other software that uses multi-echo fMRI
tedana represents only one approach to processing multi-echo data.
Currently there are a number of methods that can take advantage of or use the
information contained in multi-echo data.
These include:
- Detection of neural events in the absence of a prespecified model. By leveraging the information present in multi-echo data, changes in relaxation time can be directly estimated and more events can be detected. For more information, see the following paper.
- Bayesian approach to denoising: An alternative approach to separating out BOLD and non-BOLD signals within a Bayesian framework is currently under development.
- Multi-echo Group ICA: Current approaches to ICA just use a single run of data in order to perform denoising. An alternative approach is to use information from multiple subjects or multiple runs from a single subject in order to improve the classification of BOLD and non-BOLD components.
- Dual Echo Denoising: If the first echo can be collected early enough, there are currently methods that take advantage of the very limited BOLD weighting at these early echo times.
Datasets
A number of multi-echo datasets have been made public so far. This list is not necessarily up to date, so please check out OpenNeuro to potentially find more.
- Multi-echo fMRI replication sample of autobiographical memory, prospection and theory of mind reasoning tasks
- Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity
- Valence processing differs across stimulus modalities
- Cambridge Centre for Ageing Neuroscience (Cam-CAN)
- rt-me-fMRI - A task and resting state dataset for real-time, multi-echo fMRI methods development and validation
Publications using multi-echo fMRI
The sheet at the bottom of this page contains an extensive list of multi-echo fMRI publications. You can view and suggest additions to this spreadsheet here. This is a volunteer-led effort so, if you know of an excluded publication, whether or not it is yours, please add it.
Interactive visualizations of publications and parameters
You can explore interactive and accessible information about multi-echo studies from the list of publications below, their study design parameters, and MRI sequence parameter options using this web application. If you’d like to add more studies, parameters or visualization options to the application, feel free to create an issue or send a pull request on the application’s GitHub repository.
Introduction to Azure Front Door
Describe how Azure Front Door provides a fast, reliable, and secure modern cloud content delivery network. Determine whether Azure Front Door can help you transform your global consumer and enterprise apps into more secure, high-performing, personalized modern apps.
Learning objectives
By the end of this module, you will be able to:
- Evaluate whether Azure Front Door can help transform your organization's apps into high-performing, personalized modern apps.
- Describe how web application firewall features of Azure Front Door help protect your apps.
Prerequisites
- Experience with content delivery network (CDN) platforms
Variables in Preference Items
Applies To: Windows Server 2008
Preference extensions support Windows environment variables and generate a number of additional process environment variables. Any variable may be used in a configuration parameter value. Each Help document states whether variables are supported in a specific field.
Note
Using Registry Match Targeting targeting items, you can define variables at client run-time, and have these control behavior using the Environment Variable Targeting targeting items or as values in a preference item setting.
Windows environment variables
The Windows environment is a list of variables saved as name/value pairs. To see the current list of variables, type SET at the command prompt. Each process, including the desktop, has a list of variables unique to the process. When one process launches another, normally a copy of the environment of the launching process is passed to the launched process. Typically, environment variable names are enclosed between two percent signs (for example, %ProgramFiles%). Windows resolves the environment variable when an application requests the value associated to the name.
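As an illustration (not part of the original documentation), the short Python sketch below shows the same name/value behavior programmatically on a Windows machine; treat it as a hedged example rather than an official tool, and note that %VARIABLE% expansion via os.path.expandvars applies on Windows.

import os

# List the current environment, similar to typing SET at a command prompt.
for name, value in sorted(os.environ.items()):
    print(f"{name}={value}")

# Expand a %VARIABLE% reference the way Windows resolves it when requested.
print(os.path.expandvars(r"%ProgramFiles%\Common Files"))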
Preference process variables
Preference extensions implement the process variables listed below.
Note
Variables are not case sensitive.
To use a variable, open the preference item you want to configure, and then click Edit.
Place the cursor in the desired box.
To enter a preference process variable, press F3, select a variable from the list, and then click Select to insert the variable in the box.
To enter an existing Windows environment variable, type the variable in the box.
Drupal Commerce can be integrated with payment providers from all over the world. This page provides a list of the 139 contributed payment gateway modules that currently exist. See the documentation on Extending Drupal Commerce for information on adding one or more of these modules to your Drupal Commerce project and the Install and Configure a Payment Gateway documentation for information on configuration.
If you don't see a payment provider you want for your project, Drupal Commerce provides a framework for implementing your own online payment gateways. See the Creating payment gateways documentation for more information.
If you create your own payment gateway module or find one that's not in this list yet, please let us know about it so that we can add it.
21. Setting an Email Capture Channel
The Email Capture Channel feature enables DCM to read from an email account and, using rules, perform specific actions with those emails. AppBase uses an extension rule (DLL) called DefaultEmailCaptureRule to parse the incoming emails.
Check the Extension Rule
Access Application Studio from the upper right corner.
Navigate to Business Rules > Rules, search for the rule DefaultEmailCaptureRule.
Configuring a Capture Channel Schema
- Navigate to Channel Setup > Capture Channel Schemas and click the New Capture Channel button.
- In the General properties section, set Name to 'EmailCaptureChannel', for Rule select the 'DefaultEmailCaptureRule' rule (the rule we just checked in the previous steps), and set Tags to 'cust'.
- In section Channel Type select Email (1). It will show the sections Batch Properties (2) and Email (3) where you will have to fill in more details in the next step.
- In the Batch Properties section set the Number of items to process at once to 3 (three) and Check for items every (seconds) to 30 seconds.
- In the Email section, enter the properties related to the email connection.
Use the configuration values from the table below
The @@NAME@@ indicates that the value is from a Solution Variable called NAME
- In the Description section insert the text 'GBank Email Capture Channel'.
- Save the Capture Channel configuration.
Setting System Variables
In the previous task, for security reasons, we used system variables for the email configuration. Now we need to create these system variables and assign the real values.
- Navigate to Solution Preferences > Solution Variables to add the variables 'GBANK_xxxx'.
Click the Add icon to add each one of the variables used in the previous step. Use 'GBANK EMAIL CAPTURE CHANNEL' as Description.
Do not use the ‘@’ symbols when creating the system variables.
- Compare your results with the following image.
As with any change to the solution model you have to run a deployment to apply them.
- Navigate to Deployment Management > Deploy and click on Preview to start the deployment process. This process may take 5-10 minutes.
Setting System Variables Values
- After deploy done successfully, go to Solution Variables. Edit the variables and insert the values for your email configuration from the table below:
- Verify the values you have entered in the previous step. See the following image as a reference.
Starting Capture Channel
- Go back to Channel Setup > Manage Capture Channels. Enable the EmailCaptureChannel channel by clicking the green play button.
If there are no errors, the button image will change to a red square (stop).
- Send a test email from the email app
- Go to the Email Capture Monitor menu, select the Capture Channel from the dropdown list, and then click Filter.
- Open the incoming email in the capture monitor. Look at the section Capture Rule Execution Details. Here you will be able to find any error message about the process of capturing and parsing the incoming emails.
Capture Rule Result:
<DATA>
  <affectedRows>1</affectedRows>
  <ValidationSummary>
    <IsValid>true</IsValid>
  </ValidationSummary>
</DATA>
Next Steps
22. Using DCM Rest Services | https://docs.eccentex.com/doc1/21-setting-and-email-capture-channel | 2022-06-25T10:48:58 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.eccentex.com |
Testing Genesys Chat
ServiceJourney comes with a convenient testing tool to help test the integration between Genesys Cloud CX and ServiceJourney.
The Genesys Chat Testing Tool
The tool is available to ServiceJourney administrators.
- On the top right, navigate to → ServiceJourney
- On the left, navigate to → Case Management
- Then navigate to Admin Tools → Genesys Chat
Connecting to your Genesys Cloud CX Organization
Assuming the ServiceJourney Connector has been installed, fill in the following fields in the Basics and Advanced tabs:
Data API Error Handling
On successful processing of SingleStore's Data API requests, the server returns the 200 OK HTTP status line. On Data API request failure, the server returns a status code for the error along with a message describing the issue. The error messages are returned as bare strings in the response body.
The following table provides a list of HTTP response status codes that are used for handling errors in SingleStore’s Data API requests. | https://docs.singlestore.com/db/v7.8/en/reference/data-api/data-api-error-handling.html | 2022-06-25T11:39:35 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.singlestore.com |
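As a hedged illustration (the endpoint path, host, and credentials below are assumptions based on the SingleStore Data API and are not taken from this page), a client can branch on the HTTP status code and read the bare-string error message from the response body:

import requests

# Assumed endpoint and credentials for this sketch; adjust to your deployment.
url = "https://svchost:443/api/v2/query/rows"
payload = {"sql": "SELECT 1"}

resp = requests.post(url, json=payload, auth=("user", "password"), timeout=30)

if resp.status_code == 200:
    print(resp.json())
else:
    # On failure, the Data API returns the error message as a bare string.
    print(f"Request failed with HTTP {resp.status_code}: {resp.text}")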
pokyRepository
bitbake-layersScript
local.conf
IMAGE_FEATURESand
EXTRA_IMAGE_FEATURES
bmaptool
/dev
devtmpfsand a Device Manager.:: You should have at least 50 Gbytes of free disk space for building images.
Meet Minimal Version Requirements:.),", respectively.
Once the "Working With. 2.4.4). "Workflows" section in the Yocto Project Reference.
Table of Contents
bitbake-layersScript
local.conf
IMAGE_FEATURESand
EXTRA_IMAGE_FEATURES
bmaptool
bitbake-layers Script"
section further down in this manual.
Follow these general steps to create your layer without the aid of a script:".
You need toSD,:_2.4.4.bbappend must apply to
someapp_2.4.
The
bitbake-layers script
replaces the
yocto-layer
script, which is deprecated in the Yocto Project
2.4 release.
The
yocto-layer script
continues to function as part of the 2.4 release
but will be removed post 2.4.. meta-mylayer /home/scottrif/meta-mylayer Reference" in the Yocto Project Reference Manual for further details..
If you have recipes that use
pkg_postinst
scripts Reference",.4) beaglebone Create SD card image for Beaglebone mpc8315e-rdb Create SD card image for MPC8315E-RDB genericx86 Create an EFI disk image for genericx86*-201710061409-sda.direct The following build artifacts were used to create the image(s): ROOTFS_DIR: /home/scottrif/poky/build/tmp.wic.r4hkds0b/scottrif/poky/scripts/lib/wic/canned-wks/directdisk-gpt.wks \ /home/scottrif/scottrif/poky/build/tmp.wic.hk3wl6zn binary package version.
"Systemd-bootTarget":
Choose "Systemd-boot "Systemd-bootTarget", there are additional requirements and considerations. See the "Selecting Systemd-boot
"Systemd-bootTarget", then you do not need any information
in this section.
You can skip down to the
"Running Tests"
section.
If you did set
TEST_TARGET to
"Systemd-boot "Systemd-bootTarget" is
to set up the test image:
Set up your
local.conf file:
Make sure you have the following statements in
your
local.conf file:
IMAGE_FSTYPES += "tar.gz" INHERIT += "testimage" TEST_TARGET = "Systemd-boot
Systemd-boot | https://docs.yoctoproject.org/2.4.4/dev-manual/dev-manual.html | 2022-06-25T11:08:26 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.yoctoproject.org |
Rapid Power-Ups
Rapid Power-ups are delivered to a Cortex as Storm packages directly, without requiring any additional containers to be deployed. This allows users to rapidly expand the power of their Synapse deployments without needing to engage with additional operations teams in their environments. For an introduction to Rapid Power-ups and some information about publicly available Power-Ups, see the following blog post.
Available Rapid Power-Ups
The following Rapid Power-Ups are available:
Getting Started with Rapid Power-Ups
Vertex maintains a package repository which allows for loading public and private packages.
If you are a Synapse User Interface user, you can navigate to the Power-Ups tab to register your Cortex and configure packages directly from the UI.
Alternatively, one can use the storm tool to get started with Rapid Power-Ups in their Cortex.
First load the
vertex package.
storm> pkg.load
Register the Cortex using
vertex.register.
This will create an account if one does not exist, and send a magic link to the email address
which can be used to log in. For additional details run
vertex.register --help.
storm> vertex.register <your email>
Once the registration has completed, the available packages can be viewed.
storm> vertex.pkg.list
Storm packages can then be installed using the
vertex package.
For additional details run
vertex.pkg.install --help.
storm> vertex.pkg.install <pkgname>
Configuration
For Power-Ups that require an API key, the
<pkgname>.setup.apikey command can be used
to set the key globally or for the current user, with the latter taking precedence.
Other configuration requirements are detailed in the individual package documentation. | https://synapse.docs.vertex.link/en/latest/synapse/power_ups/rapid_power_ups.html | 2022-06-25T10:32:23 | CC-MAIN-2022-27 | 1656103034930.3 | [] | synapse.docs.vertex.link |
Markdown Images
Astro Imagetools comes with built-in support for optimizing markdown images. The Vite plugin included in the package will detect images used inside markdown files. If it finds any, it will automatically generate the image sets using the source and alternative text as the
src and
alt props, and then it will replace the original string with them.
Like the
<Picture /> component, both absolute paths, remote, and data URIs are supported as the source path. But in addition to that, relative paths are also supported for markdown images. 🎉🎉🎉
In complex scenarios where you need more config options, you can pass them as query parameters. Or, if you have to set their values dynamically, you can import the
<Picture /> component (and any other components too)! Astro supports importing and using Astro components inside MD files. Check the official Astro Markdown documentation for more info on this.
Note: Automatic markdown image optimization is supported only for markdown files. If you are using the
<Markdown />component, you have to use the
<Picture />component instead.
Both the Markdown Syntax ![](...) and the HTML Syntax <img src="..." alt="..." /> are supported.
Example Markdown Images Usage
---
src:
alt: A random image
setup: |
  import { Picture } from "astro-imagetools/components";
---

# Hello Markdown Images

<!-- A remote image -->
![A random remote image](https://picsum.photos/1024/768)

<!-- A local image relative to the markdown file -->
![A local image](./images/landscape.jpg)

<!-- A local image relative to the project root -->
![Another local image](../src/images/landscape.jpg)

<!-- An example of using query params -->
![A remote image with query params](https://picsum.photos/1024/768?grayscale)

<!-- An example of the `<Image />` component inside MD pages -->
<Picture src={frontmatter.src} alt={frontmatter.alt} />
CodeScan provides you with the options to delete the unwanted projects / organizations which are no longer in use for your convenience.
NOTE: THIS ACTION CANNOT BE UNDONE
Deleting a Project
- Go to the project you want to delete under the organization and select Administration > Deletion as in the image below
Now click on the Delete option, which you will see on the page as in the image
It gives you a prompt which asks for your permission to delete it. Select Delete.
Deleting an Organization
- First, make sure you are on the correct organization and click on the Organization name on the top left to confirm that you are on the organization main page not in any project.
Now go to Administration > Organization settings as in the image.
Now, a new window will appear; if you scroll to the end of the page, you will see the Delete option under the Delete Organization column, as in the image below.
- Click on it.
- Another tab pops up as in the below picture where you have to confirm the organization name and then click on delete.
This deletes your desired organization.
- Instance-level analytics
- Group-level analytics
- Project-level analytics
- User-configurable analytics
- DevOps Research and Assessment (DORA) key metrics
- Definitions
Analyze GitLab usage
Instance-level analytics
Introduced in GitLab 12.2.
Instance-level analytics make it possible to aggregate analytics across GitLab, so that users can view information across multiple projects and groups in one place.
Learn more about instance-level analytics.
Group-level analytics
- Introduced in GitLab 12.8.
- Moved to GitLab Premium in 13.9.
GitLab provides several analytics features at the group level. Some of these features require you to use a higher tier than GitLab Free.
- Application Security
- Contribution
- DevOps Adoption
- Insights
- Issue
- Productivity
- Repositories
- Value Stream
Project-level analytics
You can use GitLab to review analytics at the project level. Some of these features require you to use a higher tier than GitLab Free.
- Application Security
- CI/CD
- Code Review
- Insights
- Issue
- Merge Request, enabled with the
project_merge_request_analyticsfeature flag
- Repository
- Value Stream
User-configurable analytics
The following analytics features are available for users to create personalized views:
Be sure to review the documentation page for this feature for GitLab tier requirements.
DevOps Research and Assessment (DORA) key metrics
- Introduced in GitLab 13.7.
- Added support for lead time for changes in GitLab 13.10.
The DevOps Research and Assessment (DORA) team developed several key metrics that you can use as performance indicators for software development teams.
Deployment frequency
Deployment frequency is the frequency of successful deployments to production (hourly, daily, weekly, monthly, or yearly). This measures how often you deliver value to end users. A higher deployment frequency means you can get feedback sooner and iterate faster to deliver improvements and features. GitLab measures this as the number of deployments to a production environment in the given time period.
Deployment frequency displays in several charts:
Lead time for changes
Lead time for changes measures the time to deliver a feature once it has been developed, as described in Measuring DevOps Performance.
Lead time for changes displays in several charts:
Time to restore service
Time to restore service measures how long it takes an organization to recover from a failure in production. GitLab measures this as the average time required to close the incidents in the given time period. To retrieve this metric, use the GraphQL or the REST APIs.
Change failure rate
Change failure rate measures the percentage of deployments that cause a failure in production. GitLab measures this as the number of incidents divided by the number of deployments to a production environment in the given time period. To retrieve this metric, use the GraphQL or the REST APIs.
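For instance, here is a hedged Python sketch of pulling one of these metrics over the REST API; the instance URL, project ID, token, and exact endpoint path are assumptions based on the GitLab DORA metrics API, so verify them against the API reference for your GitLab version.

import requests

GITLAB_URL = "https://gitlab.example.com"   # assumed instance URL
PROJECT_ID = 42                             # assumed project ID
headers = {"PRIVATE-TOKEN": "<your_access_token>"}

# DORA metrics endpoint; metric can be deployment_frequency,
# lead_time_for_changes, time_to_restore_service, or change_failure_rate.
resp = requests.get(
    f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/dora/metrics",
    headers=headers,
    params={"metric": "change_failure_rate", "interval": "daily"},
    timeout=30,
)
resp.raise_for_status()
for point in resp.json():
    print(point["date"], point["value"])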
Supported DORA metrics in GitLab
Definitions
We use the following terms to describe GitLab analytics:
- Cycle time: The duration of only the execution work. Cycle time is often displayed in combination with the lead time, which is longer than the cycle time. GitLab measures cycle time from the earliest commit of a linked issue’s merge request to when that issue is closed. The cycle time approach underestimates the lead time because merge request creation is always later than commit time. GitLab displays cycle time in group-level Value Stream Analytics and project-level Value Stream Analytics.
- Deploys: The total number of successful deployments to production in the given time frame (across all applicable projects). GitLab displays deploys in group-level Value Stream Analytics and project-level Value Stream Analytics.
- Lead time: The duration of your value stream, from start to finish. Different to Lead time for changes. Often displayed in combination with “cycle time,” which is shorter. GitLab measures lead time from issue creation to issue close. GitLab displays lead time in group-level Value Stream Analytics.
- Mean Time to Change (MTTC): The average duration between idea and delivery. GitLab measures MTTC from issue creation to the issue’s latest related merge request’s deployment to production.
- Mean Time to Detect (MTTD): The average duration that a bug goes undetected in production. GitLab measures MTTD from deployment of bug to issue creation.
- Mean Time To Merge (MTTM): The average lifespan of a merge request. GitLab measures MTTM from merge request creation to merge request merge (and closed/un-merged merge requests are excluded). For more information, see Merge Request Analytics.
- Mean Time to Recover/Repair/Resolution/Resolve/Restore (MTTR): The average duration that a bug is not fixed in production. GitLab measures MTTR from deployment of bug to deployment of fix.
- Throughput: The number of issues closed or merge requests merged (not closed) in a period of time. Often measured per sprint. GitLab displays merge request throughput in Merge Request Analytics.
- Value Stream: The entire work process that is followed to deliver value to customers. For example, the DevOps lifecycle is a value stream that starts with “plan” and ends with “monitor”. GitLab helps you track your value stream using Value Stream Analytics.
- Velocity: The total issue burden completed in some period of time. The burden is usually measured in points or weight, often per sprint. For example, your velocity may be “30 points per sprint”. GitLab measures velocity as the total points or weight of issues closed in a given period of time. | https://docs.gitlab.com/14.10/ee/user/analytics/ | 2022-06-25T10:20:27 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.gitlab.com |
6 Guidelines¶
6.1 Technical guidelines¶
Modularity
The OptServer is a very lightweight service that handles requests as follows:
receive a request,
save the submitted problem to disk,
run a command-line version of MOSEK (
mosekcli) to solve the problem and save results on disk,
provide the solution to the caller.
In particular a MOSEK installation including the
mosekcli binary is required to run OptServer. Typically one would use the MOSEK binary from the same distribution package from which the OptServer was installed, but the setup is modular and it is possible to use any other MOSEK version. In particular updating the solver can be performed independently of updating the OptServer binaries.
Network load
Most of the network load is due to the transfer of the optimization problem from the client to the server. Therefore for long running jobs the transfer time is typically negligible, but for very small problems it will be a significant part of the solution time.
Disk and database
The database is used to store information about jobs (optimization tasks) as well as user information if using the Web GUI. Actual jobs are stored on disk along with log and solutions. Each submitted job is allocated a folder in
var/Mosek/jobs/tasks. Therefore, in case of a problem, the status and solution can be recovered from disk.
A suitable amount of free space must be available. OptServer does not delete data for completed jobs.
Load balancing
The OptServer does not implement any load balancing. It launches jobs as requests come along. Users should be careful not to overcommit the CPUs and memory, and ensure there is a sufficient number of licenses (see below).
6.2 The license system¶
MOSEK is a commercial product that always needs a valid license to work. MOSEK uses a third party license manager to implement license checking. The number of license tokens provided determines the number of optimizations that can be run simultaneously.
A MOSEK license must be available on the machine which hosts the OptServer. Each job submitted to the OptServer will be solved by a new solver process, hence it will require a new license checkout. If the license is not unlimited, then the number of tokens determines the maximal number of jobs that can run simultaneously. In this case setting the license wait flag with the parameter
MSK_IPAR_LICENSE_WAIT will force MOSEK to wait until a license token becomes available instead of returning with an error.
6.3 Security¶
The Web GUI of the OptServer uses HTTPS. To enable the GUI the user must point the installation script to a folder with
cert.pem and
key.pem files, for example:
./install_MosekServer --inplace \
    --port $PORT \
    --mosekdir ../../.. \
    --database-resource "host=/var/run/postgresql user=$USER dbname=$DBNAME sslmode=disable" \
    --ssl ../etc/Mosek/ssl \
    --mode gui
The Web GUI is not be available in HTTP mode, i.e. without encryption. The Web GUI provides role management, in particular creating users and generating their access tokens. When submitting a job the user should provide their access token.
To enable job submission by anonymous users specify
--enable-anonymous in the setup step. | https://docs.mosek.com/10.0/opt-server/guidelines-optserver.html | 2022-06-25T11:58:58 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
Authentication methods
Verify that users and clients are who they say they are.
Authentication is the process by which the database server establishes the identity of the client, and by extension determines whether the client application (or the user who runs the client application) is permitted to connect with the database user name that was requested. YugabyteDB offers a number of different client authentication methods. The method used to authenticate a particular client connection can be selected on the basis of (client) host address, database, and user.
Note: The authentication methods do not require any external security infrastructure and are the quickest way for YugabyteDB DBAs to secure the database. Password authentication is the easiest choice for authenticating remote user connections.
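As a brief, hedged illustration of password authentication from a client (the host, database, and credentials below are placeholders; YSQL speaks the PostgreSQL wire protocol, so a standard PostgreSQL driver works):

import psycopg2

# Placeholder connection details; YSQL listens on port 5433 by default.
conn = psycopg2.connect(
    host="127.0.0.1",
    port=5433,
    dbname="yugabyte",
    user="yugabyte",
    password="<password>",   # verified by the server's password authentication
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()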
The various methods for authenticating users supported by YugabyteDB are listed below.
Host-based authentication
Fine-grained authentication for local and remote clients based on IP addresses.
Data Model - Terminology
Note: This documentation presents the data model from a User or Analyst perspective. See the Synapse Data Model technical documentation or the Synapse source code for more detailed information.
Recall that Synapse is a distributed key-value hypergraph analysis framework. That is, Synapse is a particular implementation of a hypergraph model, where an instance of a hypergraph is called a Cortex. In our brief discussion of graphs and hypergraphs, we pointed out some fundamental concepts related to the Synapse hypergraph implementation:
(Almost) everything is a node. There are no pairwise (“two-dimensional”) edges in a hypergraph the way there are in a directed graph. Synapse includes some edge-like nodes (digraph nodes or “relationship” nodes) in its data model, but they are still nodes. (We later introduced “lightweight” (light) edges as additional edge-like constructs used for particular use cases to improve performance. But mostly everything is a node.)
Tags act as hyperedges. In a directed graph, an edge connects exactly two nodes. In Synapse, tags are labels that can be applied to an arbitrary number of nodes. These tags effectively act as an n-dimensional edge that can connect any number of nodes – a hyperedge.
(Almost) every key navigation of the graph is a pivot. Since there are no pairwise edges in a hypergraph, you can’t query or explore the graph by traversing its edges. Instead, navigation primarily consists of pivoting from the properties of one set of nodes to the properties of another set of nodes. (Since tags are hyperedges, there are ways to lift by or “pivot through” tags to effectively perform “hyperedge traversal”; and it is possible to traverse Synapse’s light edges. But most navigation is via pivots.)
To start building on those concepts, you need to understand the basic elements of the Synapse data model. The fundamental terms and concepts you should be familiar with are:
Synapse uses a query language called Storm (see Storm Reference - Introduction) to interact with data in the hypergraph. Storm allows a user to lift, filter, and pivot across data based on node properties, values, and tags. Understanding these model structures will significantly improve your ability to use Storm and interact with Synapse data.
Type
A type is the definition of a data element within the Synapse data model. A type describes what the element is and enforces how it should look, including how it should be normalized, if necessary, for both storage (including indexing) and representation (display).
The Synapse data model includes standard types such as integers and strings, as well as common types defined within or specific to Synapse, including globally unique identifiers (
guid), date/time values (
time), time intervals (
ival), and tags (
syn:tag). Many objects (Form) within the Synapse data model are built upon (extensions of) a subset of common types.
In addition, knowledge domain-specific objects may themselves be specialized types. For example, an IPv4 address (
inet:ipv4) is its own specialized type. While an IPv4 address is ultimately stored as an integer, the type has additional constraints (i.e., to ensure that IPv4 objects in the Cortex can only be created using integer values that fall within the allowable IPv4 address space). These constraints may be defined by a constructor (
ctor) that defines how a property of that type can be created (constructed).
Users typically will not interact with types directly; they are primarily used “behind the scenes” to define and support the Synapse data model. From a user perspective, it is important to keep the following points in mind for types:
Every element in the Synapse data model must be defined as a type. Synapse uses forms to define the objects that can be represented (modeled) within a Synapse hypergraph. Forms have properties (primary and secondary) and every property must be explicitly defined as a particular type.
Type enforcement is essential to Synapse’s functionality. Type enforcement means every property is defined as a type, and Synapse enforces rules for how elements of that type can (or can’t) be created. This means that elements of the same type are always created, stored, and represented in the same way which ensures consistency and helps prevent “bad data” from getting into a Cortex.
Type awareness facilitates interaction with a Synapse hypergraph. Synapse and the Storm query language are “model aware” and know which types are used for each property in the model. At a practical level this allows users to use a more concise syntax when using the Storm query language because in many cases the query parser “understands” which navigation options make sense, given the types of the properties used in the query. It also allows users to use wildcards to pivot (see Storm Reference - Pivoting) without knowing the “destination” forms or nodes - Synapse “knows” which forms can be reached from the current set of data based on types.
It is still possible to navigate (pivot) between elements of different types that have the same value. Type enforcement simplifies pivoting, but does not restrict you to only pivoting between properties of the same type. For example, a Windows registry value may be a string type (type
str), but that string may represent a file path (type
file:path). While the Storm query parser would not automatically “recognize” that as a valid pivot (because the property types differ), it is possible to explicitly tell Storm to pivot from a specific
file:pathnode to any registry value nodes whose string property value (
it:dev:regval:str) matches that path.
Type-Specific Behavior
Synapse implements various type-specific optimizations to improve performance and functionality. Some of these are “back end” optimizations (i.e., for indexing and storage) while some are more “front end” in terms of how users interact with data of certain types via Storm. See Storm Reference - Type-Specific Storm Behavior for additional detail.
Viewing or Working with Types
Types (both base and model-specific) are defined within the Synapse source code. An auto-generated dictionary (from current source code) of Types (Base Types and Types) can be found in the online documentation.
Types can also be viewed within a Cortex. A full list of current types can be displayed with the following Storm command:
storm> syn:type
See Storm Reference - Model Introspection for additional detail on working with model elements within Storm.
Type Example
The data associated with a type’s definition is displayed slightly differently between the Synapse source code, the auto-generated online documents, and from the Storm command line. Users wishing to review type structure or other elements of the Synapse data model are encouraged to use the source(s) that are most useful to them.
The example below shows the type for a fully qualified domain name (
inet:fqdn) as it is represented in the Synapse source code, the online documents, and from the Storm CLI.
Source Code
('inet:fqdn', 'synapse.models.inet.Fqdn', {}, { 'doc': 'A Fully Qualified Domain Name (FQDN).', 'ex': 'vertex.link'}),
Auto-Generated Online Documents
inet:fqdn
A Fully Qualified Domain Name (FQDN). It is implemented by the following class:
synapse.models.inet.Fqdn.
An example of
inet:fqdn:
vertex.link
Storm
storm> syn:type=inet:fqdn syn:type=inet:fqdn :ctor = synapse.models.inet.Fqdn :doc = A Fully Qualified Domain Name (FQDN).
Form
A form is the definition of an object in the Synapse data model. A form acts as a “template” that tells you how to create an object (Node). While the concepts of form and node are closely related, it is useful to maintain the distinction between the template for creating an object (form) and an instance of a particular object (node).
inet:fqdn is a form;
inet:fqdn = woot.com (
<form> = <valu>) is a node.
A form consists of the following:
A primary property. The primary property of a form must be selected / defined such that the value of that property is unique across all possible instances of that form. A form’s primary property must be defined as a specific type. In many cases, a form will have its own type definition - for example, the form
inet:fqdnis of type
inet:fqdn. All forms are types (that is, must be defined as a type) although not all types are forms.
Optional secondary properties. If present, secondary properties must also have a defined type, as well as any additional constraints on the property, such as:
Whether a property is read-only once set.
Any normalization (outside of type-specific normalization) that should occur for the property (such as converting a string to all lowercase, stripping any whitespace, etc.).
Secondary properties are form-specific and are explicitly defined for each form. However, Synapse also supports a set of universal secondary properties (universal properties) that are valid for all forms.
Property discusses these concepts in greater detail.
While types underlie the data model and are generally not used directly by analysts, forms comprise the essential “structure” of the data analysts work with. Understanding (and having a good reference) for form structure and options is essential for working with Synapse data.
Form Namespace
The Synapse data model uses a structured, hierarchical namespace for forms. Each form name consists of at least two namespace elements separated by a colon (
: ). For example:
-
file:bytes
-
inet:email
-
inet:fqdn
-
ou:org
The first element in the namespace represents a rough “category” for the form (i.e.,
inet for Internet-related objects). The Synapse data model is meant to be extensible to support any analytical discipline, from threat intelligence to business analytics and beyond. The ability to group portions of the data model into related categories makes an extensive model easier to manage, and also allows Synapse users to leverage or focus on those sub-portions of the model most relevant to them.
The second and / or subsequent elements in the form name define the specific “subcategory” or “thing” within the form’s primary category (e.g.,
inet:fqdn to represent a fully qualified domain name (FQDN) within the “Internet” (
inet) category, or
inet:dns:query to represent a query using the DNS protocol within the “Internet” category, etc.)
Properties have a namespace the leverages and extends the form namespace (note that form names are also primary properties). See Property and Property Namespace below for additional detail.
Viewing or Working with Forms
Like types, forms are defined within the Synapse source code and include a base set of forms intended to be generic across any data model, as well as a number of model-specific (knowledge domain-specific) forms. An auto-generated dictionary (from current source code) of Forms can be found in the online documentation.
Forms can also be viewed within a Cortex. A full list of current forms can be displayed with the following Storm command:
storm> syn:form
See Storm Reference - Model Introspection for additional detail on working with model elements within Storm.
Form Example
The data associated with a form’s definition is displayed slightly differently between the Synapse source code, the auto-generated online documents, and from the Storm command line. Users wishing to review form structure or other elements of the Synapse data model are encouraged to use the source(s) that are most useful.
The example below shows the form for a fully qualified domain name (
inet:fqdn) as it is represented in the Synapse source code, the online documents, and from Storm. Note that the output displayed via Storm includes universal properties (
.seen,
.created), where the static source code (and the documents generated from it) do not. Universal properties are defined separately within the Synapse source and have their own section (Universal Properties) in the auto-generated online documents.
Source Code
('inet:fqdn', {}, ( ('domain', ('inet:fqdn', {}), { 'ro': True, 'doc': 'The parent domain for the FQDN.', }), ('host', ('str', {'lower': True}), { 'ro': True, 'doc': 'The host part of the FQDN.', }), ('issuffix', ('bool', {}), { 'doc': 'True if the FQDN is considered a suffix.', }), ('iszone', ('bool', {}), { 'doc': 'True if the FQDN is considered a zone.', }), ('zone', ('inet:fqdn', {}), { 'doc': 'The zone level parent for this FQDN.', }), ))
Auto-Generated Online Documents
inet:fqdn A Fully Qualified Domain Name (FQDN).
The base type for the form can be found at inet:fqdn.
An example of
inet:fqdn:
-
vertex.link
Properties:
- :domain / inet:fqdn:domain
-
The parent domain for the FQDN. It has the following property options set:
-
Read Only:
True
The property type is inet:fqdn.
- :host / inet:fqdn:host
-
The host part of the FQDN. It has the following property options set:
-
Read Only:
True
The property type is str. Its type has the following options set:
-
lower:
True
- :issuffix / inet:fqdn:issuffix
-
True if the FQDN is considered a suffix.
The property type is bool.
- :iszone / inet:fqdn:iszone
-
True if the FQDN is considered a zone.
The property type is bool.
- :zone / inet:fqdn:zone
-
The zone level parent for this FQDN.
The property type is inet:fqdn.
Storm
Form (
inet:fqdn) alone:
storm> syn:form=inet:fqdn syn:form=inet:fqdn :doc = A Fully Qualified Domain Name (FQDN). :runt = False :type = inet:fqdn
Form with secondary properties:
storm> syn:prop:form=inet:fqdn syn:prop=inet:fqdn :doc = A Fully Qualified Domain Name (FQDN). :extmodel = False :form = inet:fqdn :type = inet:fqdn :univ = False syn:prop=inet:fqdn.seen :base = .seen :doc = The time interval for first/last observation of the node. :extmodel = False :form = inet:fqdn :relname = .seen :ro = False :type = ival :univ = False syn:prop=inet:fqdn.created :base = .created :doc = The time the node was created in the cortex. :extmodel = False :form = inet:fqdn :relname = .created :ro = True :type = time :univ = False syn:prop=inet:fqdn:domain :base = domain :doc = The parent domain for the FQDN. :extmodel = False :form = inet:fqdn :relname = domain :ro = True :type = inet:fqdn :univ = False syn:prop=inet:fqdn:host :base = host :doc = The host part of the FQDN. :extmodel = False :form = inet:fqdn :relname = host :ro = True :type = str :univ = False syn:prop=inet:fqdn:issuffix :base = issuffix :doc = True if the FQDN is considered a suffix. :extmodel = False :form = inet:fqdn :relname = issuffix :ro = False :type = bool :univ = False syn:prop=inet:fqdn:iszone :base = iszone :doc = True if the FQDN is considered a zone. :extmodel = False :form = inet:fqdn :relname = iszone :ro = False :type = bool :univ = False syn:prop=inet:fqdn:zone :base = zone :doc = The zone level parent for this FQDN. :extmodel = False :form = inet:fqdn :relname = zone :ro = False :type = inet:fqdn :univ = False
Node
A node is a unique object within the Synapse hypergraph. In Synapse nodes represent standard objects (“nouns”) such as IP addresses, files, people, conferences, airplanes, or software packages. They can also represent more abstract objects such as industries, risks, attacks, or goals. However, in Synapse nodes also represent relationships (“verbs”) because things that would be edges in a directed graph are generally nodes in a Synapse hypergraph. It may be better to think of a node generically as a “thing” - any “thing” you want to model within Synapse (entity, relationship, event) is represented as a node.
Every node consists of the following components:
A primary property that consists of the Form of the node plus its specific value (
<form> = <valu>). All primary properties must be unique for a given form. For example, the primary property of the node representing the domain
woot.comwould be
inet:fqdn = woot.com. The uniqueness of the
<form> = <valu>pair ensures there can be only one node in Synapse that represents the domain
woot.com. Because this unique pair “defines” the node, the comma-separated form / value combination (
<form>,<valu>) is also known as the node’s ndef (short for “node definition”).
One or more universal properties. As the name implies, universal properties are applicable to all nodes.
Optional secondary properties. Similar to primary properties, secondary properties consist of a property name defined as a specific type, and the property’s associated value for the node (
<prop> = <pval>). Secondary properties are specific to a given node type (form) and provide additional detail about that particular node.
Optional tags. A Tag acts as a label with a particular meaning that can be applied to a node to provide context. Tags are discussed in greater detail below.
Viewing or Working with Nodes
To view or work with nodes, you must have a Cortex that contains nodes (data). Users interact with data in Synapse using the Storm query language (Storm Reference - Introduction).
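Storm queries are usually run from the Storm CLI or the UI, but as a hedged sketch, the same query can also be issued from Python over telepath; the Cortex URL and credentials below are placeholders, and message handling is reduced to printing node ndefs.

import asyncio
import synapse.telepath as s_telepath

async def main():
    # Placeholder telepath URL for a running Cortex.
    async with await s_telepath.openurl('tcp://user:secret@localhost:27492/cortex') as core:
        async for mesg in core.storm('inet:fqdn=woot.com'):
            if mesg[0] == 'node':
                pode = mesg[1]
                print(pode[0])   # the node's ndef, e.g. ('inet:fqdn', 'woot.com')

asyncio.run(main())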
Node Example
The Storm query below lifts and displays the node for the domain
google.com:
storm> inet:fqdn=google.com inet:fqdn=google.com :domain = com :host = google :issuffix = False :iszone = True :zone = google.com .created = 2022/06/23 22:28:43.347 #rep.majestic.1m
In the output above:
inet:fqdn = google.comis the primary property (
<form> = <valu>).
While not explicitly displayed, the node’s ndef would be
inet:fqdn,google.com.
.createdis a universal property showing when the node was added to the Cortex.
:domain,
:host, etc. are form-specific secondary properties with their associated values (
<prop> = <pval>). For readability, secondary properties are displayed as relative properties within the namespace of the form’s primary property (e.g.,
:iszoneas opposed to
inet:fqdn:iszone).
#rep.majestic.1mis a tag indicating that
google.comhas been reported by web analytics company Majestic in their top million most-linked domains.
Property
Properties are the individual elements that define a Form or (along with their specific values) that comprise a Node.
Primary Property
Every Form consists of (at minimum) a primary property that is defined as a specific Type. Every Node consists of (at minimum) a primary property (its form) plus the node-specific value of the primary property (
<form> = <valu>). In defining a form for a particular object (node), the primary property must be defined such that its value is unique across all possible instances of that form.
The concept of a unique primary property is straightforward for forms that represent simple objects; for example, the “thing” that makes an IP address unique is the IP address itself:
inet:ipv4 = 1.2.3.4. Defining an appropriate primary property for more complex multidimensional nodes (such as those representing a Relationship or an Event) can be more challenging.
Because a primary property uniquely defines a node, it cannot be modified once the node is created. To “change” a node’s primary property you must delete and re-create the node.
Secondary Property
A Form can include optional secondary properties that provide additional detail about the form. As with primary properties, each secondary property must be defined as an explicit Type. Similarly, a Node includes optional secondary properties (as defined by the node’s form) along with their specific values (
<prop> = <pval>).
Secondary properties are characteristics that do not uniquely define a form, but may further describe or distinguish a given form and its associated nodes. For example, the Autonomous System (AS) that an IP address belongs to does not “define” the IP (and in fact an IP’s associated AS can change), but it provides further detail about the IP address.
Many secondary properties are derived from a node’s primary property (derived properties) and are automatically set when the node is created. For example, creating the node
file:path='c:\windows\system32\cmd.exe' will automatically set the properties
:base = cmd.exe,
:base:ext = exe, and
:dir = c:/windows/system32. Because a node’s primary property cannot be changed once set, any secondary properties derived from the primary property cannot be changed (i.e., are read-only) as well. Non-derived secondary properties can be set, modified, or even deleted.
Universal Property
Most secondary properties are form-specific, providing additional detail about individual objects within the data model. However, Synapse defines a subset of secondary properties as universal properties that are applicable to all forms within the Synapse data model. Universal properties include:
.created, which is set for all nodes and whose value is the date / time that the node was created within a Cortex.
.seen, which is optional for all nodes and whose value is a time interval (minimum or “first seen” and maximum or “last seen”) during which the node was observed, existed, or was valid.
Property Namespace
Properties exist within and extend the Form Namespace. Forms (form names) are primary properties, and consist of at least two elements separated by a colon (
: ). Secondary properties extend and exist within the namespace of their primary property (form). Secondary properties are preceded by a colon (
: ) and use the colon to separate additional namespace elements, if needed. (Universal properties are preceded by a period (
. ) to distinguish them from form-specific secondary properties.) For example, the secondary (both universal and form-specific) properties of
inet:fqdn include:
inet:fqdn.created(universal property)
inet:fqdn:zone(secondary property)
Secondary properties also comprise a relative namespace / set of relative properties with respect to their primary property (form). In many cases the Storm query language allows (or requires) you to reference a secondary property using its relative property name where the context of the relative namespace is clear (i.e.,
:zone vs.
inet:fqdn:zone).
Relative properties are also used for display purposes within Synapse for visual clarity (see the Node Example above).
In some cases secondary properties may have their own “namespace”. Viewed another way, while both primary and secondary properties use colons to separate elements of the property name, not all separators represent property “boundaries”; some act more as name “sub-namespace” separators. For example
file:bytes is a primary property / form. A
file:bytes form may include secondary properties such as
:mime:pe:imphash and
:mime:pe:complied. In this case
:mime and
:mime:pe are not themselves secondary properties, but sub-namespaces for individual MIME data types and the “PE executable” data type specifically.
Viewing or Working with Properties
As Properties are used to define Forms, they are defined within the Synapse source code with their respective Forms. Universal properties are not defined “per-form” but have their own section (Universal Properties) in the online documentation.
Properties can also be viewed within a Cortex. A full list of current properties can be displayed with the following Storm command:
storm> syn:prop
See Storm Reference - Model Introspection for additional detail on working with model elements within Storm.
Property Example
The data associated with a property’s definition is displayed slightly differently between the Synapse source code, the auto-generated online documents, and from the Storm command line. Users wishing to review property structure or other elements of the Synapse data model are encouraged to use the source(s) that are most useful to them.
As primary properties are forms and secondary properties (with the exception of universal properties) are form-specific, properties can be viewed within the Synapse source code and online documentation by viewing the associated Forms.
Within Storm, it is possible to view individual primary or secondary properties as follows:
Storm
Primary property:
storm> syn:prop=inet:fqdn syn:prop=inet:fqdn :doc = A Fully Qualified Domain Name (FQDN). :extmodel = False :form = inet:fqdn :type = inet:fqdn :univ = False
Secondary property:
storm> syn:prop=inet:fqdn:domain syn:prop=inet:fqdn:domain :base = domain :doc = The parent domain for the FQDN. :extmodel = False :form = inet:fqdn :relname = domain :ro = True :type = inet:fqdn :univ = False
Tag
Tags are annotations applied to nodes. Simplistically, they can be thought of as labels that provide context to the data represented by the node.
Broadly speaking, within Synapse:
Nodes represent things: objects, relationships, or events. In other words, nodes typically represent observables that are verifiable and largely unchanging.
Tags typically represent assessments: observations or judgements that could change if the data or the analysis of the data changes.
For example, an Internet domain is an “observable thing” - a domain exists, was registered through a domain registrar, and can be created as a node such as
inet:fqdn = woot.com. Whether a domain has been sinkholed (i.e., where a supposedly malicious domain is taken over or re-registered by a researcher to identify potential victims attempting to resolve the domain) is an assessment. A researcher may need to evaluate data related to that domain (such as domain registration records or current and past IP resolutions) to decide whether the domain appears to be sinkholed. This assessment can be represented by applying a tag such as
#cno.infra.sink.holed to the
inet:fqdn = woot.com node.
Tags are unique within the Synapse model because tags are both nodes and labels applied to nodes. Tags are nodes based on a form (
syn:tag, of type
syn:tag) defined within the Synapse data model. That is, the tag
#cno.infra.sink.holed can be applied to another node; but the tag itself also exists as the node
syn:tag = cno.infra.sink.holed. This difference is illustrated in the example below.
Tags are introduced here but are discussed in greater detail in Analytical Model - Tag Concepts.
Tag Example
The Storm query below displays the node for the tag
cno.infra.sink.holed:
storm> syn:tag=cno.infra.sink.holed syn:tag=cno.infra.sink.holed :base = holed :depth = 3 :doc = A domain (zone) that has been sinkholed. :title = Sinkholed domain :up = cno.infra.sink .created = 2022/06/23 22:28:43.478
The Storm query below displays the tag
#cno.infra.sink.holed applied to the node
inet:fqdn = hugesoft.org:
storm> inet:fqdn=hugesoft.org inet:fqdn=hugesoft.org :domain = org :host = hugesoft :issuffix = False :iszone = True :zone = hugesoft.org .created = 2022/06/23 22:28:43.547 #cno.infra.sink.holed = (2014/01/11 00:00:00.000, 2018/03/30 00:00:00.000) #rep.feye.apt1
Note that a tag applied to a node uses the “hashtag” symbol (
# ). This is a visual cue to distinguish tags on a node from the node’s secondary properties. The symbol is also used within the Storm syntax to reference a tag as opposed to a
syn:tag node. | https://synapse.docs.vertex.link/en/latest/synapse/userguides/data_model_terms.html | 2022-06-25T10:38:12 | CC-MAIN-2022-27 | 1656103034930.3 | [] | synapse.docs.vertex.link |
Basic Mode
Basic mode provides access to a selected set of settings and aspects of features, displaying a reduced set of options. This mode is suitable for the most common tasks and configurations.
Features
In basic mode, all Expert mode settings and views are hidden from the interface. However, if you select a particular task in basic mode that requires Expert mode settings, they will automatically be displayed.
Boot modes
When a computer boots, the first software that it runs is responsible for initializing the platform and providing an interface for the operating system to perform platform-specific operations.
Default boot modes
In EC2, two variants of the boot mode software are supported: Unified Extensible Firmware Interface (UEFI) and Legacy BIOS. By default, Graviton instance types run on UEFI, and Intel and AMD instance types run on Legacy BIOS.
Running Intel and AMD instances types on UEFI
Most Intel and AMD instance types can run on both UEFI and Legacy BIOS. To use UEFI, you must select an AMI with the boot mode parameter set to uefi, and the operating system contained in the AMI must be configured to support UEFI.
Purpose of the AMI boot mode parameter
The AMI boot mode parameter signals to EC2 which boot mode to use when launching an instance. When the boot mode parameter is set to uefi, EC2 attempts to launch the instance on UEFI. If the operating system is not configured to support UEFI, the instance launch might be unsuccessful.
Setting the boot mode parameter does not automatically configure the operating system for the specified boot mode. The configuration is specific to the operating system. For the configuration instructions, see the manual for your operating system.
Possible boot mode parameters on an AMI
The AMI boot mode parameter is optional. An AMI can have one of the following boot mode parameter values: uefi or legacy-bios. Some AMIs do not have a boot mode parameter. For AMIs with no boot mode parameter, the instances launched from these AMIs use the default value of the instance type—uefi on Graviton, and legacy-bios on all Intel and AMD instance types.
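For example, the boot mode parameter can be inspected, and set at AMI registration time, with the AWS CLI. This is only a sketch - the AMI ID, snapshot ID, image name, and device names below are placeholders, not values from this guide:

# Check the boot mode parameter of an existing AMI (empty output means no parameter is set)
aws ec2 describe-images \
    --image-ids ami-0123456789abcdef0 \
    --query 'Images[0].BootMode' \
    --output text

# Register an AMI that signals UEFI boot; the operating system on the snapshot
# must itself be configured to support UEFI
aws ec2 register-image \
    --name my-uefi-image \
    --boot-mode uefi \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/xvda \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={SnapshotId=snap-0123456789abcdef0}"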
Boot mode topics
- QA runs on Review Apps
- Performance Metrics
- Sample Data for Review Apps
- How to
- How does it work?
- Cluster configuration
- Diagnosing unhealthy Review App releases
- Frequently Asked Questions
- Other resources
Review Apps
Review Apps are deployed using the start-review-app-pipeline job. This job triggers a child pipeline containing a series of jobs to perform the various tasks needed to deploy a Review App.

For any of the following scenarios, the start-review-app-pipeline job would be automatically started:

- for merge requests with CI config changes
- for merge requests with frontend changes
- for merge requests with changes to {,ee/,jh/}{app/controllers}/**/*
- for merge requests with changes to {,ee/,jh/}{app/models}/**/*
- for merge requests with QA changes
- for scheduled pipelines
- the MR has the pipeline:run-review-app label set
QA runs on Review Apps
On every pipeline in the qa stage (which comes after the review stage), the review-qa-smoke and review-qa-reliable jobs are automatically started. The review-qa-smoke job runs the QA smoke suite and the review-qa-reliable job executes E2E tests identified as reliable.

You can also manually start the review-qa-all job: it runs the full QA suite.

After the end-to-end test runs have finished, Allure reports are generated and published by the allure-report-qa-smoke, allure-report-qa-reliable, and allure-report-qa-all jobs. A comment with links to the reports is added to the merge request.

Errors can be found in the gitlab-review-apps Sentry project and are filterable by Review App URL or commit SHA.
Performance Metrics
On every pipeline in the qa stage, the review-performance job is automatically started: this job does basic browser performance testing using a Sitespeed.io Container.
Sample Data for Review Apps
Upon deployment of a review app, project data is created from the sample-gitlab-project template project. This aims to provide projects with prepopulated resources to facilitate manual and exploratory testing.

The sample projects will be created in the root user namespace and can be accessed from the personal projects list for that user.
How to
Redeploy Review App from a clean slate
To reset a Review App and redeploy from a clean slate, do the following:

- Run the review-stop job.
- Re-deploy by running or retrying the review-deploy job.

Doing this removes all existing data from a previously deployed Review App.
Get access to the GCP Review Apps cluster
You need to open an access request (internal link) for the gcp-review-apps-dev GCP group and role.

This grants you the following permissions:

- Retrieving pod logs. Granted by Viewer (roles/viewer).
- Running a Rails console. Granted by Kubernetes Engine Developer (roles/container.pods.exec).

Once the access request is approved, you can fetch credentials for the cluster as sketched below.
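This is only a sketch - the cluster name, region, and project below are placeholders, not values documented here; use the coordinates provided when your access request is approved:

gcloud container clusters get-credentials <cluster-name> \
    --region <region> \
    --project <gcp-project>

# Verify that kubectl is now pointed at the review-apps cluster
kubectl config current-context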
Log into my Review App
For GitLab Team Members only. If you want to sign in to the review app, review the GitLab handbook information for the shared 1Password account.
- The default username is root.
- The password can be found in the 1Password login item named GitLab EE Review App.
Enable a feature flag for my Review App
- Open your Review App and log in as documented above.
- Create a personal access token.
- Enable the feature flag using the Feature flag API, for example with the API call sketched below.
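A minimal sketch of the API call (the token must belong to an administrator of the Review App instance, such as the root user above; the Review App URL and feature flag name are placeholders):

curl --request POST \
     --header "PRIVATE-TOKEN: <your-personal-access-token>" \
     "https://<review-app-url>/api/v4/features/<feature_flag_name>?value=true"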
Find my Review App slug
- Open the review-deploy job.
- Look for ** Deploying review-*.
- For instance for ** Deploying review-1234-abc-defg... **, your Review App slug would be review-1234-abc-defg in this case.
Run a Rails console
- Make sure you have access to the cluster and the container.pods.exec permission first.
- Filter Workloads by your Review App slug. For example, review-qa-raise-e-12chm0.
- Find and open the toolbox Deployment. For example, review-qa-raise-e-12chm0-toolbox.
- Click on the Pod in the “Managed pods” section. For example, review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz.
- Click on the KUBECTL dropdown, then Exec -> toolbox.
- Replace -c toolbox -- ls with -it -- gitlab-rails console from the default command, or:
  - Run kubectl exec --namespace review-qa-raise-e-12chm0 review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz -it -- gitlab-rails console, and
  - Replace review-qa-raise-e-12chm0-toolbox-d5455cc8-2lsvz with your Pod’s name.
Dig into a Pod’s logs
- Make sure you have access to the cluster and the container.pods.getLogs permission first.
- Filter Workloads by your Review App slug. For example, review-qa-raise-e-12chm0.
- Find and open the migrations Deployment. For example, review-qa-raise-e-12chm0-migrations.1.
- Click on the Pod in the “Managed pods” section. For example, review-qa-raise-e-12chm0-migrations.1-nqwtx.
- Click on the Container logs link.
Alternatively, you could use the Logs Explorer which provides more utility to search logs. An example query for a pod name is as follows:
resource.labels.pod_name:"review-qa-raise-e-12chm0-migrations"
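If you have kubectl access to the cluster configured (see “Get access to the GCP Review Apps cluster” above), you can also tail logs directly from the command line. This is only a sketch - the namespace and Deployment name below reuse the example slug from this section and should be replaced with your own:

kubectl logs --namespace review-qa-raise-e-12chm0 \
    deployment/review-qa-raise-e-12chm0-migrations.1 \
    --tail=100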
How does it work?
CI/CD architecture diagram
Detailed explanation
- On every pipeline during the prepare stage, the compile-production-assets job is automatically started.
  - Once it’s done, the review-build-cng job starts since the CNG-mirror pipeline triggered in the following step depends on it.
- Once compile-production-assets is done, the review-build-cng job triggers a pipeline in the CNG-mirror project.
  - The review-build-cng job automatically starts only if your MR includes CI or frontend changes. In other cases, the job is manual.
  - The CNG-mirror pipeline creates the Docker images of each component (for example, gitlab-rails-ee, gitlab-shell, gitaly etc.) based on the commit from the GitLab pipeline and stores them in its registry.
  - We use the CNG-mirror project so that the CNG (Cloud Native GitLab) project’s registry is not overloaded with a lot of transient Docker images.
- Once review-build-cng is done, the review-deploy job deploys the Review App using the official GitLab Helm chart to the review-apps Kubernetes cluster on GCP.
  - The actual scripts used to deploy the Review App can be found at scripts/review_apps/review-apps.sh.
  - These scripts are basically our official Auto DevOps scripts where the default CNG images are overridden with the images built and stored in the CNG-mirror project’s registry.
  - Since we’re using the official GitLab Helm chart, this means you get a dedicated environment for your branch that’s very close to what it would look like in production.
  - Each review app is deployed to its own Kubernetes namespace. The namespace is based on the Review App slug that is unique to each branch.
- Once the review-deploy job succeeds, you should be able to use your Review App thanks to the direct link to it from the MR widget. To log into the Review App, see “Log into my Review App?” below.
Additional notes:

- If the review-deploy job keeps failing (and a manual retry didn’t help), please post a message in the #g_qe_engineering_productivity channel and/or create a ~"Engineering Productivity" ~"ep::review apps" ~"type::bug" issue with a link to your merge request. Note that the deployment failure can reveal an actual problem introduced in your merge request (that is, this isn’t necessarily a transient failure)!
- If the review-qa-smoke or review-qa-reliable job keeps failing (note that we already retry them once), please check the job’s logs: you could discover an actual problem introduced in your merge request. You can also download the artifacts to see screenshots of the page at the time the failures occurred. If you don’t find the cause of the failure or if it seems unrelated to your change, please post a message in the #quality channel and/or create a ~Quality ~"type::bug" issue with a link to your merge request.
- The manual review-stop job can be used to stop a Review App manually, and is also started by GitLab once a merge request’s branch is deleted after being merged.
- The Kubernetes cluster is connected to the gitlab projects using the GitLab Kubernetes integration. This basically allows us to have a link to the Review App directly from the merge request widget.
Auto-stopping of Review Apps
Review Apps are automatically stopped 2 days after the last deployment thanks to the Environment auto-stop feature.

If you need your Review App to stay up for a longer time, you can pin its environment or retry the review-deploy job to update the “latest deployed at” time.

The review-cleanup job that automatically runs in scheduled pipelines stops stale Review Apps after 5 days, deletes their environment after 6 days, and cleans up any dangling Helm releases and Kubernetes resources after 7 days.
Cluster configuration
The cluster is configured via Terraform in the engineering-productivity-infrastructure project.

The node pool image type must be Container-Optimized OS (cos), not Container-Optimized OS with Containerd (cos_containerd), due to a known issue on the Kubernetes executor for GitLab Runner.
Helm
The Helm version used is defined in the registry.gitlab.com/gitlab-org/gitlab-build-images:gitlab-helm3-kubectl1.14 image used by the review-deploy and review-stop jobs.
Diagnosing unhealthy Review App releases
If Review App Stability dips, this may be a signal that the review-apps cluster is unhealthy.
Leading indicators may be health check failures leading to restarts or majority failure for Review App deployments.
The Review Apps Overview dashboard aids in identifying load spikes on the cluster, and if nodes are problematic or the entire cluster is trending towards unhealthy.
See the review apps page of the Engineering Productivity Runbook for troubleshooting review app releases.
Frequently Asked Questions
Isn’t it too much to trigger CNG image builds on every test run? This creates thousands of unused Docker images.
We have to start somewhere and improve later. Also, we’re using the CNG-mirror project to store these Docker images so that we can just wipe out the registry at some point, and use a new fresh, empty one.
How do we secure this from abuse? Apps are open to the world so we need to find a way to limit it to only us.
This isn’t enabled for forks.
Other resources
Helpful command line tools
- K9s - enables CLI dashboard across pods and enabling filtering by labels
- Stern - enables cross pod log tailing based on label/field selectors
Return to Testing documentation
Monitor Azure Cosmos DB
APPLIES TO: SQL API | Cassandra API | Gremlin API | Table API | Azure Cosmos DB API for MongoDB
You can monitor your data with client-side and server-side metrics. When using server-side metrics, you can monitor the data stored in Azure Cosmos DB with the following options:
Monitor from Azure Cosmos DB portal: You can monitor with the metrics available within the Metrics tab of the Azure Cosmos account. The metrics on this tab include throughput, storage, availability, latency, consistency, and system level metrics. By default, these metrics have a retention period of seven days. To learn more, see the Monitoring data collected from Azure Cosmos DB section of this article.
Monitor with metrics in Azure Monitor: You can monitor the metrics of your Azure Cosmos account and create dashboards from Azure Monitor. Azure Monitor collects the Azure Cosmos DB metrics by default; you will not need to explicitly configure anything. These metrics are collected with one-minute granularity; the granularity may vary based on the metric you choose. By default, these metrics have a retention period of 30 days. Most of the metrics that are available from the previous options are also available in these metrics. The dimension values for the metrics, such as container name, are case-insensitive, so you need to use case-insensitive comparison when doing string comparisons on these dimension values. To learn more, see the Analyze metric data section of this article.

Monitor with diagnostic logs in Azure Monitor: You can monitor the logs of your Azure Cosmos account and create dashboards from Azure Monitor. Data such as events and traces that occur at a second granularity are stored as logs. For example, if the throughput of a container is changed or the properties of a Cosmos account are changed, these events are captured within the logs. You can analyze these logs by running queries on the gathered data. To learn more, see the Analyze log data section of this article.
Monitor programmatically with SDKs: You can monitor your Azure Cosmos account programmatically by using the .NET, Java, Python, Node.js SDKs, and the headers in REST API. To learn more, see the Monitoring Azure Cosmos DB programmatically section of this article.
The following image shows the different options available to monitor an Azure Cosmos DB account through the Azure portal:
When using Azure Cosmos DB, at the client-side you can collect the details for request charge, activity ID, exception/stack trace information, HTTP status/sub-status code, diagnostic string to debug any issue that might occur. This information is also required if you need to reach out to the Azure Cosmos DB support team.
Monitor overview
The Overview page in the Azure portal for each Azure Cosmos DB account includes a brief view of the resource usage, such as total requests, requests that resulted in a specific HTTP status code, and hourly billing. This information is helpful, however only a small amount of the monitoring data is available from this pane. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable other types of data collection with some configuration.
Cosmos DB insights
Cosmos DB insights is a feature based on the workbooks feature of Azure Monitor and uses the same monitoring data collected for Azure Cosmos DB described in the sections below. Use Azure Monitor for a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience, and use the other features of Azure Monitor for detailed analysis and alerting. To learn more, see the Explore Cosmos DB insights article.
Monitoring data

Azure Cosmos DB collects the same kinds of monitoring data as other Azure resources. Some of this data is collected automatically, and you can enable more data collection with some configuration.
Collection and routing
Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
See Create diagnostic setting to collect platform logs and metrics in Azure for the detailed process for creating a diagnostic setting using the Azure portal and some diagnostic query examples. When you create a diagnostic setting, you specify which categories of logs to collect.
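As an illustrative sketch only, a diagnostic setting that routes the DataPlaneRequests log category (referenced in the queries later in this article) and the Requests metric category to a Log Analytics workspace can also be created with the Azure CLI. The resource group, account, and workspace names below are placeholders; depending on your environment the workspace may need to be given as a full resource ID:

az monitor diagnostic-settings create \
    --name cosmos-diagnostics \
    --resource $(az cosmosdb show --resource-group MyResourceGroup --name mycosmosaccount --query id --output tsv) \
    --workspace MyLogAnalyticsWorkspace \
    --logs '[{"category":"DataPlaneRequests","enabled":true}]' \
    --metrics '[{"category":"Requests","enabled":true}]'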
The metrics and logs you can collect are discussed in the following sections.
Analyzing metrics
Azure Cosmos DB provides a custom experience for working with metrics. You can analyze metrics for Azure Cosmos DB with metrics from other Azure services using Metrics explorer by opening Metrics from the Azure Monitor menu. See Getting started with Azure Metrics Explorer for details on using this tool. You can also check out how to monitor server-side latency, request unit usage, and normalized request unit usage for your Azure Cosmos DB resources.
For a list of the platform metrics collected for Azure Cosmos DB, see Monitoring Azure Cosmos DB data reference metrics article.
All metrics for Azure Cosmos DB are in the namespace Cosmos DB standard metrics. You can use the following dimensions with these metrics when adding a filter to a chart:
- CollectionName
- DatabaseName
- OperationType
- Region
- StatusCode
For reference, you can see a list of all resource metrics supported in Azure Monitor.
View operation level metrics for Azure Cosmos DB
Select Monitor from the left-hand navigation bar, and select Metrics.
From the Metrics pane > Select a resource > choose the required subscription, and resource group. For the Resource type, select Azure Cosmos DB accounts, choose one of your existing Azure Cosmos accounts, and select Apply.
Next you can select a metric from the list of available metrics. You can select metrics specific to request units, storage, latency, availability, Cassandra, and others. To learn in detail about all the available metrics in this list, see the Metrics by category article. In this example, let's select Request units and Avg as the aggregation value.
In addition to these details, you can also select the Time range and Time granularity of the metrics. At max, you can view metrics for the past 30 days. After you apply the filter, a chart is displayed based on your filter. You can see the average number of request units consumed per minute for the selected period.
Add filters to metrics
You can also filter metrics and the chart displayed by a specific CollectionName, DatabaseName, OperationType, Region, and StatusCode. To filter the metrics, select Add filter and choose the required property such as OperationType and select a value such as Query. The graph then displays the request units consumed for the query operation for the selected period. The operations executed via Stored procedure aren't logged so they aren't available under the OperationType metric.
You can group metrics by using the Apply splitting option. For example, you can group the request units per operation type and view the graph for all the operations at once as shown in the following image:
Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in Azure Monitor resource log schema. For a list of the types of resource logs collected for Azure Cosmos DB, see Monitoring Azure Cosmos DB data reference.
The Activity log is a platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
Azure Cosmos DB stores data in the following tables.
Sample Kusto queries
Prior to using Log Analytics to issue Kusto queries, you must enable diagnostic logs for control plane operations. When enabling diagnostic logs, you will select between storing your data in a single AzureDiagnostics table (legacy) or resource-specific tables.
When you select Logs from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource.
Important
If you want to run a query that includes data from other accounts or data from other Azure services, select Logs from the Azure Monitor menu. For more information, see Log query scope and time range in Azure Monitor Log Analytics.
Here are some queries that you can enter into the Log search search bar to help you monitor your Azure Cosmos resources. The exact text of the queries will depend on the collection mode you selected when you enabled diagnostics logs.
To query for all control-plane logs from Azure Cosmos DB:
AzureDiagnostics | where ResourceProvider=="MICROSOFT.DOCUMENTDB" | where Category=="ControlPlaneRequests"
To query for all data-plane logs from Azure Cosmos DB:
AzureDiagnostics | where ResourceProvider=="MICROSOFT.DOCUMENTDB" | where Category=="DataPlaneRequests"
To query for a filtered list of data-plane logs, specific to a single resource:
AzureDiagnostics | where ResourceProvider=="MICROSOFT.DOCUMENTDB" | where Category=="DataPlaneRequests" | where Resource=="<account-name>"
Important
In the AzureDiagnostics table, many fields are case-sensitive and uppercase including, but not limited to; ResourceId, ResourceGroup, ResourceProvider, and Resource.
To get a count of data-plane logs, grouped by resource:
AzureDiagnostics | where ResourceProvider=="MICROSOFT.DOCUMENTDB" | where Category=="DataPlaneRequests" | summarize count() by Resource
To generate a chart for data-plane logs, grouped by the type of operation:
AzureDiagnostics | where ResourceProvider=="MICROSOFT.DOCUMENTDB" | where Category=="DataPlaneRequests" | summarize count() by OperationName | render columnchart
These examples are just a small sampling of the rich queries that can be performed in Azure Monitor using the Kusto Query Language. For more information, see samples for Kusto queries.
Alerts

Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. For example, you can configure alert rules for your Azure Cosmos DB resources; a detailed list of alert rules is available from the Azure portal. To learn more, see the how to configure alerts article.
Monitor Azure Cosmos DB programmatically
The account level metrics available in the portal, such as account storage usage and total requests, are not available by using the SQL APIs. However, you can retrieve usage data at the collection level by using the SQL APIs; the response includes many usage properties such as CollectionSizeUsage, DatabaseUsage, DocumentUsage, and more.
To access more metrics, use the Azure Monitor SDK. Available metric definitions can be retrieved by calling:{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01
To retrieve individual metrics, use the following format:{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDBAccountName}/providers/microsoft.insights/metrics?timespan={StartTime}/{EndTime}&interval={AggregationInterval}&metricnames={MetricName}&aggregation={AggregationType}&`$filter={Filter}&api-version=2018-01-01
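Purely as a sketch, you can acquire a bearer token with the Azure CLI and call the endpoint above with curl; the subscription, resource group, and account values are placeholders:

TOKEN=$(az account get-access-token --query accessToken --output tsv)
curl --silent \
     --header "Authorization: Bearer $TOKEN" \
     "https://management.azure.com/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.DocumentDb/databaseAccounts/<DocumentDBAccountName>/providers/microsoft.insights/metricDefinitions?api-version=2018-01-01"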
To learn more, see the Azure monitoring REST API article.
Next steps
- See Azure Cosmos DB monitoring data reference for a reference of the logs and metrics created by Azure Cosmos DB.
- See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Deploy DNSSEC with Windows Server 2012
Applies To: Windows Server 2012 R2, Windows Server 2012
Use the following concepts and procedures to deploy Domain Name System Security Extensions (DNSSEC) in Windows Server 2012 or in Windows Server 2012 R2.
Deploying DNSSEC
To deploy DNSSEC, review DNSSEC conceptual information below, and then use the DNSSEC deployment checklists that are provided in this guide.
DNSSEC concepts
Overview of DNSSEC: Provides information about how DNSSEC works.
DNS Servers: Describes DNSSEC support in Windows Server.
DNS Clients: Describes the behavior of security-aware and non-security-aware DNS clients.
DNS Zones: Provides information about zone signing and unsigning with Windows PowerShell or DNS Manager.
Trust Anchors: Describes trust anchors, which are public cryptographic keys that must be installed on DNS servers to validate DNSSEC data.
The NRPT: Introduces and provides details about the Name Resolution Policy Table (NRPT).
Why DNSSEC: Describes risks and benefits of DNSSEC.
Stage a DNSSEC Deployment: Provides steps and considerations to help introduce DNSSEC to your environment.
DNSSEC Performance Considerations: Describes the impact of zone signing on a DNS infrastructure.
DNSSEC Requirements: Describes the requirements for deploying DNSSEC.
DNSSEC deployment checklists
See also
DNSSEC Deployment Planning
Appendix A: DNSSEC Terminology
Appendix B: Windows PowerShell for DNS Server
4 Installation¶
4.1 Try it out¶
It is possible to quickly try out the OptServer without going through the full installation process. The two options are:
Use the public instance running at. It is a demo with size limitations suited for testing small problems. The website has instructions and coordinates of access points.
Use the Docker image available from . It will install the simplest stand-alone OptServer container with the latest MOSEK version and with all dependencies installed internally (see the sketch below).
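As an illustration only - the image name and exposed port below are assumptions and should be checked against the published image's own documentation - running the container could look like this:

# Hypothetical image name and port; verify against the image's documentation
docker run -d --name optserver -p 30080:30080 mosek/optserver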
The rest of this page describes the complete installation process.
4.2 Requirements¶
The following are prerequisites to run OptServer:
OptServer is only available for 64bit Linux.
Access to a PostgreSQL database is required.
MOSEK binaries are required. If the OptServer is installed from a standard MOSEK distribution, then it will naturally contain the necessary files, however an external MOSEK installation can also be used.
4.3 Locating files¶
The relevant files of the Optimization Server are organized as reported below
where <MSKHOME> is the folder in which the MOSEK Optimization Suite has been installed.
4.4 Installation¶
To install OptServer and test the installation perform the following steps.
4.4.1 Run install script¶
Run the script <MSKHOME>/mosek/10.0/opt-server/bin/install_MosekServer to configure the server. The full list of supported options can be obtained via
./install_MosekServer --help
As a reasonable minimum the configuration should specify where to install the server, the port to listen on, the database connection string and the location of an available MOSEK installation. For example:
./install_MosekServer --inplace \ --port $PORT \ --mosekdir ../../.. \ --database-resource "host=/var/run/postgresql user=$USER dbname=$DBNAME sslmode=disable"
The install script creates a configuration file <MSKHOME>/mosek/10.0/opt-server/etc/Mosek/server.conf which can be edited by hand if necessary.
4.4.2 Initialize the database¶
Run
./MosekServer --create-database
to initialize the database. It will use the information provided in the database connection string specified in the previous step. This step is not necessary if the database exists from a previous installation.
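If the PostgreSQL database or role referenced in the connection string does not exist yet, you may need to create them first with the standard PostgreSQL client tools. This is an assumption about your environment rather than a documented OptServer step; the names below are placeholders matching the connection string used earlier:

# Placeholder names; adjust to match the connection string used during installation
sudo -u postgres createuser $USER
sudo -u postgres createdb --owner=$USER $DBNAME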
4.4.3 Initialize admin password (optional)¶
If the Web interface to the OptServer is to be used, run
./MosekServer --reset-admin
to set the password for the admin user. This step is not required if the Web GUI will not be used. Note that the GUI will only be available with SSL enabled, see Sec. 6.3 (Security).
4.4.4 Run the server¶
Start the server by running the script
./MosekServer
The server will print its initial configuration and continue writing the log to a file. To obtain the full list of server options run
./MosekServer --help
For debugging purposes it is convenient to use the options
./MosekServer --debug --logfile -
This will increase the amount of debug output and redirect it to standard output.
4.4.5 Test the installation¶
To test that the OptServer is working properly, locate, compile (if necessary), and run the example opt_server_sync.* for either C, Python, Java or C#. Examples and sample data files can be found in the distribution package under <MSKHOME>/mosek/10.0/tools/examples.
For example, assuming that MOSEK was installed in Python, one can go to the folder with Python examples and run
python opt_server_sync.py ../data/25fv47.mps $SERVER $PORT
where $SERVER:$PORT points to the OptServer. If the configuration is correct the example will print a log from solving the problem and a solution summary. In case of issues the log output of the OptServer should be consulted to determine the cause.
This example also demonstrates how to use the OptServer from the MOSEK API.