isogmt
isogmt - Run GMT command or script in isolation mode [classic mode only]
Synopsis
isogmt command.
Examples
Run the shell script script.gmt in isolation mode:
isogmt sh script.gmt
Verify a user's CPF number and identity.
MetaMap connects with the Brazilian IRS (Ministério da Fazenda / Treasury) to validate that the CPF (Cadastro de Pessoas Físicas / Registration of Individuals) number present in the ID card exists and its owner matches the data obtained from it.
This endpoint is now legacy
This version of the CPF check only handles individual validation requests. Use the new version of the Brazil CPF endpoint, which can handle batch validation requests.
Configure SQL Server protection
Updated: May 13, 2016
Applies To: System Center 2012 SP1 - Data Protection Manager, System Center 2012 - Data Protection Manager, System Center 2012 R2 Data Protection Manager
This topic provides the information that you should consider when you’re planning to protect a Microsoft SQL Server database by using Data Protection Manager (DPM).
Which SQL Server versions and functionality does DPM support?
Versions
SQL Server 2005
SQL Server 2008
SQL Server 2008 R2
SQL Server 2012
SQL Server 2014
Functionality
SQL clustering
DPM is cluster aware when it is protecting a SQL Server cluster: it is aware of the cluster's identity in addition to the individual nodes. If the instance of SQL Server fails over to another node, the DPM server will continue to protect the SQL Server cluster without any intervention from backup administrators.
SQL mirroring
When a DPM server is protecting a SQL Server database that is mirrored, the DPM server is aware of the mirrored database and correctly protects the shared dataset.
SQL log shipping
In scenarios in which SQL Server log shipping is being used, DPM will automatically discover that log shipping is being used and will auto-configure itself to coexist. This makes sure of correct SQL protection.
SQL AlwaysOn
When DPM is protecting SQL AlwaysOn, it will automatically detect availability groups. It will also detect a failover occurrence and will continue to protect the database.
For more information about SQL Server protection prerequisites, see the Prerequisites page. We recommend that you read through the SQL Server prerequisites before you set up your SQL protection.
Prepare DPM before you configure SQL Server protection
Before you set up SQL Server protection, you should make sure that you follow these steps:
Deploy DPM—Verify that DPM is installed and deployed correctly. If you do not have DPM deployed, go to the following links for guidance:
Set up storage—Check that you have storage set up. Read more about your options in the following articles:
Learn about short-term storage to disk and storage pools in Plan for disk backups.
For storage to Azure with Azure Backup, see Plan for Azure backups.
For long-term storage to tape, see Plan for tape-based backups.
Set up the DPM protection agent—The agent has to be installed on the instance of SQL Server. Read Plan for protection agent deployment, and then Set up the protection agent.
Set up a protection group for the instance of SQL Server
In the DPM console, click Protection.
Click New to start the Create New Protection Group wizard.
On the Select Group Members page of the Create New Protection Group wizard, under Available members, expand your instance of SQL Server.
The instances of SQL Server on that server will be shown. You have the option of selecting protection at the instance level or protection of individual databases. Continue through the rest of the wizard to complete the setup of your SQL protection.
More information about creating protection groups can be found in this article Create and manage protection groups.
Note: When you are protecting at the instance level, any database that is added to that instance of SQL Server will automatically be added to DPM protection. See the following screen shot for an example of auto-protection at the instance level and of individual database protection.
Note: If you are using SQL Server AlwaysOn availability groups, you can create a protection group that contains the availability groups. The DPM server detects the availability groups and will display them under Cluster Group. Select the whole group to protect it so that any databases that you add to the group are protected automatically. You can also select individual databases instead of the whole group.
For each instance of SQL Server, you can also run a system state backup or full bare metal backup (which includes the system state). This is useful if you want to be able to recover your whole server and not just data. More information about protection groups and bare metal backup can be found in the following articles:
Plan for protection groups
Plan for protection group long-term and short-term protection
Back up and restore server system state and bare metal recovery (BMR)
Then follow the instructions in Create and manage protection groups.
After you create the protection group, initial replication of the data occurs. Backup then occurs in line with the protection group settings.
Monitoring notifications
After the protection groups are created, the initial replication occurs, and DPM starts backing up and synchronizing the SQL Server data. You can use DPM to monitor the initial synchronization and successive backups in the following ways:
Using default DPM monitoring, you can set up notifications for proactive monitoring by publishing alerts and configuring notifications. You can send notifications by email for critical, warning, or informational alerts, and for the status of instantiated recoveries.
If you have System Center Operations Manager (SCOM) deployed in the organization, you can use SCOM for deeper DPM monitoring and management, which is sufficient to meet your SQL Server protection monitoring needs. For more information, see this article: Manage and Monitor DPM.
Set up monitoring notifications
In the DPM Administrator Console, click Monitoring > Action > Options.
Click SMTP Server, enter the email address to which you want DPM to send the test message, and then click OK.
Click Options > Notifications, and then select the kinds of alerts about which recipients want to be notified. In Recipients, type the email address of each recipient to whom you want DPM to send copies of the notifications.
To test the SMTP server settings, click Send Test Notification > OK.
Introduction
Tessellation is a computer graphics technique that can make a coarse, low-polygon mesh render smooth. This is achieved through polygonal subdivision, which happens at render-time. Working with low-polygon meshes and letting Redshift do the subdivision during rendering has certain advantages:
- Low-polygon meshes can be simpler to manage for animation reasons
- The 3D program itself doesn't have to maintain large numbers of polygons which can be expensive in terms of system memory
It can be more memory-efficient (which is important for GPUs) when combined with view-dependent and/or adaptive subdivision. Small or distant objects, for example, can render smooth with fewer subdivisions.
Redshift subdivides quad polygons using the Catmull-Clark algorithm. For triangles, it uses the Loop algorithm. Redshift supports both screen-space and world-space adaptive tessellation for improved memory usage.
Displacement is a technique typically combined with tessellation. It allows the user to add extra detail on their meshes through shader networks, i.e. textures, noise shader nodes, etc.
The benefits of displacement include:
- Manipulating textures and shader networks for certain displacement effects (such as a brick wall) is much easier than manipulating lots of vertices in a 3D program
- Sculpting apps like ZBrush and Mudbox are easier to use compared to polygon modeling when creating organic geometry. A displacement (or vector displacement) map allows this sculpted detail to be applied on a fairly low-resolution mesh.
Because displacement can happen on adaptively tessellated meshes, it can be more memory efficient than using a full-detail, full-tessellation mesh in memory at all times and irrespective of its size or viewpoint.
Redshift supports both heightfield (displacing along the vertex normal) and vector displacement maps. The vector displacement maps can be in object or tangent space. Importantly, any displacement detail that couldn't be represented given the existing tessellation settings is represented, instead, using bump mapping – therefore a good level of surface detail can be present even in fairly low-quality tessellation settings.
How To Enable
If you don't care about adaptive tessellation or displacement and prefer to work with XSI's "geometry approximation", you can!
On the other hand, if you do care about adaptive tessellation and/or displacement, you'll need to create a "Redshift Mesh Parameters" property on your mesh(es).
If you don't care about adaptive tessellation or displacement and prefer to work with Maya's "Smooth Mesh" properties, you can!
If, on the other hand, you do care about adaptive tessellation and/or displacement, you'll need to use the Redshift-specific tessellation/displacement properties. You have two options for that.
The easiest way is to use the object's Redshift properties, as shown below.
Please note that once you enable Redshift's tessellation, the equivalent Maya Smooth Mesh options will be overridden by the Redshift ones!
Or, alternatively, you can create a "Redshift Mesh Parameters" node for your mesh(es). You do that through the menu Redshift -> Object Properties -> Create mesh parameter node for selection.
The benefit of this method is that you can use a single mesh parameter node for a hierarchy/group of objects, which is useful when multiple objects need to share the same tessellation/displacement options.
To enable Redshift tessellation and displacement in 3ds Max, you'll need to attach a "Redshift Mesh Params" modifier on your object, as shown below.
In Cinema 4D, the object Tessellation and Displacement options are part of the Redshift Object Tag. In the scene tree, right-click on the desired object and select the Redshift Object tag from the Redshift Tags category.
After selecting the tag, navigate to the Geometry tab. To activate the settings, check the Override option.
The Tessellation and Displacement settings are effective on the object that hosts the Redshift Object tag as well as any child objects.
In Houdini, the object Tessellation and Displacement options are part of the Redshift OBJ Spare Parameters, which can be added by selecting the object nodes and clicking the ObjParms icon in the Redshift toolbar. There is also a command to remove the spare parameters from the selected objects if needed.
In Katana, the object Tessellation and Displacement options are part of the RedshiftObjectSettings node.
Tessellation Settings
Subdivision Rule
Redshift supports two different algorithms for polygon subdivision: "Loop", which is used for triangles and "Catmull-Clark", which is used for quads. These algorithms are also called "Subdivision Rules".
The "CC+Loop" subdivision rule uses Catmull-Clark for quads and Loop for triangles. On the other hand, the "CC Only" option uses Catmull-Clark for triangles too, by first splitting each triangle into three quads. The "CC Only" mode should be used when Redshift is combined with other software that doesn't support Loop subdivision.
Screen Space Adaptive
Enabling screen-space adaptive tessellation means that objects that are further away from the camera will be subdivided less and will, therefore, use fewer polygons and less GPU memory. If this option is disabled, then subdivision becomes "world space adaptive". This option affects the unit used for the "minimum edge length" setting, as explained below.
Smooth Subdivision
This controls whether Redshift should subdivide quads and triangles using the Catmull-Clark and Loop algorithms respectively or whether it should do a simple linear subdivision instead. If you are adding displacement on simple angular meshes (such as walls or a box) and don't want them turned into smooth, curvy objects, disabling smooth subdivision might be the right option for you.
Minimum Edge Length
Adaptive subdivision keeps dividing quads/triangles while their edges are longer than this setting. If you are using screen-space adaptive subdivision, this length is measured in screen pixels. If you are not using screen-space adaptive subdivision, this means "world space adaptive subdivision" so the length is measured in world-space units. The smaller this value, the more tessellation will be applied to the mesh. If you set the value to zero, tessellation will continue until "maximum subdivisions" (see below) has been reached.
The following pictures show a tessellated cube which becomes this spherical-like shape with smooth tessellation. Notice how there is more tessellation when the min edge length becomes smaller. This is showing screen-space adaptive subdivision so 8 means "8 pixels" while 2 means "2 pixels".
Maximum Subdivisions
Subdivision happens in 'passes'. Each pass can turn single quad/triangle into 4 quads/triangles respectively. This means that the number of polygons can grow extremely quickly with this option. It is a "power of four".
- A setting of 1 can turn 1 quad into 4 quads
- A setting of 2 can turn 1 quad into 16 quads
- A setting of 3 can turn 1 quad into 64 quads
- A setting of 4 can turn 1 quad into 256 quads
- A setting of 5 can turn 1 quad into 1024 quads
- A setting of 6 can turn 1 quad into 4096 quads
- A setting of 7 can turn 1 quad into 16384 quads
- A setting of 8 can turn 1 quad into 65536 quads
So a mesh containing only 1000 quads, using a "minimum edge length" of 0.0 and a "maximum subdivisions" of 8, could become a 65 million quad mesh, which could take a long time to generate and would consume lots of memory! For this reason, great care has to be applied when adjusting both "maximum subdivisions" and "minimum edge length".
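As a quick sanity check before raising these values, you can estimate the worst case directly: the final quad count is the base quad count multiplied by 4 raised to the "maximum subdivisions" value, assuming every quad reaches the limit (i.e. a minimum edge length of 0). A tiny C# sketch of that arithmetic, using the example numbers above:

using System;

// Worst-case tessellated quad count: base quads * 4^maxSubdivisions.
long baseQuads = 1000;      // quads in the source mesh (example value)
int maxSubdivisions = 8;    // the "maximum subdivisions" setting
long worstCase = baseQuads * (long)Math.Pow(4, maxSubdivisions);
Console.WriteLine(worstCase); // 65,536,000 quads, roughly the 65 million mentioned above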
Out-Of-Frustum Tessellation Factor
This option allows objects that are outside the camera frustum (i.e. objects that are not directly visible to the camera) to be tessellated to a lesser degree. The larger the factor, the lesser this out-of-frustum tessellation will be. This setting can help save memory by tessellating "unimportant" objects less. However, sometimes an object might be outside the camera frustum but still very visible through reflections. Or it might be casting a well-defined shadow within the camera frustum. For such objects, smaller factors should be used. A factor of 0.0 disables this optimization. For a more detailed discussion on this, please see the relevant section below.
Limit Out-Of-Frustum Tessellation / Max Out-Of-Frustum Subdivs
When "Limit Out-Of-Frustum Tessellation" is enabled, you can specify the maximum number of subdivisions that should happen outside the camera frustum with the "Max Out-Of-Frustum Subdivs" option. This setting is useful when the "Out-Of-Frustum Tessellation Factor" setting can still yield excessive tessellation. This condition can happen when the mesh is using large displacements. For a more detailed discussion on this, please see the relevant section below.
Displacement Settings
Maximum Displacement
This parameter tells Redshift the maximum length by which the displacement shaders/textures will displace the vertices. For example, if you're adding two displacement textures in the shader graph and each displacement texture can push the vertices by 1 unit, then both of them together can push the vertices by a maximum of 2 units, so a setting of 2.0 should be used. Unfortunately, due to the flexible nature of shaders, it's not currently possible to compute this value automatically. Settings similar to this can be found on other renderers, too. They might be called "bounds padding" or "min/max bound".
If the value for this setting is set too low, you will see a 'ceiling' on your displacements, i.e. the maximum displacement will be clamped. If, on the other hand, this value is set too high, there won't be any visual artifacts but the performance could suffer. It is, therefore, advisable to use a value that is as low as possible before seeing any artifacts.
Displacement Scale
This scales the displacement results, which has the effect of accentuating or toning down the displacement. While it is possible to scale the displacement in the shader graph itself, this setting was added for the case where the same displacement shader is used on different meshes but different levels of displacement 'strength' are required among these meshes.
Below is a sphere displaced with a fractal shader.
Enable Auto Bump Mapping
Very fine surface detail can require very high tessellation levels to be captured sufficiently; otherwise the result might look too soft. However, this can mean longer rendering times and higher memory usage!
The 'Enable Auto Bump Mapping' option effectively emulates what would happen if you were to tessellate your geometry to a sub-pixel level and modifies the surface normals accordingly, as if they were bump-mapped.
The following two spheres were rendered with exactly the same tessellation settings but the sphere on the right uses auto bump mapping. Notice how it's able to capture more surface detail.
How To Use Displacement
After configuring the tessellation settings, the displacement shader should be set. Please click here for more information.
UV Smoothing
When "Smooth subdivision" is enabled, Redshift will smooth not only the vertex positions but also the UV coordinates and tangent space vectors. Smoothing UV coordinates means that the UVs will be shifted to remove any 'zig-zagging' and 'UV breaks' during tessellation and maintain smooth UV-space curves. In the majority of cases, this is the desirable way to treat UVs. However, there are cases where strict UV layouts (such as with when UV tiles are aligned to quads) need to be preserved and not smoothed.
For this reason, Redshift supports enabling/disabling UV smoothing.
In XSI, Redshift uses the option in the Texture Editor window, menu File -> UV Properties -> Smooth when subdividing
In Maya, Redshift uses the "Smooth UVs" attribute which belongs to the shape's Smooth Mesh -> Extra Controls set of attributes.
Tessellation And Instancing
When a mesh is instanced in Redshift, adaptive tessellation is no longer supported. However, you can still use fixed-rate tessellation (and, subsequently, displacement too).
To do fixed-rate tessellation you need to:
- Disable screen-space adaptive tessellation
- Set minimum edge length to zero
- Set the maximum subdivisions to something reasonable
Please pay particular attention to the third point above, as the default setting (6 subdivisions) is too high for fixed rate subdivision. As explained above, 6 means 4096 primitives will be generated for each original primitive. So a mesh with just 1000 faces will turn into a mesh with 4 million faces!
Out Of Frustum Tessellation
The following images show what "out of frustum tessellation factor" does for polygons that are outside the camera frustum.
The test scene is simple: it's just a quad with a mild fractal displacement. The camera is looking at the quad (slightly tilted down) from a close position.
Scene setup
We use a wireframe shading node so we can visualize tessellation.
In the images below, we rendered once from the camera, then we froze tessellation in the RenderView and then we "pulled the camera back" so we could see the effect of polygon tessellation outside the camera frustum. As you can see, the polygons that are inside the camera frustum are tessellated the most. Polygons that are away from the camera frustum are tessellated less, depending on the "out-of-frustum tessellation factor". The farther away a polygon is from the camera frustum, the less it gets tessellated.
This setting should be used carefully! Even though a polygon is outside the camera frustum, it might be visible through a mirror or it might be casting a defined shadow inside the camera frustum! So, in these cases, making it tessellate less might make it look blocky/angular and generally too-low-poly in reflections or shadows!
While "out-of-frustum tessellation factor" allows us to get tessellation under control and save on Redshift's memory usage and rendering speed, there does exist one case where it might prove ineffective: scenes with large displacements and the camera being close to the displaced geometry.
To explain: When you set up displacement in Redshift, you have to declare a "maximum displacement" setting. This setting tells Redshift that displacement can go up to a certain distance. This setting is also used by Redshift when it tries to determine if a polygon is inside the frustum or outside it. The reasoning behind that is that, if a polygon is going to be displaced ("moved") by a lot, it might actually be moved inside the camera frustum, even though it was originally outside the camera frustum. In other words, Redshift applies "conservative" tessellation when it comes to displaced polygons that might end up being inside the camera frustum.
When this happens, Redshift might incorrectly "think" that too many polygons are inside the camera frustum and it might, therefore, tessellate these polygons a lot!
To show this, we'll edit our scene and make "max displacement" several times larger. Notice how there is much more tessellation because Redshift now thinks that all the polygons outside the frustum could possibly end up inside the frustum!
Increasing "max displacement" produces much more tessellation
For these cases, the "Limit Out-Of-Frustum Tessellation" control should be used. In the picture below, we enabled it and set the "max out-of-frustum sudivs" to 5 (the mesh uses max subdivs 8). This means "If a polygon is outside the frustum, ensure it doesn't get subdivided more than 5 times".
Setting "max out-of-frustum subdivs" to 5 limits the tessellationSetting "max out-of-frustum subdivs" to 5 limits the tessellation
You can think of this setting as a "subdivision clamp".
Unsupported Features
Redshift doesn't currently support edge/vertex crease values in Softimage. They are supported in Maya.
Yapily Data overview
Yapily Data brings you easy access to financial data in real time to help develop personalised financial products and services for consumers and businesses.
Using our secure and standardised APIs, the account holder can consent to you accessing their data, so you can retrieve live and historical account data and transactions. This allows you to gain a deeper understanding of your customers' financial behaviour so you can serve them better.
We cleanse and standardise the data across all financial institutions in all territories. You can have confidence in the reliability and consistency of the data you receive and can consume it directly into your internal processes.
You can upgrade to Yapily Data Plus to get an enriched, categorised view of your customers' data, to help you make rapid and accurate decisions about your customers and their needs.
New features and enhancements
Integrate with Apigee Edge in three ways: public or private cloud org, Microgateway in a separate container, or Microgateway coresident with a CF app
This version supports three plans: org, microgateway, and microgateway-coresident.
- org -- Support for the full feature set of Apigee Edge, whether in public or private cloud.
- microgateway -- Support for basic Apigee Edge features by integrating a CF app with Apigee Microgateway as a separate application within CF.
- microgateway-coresident -- Support for integrating a CF app with Apigee Microgateway coresident in the app's container.
To set a cluster-level query setting, use the setting-query command.
The table below contains details of all cluster-level query settings.
Node-Level Query Settings
To set a node-level query setting, use the Admin REST API (the /admin/settings endpoint) with a cURL statement. These settings cannot be set by cbq.
To see a list of the current query settings while the Query Service is running, enter:
$ curl http://localhost:8093/admin/settings -u user:pword
This will output the entire list of node-level query settings:
{ "atrcollection": "", "auto-prepare": false, "cleanupclientattempts": true, "cleanuplostattempts": true, "cleanupwindow": "1m0s", "completed": { "aborted": null, "threshold": 1000 }, "completed-limit": 4000, "completed-threshold": 1000, "controls": false, "cpuprofile": "", "debug": false, "functions-limit": 16384, "keep-alive-length": 16384, "loglevel": "INFO", "max-index-api": 4, "max-parallelism": 1, "memory-quota": 0, "memprofile": "", "mutexprofile": false, "n1ql-feat-ctrl": 76, "numatrs": 1024, "pipeline-batch": 16, "pipeline-cap": 512, "plus-servicers": 16, "prepared-limit": 16384, "pretty": false, "profile": "off", "request-size-cap": 67108864, "scan-cap": 512, "servicers": 4, "timeout": 0, "txtimeout": "0s", "use-cbo": true }
To output the settings to a file, so that you can edit multiple settings at a single time, add the -o filename option. For example:
$ curl http://localhost:8093/admin/settings -u user:pword -o ./query_settings.json
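After editing the downloaded file, the same /admin/settings endpoint accepts the modified settings back as a JSON POST with basic authentication. As a rough sketch only (the host, port, and credentials are the same placeholders used in the cURL examples), this is what that round trip can look like from a .NET client:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class QuerySettingsUpdater
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Basic auth, matching the -u user:pword flag in the cURL examples.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:pword")));

        // Read the edited settings file and POST it back to the node-level endpoint.
        string json = await File.ReadAllTextAsync("./query_settings.json");
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        HttpResponseMessage response =
            await client.PostAsync("http://localhost:8093/admin/settings", content);

        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
    }
}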
The table below contains details of all node-level query settings.
Logging parameters
Request-Level Parameters
To set a request-level parameter, use the N1QL REST API (the /query/service endpoint) with a cURL statement, the cbq command, or a client program. You can also set request-level parameters using the Run-Time Preferences window in the Query Workbench.
While cbq is a sandbox to test code on your local machine, your production query settings are set with the cURL commands on your server.
To set request-level parameters in cbq, use the \SET command. The parameter name must be prefixed by a hyphen.
\SET -timeout "30m";
\SET -pretty true;
\SET -max_parallelism 3;
SELECT * FROM "world" AS hello;
To set request-level parameters with the REST API, specify the parameters in the request body or the query URI.
curl http://localhost:8093/query/service -u Administrator:password \
  -d 'statement=SELECT * FROM "world" AS hello; & timeout=30m & pretty=true & max_parallelism=3'
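The same request can also be issued from application code rather than cURL. The sketch below assumes the Query Service is reachable on localhost:8093 and simply form-encodes the statement together with the request-level parameters shown above:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class QueryServiceExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("Administrator:password")));

        // The statement and the request-level parameters travel in the same request body.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["statement"] = "SELECT * FROM \"world\" AS hello;",
            ["timeout"] = "30m",
            ["pretty"] = "true",
            ["max_parallelism"] = "3"
        });

        HttpResponseMessage response =
            await client.PostAsync("http://localhost:8093/query/service", form);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}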
The table below contains details of all request-level parameters, along with examples.
Credentials
Transactional Scan Consistency
If the request contains a BEGIN TRANSACTION statement, or a DML statement with the tximplicit parameter set to true, then the scan_consistency parameter sets the transactional scan consistency.
If you specify a transactional scan consistency of request_plus, statement_plus, or at_plus, or if you specify no transactional scan consistency, the transactional scan consistency is set to request_plus; otherwise, the transactional scan consistency is set as specified.
Any DML statements within the transaction that have no scan consistency set will inherit from the transactional scan consistency. Individual DML statements within the transaction may override the transactional scan consistency. If you specify a scan consistency of not_bounded for a statement within the transaction, the scan consistency for the statement is set as specified. When you specify a scan consistency of request_plus, statement_plus, or at_plus for a statement within the transaction, the scan consistency for the statement is set to request_plus.
However, request_plus consistency is not supported for statements using a full-text index. If any statement within the transaction uses a full-text index, by means of the SEARCH function or the Flex Index feature, the scan consistency is set to not_bounded for the duration of the full-text search.
Named Parameters and Positional Parameters
Named parameters use a variable name to define the value of each parameter, while positional parameters use a list of arguments to define the value of each parameter by position. Requests which use these two types of parameter should contain the appropriate placeholders: named parameters are referenced in the statement as $name, while positional parameters are referenced by number, for example $1 and $2.
The purpose of this document is to provide information that will help you to quickly see what is new or changed in the Delphix Dynamic Data Platform Release.
- What's New Guide for 5.3
- Release Notes 5.3.x
- Data Source Integration (Plugin) Release Notes
- PDF Versions of Documentation
GiveFeedbackEventArgs Class
Definition
Provides data for the GiveFeedback event, which occurs during a drag operation.
public ref class GiveFeedbackEventArgs : EventArgs
[System.Runtime.InteropServices.ComVisible(true)] public class GiveFeedbackEventArgs : EventArgs
public class GiveFeedbackEventArgs : EventArgs
[<System.Runtime.InteropServices.ComVisible(true)>] type GiveFeedbackEventArgs = class inherit EventArgs
type GiveFeedbackEventArgs = class inherit EventArgs
Public Class GiveFeedbackEventArgs Inherits EventArgs
- Inheritance: Object → EventArgs → GiveFeedbackEventArgs
- Attributes: ComVisibleAttribute
Examples
The following code example demonstrates the use of the GiveFeedbackEventArgs class. See the DoDragDrop method for the complete code example. The handler below is a minimal sketch of the original: MyNormalCursor and MyNoDropCursor stand in for custom cursors created elsewhere in the form.
void ListDragSource_GiveFeedback(object sender, GiveFeedbackEventArgs e)
{
    // Use a custom cursor rather than the default drag-and-drop cursors.
    e.UseDefaultCursors = false;
    Cursor.Current = (e.Effect & DragDropEffects.Move) == DragDropEffects.Move
        ? MyNormalCursor
        : MyNoDropCursor;
}
Remarks
The GiveFeedback event occurs during a drag operation. It allows the source of a drag event to modify the appearance of the mouse pointer in order to give the user visual feedback during a drag-and-drop operation. A GiveFeedbackEventArgs object specifies the type of drag-and-drop operation and whether default cursors are used.
For information about the event model, see Handling and Raising Events.
Start-MpWDOScan
Starts a Windows Defender offline scan.
Syntax
Start-MpWDOScan [-CimSession <CimSession[]>] [-ThrottleLimit <Int32>] [-AsJob] [<CommonParameters>]
Description
The Start-MpWDOScan cmdlet starts a Windows Defender offline scan on a computer.
Examples
Example 1: Start an offline scan
PS C:\>Start-MpWDOScan
This command starts a Windows Defender offline scan on the computer where you run the command. The command causes the computer to start in Windows Defender Offline and begin the scan.
WSAEnumNameSpaceProvidersExW function (winsock2.h)
The WSAEnumNameSpaceProvidersEx function retrieves information on available namespace providers.
Syntax
INT WSAAPI WSAEnumNameSpaceProvidersExW( [in, out] LPDWORD lpdwBufferLength, [out] LPWSANAMESPACE_INFOEXW lpnspBuffer );
Parameters
[in, out] lpdwBufferLength
On input, the number of bytes contained in the buffer pointed to by lpnspBuffer. On output (if the function fails, and the error is WSAEFAULT), the minimum number of bytes to allocate for the lpnspBuffer buffer to allow it to retrieve all the requested information. The buffer passed to WSAEnumNameSpaceProvidersEx must be sufficient to hold all of the namespace information.
[out] lpnspBuffer
A buffer that is filled with WSANAMESPACE_INFOEX structures.
Note
The winsock2.h header defines WSAEnumNameSpaceProvidersEx as an alias that automatically selects the ANSI or Unicode version of this function based on the definition of the UNICODE preprocessor constant.
NAPI_PROVIDER_INSTALLATION_BLOB
WSAEnumNameSpaceProviders
WSCEnumNameSpaceProvidersEx32
This state provides access to idem states
New in version 3002.
salt.states.idem.state(name, sls, acct_file=None, acct_key=None, acct_profile=None, cache_dir=None, render=None, runtime=None, source_dir=None, test=False)
Execute an idem sls file through a salt state
sls -- A list of idem sls files or sources
acct_file -- Path to the acct file used in generating idem ctx parameters. Defaults to the value in the ACCT_FILE environment variable.
acct_key -- Key used to decrypt the acct file. Defaults to the value in the ACCT_KEY environment variable.
acct_profile -- Name of the profile to add to idem's ctx.acct parameter. Defaults to the value in the ACCT_PROFILE environment variable.
cache_dir -- The location to use for the cache directory
render -- The render pipe to use, this allows for the language to be specified (jinja|yaml)
runtime -- Select which execution runtime to use (serial|parallel)
source_dir -- The directory containing sls files
cheese:
  idem.state:
    - runtime: parallel
    - sls:
      - idem_state.sls
      - sls_source
maturity: new
depends: acct, pop, pop-config, idem
platform: all
Constant Editor¶
Include the Fluid Content Elements TypoScript template
- The Constant Editor can be found in the Web > Template module.
- Select the page in the page tree which contains the root template of your website.
- Select Constant Editor in the dropdown at the top of the Web > Template module.
- In the dropdown list select the category CONTENT.
- This will give you a list with all the constants of this extension. All constants are described and can be edited by clicking the pencil in front of the current value or by editing the available field.
- Do not forget to save the new values. The new values will be stored in the “Constants” field of the root template of your website.
Note
If you use the Constant Editor the configuration gets written to the database and cannot be kept under version control. You can cut all values from the constants field of the root template record and move them to a file in your site package extension. This way you can keep the values under version control.
Introduction
GameSparks MatchMaking is a very flexible and powerful feature. It incorporates a number of complex features like Real-Time servers and Matchmaking Scripts. These features may not be available for alternative platforms, so in this topic we are going to deal with two fundamental components needed to transition your existing Matchmaking feature.
Thresholds
The basic Matchmaking config in GameSparks consists of a min and max set of players you would like to match with and a set of thresholds.
These thresholds are used to create conditions upon which matchmaking decisions can be made.
In GameSparks, these thresholds are pretty simple and are controlled by a single parameter called “skill” which is passed in through the MatchMakingRequest. Alternative platforms have similar functionality for their Matchmaking, though with different setup and API calls to enter a player into Matchmaking.
Matchmaking API
Oftentimes Matchmaking needs additional functionality in order for the feature to comply with the designer’s needs. There are a number of ways to do this discussed in the following section, but one way is with the SparkMatch API.
SparkMatch allows developers to control an existing instance of a match, add or remove players or edit the match-data payload. This is an extremely powerful tool to create custom match features however, it is not present for many alternative platforms. Where possible we will demonstrate workarounds for this.
Other Features
As mentioned above, the SparkMatch API can be used to control and extend the matchmaking feature, but it is also possible to add custom context to the match outside of the “skill” value passed into the MatchMakingRequest. You can do this with the “participantData” field, for example, matching only players from a specific region.
{ "@class": ".MatchmakingRequest", "participantData": { "region" : "eu" }, "skill": 0 }
This may not be possible in all alternative platforms but we will cover it where possible.
Matchmaking Scripts
Where an even more complex set of matchmaking rules is required you might be using matchmaking scripts to manipulate player and match data during matchmaking. This is a very complex feature that is not common on other platforms. Where there is some overlap between this feature and the destination platform's matchmaking offering we will show examples. Something to keep in mind is that where these platforms offer a "cancel matchmaking" API, you can do a lot of this work client-side: set up a timer that cancels the matchmaking request after a given period of time and then issues another matchmaking request with different parameters, as sketched below.
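The sketch below shows that pattern as a Unity behaviour. StartMatchmaking and CancelMatchmaking are stand-ins for whichever calls your destination platform's SDK provides, and the widening skill ranges and timings are arbitrary example values.

using System;
using System.Collections;
using UnityEngine;

// Client-side "relax the criteria over time" loop.
// StartMatchmaking / CancelMatchmaking are placeholders for the destination
// platform's SDK calls; MatchFound would be set from that SDK's match-found callback.
public class RelaxingMatchmaker : MonoBehaviour
{
    public Action<int> StartMatchmaking;   // stand-in: begin matchmaking with a skill range
    public Action CancelMatchmaking;       // stand-in: cancel the outstanding request
    public bool MatchFound { get; set; }   // set to true from the SDK's match-found callback

    // Example values only: widen the acceptable skill range every 10 seconds.
    private readonly int[] skillRanges = { 5, 15, 50 };
    private const float secondsPerAttempt = 10f;

    public IEnumerator FindMatchWithWideningCriteria()
    {
        foreach (int range in skillRanges)
        {
            MatchFound = false;
            StartMatchmaking(range);
            yield return new WaitForSeconds(secondsPerAttempt);

            if (MatchFound)
            {
                yield break; // matched - nothing more to do
            }
            CancelMatchmaking(); // give up on this attempt and retry with a wider range
        }
        Debug.Log("No match found after relaxing all criteria.");
    }
}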
GameLift FlexMatch
If these alternative platforms do not offer a solution to your existing Matchmaking configuration it is worth checking out AWS FlexMatch. FlexMatch offers a matchmaking service with a high-degree of configurability. In comparison with GameSparks, matchmaking criteria are designed through a script which allows you to add custom data to the match-instance and the player’s individual matchmaking ticket. You can also change criteria over time as with GameSparks thresholds and matchmaking scripts.
You can get more information on FlexMatch here and you can see an example of how to create a serverless implementation of the service here.
Beamable
Beamable has a relatively straightforward matchmaking feature compared to GameSparks, however, there is a good deal of custom-code in order to set it up, so we will cover that in this topic.
Currently, Beamable only offers a basic matchmaking service. You cannot match by specific values or thresholds, like you can with GameSparks. Instead, everyone who is looking for a particular type of match is grouped together by default.
There is a guide here on their matchmaking functionality and this is also demonstrated in their example game here. We will cover the basics needed anyway so you have further information as to how this feature compares to GameSparks.
Note - This feature is still in development by Beamable and is actively being worked on to add features like thresholds and skill matching like you find in GameSparks. Check with Beamable to see what updates have been made to this feature since we completed this topic.
Game-Types
The first thing we need to start with is creating a new Game-Type content object. You can do this by going to the Content Manager window and right-clicking on the “game_types” menu option.
For this example the configuration is pretty simple: we don't need any rewards or leaderboard updates for this game-type, just a max-players value (2 in the case of our 1v1 match) and a max wait duration after which the matchmaking will stop.
This SimGameType content object is going to act as the matchmaking options for the matches we want to create. We will use these in the next section.
You can see a few other attributes on this game-type object. “Min Players To Start” is the same as the minPlayers attribute in GameSparks matches. “Wait After Min Reached Secs” acts like the “Accept Min. Players” option for GameSparks thresholds. This is the number of seconds after which the match will revert to accepting the minimum number of players if there are no other players found.
There are two other options, Leaderboard Updates and Rewards. These are used by Beamable’s multiplayer service and can be ignored for matchmaking in this case.
Matchmaking API
The next step is to create a matchmaking API which can control our matches. This is pretty simple. It needs to be able to:
- Take the match-type (the SimGameType we created above)
- Take a callback for match updates
- Take a callback for match complete
- Request matchmaking
- Cancel matchmaking
- Get a list of all other players in the match upon completion
This is roughly the same flow as GameSparks, but in Beamable it requires some customization.
GSMatchResult
The first thing we need is to create a class to represent the data we need out of a successful match. In GameSparks this is pretty simple, we need a matchId and a list of players and their Ids. In this example we aren't going to create a matchId because this is not automatically generated by Beamable, however, you could create a temporary group or room out of the match and give the match that ID if necessary. The other two attributes that can be useful is the target number of players and the remaining time.
Using this information you can make useful decisions as the match progresses.
/// <summary> /// This is the object that will be returned from the match /// </summary> public class GSMatchResult { /// <summary> /// List of playerIds for the players in the match /// </summary> public List<long> PlayersIds = new List<long>(); /// <summary> /// The number of players required to match a match /// </summary> public int TargetPlayerCount; /// <summary> /// Remaining seconds in match /// </summary> public int SecondsRemaining; /// <summary> /// Creates a new instance of a match response used throughout matchmaking /// </summary> /// <param name="targetPlayerCount">The target number of players to complete a match</param> public GSMatchResult(int targetPlayerCount) { TargetPlayerCount = targetPlayerCount; } }
Note - We are creating a simplified version of the code-examples in the example game here. Check that example out for more details.
GSMatch
Now we can create the class that is going to handle all the matchmaking and callbacks. This is not too difficult but there are several parts that need to be considered.
- We need a loop which can keep checking for match updates at regular intervals. We have set this interval to 1 second which is usually fast enough for most games. This update process is going to be a new thread.
- We need to be able to cancel this thread and therefore cancel matchmaking for our player.
- We need to be able to raise a callback for updates (like new players joining the match), matchmaking completed (we found all the players we need) or the matchmaking process timed out.
/// <summary> /// This is the match object where you can start the matchmaking process or cancel it /// </summary> public class GSMatch { // Event callbacks public event Action<GSMatchResult> OnProgress; public event Action<GSMatchResult> OnComplete; public event Action<GSMatchResult> OnTimeout; private GSMatchResult _matchResult; private GSMatchResult _gsMatchResult; private MatchmakingService _matchmakingService; private SimGameType _simGameType; private CancellationTokenSource _matchmakingOngoing; /// <summary> /// Creates a new instance of the match /// </summary> /// <param name="matchmakingService"></param> /// <param name="simGameType"></param> public GSMatch(MatchmakingService matchmakingService, SimGameType simGameType) { _matchmakingService = matchmakingService; _simGameType = simGameType; _gsMatchResult = new GSMatchResult(_simGameType.maxPlayers); } /// <summary> /// Kicks off the matchmaking process /// Updates will be delivered using the event callbacks /// </summary> public async Task RequestMatch() { var handle = await _matchmakingService.StartMatchmaking(_simGameType.Id); try { _matchmakingOngoing = new CancellationTokenSource(); var token = _matchmakingOngoing.Token; do { if (token.IsCancellationRequested) return; // Check if a new player has joined or left the match // if (handle.Status.Players.Count != _gsMatchResult.PlayersIds.Count) { _gsMatchResult.PlayersIds = handle.Status.Players; _gsMatchResult.SecondsRemaining = handle.Status.SecondsRemaining; OnProgress.Invoke(_gsMatchResult); // raise the progress update callback } // Tick down the matchmaking progress // MatchmakingUpdate update = new MatchmakingUpdate(); update.players = handle.Status.Players; update.secondsRemaining = (handle.Status.SecondsRemaining-1); handle.Status.Apply(update); if (handle.Status.SecondsRemaining <= 0) { OnTimeout.Invoke(_gsMatchResult); await CancelMatchMaking(); return; } await Task.Delay(1000, token); } while (!handle.Status.MinPlayersReached); } finally { _matchmakingOngoing.Dispose(); _matchmakingOngoing = null; } // Invoke Complete // OnComplete.Invoke(_gsMatchResult); } /// <summary> /// Cancels the matchmaking process /// </summary> public async Task CancelMatchMaking() { await _matchmakingService.CancelMatchmaking(_simGameType.Id); _matchmakingOngoing?.Cancel(); } }
The important part of the class above is the RequestMatch() function. You can see where it is checking for a change in the player-count indicating someone has joined the match. It is also ticking down the match progress and checking if the match has ended.
Now that we have our request and response objects mocked up, we can look at an example of how to kick off matchmaking. For this example we will show an async method which you could call from a button click or anywhere else in your code.
async void StartMatchMaking() { Debug.Log("Starting Matchmaking..."); // get the game-type content for this match // var gameType = (SimGameType)await beamableAPI.ContentService.GetContent("game_types.1v1"); Debug.Log($"GameType: {gameType.Id}..."); // Now we can create our match object // GSMatch newMatch = new GSMatch(beamableAPI.Experimental.MatchmakingService, gameType); // Some examples of these matchmaking callbacks // // timeout callback // newMatch.OnTimeout += delegate(GSMatchResult result) { Debug.Log("Match Not Found..."); }; // progress callback - called when anything in the match is updated // newMatch.OnProgress += delegate(GSMatchResult result) { Debug.Log("Match Updated..."); // How many players do we have atm // Debug.Log($" {result.PlayersIds.Count} / {result.TargetPlayerCount} Players..."); Debug.Log($" {result.SecondsRemaining} Seconds Remaining..."); foreach (long playerId in result.PlayersIds) { Debug.Log($"PlayerId: {playerId}"); } }; // Found Match callback // newMatch.OnComplete += delegate(GSMatchResult result) { Debug.Log("Match Found..."); foreach (long playerId in result.PlayersIds) { Debug.Log($"PlayerId: {playerId}"); } // >> use this player list to create a room or a game session // }; await newMatch.RequestMatch(); // << Kick of matchmaking }
You’ll need more than one player to test this process, but if you do kick off matchmaking for both players you should see the OnProgress and OnComplete callbacks being triggered and the logs appear in the console.
As mentioned already, from here you would want to do something with your player Ids like pass them to your multiplayer service.
Obviously if you are transitioning from GameSparks to Beamable you will have your own multiplayer implementations. Those playerIds should be sufficient to get your players connected but if you do need a common Id between players like a matchId you could consider creating a temporary group, or even a temporary stat which you can apply to all players using an OID generator.
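If all you need is a shared identifier, one further option that avoids any extra backend calls is to derive it deterministically from the matched player list, since every client in the match receives the same set of ids. A small sketch:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Every matched client can compute the same pseudo match id locally by
// hashing the sorted player ids from the match result.
public static class MatchIdHelper
{
    public static string DeriveMatchId(IEnumerable<long> playerIds)
    {
        string joined = string.Join(":", playerIds.OrderBy(id => id));
        using var sha = SHA256.Create();
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(joined));
        // 16 hex characters is plenty for a room name or session key.
        return BitConverter.ToString(hash).Replace("-", "").Substring(0, 16);
    }
}

Both clients in a 1v1 will compute the same value from GSMatchResult.PlayersIds, so it can act as a room name or session key without a server round trip; it is not a secret, though, so don't rely on it for anything security-sensitive.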
AccelByte
AccelByte has a pretty robust matchmaking feature which allows you to create something like GameSparks’ thresholds along with more complex matchmaking rules.
Something that is important to explain about AccelByte’s matchmaking feature is that it is designed around their Lobby and Party feature. This can make it somewhat confusing when trying to understand which feature comes first in your implementation. We will cover these features in this topic with a simple player-to-player matchmaking example.
Something else to note is that AccelByte's matchmaking feature uses party-to-party matching. In contrast to GameSparks, this means that you match groups of players to other groups of players instead of player to player.
This is a great feature and opens up matchmaking to a lot more possibilities, however, it can come across as though you aren't able to perform simple player-to-player matchmaking. This is not the case, you can get around this by creating a party with a single player and starting the matchmaking process that way. We will show an example of this process in this topic.
Important - Before we continue it is important to note that while AccelByte has a very good matchmaking feature with a number of similarities to GameSparks, it is designed to be incorporated with their multiplayer service. It is not necessary to do this as you will see below, but it does mean that you cannot get a list of players in your match without some external tracking. This may rule AccelByte matchmaking out as a possible transition solution.
The Lobby Service
In AccelByte, the Lobby Service is a set of APIs that integrates with a number of other social features. The name of this feature can be somewhat misleading as it assumes that a player can set up and join different lobbies where they can wait until a game-session starts and possibly also chat with other players while they wait.
The Lobby Service is actually global. When you connect to the Lobby Service it allows you to connect to or create parties and you can start Matchmaking from there.
The Lobby Service is also used as part of the Groups Service to get updates about a Group like when a player has joined or sent an invite. It is basically a way to subscribe to different components that need asynchronous updates or messages similar to GameSparks web-sockets. The Lobby Service itself is actually using web-sockets.
This is important because conceptually, what you would consider to be a “Lobby” is more closely comparable to AccelByte’s party feature. The Lobby Service is the service which facilitates all this asynchronous messages and updates like parties, group-notifications and matchmaking notifications, etc.
We therefore start with the Lobby Service before we can set anything else up.
Setup Lobby Config
We need to configure the Lobby Service before connecting to it. We can configure it from the Admin Portal.
In Admin Portal, we should go to Lobby Configuration under the Lobby and Matchmaking category as shown below.
As you can see, we are able to choose to auto-kick players when they are disconnected and specify the number of players that should be in a party.
And that’s it! We are ready to start using the Lobby Service.
Joining The Lobby Service
We can connect to AccelByte’s Lobby Service with just a simple API call as shown below.
private Lobby abLobby; abLobby.Connect();
We can subscribe to events to run some functionality when something happens in AccelByte. For example, once we are connected to the Lobby Service we can then create a party. Below is a sample to demonstrate these events.
abLobby.Connected += OnConnected; private void OnConnected() { Debug.Log("Connected to the AccelByte’s Lobby service"); }
Parties
As mentioned before, with AccelByte, we need to create parties in order to make use of the Matchmaking Service. We create parties through the Lobby Service because, as already mentioned, the Lobby Service includes the social features.
Creating A Party
Below is an example of how to create a party from the SDK.
private Lobby abLobby;

abLobby.Connect();
abLobby.CreateParty(CreatePartyCallback);

private void CreatePartyCallback(Result<PartyInfo> result)
{
    if (result.IsError)
    {
        Debug.Log($"Error. Code: {result.Error.Code}, Reason: {result.Error.Message}");
    }
    else
    {
        Debug.Log("Successfully created a party");
    }
}
Therefore, if you wish to replicate something like a GameSparks MatchmakingRequest you can always create a single-user party immediately after connecting to the Lobby Service as shown below.
private Lobby abLobby;

abLobby.Connect();
abLobby.Connected += OnConnected;

private void OnConnected()
{
    Debug.Log("Connected to the AccelByte Lobby service");
    abLobby.CreateParty(CreatePartyCallback);
}

private void CreatePartyCallback(Result<PartyInfo> result)
{
    if (result.IsError)
    {
        Debug.Log($"Error. Code: {result.Error.Code}, Reason: {result.Error.Message}");
    }
    else
    {
        Debug.Log("Successfully created a party");
    }
}
Additional Party Functionality
The AccelByte Party feature has more functionality that we will not cover in this topic. You can read about it here.
Below are the other API calls available for Party service in AccelByte.
- InviteToParty - A party leader can invite more players to join
- JoinParty - A player can join the party.
- RejectPartyInvitation - A player can reject the party invitation.
- PromotePartyLeader - A player can be promoted to the leader.
- KickPartyMember - A party leader can kick a player from the party.
- LeaveParty - A player can leave the party.
These are similar to the functionality available from the Groups/Teams feature which we covered in a topic here, however, they are different and parties should be treated more like a lobby where players can be grouped together before starting a game-session.
Matchmaking
The next step in order to get these parties into matches is to configure the Matchmaking Service in the Admin portal or through REST calls. We are not going to cover REST calls in this section as it is easier to demonstrate this setup through the portal.
Before we do that, we need to have at least one Stat configured as a prerequisite to the Matchmaking service.
Matchmaking Statistics
The stat used by the Matchmaking Service is essentially like the “Skill” field that GameSparks uses for its matchmaking. The difference here is that the stat is not passed into the request, but is instead handled server-side as it exists on the player’s account.
We have already covered how to create a stat with the Statistic service in the Leaderboard topic available here.
We will use LEVEL as the stat for this example.
Matchmaking Configuration
From the Admin portal, navigate to the Matchmaking Ruleset section under the Lobby and Matchmaking category.
In the Matchmaking window, we need to click on the Add Configuration button on the right hand side of the window.
This will bring up a small popup window where we need to supply valid information regarding matchmaking configuration.
The form above shows an example configuration for 1v1 matches, but you can create whatever game modes you want.
Rulesets
The next step is to set up some Rule Sets.
Rule sets are similar to GameSparks thresholds as they define a set of rules which control what values of the stat are acceptable when forming the match.
Similar to thresholds, you can set multiple rules which can be configured to change over time. “Distance” defines the relative value of the stat, similar to GameSparks, however, that is the only option you have as there is no percentage option.
With AccelByte there are two rules-sets you can choose from. The Flexing Rules option at the bottom is configured the same as the Matchmaking Rules above it, however, the flexing rules can be configured to kick-in when the service is having trouble finding a match for the player.
Something important to note is that you need to include the StatCode that we have created earlier.
Starting The Matchmaking Process
Only a party leader can start the Matchmaking process. This is usually done after players join the party according to the game-mode configuration.
Or you can simply start the process with the help of the below API call.
abLobby.Connect(); var channelName = "1vs1"; abLobby.StartMatchmaking(channelName, StartMatchMakingCallback); /// <summary> /// Callback to know the status of the matchmaking process /// </summary> private void StartMatchMakingCallback(Result<MatchmakingCode> result) { if(result.IsError) { Debug.Log($"Error. Code: {result.Error.Code}, Reason: {result.Error.Message}); } else { Debug.Log($"MatchMakingCode : {result.Value.code}"); } }
Here, the channelName field is the name of the Ruleset we just created.
If we received the code “0” in the response enum it means our request is successfully received and passed to the matchmaking queue.
Below is an example of our 1v1 match config, along with the previous example of how to create a 1-player party so that we can immediately kick off matchmaking for a 1v1 game.
private Lobby abLobby; abLobby.Connect(); abLobby.Connected += OnConnected; /// <summary> /// Callback when we have successfully connected to the Lobby Service /// </summary> private void OnConnected() { Debug.Log("Connected to the AccelByte’s Lobby service"); abLobby.CreateParty(CreatePartyCallback); } /// <summary> /// Callback when we have successfully created a party /// </summary> /// <param name="result">Party information like party leader, members. etc<param> private void CreatePartyCallback(Result<PartyInfo> result) { if (result.IsError) { Debug.Log($"Error. Code: {result.Error.Code}, Reason: {result.Error.Message}); } else { var channelName = "1vs1"; Debug.Log("Successfully created a party"); abLobby.StartMatchmaking(channelName, StartMatchMakingCallback); } } /// <summary> /// Callback to know the status of the matchmaking process /// </summary> private void StartMatchMakingCallback(Result<MatchmakingCode> result) { if(result1.IsError) { Debug.Log($"Error. Code: {result.Error.Code}, Reason: {result.Error.Message}); } else { Debug.Log($"MatchMakingCode : {result.Value.code} "); } }
MatchFound Callback
We need some way of getting updates about the progress of the match. There is an optional callback we can configure for getting matchmaking results when the match has been found.
abLobby.Connect(); abLobby.MatchmakingCompleted += OnMatchMakingCompleted; /// <summary> /// Notification to inform matchmaking process is completed /// </summary> /// <param name="result"> matchmaking process status and matchId</param> private void OnMatchMakingCompleted(Result<MatchmakingNotif> result) { if (result.Value.status == "done") { Debug.Log("Match found"); } }
Using Matchmaking Callbacks
Ideally you can use these callbacks to perform some custom logic when a match has been found. In GameSparks this callback would contain the matchId along with a list of players or playerIds you can group together. In AccelByte you will get a matchId but you cannot get the list of players.
You will therefore need to create your own solution for getting these players and their Ids. This would need to be a custom solution so check out the tutorial on Cloud-Code here for more information.
Cancel Match
Finally we will show how a player can cancel the Matchmaking process once it has been started. We can simply call the below API call to do the same.
var channelName = "1vs1"; abLobby.CancelMatchmaking(channelName, result => { Debug.Log(string.Format("Cancel matchmaking response {0}",result.Value.code)); });
We have started the matchmaking process and immediately cancelled for demonstration purposes with the above call and below is the response received. When we receive a code ‘0’, generally, it means that the process is terminated.
Nakama
Nakama has a very flexible matchmaking system which allows you to make complex queries on your matchmaking parameters. This begins with a set of matchmaking Properties which are sent to the server using an equivalent to the GameSparks MatchmakingRequest.
Let's take a look at a GameSparks Match configuration. You can get a copy of your existing match configurations using the GameSparks REST API.
{ "@id": "/~matches/bomber_man", "description": "bomber_man", "dontAutoJoinMatch": false, "dropInDropOut": false, "dropInDropOutExpireSeconds": null, "maxPlayers": 4, "minPlayers": 4, "name": "bomber_man", "playerDisconnectThreshold": null, "realtime": false, "realtimeScript": null, "script": null, "shortCode": "bomber_man", "~thresholds": [ { "@id": "/~matches/bomber_man/thresholds/10", "acceptMinPlayers": false, "max": 1, "min": 10, "period": 10, "type": "ABSOLUTE" } ] }
In the portal, this match would look something like this.
Let us take a look at how we could recreate this match config using Nakama’s Match Properties.
In GameSparks we create a match config through the portal and then use the match short code to let the server know the settings we want for the match. We then submit a matchmaking request along with the “skill” value.
In Nakama, the match config (Match Properties) are defined in the client and submitted in a matchmaking request.
The GameSparks example above in Nakama would look like this in Unity.
/// <summary> /// Submits a matchmaking request to Nakama with the given values /// </summary> /// <param name="minPlayers"></param> /// <param name="maxPlayers"></param> /// <param name="skill"></param> private async void SubmitSimpleMatchRequest(int minPlayers, int maxPlayers, int skill) { Debug.Log($"Submitting Matchmaking Request | min:{minPlayers}, max:{maxPlayers}, skill:{skill}"); string query = "+properties.skill:>=1 +properties.skill:<=10"; Debug.Log($"Query: [{query}]"); var numericProperties = new Dictionary<string, double>() {{ "skill", skill }}; IMatchmakerTicket matchTicket = await sessionSocket.AddMatchmakerAsync(query, minPlayers, maxPlayers, null, numericProperties); Debug.Log($"MatchTicket: {matchTicket.Ticket}..."); }
Note - This code requires a socket to be created from your session. We will not cover that here as it is already covered in a number of other topics. Check out our topic on Achievements here for an example, or you can look at some examples in the Nakama documentation here.
Let us break down the example above.
Match Query
Match queries allow you to control what kind of matches you want for your player.
As you can see from our match query
"+properties.skill:>=1 +properties.skill:<=10"
We are setting the absolute value of the skill level to between 1 and 10, just like in the GameSparks example.
For this to work we need to add properties to our request which will include our skill. You can see that being set using the C# Dictionary in the above example.
Note - Nakama uses the Bleve search and indexing engine so you can check that out for more examples of the kinds of queries you can use here.
Match Properties
You can see from the example above that we have added a Dictionary called “numericProperties” in which we have included our player’s skill value. You can add multiple properties to this field depending on your requirements and they are referenced from the query by adding the “+properties.” prefix.
We can also add string properties to the matchmaking request. We’ll see an example of how to do that later in this topic.
Before we can test this matchmaking request we need to create a listener for our matchmaking messages and assign it to the socket created from our session.
For this example we will just print the details of the successful match message to the console.
sessionSocket.ReceivedMatchmakerMatched += matched => { Debug.LogFormat("Received: {0}", matched); foreach (IMatchmakerUser user in matched.Users) { Debug.Log($" UserName: {user.Presence.Username}, Id: {user.Presence.UserId}" ); } };
Note - We are using a minimum of 2 players, a maximum of 4 players and both players have a skill value of 5 for this example.
Note - Matchmaking with Nakama can take up to 30 seconds before you get a result so be patient and wait for the logs to appear to confirm your code is working correctly.
This example covers very basic matchmaking with the skill parameter, but what if we need to transition some more complex matchmaking from GameSparks like Thresholds or Participant Data?
Participant Data
We’ve already touched on how you can replicate this in Nakama using match properties but let's take a look at a simple example you might have in GameSparks where you want to match players via skill level but also by region or country.
In GameSparks, the MatchMakingRequest would look something like this…
{ "@class": ".MatchmakingRequest", "customQuery": {"players.participantData.countryCode":"US"}, "participantData": {"countryCode":"US"}, "matchShortCode": "4v4", "skill": 5 }
With Nakama we can add string properties to the matchmaking request and include the country code to the matchmaking query.
string query = "+properties.skill:>=1 +properties.skill:<=10"; // add country property to query // query += " +properties.country:" + countryCode; Debug.Log($"Query: [{query}]"); var numericProperties = new Dictionary<string, double>() {{ "skill", skill }}; var stringProperties = new Dictionary<string, string>() {{ "country", countryCode }}; IMatchmakerTicket matchTicket = await sessionSocket.AddMatchmakerAsync(query, minPlayers, maxPlayers, stringProperties, numericProperties);
Remember that you can include multiple numeric and string properties to your request and add them to your query to construct more complex matchmaking.
Thresholds
Because matchmaking in Nakama is initiated from the client, in order to create thresholds which change matchmaking parameters over time, we need to use something like a Coroutine in Unity.
We will need to loop through a list of thresholds and create a new matchmaking request after each threshold has timed out. This is simple in Unity but it does require some preparation.
To begin with we are going to create a GSMatchConfig class. To keep this example simple, this class will have a min and max player attribute, along with an array of thresholds.
Thresholds will be a struct with a duration attribute and a string which will represent a Nakama matchmaking query.
public class GSMatchConfig { public int maxPlayers { get; set; } public int minPlayers { get; set; } public Threshold[] thresholdQueries { get; set; } public struct Threshold { public Threshold(int _duration, string _query) { duration = _duration; query = _query; } public int duration; public string query; } }
And now we can create an instance of this class and add our threshold details.
GSMatchConfig thresholdMatchConfig = new GSMatchConfig(); thresholdMatchConfig.minPlayers = minPlayers; thresholdMatchConfig.maxPlayers = maxPlayers; thresholdMatchConfig.thresholdQueries = new GSMatchConfig.Threshold[] { new GSMatchConfig.Threshold(20, "+properties.skill:>=1 +properties.skill:<=10"), new GSMatchConfig.Threshold(20, "+properties.skill:>=1 +properties.skill:<=50"), new GSMatchConfig.Threshold(20, "+properties.skill:>=1 +properties.skill:<=100") };
For this example we are just broadening the skill range over time but you can add more complex query strings using Bleve search parameters.
Next we need to create the Coroutine method which will actually run through each threshold. We want the process to wait until the threshold duration has passed before starting the next matchmaking request so this is why we are using Coroutine.
This will be started from the matchmaking request method where our GSMatchConfig object is defined.
/// <summary> /// Iterates over an array of thresholds and creates new matchmaking requests for each threshold /// </summary> /// <param name="matchConfig"></param> /// <param name="sessionSocket"></param> /// <param name="skill"></param> /// <returns></returns> IEnumerator StartMatchmakingThresholds(GSMatchConfig matchConfig, ISocket sessionSocket, int skill) { Task<IMatchmakerTicket> matchTicket = null; for (int i = 0; i < matchConfig.thresholdQueries.Length; i++) { var threshold = matchConfig.thresholdQueries[i]; int duration = threshold.duration; string query = threshold.query; if (matchFound) { break; } if (i > 0) { Debug.Log($"Cancelling preview matchmaking request {matchTicket.Result.Ticket}"); sessionSocket.RemoveMatchmakerAsync(matchTicket.Result); } Debug.Log($"Sending matchmaking request..."); Debug.Log($"Query [{query}]"); var numericProperties = new Dictionary<string, double>() {{ "skill", skill }}; matchTicket = sessionSocket.AddMatchmakerAsync(query, matchConfig.minPlayers, matchConfig.maxPlayers, null, numericProperties); yield return new WaitForSeconds(duration); } if (!matchFound) { Debug.LogWarning("Match not found..."); sessionSocket.RemoveMatchmakerAsync(matchTicket.Result); } else { Debug.Log("Match Found..."); } }
Let us break down what is happening in the flow above:
- Before we start the matchmaking request, we check to see if a match has been found. If so, we can break out of the loop. You could also stop the Coroutine when the matchmaking notification is picked up by the ReceivedMatchmakerMatched listener but this has to be done within the main thread. A Coroutine handler could achieve this but for the sake of simplicity we are just going to use the matchFound bool.
- If this is not the first threshold we can cancel the matchmaking request for the previous threshold.
- Now we can create our matchmaking request. You can see from the above example that we are using the min and max players from the matchConfig object, the query for the given threshold, and the same numeric properties we used in the previous examples to assign the player’s skill.
- If no match was found over the duration of all thresholds, we cancel matchmaking.
Now we can start this Coroutine from our matchmaking method.
/// <summary> /// Starts the matchmaking process when using matchmaking thresholds /// </summary> /// <param name="minPlayers"></param> /// <param name="maxPlayers"></param> /// <param name="skill"></param> private async void SubmitThresholdMatchRequest(int minPlayers, int maxPlayers, int skill) { GSMatchConfig thresholdMatchConfig = new GSMatchConfig(); thresholdMatchConfig.minPlayers = minPlayers; thresholdMatchConfig.maxPlayers = maxPlayers; thresholdMatchConfig.thresholdQueries = new GSMatchConfig.Threshold[] { new GSMatchConfig.Threshold(60, "+properties.skill:>=1 +properties.skill:<=10"), new GSMatchConfig.Threshold( 60, "+properties.skill:>=1 +properties.skill:<=50"), new GSMatchConfig.Threshold( 60, "+properties.skill:>=1 +properties.skill:<=100") }; StartCoroutine(StartMatchmakingThresholds(thresholdMatchConfig, sessionSocket, skill)); }
You can run this code with a single player just to check the process is working. You should see each step in the console logs.
Note - remember to set the matchFound bool to ‘true’ in the ReceivedMatchmakerMatched listener.
There are several things to note about this process as it relates to Nakama.
As mentioned before, Nakama matchmaking can take much longer than what you expect from GameSparks matchmaking. Therefore short threshold durations may not be useful and you may need to increase your threshold durations in order for your matchmaking to be effective.
You can also choose not to cancel match-tickets at the start of each new threshold. This would keep your match ticket in the pool throughout all thresholds so that you can still get potential matches at lower thresholds before the duration of all thresholds has passed.
As you can see from what we have covered in this topic, Nakama’s matchmaking feature is very flexible and should be able to adapt to the vast majority of gameSparks matchmaking configurations.
For more information consult Nakama’s documentation on matchmaking here.
brainCloud
brainCloud provides two different matchmaking systems.
The MatchMaking service is primarily for use by games that are using brainCloud’s Async Match and One-way Match APIs. Games of these types are often played offline – and thus this matchmaking service only selects offline players.
Online games should use the brainCloud Lobby service for matchmaking. Lobby Matchmaking identifies groups of suitable players for online play.
Lobby Matchmaking is highly configurable, with support for:
- skill level matching
- min / max players (by team)
- geo matching <- prioritizing players that are close together
- filter scripts
For more information, see the brainCloud Lobby service. | http://gsp-docs-a2.s3-website-eu-west-1.amazonaws.com/transition/matchmaking.html | 2022-06-25T02:25:47 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['img/intro_mm_1.png', None], dtype=object)
array(['img/beam_mm_1.png', None], dtype=object)
array(['img/beam_mm_2.png', None], dtype=object)
array(['img/beam_mm_3.png', None], dtype=object)
array(['img/accelbyte_matchmaking1.png', None], dtype=object)
array(['img/accelbyte_matchmaking2.png', None], dtype=object)
array(['img/accelbyte_matchmaking3.png', None], dtype=object)
array(['img/accelbyte_matchmaking4.png', None], dtype=object)
array(['img/accelbyte_matchmaking5.png', None], dtype=object)
array(['img/accelbyte_matchmaking6.png', None], dtype=object)
array(['img/accelbyte_matchmaking7.png', None], dtype=object)
array(['img/accelbyte_matchmaking8.png', None], dtype=object)
array(['img/mm_nakama_1.png', None], dtype=object)
array(['img/mm_nakama_2.png', None], dtype=object)
array(['img/mm_nakama_3.png', None], dtype=object)] | gsp-docs-a2.s3-website-eu-west-1.amazonaws.com |
inspur.sm.edit_boot_image module – Set bmc boot image._boot_image.
New in version 0.1.0: of inspur.sm
Synopsis
Set bmc boot image on Inspur server.
Parameters
Examples
- name: Boot image test hosts: ism connection: local gather_facts: no vars: ism: host: "{{ ansible_ssh_host }}" username: "{{ username }}" password: "{{ password }}" tasks: - name: "Set bmc boot image" inspur.sm.edit_boot_image: image: 2 provider: "{{ ism }}"
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Repository (Sources) | https://docs.ansible.com/ansible/latest/collections/inspur/sm/edit_boot_image_module.html | 2022-06-25T01:17:31 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.ansible.com |
Model Configuration
Insights: Create Multiple Engagement Score Models
Create Multiple Engagement Model. Add Rules Instructions. This is a multi-step tutorial. Complete Video Instructions.. Add Rule Introduction. This module aims to explain why you would want a differen…
How to Change Event Touch Scores
Things change in marketing. Sometimes a tactic that used to bring in scores of leads fizzles out and at other times, the inspiration to try something new hits. Our lead score configuration tool will help you make adjustments as you need them.
Engagement Scoring Time Decay - How It Works & How to Change It
How to Add or Change Engagement Score Time Decay. Any newly created model will take up to 24 hours to update for use. What is Time Decay? CaliberMind's engagement models all include a linear time dec…
Modifying Engagement Scoring Models (START HERE)
Are you interested in fine-tuning your engagement scores? Our configuration screens allow you to change how many points you allocate to each event type and add multipliers for high-value contacts.
How to Add or Change Engagement Score Multipliers
How to Add or Change Engagement Score Multipliers. Any newly created model will take up to 24 hours to update for use. This is a multi-step tutorial. What Is a Multiplier? Behind our scoring models,… | https://docs.calibermind.com/category/zv0l7ted54-model-config | 2022-06-25T00:56:13 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.calibermind.com |
Implements: IReferenceable, IConfigurable
Description
The HttpConnectionResolver class is used to retrieve connections for HTTP-based services and clients.
Important points
- In addition to its regular functions, ConnectionResolver is able to parse http:// URIs and validate connection parameters before returning them.
Configuration parameters
- connection:
- discovery_key: (optional) key to retrieve the connection from IDiscovery
- … : other connection parameters
- connections: alternative to connection
- [connection params 1]: first connection parameters
- …
- [connection params N]: Nth connection parameters
- …
References
- *:discovery:*:*:1.0 - (optional) IDiscovery services to resolve a connection
Fields
Instance methods
configure
Configures component by passing configuration parameters.
publicconfigure(config: ConfigParams): void
- config: ConfigParams - configuration parameters to be set.
Registers the given connection in all referenced discovery services. This method can be used for dynamic service discovery.
publicregister(correlationId: string): void
- correlationId: string - (optional) transaction id used to trace execution through the call chain.
resolve
Resolves a single component connection. If the connections are configured to be retrieved from Discovery service, it finds a IDiscovery and resolves the connection there.
publicresolve(correlationId: string): Promise<ConfigParams>
- correlationId: string - (optional) transaction id used to trace execution through the call chain.
- returns: Promise<ConfigParams> - resolved connection.
resolveAll
Resolves all component connections. If connections are configured to be retrieved from Discovery service it finds a IDiscovery and resolves the connection there.
resolveAll(correlationId: string): Promise<ConfigParams>
- correlationId: string - (optional) transaction id used to trace execution through the call chain.
- returns: Promise<ConfigParams> - resolved connections.
setReferences
Sets references to dependent components.
publicsetReferences(references: IReferences): void
- references: IReferences - references to locate the component dependencies.
Examples
let config = ConfigParams.fromTuples( "connection.host", "10.1.1.100", "connection.port", 8080 ); let connectionResolver = new HttpConnectionResolver(); connectionResolver.configure(config); connectionResolver.setReferences(references); let connection = await connectionResolver.resolve("123"); // Now use connection... | http://docs.pipservices.org/node/rpc/connect/http_connection_resolver/ | 2022-06-25T01:56:07 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.pipservices.org |
Altinity ODBC Driver for ClickHouse™ 1.1.10.20210822 Release Notes
Altinity ODBC Driver for ClickHouse™ Release Notes for version 1.1.10.20210822
WARNING: Before upgrading to a new version of the driver, make sure you read the release notes of all intermediate versions!
- Fixed compilation for recent versions of GCC/Clang/AppleClang (#356)
- Fixed handling of Null values and Nothing type (#356)
- Fixed DateTime type as String issues (#356)
- Implemented parametrized early connection/credential error detection (#356)
- Implemented parametrized big integer types handling as strings (#356)
- Various lesser fixes for ClickHouse ODBC-based Tableau connector
Download at:
Last modified 2021.08.22 | https://beta.docs.altinity.com/releasenotes/clickhouse-odbc-release-notes/1.1.10.20210822/ | 2022-06-25T00:49:43 | CC-MAIN-2022-27 | 1656103033925.2 | [] | beta.docs.altinity.com |
Caliban¶
Caliban is a tool for developing research workflow and notebooks in an isolated Docker environment and submitting those isolated environments to Google Compute Cloud.
For a short tutorial introduction to Caliban, see the GitHub page.
Overview¶
Caliban provides five subcommands that you run inside some directory on your laptop or workstation:
caliban shell generates a Docker image containing any dependencies you’ve declared in a
requirements.txtand/or
setup.pyin the directory and opens an interactive shell in that directory. The
caliban shellenvironment is ~identical to the environment that will be available to your code when you submit it to AI Platform; the difference is that your current directory is live-mounted into the container, so you can develop interactively.
caliban notebook starts a Jupyter notebook or lab instance inside of a docker image containing your dependencies; the guarantee about an environment identical to AI Platform applies here as well.
caliban run packages your directory’s code into the Docker image and executes it locally using
docker run. If you have a workstation GPU, the instance will attach to it by default - no need to install the CUDA toolkit. The docker environment takes care of all that. This environment is truly identical to the AI Platform environment. The docker image that runs locally is the same image that will run in AI Platform.
caliban cloud allows you to submit jobs to AI Platform that will run inside the same docker image you used with
caliban run. You can submit hundreds of jobs at once. Any machine type, GPU count, and GPU type combination you specify will be validated client side, so you’ll see an immediate error with suggestions, rather than having to debug by submitting jobs over and over.
caliban build builds the docker image used in
caliban cloudand
caliban runwithout actually running the container or submitting any code.
caliban cluster creates GKE clusters and submits jobs to GKE clusters.
caliban status displays information about all jobs submitted by Caliban, and makes it easy to interact with large groups of experiments. Use caliban status when you need to cancel pending jobs, or re-build a container and resubmit a batch of experiments after fixing a bug.
These all work from your Macbook Pro. (Yes, you can build and submit GPU jobs to Cloud from your Mac!)
The only requirement for the directory where you run these commands is that it
declare some set of dependencies in either a
requirements.txt or
setup.py file. See the requirements docs for more detail.
The rest of this document contains detailed information and guides on Caliban’s various modes. If you want to get started in a more interactive way, head over to the Caliban tutorials directory.
Caliban’s code lives on Github.
Using Caliban¶
If you want to practice using Caliban with a proper getting-started style guide, head over to Caliban’s tutorials (Coming Soon!).
See the sidebar for information on the subcommands exposed by Caliban and a whole series of tutorials and guides that you might find interesting as you work with Caliban.
Getting Started
Using Caliban
Exploring Further
Common Recipes
Cloud-Specific Tutorials
Caliban + GKE
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more.
For an introduction to Caliban, start at the Caliban GitHub page. | https://caliban.readthedocs.io/en/latest/ | 2022-06-25T02:16:23 | CC-MAIN-2022-27 | 1656103033925.2 | [] | caliban.readthedocs.io |
UiaRaiseTextEditTextChangedEvent function (uiautomationcoreapi.h)
Called by a provider to notify the Microsoft UI Automation core that a text control has programmatically changed text.
Syntax
HRESULT UiaRaiseTextEditTextChangedEvent( [in] IRawElementProviderSimple *pProvider, [in] TextEditChangeType textEditChangeType, [in] SAFEARRAY *pChangedData );
Parameters
[in] pProvider
Type: IRawElementProviderSimple*
The provider node where the text change occurred.
[in] textEditChangeType
Type: TextEditChangeType
The type of text-edit change that occurred.
[in] pChangedData
The event data. Should be assignable as a VAR of type VT_BSTR.
Return value
If this function succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code.
Remarks
This is a helper function for providers that implement ITextEditProvider and are raising the pattern's required events. Follow the guidance given in TextEdit Control Pattern that describes when to raise the events and what payload the events should pass to UI Automation.
If there are no clients listening for a particular change type, no event is raised.
The event data should contain different payloads for each change type (per TextEditChangeType):
- TextEditChangeType_AutoCorrect: pChangedData should be the new corrected string .
- TextEditChangeType_Composition: pChangedData should be the updated string in the composition (only the part that changed).
- TextEditChangeType_CompositionFinalized: pChangedData should be the finalized string of the completed composition (this may be empty if composition was canceled or deleted).
Requirements
See also
HandleTextEditTextChangedEvent
IUIAutomation3::AddTextEditTextChangedEventHandler | https://docs.microsoft.com/en-us/windows/win32/api/uiautomationcoreapi/nf-uiautomationcoreapi-uiaraisetextedittextchangedevent | 2022-06-25T02:18:43 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
upgrade/degrade App Plan
1. How to upgrade App Plan
Under the “
Settings
” tab, you can view the pricing table with detailed information. Just click the “
Choose Plan
” button for which plan you want to upgrade.
There are 2 options for you to pay:
Pay monthly
Pay annually
After choosing the plan, you will be asked to process old orders or not.
Note:
- The old orders will count to your number of usage this month.
- If you are using multi-store integration, the subscription fee will apply to your main store.
2. How to degrade App Plan
Just click the
Choose plan
button of which plan you want to downgrade.
FAQs Documents - Previous
SyncTrack - General FAQs
Next - Notification Set Up
Are customers notified when the app submit tracking?
Last modified
16d ago
Copy link
Contents
1. How to upgrade App Plan
2. How to degrade App Plan | https://docs.synctrack.io/welcome-to-synctrack/faqs-documents/how-to-upgrade-degrade-app-plan | 2022-06-25T02:06:21 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.synctrack.io |
Virtuozzo Hybrid Server 7.5 Update 2 (7.5.2-436)¶
Issue date: 2021-09-29
Applies to: Virtuozzo Hybrid Server 7.5
Virtuozzo Advisory ID: VZA-2021-047
1. Overview¶
Virtuozzo Hybrid Server 7.5 Update 2 introduces new features and provides stability and usability bug fixes. It also introduces a new kernel 3.10.0-1160.41.1.vz7.183.5.
2. New Features¶
AMD Milan CPU support. (PSBM-127046)
The latest Virtuozzo Storage core. (PSBM-129489)
Ability to convert CentOS 7 to VzLinux 8 in containers. (PSBM-125565)
The new ‘density’ policy for VE memory management. The new policy provides better density than the ‘performance’ policy. It also provides higher performance if memory is overcommitted or the same performance if it is not. (PSBM-133760)
Ability to install and update the guest tools from package repositories instead of ISO images. (PSBM-129749)
cPanel EZ template for CentOS 7 containers. (PSBM-126958)
Plesk EZ template for CentOS 7 containers. (PSBM-126957)
ISPmanager EZ template for CentOS 7 and VzLinux 8 containers. (PSBM-126956, PSBM-126960)
Support for AlmaLinux as a guest OS in containers. (PSBM-126392)
Support for Microsoft Windows Server 2022 as a guest OS in virtual machines. (PSBM-126773)
Support for Debian 11 as a guest OS in virtual machines and containers. (PSBM-133078, PSBM-133085)
Improvements in migration of virtual environments from Virtuozzo 6 to Virtuozzo Hybrid Server 7. In particular, VNC is now enabled in all virtual machines to be migrated. In addition, Parallels guest tools are now removed during the installation of Virtuozzo guest tools. (PSBM-123987, PSBM-123390)
3. Bug Fixes¶
The rate set with the ‘–rate’ option could not be unset. (PSBM-130799)
pcompact could run for too long and consume too much resources. (PSBM-132757)
pcompact could run longer than the set timeout. (PSBM-132275)
The ‘rate’ and ‘ratebound’ values were missing from the output of ‘prlctl list’. (PSBM-130798)
Failed migration of a container could result in that container being present on both the target and destination nodes. (PSBM-130335)
Unable to reboot a container that is being backed up. (PSBM-129771)
Configuring multiple VM settings at the same time could fail. (PSBM-128213)
Containers with ‘ONBOOT’ set to ‘yes’ located on an iSCSI partition did not start automatically after node reboot. (PSBM-127669)
Virtuozzo release and ReadyKernel patch version were missing from the MOTD. (PSBM-132935)
Other fixes. (PSBM-129234, PSBM-130693, PSBM-131580, PSBM-131874, PSBM-132267, PSBM-132823, PSBM-133530)
4. Known Issues¶
Microsoft Windows Server 2022 virtual machines do not yet support Hyper-V disks. (PSBM-133918) | https://docs.virtuozzo.com/virtuozzo_advisory_archive/virtuozzo-hybrid-server/VZA-2021-047.html | 2022-06-25T01:10:12 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.virtuozzo.com |
Role-based management allows a tenant to distribute the configuration management functionality among multiple roles. Role-based management in Virtual Contact Center allows you to create roles, define privileges or permissions to manage varying scope of tenant configuration, and add and assign administrators to the roles. For example, you can define a campaign manager role with exclusive permissions to create, edit, delete, and control campaigns, and restrict access to any other functionality in Configuration Manager. To create a campaign role, you must grant permissions to Campaigns only.
Role-based management offers the following features:
See Also | https://docs.8x8.com/8x8WebHelp/VCC/Configuration_Manager_UnifiedLogin/content/rolebasedmanagement.htm | 2019-04-18T14:47:02 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.8x8.com |
Microsoft Project client integration
Planning and maintaining a project schedule can be complex, so project managers need to use tools that help them manage this task. Integration with Microsoft Project Client provides support to open and manage a project work breakdown structure. The project manager can publish any changes back to the Finance and Operations project work breakdown structure.
Note
If you are using Microsoft Dynamics 365 for Finance and Operations, July update, you must install KB 4054797 and 4055884.
Configure the Microsoft Project Client add-in
To enable the integration with Microsoft Project Client, a Microsoft Dynamics 365 add-in is required to be installed in the user’s client Microsoft Project application. This is done by opening the Project management workspace.
• Click Configure project client add-in from the Links > Setup section of the workspace.
• Click Open, then click Run when prompted.
Open and edit an existing draft work breakdown structure in Microsoft Project Client
If a project in Finance and Operations already has a work breakdown structure created, the work breakdown structure can be opened in the Microsoft Project Client application if the work breakdown structure is in a draft status. To open from the Project page, click Open in Microsoft Project link from the Plan tab. This page can also be opened from within the Microsoft Project Client application by clicking Open in the Microsoft Dynamics 365 tab. Select the Legal entity and Project from the list.
Note
If you're using Internet Explorer as your browser, you will need to click Save to manually open from the location that the file is downloaded to. Or, click Save and open to open the file in Microsoft Project Client. Do not rename the file name when saving.
Before making any edits to the file using Microsoft Project Client, you need to check it out. Click Check out in the Microsoft Dynamics 365 tab. This will prevent other users from editing the work breakdown structure from within Finance and Operations at the same time. To publish the work breakdown structure after completing any edits, click Check in on the Microsoft Dynamics 365 tab.
If a project team has already been added to the project in Finance and Operations, the resource list will be populated with the team members. If a project team has not yet been added to the project, you can select resources and build the team within Microsoft Project Client by clicking the Resources button on the Microsoft Dynamics 365 tab.
The following data will be synced back to Finance and Operations as part of the check in process:
• Task name
• Start date
• Finish date
• Predecessors
• Resource names
• Category
• Resource category
• Work hours
• Notes
• Priority
Note
If you add any other columns to your Microsoft Project Client file, they will not be saved to the file and will not be displayed when the file is opened again.
Create the work breakdown structure for an existing project using Microsoft Project Client
To create a new work breakdown structure using Microsoft Project Client, follow these steps:
Open Microsoft Project Client.
On the Microsoft Dynamics 365 tab, click Open.
Select the Legal entity for the project.
Select the Project.
Click Check out on the Microsoft Dynamics 365 tab.
When ready to publish to Finance and Operations, click Check in on the Microsoft Dynamics 365 tab.
Replace the existing work breakdown structure for an existing project using Microsoft Project Client
To create a new work breakdown structure using Microsoft Project Client and replace an existing work breakdown structure for an existing project, follow these steps:
Open the Microsoft Project Client.
Create the schedule in Microsoft Project Client.
On the Microsoft Dynamics 365 tab, click Save changes > Replace existing project.
Select the Legal entity for the project.
Select the Project.
Click OK.
Create a new project from within Microsoft Project Client
Open the Microsoft Project Client.
Create the schedule in Microsoft Project Client.
On the Microsoft Dynamics 365 tab, click Save changes > Save to new Project.
Select the Legal entity for the project.
Enter the Project ID, if necessary.
Enter the Project name.
Select the Project type, Project group and the Project contract ID. Alternatively, you can create a new project contract by clicking New.
Select the Calendar to be used for resourcing.
Click OK.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/dynamics365/unified-operations/financials/project-management/project-integration | 2019-04-18T15:08:09 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.microsoft.com |
Sparxling Wine
In order to optimize docToolchain for Mac and Linux, I wanted to give Sparx Enterprise Architect a try on a *nix environment.
I once already tried to install EA on Wine on Ubuntu within Hypervisor on Windows 10, but with my first try I didn’t succeed. I tried to follow the instructions on the Sparx website, but it made the impression that some of those instructions are a bit outdated.
So today I just thought "hey, you should check the wine home-page and follow these instructions". That was a success!
Now with a Sparx EA 14.1 running on my machine on Ubuntu, expect some updates to docToolchain soo 😎 | https://docs-as-co.de/news/sparkling-wine/ | 2019-04-18T15:36:31 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs-as-co.de |
ft_api_clear_form_sessions
API v1.x
- Finalized Submissions
- Debugging Mode
- API Error Codes
- Namespaces
- API Sessions
- Feature Suggestions
This function should always be called on the final "thankyou" page of your form. The two functions, ft_api_init_form_page and ft_api_process_form store various values in sessions (a built-in PHP temporary storage mechanism) while the user progresses through your form. This function clears them.
Why is this needed?
If the sessions aren't emptied, the user may not be able to put through a second form submission. The reason for this is that when they return to the form, the API functions may well load the old submission ID from the previous submission. Then, when it comes to updating the submission information, it will realize that the submission has already been finalized by the original form submission - and fail!
Usage
The function is generally called without any parameters. It has no return value.
If you originally passed a custom sessions namespace string to the ft_api_init_form_page function, you need to pass the same namespace string to this function. For example, if your namespace was "my_form", you'd call this function like so: | https://docs.formtools.org/api/ft_api_clear_form_sessions/ | 2019-04-18T15:21:33 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
Settings
Whither the WYSIWYG tab?
For users who are familiar with 2.0.x versions of the script, you'll notice the WYSIWYG tab has disappeared. Egads! But don't worry: the default settings for the WYSIWYG field are still around, just now found in the TinyMCE field type module. At the time of writing this, all the same functionality is still available - and now it'll be easier to expand on it. | https://docs.formtools.org/userdoc/settings/ | 2019-04-18T14:42:07 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
Working Example - Tutorial
Please take a look at the GRS101 video series for a complete end-to-end tutorial on how to build a template from scratch, create rules and execute calls to the rules engine from Composer.
This page was last modified on August 1, 2017, at 02:24.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GRS/latest/Deployment/Example | 2019-04-18T14:25:04 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.genesys.com |
Platform9 2.2 Release Notes
Platform9 Release 2.2 comes with a slew of new features, and enhancements, including:
1. Entirely new User Experience
With this release, the Platform9 Clarity UI gets a complete facelift. Share your feedback with us about what you think of the new look at feel.
2. High Availability and Load Balancing for Glance Image Service
The Platform9 Glance image service now supports high availability and load balancing within a region to support large scale production workloads.
More details about this feature can be found here.
3. Murano Application Catalog
Platform9 deployments now include a full-featured Murano application catalog. The catalog allows developers or administrators to create new application templates based on OpenStack Heat and upload them to the catalog. End users can then perform 1-click deployment of applications into their environment.
More details about this feature can be found here.
4. Software-defined Networking with VMware NSX (beta)
Platform9 Managed OpenStack for VMware now provides out of box integration with VMware NSX via OpenStack Neutron. This feature is currently in Beta - contact [email protected] if you'd like to try it out!
5. Enhancements for Managed Kubernetes
- Secured cluster control plane
- Kubernetes API is now available via HTTPS only.
- All cluster components perform mutual TLS authentication.
- Support for using separate networks for the cluster control and data
planes
- This support is not yet available through the UI and requires the
Managed Kubernetes CLI. For more information, please contact your
Platform9 Customer Representative.
- Cluster connectivity validation
- Before a host joins a cluster as a master or worker, we verify the
network connectivity requirements for the cluster control and data
planes.
- For details on the requirements, please see our Managed Kubernetes Pre-Requisites.
6. Bug fixes and product improvements
This release also contains a number of performance optimizations and bug-fixes that should result in a better user experience for your Platform9 cloud platform!
Known Issues:
1..
2. If you have deployed an application that calls the Kubernetes API server and uses a Kubernetes Service Account, the upgrade will invalidate the tokens your application uses to authenticate itself with the API server. Please see our support article for instructions on refreshing the tokens.
3. If you run CentOS or RHEL 7, please see our instructions for optimizing storage and ensuring connectivity.
September 10, 2016 | https://docs.platform9.com/support/platform9-2-2-release-notes-2/ | 2019-04-18T15:13:58 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['/assets/f75107cb-fb4f-4812-ae49-177335aed3c5.png', None],
dtype=object)
array(['/assets/739622c3-bdfb-4fe1-b94c-06a712981fc0.png', None],
dtype=object) ] | docs.platform9.com |
Change the size of the maximum transmission unit (MTU) on a vSphere Standard Switch to improve the networking efficiency by increasing the amount of payload data transmitted with a single packet, that is, enabling jumbo frames.
Procedure
- In the vSphere Web Client, navigate to the host.
- On the Configure tab, expand Networking and select Virtual switches.
- Select a standard switch from the table and click Edit settings.
- Change the MTU (Bytes) value for the standard switch.
You can enable jumbo frames by setting an MTU value greater than 1500. You cannot set an MTU size greater than 9000 bytes.
- Click OK. | https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-40856C1E-7631-4228-A111-13A783316595.html | 2019-04-18T15:03:54 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.vmware.com |
hosts Package¶
hosts Package¶
This is a convenience module to import all available types of hosts.
Implementation details: You should ‘import hosts’ instead of importing every available host module.
base_classes Module¶
This module defines the base classes for the Host hierarchy.
Implementation details: You should import the “hosts” package instead of importing each type of host.
Host: a machine on which you can run programs
- class
autotest.client.shared.hosts.base_classes.
Host(*args, **dargs)[source]¶
This class represents a machine on which you can run programs.
It may be a local machine, the one autoserv is running on, a remote machine or a virtual machine.
Implementation details: This is an abstract class, leaf subclasses must implement the methods listed here. You must not instantiate this class but should instantiate one of those leaf subclasses.
When overriding methods that raise NotImplementedError, the leaf class is fully responsible for the implementation and should not chain calls to super. When overriding methods that are a NOP in Host, the subclass should chain calls to super(). The criteria for fitting a new method into one category or the other should be:
- If two separate generic implementations could reasonably be concatenated, then the abstract implementation should pass and subclasses should chain calls to super.
- If only one class could reasonably perform the stated function (e.g. two separate run() implementations cannot both be executed) then the method should raise NotImplementedError in Host, and the implementor should NOT chain calls to super, to ensure that only one implementation ever gets executed.
check_diskspace(path, gb)[source]¶
Raises an error if path does not have at least gb GB free.
:param path The path to check for free disk space. :param gb A floating point number to compare with a granularityof 1 MB.
1000 based SI units are used.
:raise AutoservDiskFullHostError if path has less than gb GB free.
check_partitions(root_part, filter_func=None)[source]¶
Compare the contents of /proc/partitions with those of /proc/mounts and raise exception in case unmounted partitions are found
root_part: in Linux /proc/mounts will never directly mention the root partition as being mounted on / instead it will say that /dev/root is mounted on /. Thus require this argument to filter out the root_part from the ones checked to be mounted
filter_func: unnary predicate for additional filtering out of partitions required to be mounted
Raise: error.AutoservHostError if unfiltered unmounted partition found
cleanup_kernels(boot_dir='/boot')[source]¶
Remove any kernel image and associated files (vmlinux, system.map, modules) for any image found in the boot directory that is not referenced by entries in the bootloader configuration.
erase_dir_contents(path, ignore_status=True, timeout=3600)[source]¶
Empty a given directory path contents.
get_boot_id(timeout=60)[source]¶
Get a unique ID associated with the current boot.
Should return a string with the semantics such that two separate calls to Host.get_boot_id() return the same string if the host did not reboot between the two calls, and two different strings if it has rebooted at least once between the two calls.
:param timeout The number of seconds to wait before timing out.
get_meminfo()[source]¶
Get the kernel memory info (/proc/meminfo) of the remote machine and return a dictionary mapping the various statistics.
get_open_func(use_cache=True)[source]¶
Defines and returns a function that may be used instead of built-in open() to open and read files. The returned function is implemented by using self.run(‘cat <file>’) and may cache the results for the same filename.
- :param use_cache Cache results of self.run(‘cat <filename>’) for the
- same filename
log_kernel()[source]¶
Helper method for logging kernel information into the status logs. Intended for cases where the “current” kernel is not really defined and we want to explicitly log it. Does nothing if this host isn’t actually associated with a job.
log_reboot(reboot_func)[source]¶
Decorator for wrapping a reboot in a group for status logging purposes. The reboot_func parameter should be an actual function that carries out the reboot.
record(*args, **dargs)[source]¶
Helper method for recording status logs against Host.job that silently becomes a NOP if Host.job is not available. The args and dargs are passed on to Host.job.record unchanged.
repair_with_protection(protection_level)[source]¶
Perform the maximal amount of repair within the specified protection level.
request_hardware_repair()[source]¶
Should somehow request (send a mail?) for hardware repairs on this machine. The implementation can either return by raising the special error.AutoservHardwareRepairRequestedError exception or can try to wait until the machine is repaired and then return normally.
run(command, timeout=3600, ignore_status=False, stdout_tee=<object object>, stderr_tee=<object object>, stdin=None, args=())[source]¶
Run a command on this host.
symlink_closure(paths)[source]¶
Given a sequence of path strings, return the set of all paths that can be reached from the initial set by following symlinks.
wait_for_restart(timeout=1800, down_timeout=840, down_warning=540, log_failure=True, old_boot_id=None, **dargs)[source]¶
Wait for the host to come back from a reboot. This is a generic implementation based entirely on wait_up and wait_down. | https://autotest.readthedocs.io/en/latest/api/autotest.client.shared.hosts.html | 2019-04-18T15:06:32 | CC-MAIN-2019-18 | 1555578517682.16 | [] | autotest.readthedocs.io |
Let’s add Content!
Through the last blog posts, the build script advanced from a simple asciidoc example to a dual build asciidoc archetype with plantUML.
But when you want to document your project, you also need content. Wouldn’t it be great to have a template to get you started?
If you want to document your solution architecture, this template already exists. It’s called arc42 and additional information about it is available at. On this page, you will find downloads for several source formats, but we only need (who could have guessed) the asciidoc version.
The tempalte is available as English and German version, so I just copied both version into the docToolchain project. They both reside in their own folder with their own main template in
/src/docs. Feel free to just delete the language version you don’t need.
In order to make it look good, I had to fix the
imagesDir and add some rendering options:
:toc: left displays a table of contents on the left
:icons: font displays the so called admonitions with a nice looking icon
:source-highlighter: coderay will render source code in a nice way
The result should look like this:
The current version of the build should already cover most problems when you start to document your software architecture. But the next blog posts will still add some spice - follow me on twitter if you want to get updates!
The updated docToolchain project can be found here: | https://docs-as-co.de/news/arc42/ | 2019-04-18T15:36:27 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['/images/oldblog/arc42_sample.png', 'arc42 sample'], dtype=object)] | docs-as-co.de |
How do I create a new team?
You can create and belong to as many teams as you like. Each team has their own space for campaigns, teammates, and mail accounts, and has their own billing and subscription settings.
To create a new team, click the team switcher in the menu:
And then just click the "Create a new team" link and enter the name of your team!
Once your new team is created, you'll want to set up your billing details, so click on the "Billing" link in the sidebar:
| https://docs.mailshake.com/article/84-how-do-i-create-a-new-team | 2019-04-18T14:48:07 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f5e15f9033601eb6736648/images/5835b77ec697916f5d054724/file-lCRRSTSyh3.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f5e15f9033601eb6736648/images/5835b7a3903360645bfa819c/file-eDtpcukO1M.png',
None], dtype=object) ] | docs.mailshake.com |
Preparing CentOS or RHEL 7 system for running Platform9 Managed Kubernetes
Before you can run Platform9 Managed Kubernetes, you must prepare your CentOS or RHEL machine for it.
Read through and follow the general requirements checklist related to the memory and networking prerequisites for Platform9 Managed Kubernetes.
Once the prerequisites are met, follow the steps given below to prepare your CentOS or RHEL 7 host ready for Platform9 Managed Kubernetes.
- Disable incompatible services
- Prepare Docker storage
Let us look at each of the aforementioned steps in detail.
Disable Incompatible Services
Network applications should be uninstalled or disabled because they can interfere with Docker and Kubernetes networking services.
The firewalld service must be disabled. There is a known incompatibility between firewalld and Docker's use of iptables, and it is documented at.
Run the following command to disable firewalld.
systemctl stop firewalld
systemctl disable firewalld
Prepare Docker Storage
Follow the steps given below to prepare Docker storage.
- Choose a free block device, or create a new block device.
- Create an LVM thin pool.
Let us look at each of the aforementioned steps in detail.
Choose Block Device or Create Block Device
On CentOS/RHEL 7, Docker uses the devicemapper storage driver, by default, to manage container images and disk layers. For production, the storage driver must be configured to use direct-lvm mode (The loop-lvm mode is acceptable for testing, but is not supported for production deployments). The direct-lvm mode requires one free block device (a disk or a partition).
If a free block device is available, note the path of the block device, e.g., /dev/sdb for a disk, /dev/sdc1 for a partition.
If a free block device is not available, create a new block device, then note the block device path. You can attach a new disk, or create a new partition. The block device should be at least 40 GB in size. Attaching a new disk is outside the scope of these instructions. To create a new partition, use fdisk (man 8 fdisk). Set the partition type to 8e (Linux LVM). See for detailed information on fdisk.
Create LVM thin pool
Follow the steps given below to create an LVM thin pool.
- Ensure that LVM is installed on the host by running the following command.
yum list installed lvm2
The lvm2 package should be listed as installed. If it is not installed, run the following command to install the lvm2 package.
yum install lvm2
- Download the bash script to create an LVM thin pool from GitHub by running the following command.
wget
- Change the file permissions of the downloaded shell script so that it can be run.
chmod +x bd2tp.sh
- Run the downloaded bash script with the path of the free block device chosen or created above and the name of the volume group.
./bd2tp.sh <block_device_name> <volume_group_name> | https://docs.platform9.com/support/preparing-centos-7-system-running-containers/ | 2019-04-18T14:17:56 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.platform9.com |
Document Type
Article
Abstract
The author argues reading, hearing, and then composing musical lyrics involving grammatical concerns can help college writing students to edit more effectively for a song's grammar topic. Explaining that the songs need to offer specific advice, such as how to both spot and correct the grammatical problem, the writer offers lyrical examples and provides scholarly evidence for this approach. The essay explains what Grammar Jam is, why music can work, and how to use the tactic in the classroom.
Recommended Citation
Gillespie, David. 2018. "Grammar Jam: Adding a Creative Editing Tactic" Prompt: A Journal of Academic Writing Assignments 2 (1).
Published in: Prompt: A Journal of Academic Writing Assignments, v. 2, no. 1, 2018. | https://docs.rwu.edu/fcas_fp/316/ | 2019-04-18T14:25:44 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.rwu.edu |
sunny health exercise bike overview and fitness twisting stair stepper with bands.
Related Post
Seventh Generation Ultra Power Plus Mini Chalk Boards Game Room Seating Forever Collectibles Bobbleheads Aquarest Spas Extra Large Binder Clips Double Sided Dry Erase Board 6 Man Tent Plug And Play Hot Tub Bocce Ball Size Hummingbird Fish Vertical Organizer Mobile Laser Printer Toy Vehicles Fold Up Cot | http://top-docs.co/sunny-health/sunny-health-exercise-bike-overview-and-fitness-twisting-stair-stepper-with-bands/ | 2019-04-18T14:43:40 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://top-docs.co/wp-content/uploads/2018/01/sunny-health-exercise-bike-overview-and-fitness-twisting-stair-stepper-with-bands.jpg',
'sunny health exercise bike overview and fitness twisting stair stepper with bands sunny health exercise bike overview and fitness twisting stair stepper with bands'],
dtype=object) ] | top-docs.co |
Filters
A Dialing Filter restricts Calling Lists so that only certain numbers are dialed during a Campaign.
The Filters list shows the filters Filter Filter.
- Move To—Move a Filter to another hierarchical structure.
- Enable or disable Filters.
- Create a folder, configuration unit, or site. See Object Hierarchy for more information.
Click the name of a Filter to view additional information about the object. You can also set options and permissions, and view dependencies.
Procedure: Creating Filter Objects
Steps
- Click New.
- Enter the following information. For some fields, you can either enter the name of a value or click Browse to select a value from a list:
- Name—The name of the Filter.
- Description—A brief description of the Filter.
- Format—The format to which this filter is applied. Once it is specified, it cannot be changed. You assign a Filter object to a Calling List object with the same: | https://docs.genesys.com/Documentation/GA/latest/user/CfgFilter | 2019-04-18T15:32:26 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.genesys.com |
NMA Tutorial¶
Enterovirus 71 (EV-71) is a human pathogen that predominantly infects small children. The capsid is icoshedral and contains 60 protomer units. In a mature capsid the protomers are assembled as a set of 12 pentamers. Each protomer contains a single copy of the proteins VP1-VP4. During infection, the virus capsid expands to release its RNA into the host cell. This expanded capsid is known as the A-particle.
Aim¶
In this tutorial we will apply the ANM model to a single pentamer of the mature EV-71 capsid. We aim to identify the normal modes that contribute to the conformational changes within a pentamer during capsid expansion.
Create a working directory¶ ModeTask directory.
Preparation of structure of the mature capsid¶
- Download the 3VBS biological assembly (3VBS.pdb1) of the mature EV-71 capsid from the PDB.
- Open 3VBS.pdb1 in PyMOL.
- Use the split_states 3VBS command to visualise the full capsid.
- Save the capsid: File – Save Molecule – Select the first five states. Save as EV71_Pentamer.pdb into the ModeTask/Tutorial directory.
Each protomer has four subunits: VP1-VP4. VP4 is an internal capsid protein.
- Number of residues per protomer = 842
- Number of residues per pentamer = 4210
The estimated run time to perfom ANM on a complex of 4210 residues, using Mode Task is 25 hours.
For the sake of this tutorial we will use the coarseGrain.py script to construct a lower resolution pentamer.
Preparation of the structure of the A-particle capsid¶
- Download the 4N43 biological assembly (4N43.pdb1) of the A-partcile EV-71 capsid from the PDB.
- Open 4N43.pdb1 in PyMOL.
- Use the split_states 4N43 command to visualise the full capsid.
- Save the capsid: File – Save Molecule – Select the first five states. Save as Apart_Pentamer.pdb into the ModeTask/Tutorial directory.
Coarse grain¶
The MODE-TASK package is designed to analyse both single proteins and larger macromolecules such as a virus capsid. The ANM.cpp script contructs an elastic network model on all CA or CB atoms in a given PDB file. This is ideal for smaller protein complexes. For larger protein complexes, the coarseGrained.py script can be used to construct an additional coarse grained PDB file.
- Create a two models of the EV71 Pentamer complex with additional coarse graining set at levels 3 and 4 of selected CB atoms:
coarseGrain.py --pdb Tutorial/EV71_Pentamer.pdb --cg 3,4 --startingAtom 1 --output EV71_CG3.pdb --outdir Tutorial --atomType CB
The input paramaters include:
- pdb: This is the pdb structure that you wish to coarse grain
- cg: This specifies the levels of coarse graining. To select fewer atoms increase the level
- starting atom: This specifies the first residue to be selected in the complex
- output: The filename of the coarse grained pdb file
- outdir: The directory in which to save the coarse grained pdb file
Output:
- EV71_CG3.pdb and EV71_CG4.pdb : Two separate coarse grained pdb files that have the coordinates of selected CB atoms from residues that are equally distributed across the complex. As an example, EV71_CG3.pdb is shown in the figure below.
- Command line output
============================================================ Started at: 2017-12-12 11:34:36.399300 ------------------------------------------------------------ SUMMARY OF COARSE GRAINING PERFORMED AT LEVEL 3 No. atoms selected per unit: 122 from 842 original residues No. atoms selected per macromolecule: 610 from 4210 orignal residues ------------------------------------------------------------ ------------------------------------------------------------ SUMMARY OF COARSE GRAINING PERFORMED AT LEVEL 4 No. atoms selected per unit: 54 from 842 original residues No. atoms selected per macromolecule: 270 from 4210 orignal residues ------------------------------------------------------------ Completed at: 2017-12-12 11:34:36.541637 - Total time: 0:00:00
Note that, the same set of 122 atoms from each protomer were selected for CG3, likewise, the same set of 54 atoms from each protomer were selected for CG4 – thus the symmetry of the pentamer is retained.
Left) Crystal structure of the EV71 Pentamer (3VBS). Right) EV71_CG3.pdb contains 610 CB atoms from 4210 total residues.
Mode decomposition¶
The ANM.cpp script accepts a PDB file and a cutoff distance. The script constructs the Hessian matrix connecting all CB atoms within the specific cutoff radius. The script then performs singular value decomposition to return the eigenvalues and eigenvectors of the Hessian matrix.
Input parameters:
- pdb: path to PDB file
- cutoff: cutoff radius in A. The script will construct an elastic network model by connecting all atoms that interact within the cutoff distance (default = 15Å)
- outdir: folder in which output is saved
Output:
W_values.txt: A list of 3N eigenvalues of the system. Eigenvalues are ordered from slowest to fastest.
VT_values.txt: A 3Nx3N list of the eigenvectors for each mode. Eigenvectors are printed as a set of rows.
U_values.txt: A 3Nx3N list of the eigenvectors for each mode. Eigenvectors are printed as a set of columns.
- Compile the ANM.cpp script
The ANM.cpp script requires classes of the AlgLib library. These classes can be found in the cpp/src folder in the GitHub Directory. The path to these classes must be specified in the compile command using the -I parameter:
g++ -I cpp/src/ ANM.cpp -o ANM
In this tutorial, we will perform a comparative analysis between the normal modes of the EV71_CG3.pdb and EV71_CG4.pdb
- Run ./ANM to analyse EV71_CG4.pdb with a cutoff of 24Å
./ANM --pdb Tutorial/EV71_CG4.pdb --outdir Tutorial --atomType CB --cutoff 24
Example of the command line output:
Started at: 2017-08-22 11:55:33 Starting Decomposition Completed at: 2017-08-22 11:55:47 - Total time: 0:00:13
- Run ./ANM to analyse EV71_CG3.pdb
3.1) First make a sub-directory to avoid overwriting of your previous ANM output:
mkdir Tutorial/CG3
3.2)
./ANM --pdb Tutorial/EV71_CG3.pdb --outdir Tutorial/CG3 --atomType CB --cutoff 24
Example of command line output:
Started at: 2017-08-22 11:56:42 Starting Decomposition Completed at: 2017-08-22 11:59:14 - Total time: 0:02:0-704
Identification of modes that contribute to the conformational change¶
We have performed ANM on two separate pentamer complexes. From each model, we have obtained a set of eigenvalues and eigenvectors corresponding to each normal mode:
- EV71_CG4.pdb, total non-trivial modes = 804
- EV71_CG3.pdb, total non-trivial modes = 1824
For each model we will now identify the modes that contribute to the conformational change of a pentamer during capsid expansion.
We will then compare the modes from the respective models and determine if the additional coarse graining affected the ability to capture such modes.
To determine if our modes overlap with the direction of the conformational change, we must first determine the conformational change between the crystal structures of the mature and A-particle pentamer. The conformationMode.py scripts take two UNALIGNED pdb files and the set of all eigenvectors determined for the complex. The script aligns the structures, calculates the known conformational change and then identifies which modes contribute to the change.
Prepare the A-particle pentamer in PyMOL, using the biological assembly: 4n43.pdb1
Conformation mode¶
- Compute the overlap between all modes of the EV71_CG4 model:
conformationMode.py --pdbANM Tutorial/EV71_CG4.pdb --vtMatrix Tutorial/VT_values.txt --pdbConf Tutorial/Apart_Pentamer.pdb --outdir Tutorial/ --atomType CB
Input paramters:
–pdbANM: This is the PDB file that you use to run ANM. Do not use the aligned file here
–vtMatrix: The eigenvalues obtained from ANM of the EV71_CG4 model
–pdbConf: This is the pdb file of the conformational change. In this case, the pentamer of the A-particle (The –pdbANM and –pdbConf must NOT BE ALIGNED)
Output:
A text file with the overlap and correlation of each mode to the conformational change. The modes are ordered by the absolute value of their overlap.
- Compute overlap between all modes of the EV71_CG3 model (Remember to specify the correct directory):
conformationMode.py --pdbANM Tutorial/EV71_CG3.pdb --vtMatrix Tutorial/CG3/VT_values.txt --pdbConf Tutorial/Apart_Pentamer.pdb --outdir Tutorial/CG3 --atomType CB
Top output from conformationalMode.py of EV71_CG4:
MODE Overlap Correlation Mode: 9 0.759547056636 0.502678274421 Mode: 37 0.274882204134 0.0404194084198 Mode: 36 -0.266695656516 0.116161361929 Mode: 23 0.260184892921 0.0752811758038 Mode: 608 0.224274263942 0.0255344947974 Mode: 189 -0.208122679764 0.143874874887 Mode: 355 0.165654954812 0.0535734675763 Mode: 56 0.14539061536 0.11985698672 Mode: 387 -0.137880035134 0.245587436772 Mode: 307 -0.130040876389 0.145317107434
Top output from conformationalMode.py of EV71_CG3:
MODE Overlap Correlation Mode: 9 -0.663942246191 0.236900852193 Mode: 30 -0.235871923574 0.192794743468 Mode: 56 0.159507003696 0.083164362262 Mode: 101 0.157155354273 0.272502734273 Mode: 172 0.156716125374 0.275230637373 Mode: 166 -0.153026188385 0.332283689479 Mode: 189 -0.147803049356 0.372767489438 Mode: 38 -0.13204901279 0.196369524407 Mode: 423 -0.131685652034 0.334715006091 Mode: 76 -0.129977918229 0.296798866026
In addition, the command line output will specify the precise atoms over which the calculations were performed. (Of course, this will correspond to all atoms that are present in both conformations). The RMSD between the two structures will also be specified:
Started at: 2017-12-12 12:50:48.922586 ***************************************************************** WARNING!!!: Not all chains from PDB files were selected Suggested: Chain IDs do not match between PDB Files ***************************************************************** Correlations calculated across 465 common residues (93 per 5 asymmetric units). Breakdown per chain: A: 32 residues per asymmetric unit Residues selected include: 74 79 92 98 101 105 108 112 122 139 142 148 155 158 161 171 175 180 189 198 203 213 216 224 240 253 265 269 273 282 290 293 B: 29 residues per asymmetric unit Residues selected include: 17 37 44 58 65 76 79 83 90 108 115 128 134 141 151 155 180 186 189 202 208 219 222 227 231 234 241 245 249 C: 32 residues per asymmetric unit Residues selected include: 2 7 12 15 18 28 32 36 40 65 78 82 86 92 98 104 112 133 139 147 152 158 169 174 202 205 209 214 219 222 229 233 ***************************************************************** RMSD between the two conformations = 3.95802072351 Completed at: 2017-12-12 12:50:49.269902 - Total time: 0:00:00
Combination mode¶
This option allows to calculate the overlap and correlation to a conformational change, over a combination of modes. In this example, we will use the EV71_CG3 Model and perform the calculation over the modes 9 and 30.
combinationMode.py –pdbANM Tutorial/EV71_CG3.pdb –vtMatrix Tutorial/CG3/VT_values.txt –pdbConf Tutorial/Apart_Pentamer.pdb –modes 9,30 –outdir Tutorial/CG3 –atomType CB
Output from combinationMode.py
The command line output is the same as described for conformationMode.py
The script will also print out two text files:
- A file that specifies that calculated overlap and correlation over the full model:
MODE Overlap Correlation Mode: 9 -0.663942246191 0.236900852193 Mode: 30 -0.235871923574 0.192794743468 ***************************************************************** Combined Overlap = 0.616937749679 Combined Correlation = 0.219893695954 *****************************************************************
- A file that gives a breakdown of the calculated overlap and correlation per chain in each asymmetric unit of the model. This is very useful for identifying which regions of the complex contribute the most to the conformational change for a given mode:
================================================================= ================================================================= ASYMMETRIC UNIT: 1 CHAIN: A MODE Overlap Correlation Mode: 9 -0.677454134085 0.101259205597 Mode: 30 -0.396594527376 0.601345215538 Combined Overlap = 0.620398046618 Combined Correlation = 0.337867917512 ----------------------------------------------------------------- CHAIN: B MODE Overlap Correlation Mode: 9 -0.717931968623 0.491498558701 Mode: 30 -0.348260895864 0.249005547277 Combined Overlap = 0.679846136775 Combined Correlation = 0.321369216974 ----------------------------------------------------------------- CHAIN: C MODE Overlap Correlation Mode: 9 -0.637082761027 0.198091140187 Mode: 30 0.0309855898365 0.149051660589 Combined Overlap = 0.532447057412 Combined Correlation = 0.14767859844 ----------------------------------------------------------------- ================================================================= ================================================================= ASYMMETRIC UNIT: 2 CHAIN: A MODE Overlap Correlation Mode: 9 -0.677486033685 0.101126894833 Mode: 30 -0.396528584512 0.601655942534 Combined Overlap = 0.620396963618 Combined Correlation = 0.337655761311 ----------------------------------------------------------------- CHAIN: B MODE Overlap Correlation Mode: 9 -0.717946715867 0.491379282027 Mode: 30 -0.34820663545 0.249321165251 Combined Overlap = 0.679888476475 Combined Correlation = 0.321447980441 ----------------------------------------------------------------- CHAIN: C MODE Overlap Correlation Mode: 9 -0.637045607049 0.19801176313 Mode: 30 0.0310759318839 0.149266120068 Combined Overlap = 0.53259259653 Combined Correlation = 0.147730501227 ----------------------------------------------------------------- ================================================================= ================================================================= ASYMMETRIC UNIT: 3 . . . ASYMMETRIC UNIT: 4 . . . ASYMMETRIC UNIT: 5
Mode visualisation¶
From each model we have identified which mode overlaps the most with the direction of the conformational change. We can now project these vectors onto the respective models using the visualiseVector.py script and then visualise them as a set of frames in VMD:
1) Standard visualisation This option uses the default settings: Radius of arrow head = 2.20 Radius of arrow tail = 0.80 Arrow are coloured by chain in ascending order of PDB file according to the list:
In a biological assembly, respective chains from each asymmetric unit are presented in the same colour. The script can handle 20 non-identical changes, after which all arrows will be coloured black by default
1.1) Visualise eigenvectors for mode 9 of the CG4 model. Note this overlap is positive, thus the vectors act in the direction to conformational change. Therefore we can specify the direction as 1 (or rely on the default setting of direction = 1) when visualising the vectors:
visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –direction 1 –outdir Tutorial OR visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –outdir Tutorial
1.2) Visualise eigenvectors for mode 9 of the CG3 model. Note this overlap is negative, thus the vectors act in the opposite direction to conformational change. Therefore we must specify the direction as -1 when visualising the vectors:
visualiseVector.py –pdb Tutorial/CG3/EV71_CG3.pdb –vtMatrix Tutorial/CG3/VT_values.txt –mode 9 –atomType CB –direction -1 –outdir Tutorial/CG3
Output from visualiseVector.py
The script will produce a folder named VISUALISE. For every mode that you give to visualiseVector.py two files will be produced:
- A VISUAL PDB file. This can be opened in VMD and visualised as a set of 50 frames.
- A VISUAL_ARROWS text file. This file contains a Tcl script that can be copied into the VMD TK console. The script plots a set of arrows indicating the direction of each atom.
Visualising the results in VMD
- Open VMD.
- To load the VISUAL_9.pdb file click the following tabs:
File >> New Molecule >> Browse >> Select VISUAL_9.pdb.
- The VISUAL_9.pdb file contains a set of 50 frames of the eigenvectors of mode 9. This can be visualised as a movie by clicking on the Play button. The frame set can also be coloured to the user’s desire using the options under the
Graphics >> Representations
- The VISUAL_ARROWS text file contains a script that can be copied and pasted straight into the Tk Console in VMD:
Extensions >> Tk Console
- To obtain a clearer observation, change the background to white:
Graphics >> Colors >> Under Categories select Display >> Under Names select Background >> Under Colors select White
- To obtain only the arrows, delete all frames of the VISUAL_9.pdb molecules:
Right click on the number of frames >> Delete frames >> Delete frames 0 to 49
7) Alternatively you can plot the arrows onto the original PDB (uncoarse grained) PDB file and visualise it in cartoon format: Load EV71_Pentamer.pdb into VMD >>``Graphics >> Representations >> Drawing method >> NewCartoon`` >> copy and paste the VISUAL_ARROWS text file into the Tk Console.
To improve clarity under the
NewCartoon options select:
Material >> Transparent
Spline Style >> B-Spline
- To colour tha protein complex by chain:
Graphics >> Colours >> Under Categories select Chain >> Under Name select A >> Under Colours select RedTo match the arrows colours as: Chain A = Red Chain B = Blue Chain C = Orche Chain D = Purple Finally instruct VMD to colour by chain
Graphics >> Representations >> Coloring Method >> Chain
Fig: Visualisation in VMD. Left) Only arrows depicted Right) Arrows plotted onto cartoon depiction of pentamer
- Additional options for visualisation
Here you have the options to: 2.1) Change the thickness and length of the arrows 2.2) Specify the colours of the arrows for each change 2.3) Visualise the motion and draw arrows for a single or specified set of asymmetric units 2.4) Draw arrows for a single chain
We will demonstrate each of the above options using the EV71_CG4 model.
2.1) Change the thickness and length of the arrows Here we will increase the thickness of the arrow head to 3.0, increase the thickness of the arrow tail to 1.5 and the increase the length pf each arrow by a factor of 2
visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –outdir Tutorial –head 3.0 –tail 1.5 –arrowLength 2
Fig: Visualisation in VMD after increasing arrow sizes
2.2) specify the colours of the arrows for each change
- Here we will colour the arrows as follows:
Chain A = Yellow Chain B = Blue Chain C = Pink Chain D = Green
visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –outdir Tutorial –colourByChain yellow,blue,pink,green
Fig: Visualisation in VMD with arrows coloured as specified by user
2.3) Visualise the motion and draw arrows for a single or specified set of asymmetric units
Here we will visualise the motion of asymmetric units 1 and 3.
visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –outdir Tutorial –aUnits 1,3
The motion will be captured in the frame set: VISUAL_AUNITS_9.pdb in the Tutorial folder, and can be played in VMD.
Fig: Vectors arrows for asymmetric units 1 and 3 of the pentamer
2.4) Draw arrows for a single chain
- Here we will draw arrows only for A chain of asymmetric unit 1 of the EV71_CG4 pentamer, in colour gray
- visualiseVector.py –pdb Tutorial/EV71_CG4.pdb –vtMatrix Tutorial/VT_values.txt –mode 9 –atomType CB –outdir Tutorial –aUnits 1 –chain A –colourByChain gray
Fig: Vectors arrows for Chain A of asymmetric units 1 in colour gray
Mean square fluctuation (MSF)¶
Next, we will use the meanSquareFluctuations.py script to calculate the MSF of the CB atoms. The scripts allows you to calculate:
- the MSFs, calculated over all modes
- the MSFs of the CB atoms for a specific mode, or a specific range of modes.
The script also allows for comparison of MSF obtained from modes of different models. We can use the –pdbConf2 parameter to send the script a second PDB model. The script will then calculate the MSF of atoms corresponding to residues that are common between both models.
In this tutorial, we will analyse and compare the MSF between EV71_CG4 and EV71_CG3. This will give an indication as to whether or not the higher coarse grained model is also suitable to study the virus.
- We will compare the MSFs between the two models for a) all modes, and b) mode 9
meanSquareFluctuation.py --pdb Tutorial/EV71_CG3.pdb --wMatrix Tutorial/CG3/W_values.txt --vtMatrix Tutorial/CG3/VT_values.txt --pdbConf2 Tutorial/EV71_CG4.pdb --wMatrixC Tutorial/W_values.txt --vtMatrixC Tutorial/VT_values.txt --modes 9 --outdir Tutorial/ --atomType CB
Output for Model CG3:
1) PDB1_msf.txt: Text file of the overall MSFs values for all residues of CG3
2) PDB1__msfSpecificModes.txt: MSFs for all residues for mode 9 of CG3
3) PDB1CommonResidues_msf.txt: Overal MSFs for residues (of CG3) common to CG3 and CG4
4) PDB1_CommonResidues_msfSpecificModes.txt: MSFs for residues (of CG3) common to CG3 and CG4 calculated for mode 9
Output for Model CG4:
1) PDBCompare_msf.txt:: Text file of the overall MSFs values for all residues of CG4
2) PDBCompare__msfSpecificModes.txt: MSFs for all residues for mode 9 of CG4
3) PDBCompareCommonResidues_msf.txt: overal MSFs for residues (of CG4) common to CG4 and CG3.
4) PDBCompare_CommonResidues_msfSpecificModes.txt: MSFs for residues (of CG4) common to CG4 and CG3 calculated for mode 9
Assembly Covariance¶
Now, we will use the assemblyCovariance.py script to calculate to plot various covariance matrices of the complex. For this example we will use the EV71_CG3 Model.
First, we will plot the overall covariance for the full model, as calculated over all modes:
assemblyCovariance.py –pdb Tutorial/EV71_CG3.pdb –wMatrix Tutorial/CG3/W_values.txt –vtMatrix Tutorial/CG3/VT_values.txt –modes all –outdir Tutorial/CG3/ –atomType CB
The above function will produce a plot corresponding to the full model, AND as a default a second plot that zooms into the first asymmetric unit will also be produced
Fig: Overall covariance matrix for the full EV71_CG3 Model
Fig: Overall covariance matrix for a single protomer within the EV71_CG3 Model
Now we will use the additional options to calculate the covariance for mode 7 only (the first non-trivial mode). We will also plot the covariance between the asymmetric units 1 and 3, and then zoom into chain A of the first asymmetric unit. We have also adjusted the values of the axes to increase sensitivity for a single mode.
assemblyCovariance.py –pdb Tutorial/EV71_CG3.pdb –wMatrix Tutorial/CG3/W_values.txt –vtMatrix Tutorial/CG3/VT_values.txt –modes 7 –aUnits 1,3 –zoom 1,A –outdir Tutorial/CG3/M7 –atomType CB –vmin -0.005 –vmax 0.005
The above function will produce a plot corresponding to the full model for mode 7, a second plot that zooms into covariance between the first and third asymmetric units, and a third plot for the covariance of Chain A and Unit 1.
Fig: Covariance matrix for the full EV71_CG3 Model calculated over Mode 7
Fig: Covariance matrix for the asymmetric units 1 and 3 of the EV71_CG3 Model calculated over Mode 7
Fig: Covariance matrix for Chain A in asymmetric units 1 the EV71_CG3 Model calculated over Mode 7
For each of the steps above, the script also outputs each covariance matrix in txt file format. | https://mode-task.readthedocs.io/en/latest/nma_tut.html | 2019-04-18T14:30:54 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['_images/Default_Visualisation.png',
'_images/Default_Visualisation.png'], dtype=object)
array(['_images/Arrows_Visualisation.png',
'_images/Arrows_Visualisation.png'], dtype=object)
array(['_images/Colours_Visualisation.png',
'_images/Colours_Visualisation.png'], dtype=object)
array(['_images/Units_Visualisation.png',
'_images/Units_Visualisation.png'], dtype=object)
array(['_images/Chains_Visualisation.png',
'_images/Chains_Visualisation.png'], dtype=object)] | mode-task.readthedocs.io |
Website Building
The website is built using Middleman 3 and the source in this repository. The development happens on GitHub directly and does not use Gerrit.
Documentation on the source layout and editorial policy is documented in the README file at the root of the repository.
PRs are reviewed by oVirt developers and must pass the test build. This build is run via Travis and a GitHub webhook.
The repository is regularly scrutinized for new merged commits by the web builder, which is in charge of the final build. If the build is successful, then it is published on the webserver. The published content is purely static for performance and security reasons. The web builder is not accessible from the outside world. The latest build log is available to help debug build problems. | https://ovirt-infra-docs.readthedocs.io/en/latest/Community_Cage/Website_Building/index.html | 2019-04-18T14:17:37 | CC-MAIN-2019-18 | 1555578517682.16 | [] | ovirt-infra-docs.readthedocs.io |
collabnet-testlink-1.0.2.jar, supports integration with Testlink 1.9.15 and 1.9.16.
TeamForge 18.3). | http://docs.collab.net/teamforge183/testlinkoverview.html | 2019-04-18T15:12:40 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['images/status-success-small.png', None], dtype=object)
array(['images/status-success-small.png', None], dtype=object)
array(['images/status-success-small.png', None], dtype=object)] | docs.collab.net |
Are there any free courses or materials?
Yes!
Free for all students
- The first two modules of most of our courses are available for free, so you don't need a subscription to get started!
- All of our short courses are also available for free.
Free for Australian students in Grades 3 to 8
- The DT Challenges created by our partner, the Australian Computing Academy, have been paid for by the federal government for Australian students in grades 3 to 8.
- Other Grok users (including students at home schools) can access these with a subscription.
Free for Australian students in Grades 7 to 12
- The Cyber Security Challenges created by our partner, the Australian Computing Academy, have been funded for Australian students in grades 7 to 12. This includes students at home schools.
- Other Grok users can access these with a subscription.
Free for students at Queensland state schools
- The QCA courses, created in partnership with the Queensland government, are free for students at state schools in Queensland. The QCA courses were written in consultation with the Queensland Coding Academy (). There are two short courses per stage (3-4, 5-6, 7-8, 9-10) that cover the relevant sections of the DT curriculum for those stages.
To arrange access to any free resources available to your students, simply register free student accounts. | https://docs.groklearning.io/article/32-free-courses-materials | 2019-04-18T15:14:40 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.groklearning.io |
Getting Started
InAccel Coral is a fast and general-purpose FPGA resource management system. It provides high-level APIs in Java, Scala, Python and C++, and a unified engine that supports every multi-FPGA platform. Coral is also shipped with a rich set of higher-level integrations including Apache Arrow for zero-copy, lightning-fast data accesses and Apache Spark for seamlessly accelerated machine learning.
This document gives a short overview of how Coral runs on clusters of FPGAs, to make it easier to understand the components involved. Read through the accelerator deployment guide to learn about submitting your accelerators on an FPGA cluster through Coral.
Installing Coral-
All Coral versions are packaged as docker images hosted in InAccel Docker Hub.
Docker Pull Command
docker pull "inaccel/coral:latest"
docker pull "inaccel/coral:latest"
Coral runs on any UNIX-like system (e.g. Linux). It’s really easy to deploy on any machine, since it lives inside a containerized environment — all you need is to have the vendor-specific FPGA runtime installed on your system and run it with the container pointing to that installation.
For example:
CORAL_PLATFORM=/path/to/my/intel/platform docker run --runtime=intel -"
CORAL_PLATFORM=/path/to/my/xilinx/platform docker run --runtime=xilinx -"
Installing InAccel Docker Service-
InAccel's FPGA Container Runtime enables Coral to run seamlessly across heterogeneous driver/toolkit environments with the only requirement, the FPGA driver to be installed on the host.
Docker do not natively support FPGAs since any specialized hardware requires a version specific installation of a driver on both the host and the containers. InAccel Docker Service, allows Coral FPGA resource manager to be agnostic of the host FPGA driver/vendor.
wget sudo apt install -y inaccel-service.deb sudo systemctl restart docker
wget sudo yum install -y inaccel-service.rpm sudo systemctl restart docker
Usage:
-
inaccel start: Starts inaccel-coral container.
-
inaccel stop: Stops inaccel-coral container.
-
inaccel restart: Restarts inaccel-coral container.
-
inaccel config: Configures inaccel CLI tool.
-
inaccel logs: Displays log messages.
-
LEVEL(all, trace, debug, info, warn, error, fatal)
-
-f, --followFollow stdout & stderr.
-
inaccel fetch: Download all the available accelerators.
-
TARGET(ALVEO_U200, AMAZON_F1, PAC_A10)
After you have set the proper docker command configuration, you can also simply use InAccel Service, e.g.
systemctl start inaccel.
Using Coral API-
This documentation is for Coral API version 1.3. Java and Scala users can include Coral API in their projects using its Maven coordinates, C++ users can install it through the available Debian or RPM packages and in the future Python users can also install Coral API from PyPI. | https://docs.inaccel.com/latest/manager/overview/ | 2019-04-18T15:06:44 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.inaccel.com |
#include <TransformationSystem.hpp>
the current values of the nonlinear function
gaussian kernel model used to approximate the nonlinear function
the goal state
flag that indicates whether the system is initialized
internal variables that are used to compute the normalized mean squared error during learning
the targets used during supervised learning
internal variable that is used to store the target function for Gaussian kernel model
the id of the transformation system
determines which DMP version is used. (not used yet)
the start state
external states
internal states | https://docs.leggedrobotics.com/local_guidance_doc/classdmp_1_1_transformation_system.html | 2019-04-18T15:28:17 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.leggedrobotics.com |
Mail Integration with Reporting Module
Prerequisites
Before you get started with this guide, you have to complete some basic steps.
First, we'll assume that you have an access of SafeSquid server.
We are also assuming that you've Reporting Module set up. You can follow this guide to Setup Reporting Module and come back.
Mail Integration
Change directory to /usr/local/safesquid/api
Command:cd /usr/local/safesquid/api
Open config.ini from SafeSquid console
Command:vim config.ini
Find mail_details block and fill the mailing details. Refer the below table while filling the details.
Important Notes
- All the fields are mandatory except ccinfo. Set the value as none if you would like to leave the field empty.
- toaddr and ccinfo can have comma separated values
Eg: [email protected],[email protected],[email protected]
- Leaving any of the fields empty would lead to malfunctioning of the Reporting Module. So make sure to set the value as none for empty fields.
The mail has been scheduled to be sent at 7am everyday using a cronjob which is configured automatically while installing the Reporting Module as shown below.
0 7 * * * root /bin/bash /usr/local/safesquid/api/sendmail.py
The cronjob can be modified to reschedule the mailing as per your requirement by editing the file
/etc/crontab | https://docs.safesquid.com/wiki/Mail_Integration_with_Reporting_Module | 2019-04-18T15:31:28 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.safesquid.com |
You can connect the CD/DVD.
Prerequisites
Ensure that the host is powered off before you add USB CD/DVD-ROM devices.
Procedure
- In the vSphere Client inventory, right-click the virtual machine and select Edit Settings.
- Click the Hardware tab and select the CD/DVD drive.
- Select or deselect the Connected check box to connect or disconnect the device.
- If you do not want the CD-ROM drive connected when the virtual machine starts, deselect Connect at power on.
- Select Host Device under Device Type and select a device from the drop-down menu.
- (Optional) In the drop-down menu under Virtual Device Node, select the node the drive uses in the virtual machine.
- Click OK to save your changes. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-11EEC66C-7FA0-402F-BA1D-DEDDDD2563AB.html | 2019-04-18T14:18:01 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.vmware.com |
Knowledgebase
Here you can find out how to use your Divi Stride plugins and how to solve
some common issues that people experince.
Smack enter to search bru.
More products coming soon
We are working on quite a few new products which we hope to introduce soon, is there something you would like to see added to Divi or Extra? | http://docs.divistride.com/ | 2019-04-18T15:41:44 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.divistride.com |
Topology
Displays the topology of a selected blueprint or deployment. The blueprint or deployment ID must be selected in one of the following ways:
- By placing the widget in the blueprints/deployments drill-down page, meaning the blueprint/deployment has been selected before entering the page, and its id is included in the page’s context.
- By adding to the page a widget allowing to select blueprints or deployments, such as the resources filter, the blueprints list or the blueprint deployments.
When executing a
Workflow for a
Deployment (e.g. the
install workflow), the topology nodes show badges that reflect the workflow execution state.
Badges
- Install state - The workflow execution is in progress for this node
- Done state - The workflow execution was completed successfully for this node
- Alerts state - The workflow execution was partially completed for this node
- Failed state - The workflow execution failed for this node
When you hover over the badge and the topology is displayed for specific deployment (not a blueprint), then you will see summary of node instances states related to specific node:
Workflow states represented by badges
A deployment before any workflow was executed
A deployment with a workflow execution in progress
A deployment with a workflow execution in progress, partially completed
A deployment with a workflow execution completed successfully
A deployment with a workflow execution partially completed successfully with some alerts
A deployment with a workflow execution that partially failed
A deployment with a workflow execution that failed
Widget Settings
Refresh time interval- The time interval in which the widget’s data will be refreshed, in seconds. Default: 10 seconds.
The following settings allow changing the presentation of the widget in different aspects, and are by default marked as “on”:
Enable group click
Enable zoom
Enable drag
Show toolbar | https://docs.cloudify.co/4.5.5/working_with/console/widgets/topology/ | 2019-04-18T14:47:42 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['../../../../images/ui/widgets/show-topology.png', 'show-topology'],
dtype=object)
array(['../../../../images/ui/widgets/topology-widget-badges.png',
'Deployment Topology Node Badges'], dtype=object)
array(['../../../../images/ui/widgets/topology-widget-node-instances-details.png',
'Deployment Topology Node Instances Details'], dtype=object) ] | docs.cloudify.co |
Can I enrol if I'm not a school student or teacher?
If you are a pre-service teacher you can get free teacher access by registering an account as a teacher and then emailing us.
If you are a coding club wanting to use Grok for your students, you can follow these instructions. | https://docs.groklearning.io/article/53-can-i-enrol-if-im-not-a-school-student-or-teacher | 2019-04-18T14:39:23 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.groklearning.io |
Assign Azure resource roles in PIM
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can manage the built-in Azure resource roles, as well as custom roles, including (but not limited to):
- Owner
- User Access Administrator
- Contributor
- Security Admin
- Security Manager, and more
Note
Users or members of a group assigned to the Owner or User Access Administrator roles, and Global Administrators that enable subscription management in Azure AD are Resource Administrators. These administrators may assign roles, configure role settings, and review access using PIM for Azure resources. View the list of built-in roles for Azure resources.
Assign a role
Follow these steps to make a user eligible for an Azure resource role.
Sign in to Azure portal with a user that is a member of the Privileged Role Administrator role.
For information about how to grant another administrator access to manage PIM, see Grant access to other administrators to manage PIM.
Open Azure AD Privileged Identity Management.
If you haven't started PIM in the Azure portal yet, go to Start using PIM.
Click Azure resources.
Use the Resource filter to filter the list of managed resources.
Click the resource you want to manage, such as a subscription or management group.
Under Manage, click Roles to see the list of roles for Azure resources.
Click Add member to open the New assignment pane.
Click Select a role to open the Select a role pane.
Click a role you want to assign and then click Select.
The Select a member or group pane opens.
Click a member or group you want to assign to the role and then click Select.
The Membership settings pane opens.
In the Assignment type list, select Eligible or Active.
PIM for Azure resources provides two distinct assignment types:
Eligible assignments require the member of the role to perform an action to use the role. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers.
Active assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned to the role at all times.
If the assignment should be permanent (permanently eligible or permanently assigned), select the Permanently check box.
Depending on the role settings, the check box might not appear or might be unmodifiable.
To specify a specific assignment duration, clear the check box and modify the start and/or end date and time boxes.
When finished, click Done.
To create the new role assignment, click Add. A notification of the status is displayed.
Update or remove an existing role assignment
Follow these steps to update or remove an existing role assignment.
Open Azure AD Privileged Identity Management.
Click Azure resources.
Click the resource you want to manage, such as a subscription or management group.
Under Manage, click Roles to see the list of roles for Azure resources.
Click the role that you want to update or remove.
Find the role assignment on the Eligible roles or Active roles tabs.
Click Update or Remove to update or remove the role assignment.
For information about extending a role assignment, see Extend or renew Azure resource roles in PIM.
Next steps
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-resource-roles-assign-roles | 2019-04-18T14:48:31 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.microsoft.com |
Push Messaging
8/28/2008
This code sample is named FileClient. It demonstrates how to implement a push-client that receives push-messages, saves the message's body to a file, and then calls ShellExecuteEx to perform an action on the file (which can be an executable file or a document).
Feature Area
Relevant APIs
- PushRouter_Close function
- PushRouter_FreeMessage function
- PushRouter_GetMessage function
- PushRouter_Open function
- PushRouter_RegisterClient function
- PushRouter_UnRegisterClient function
- ShellExecuteEx function
- SHELLEXECUTEINFO structure
- Wireless Application Protocol (WAP) API
Source File Listing
- fileclient.cpp
Contains functions for saving a file, getting user permissions, checking to see if the file type has permissions for download, and processing push-messages.
- fileclient.h
Contains function prototypes and forward declarations for string constants.
- fileclient.rc
The resource script.
- main.cpp
Contains the application entry point.
- precomp.h
Defines the precompiled header.
- resource.h
the header file for the resource script.
- string.cpp
Defines all constant string values.
- utils.cpp
Contains functions for extracting data from the SMS header, and for copying strings.
Usage
To run the code sample
Navigate to the solution file (*.sln), and double-click it. By default, the solution files are copied to the following folders:
C:\Program Files\Windows Mobile 6 SDK\Samples\Common\CPP\Win32\fileclient
Microsoft Visual Studio 2005 launches and loads the solution.
Build the solution (Ctrl+Shift+B).
Deploy the solution (F5).
To use the application
Register the application by running it on the mobile device once with "/register" as the command-line argument.
The application will execute when it receives an SMS message with X-WAP-Application-ID with a value of "fileclient".
Remarks
Network coverage is required to receive push messages.
The Setup in a CAB code sample can be used to create a CAB file for deploying this application.
The value of the X-MS-FileName header field in the message's headers section specifies the name of the saved file.
Application parameters can be set by filling-in the X-MyCompany-Params header field.
The value of CSIDL_WINDOWS contains the name of the directory where the file is saved.
Since the message is traveling over the SMS transport, the maximum message size determines the maximum size of an application that can be pushed to a mobile device. Most Short Message Service Center's (SMSC's) limit this size to 64KB.
The Application ID of this push-client is "fileclient", To be intercepted by this push-client, a push-message must have a value of "fileclient" for the X-WAP-Application-ID header field.
Development Environments
SDK: Windows Mobile 6 Professional SDK and Windows Mobile 6 Standard SDK
Development Environment: Visual Studio 2005.
ActiveSync: Version 4.5.
See Also
Concepts
Code Samples for Windows Mobile
Setup in a CAB
Other Resources
Push Message Features
Data from Push Router
WAP Push Message Format
Setting Push Router Policies
WAP Push OTA Protocol Features
Security Roles | https://docs.microsoft.com/en-us/previous-versions/bb158647%28v%3Dmsdn.10%29 | 2019-04-18T14:30:47 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['images/bb158626.windowsmobile_on%28en-us%2cmsdn.10%29.gif',
'A checkmark indicates that the information in this topic is relevant for this platform, and that any API listed is also supported on this platform. Windows Mobile Supported'],
dtype=object)
array(['images/bb158626.windowsembeddedce_off%28en-us%2cmsdn.10%29.gif',
'A hyphen indicates that the information in this topic is not relevant for this platform, or that the platform does not support the API that is listed. Windows Embedded CE Not Supported'],
dtype=object) ] | docs.microsoft.com |
A region is a geographic area where MRS is located.
MRS services in the same region can communicate with each other over an intranet, but those in different regions cannot.
Public cloud data centers are deployed worldwide in places such as North America, Europe, and Asia. MRS is therefore available in different regions. For example, applications can be designed to meet user requirements in specific regions or comply with local laws or regulations.
Each region contains many availability zones (AZs) where power and networks are physically isolated. AZs in the same region can communicate with each other over an intranet. Each AZ provides cost-effective and low-latency network connections that are unaffected by faults that may occur in other AZs. Using MRS deployed in an independent AZ protects your applications against failures in a single place.
Projects are used to group and isolate OpenStack resources (computing resources, storage resources, and network resources). A project can be a department or a project team. Multiple projects can be created for one tenant account.
A region has multiple projects, but one project is related to one region.
An MRS cluster in a project cannot communicate with an MRS cluster in another project. | https://docs.otc.t-systems.com/en-us/usermanual/mrs/mrs_01_0023.html | 2019-04-18T15:46:59 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.otc.t-systems.com |
If you're a BigCommerce customer you can use PayWhirl to easily store credit card data for your customers.
In fact, when a customer purchases a plan through PayWhirl their payment method (credit card, debit card, etc.) will be vaulted securely within your connected payment gateway so both you and/or your customer can use the payment method again if needed.
You can also use a customer's saved payment method (credit card, debit card or bank via ach) to process a "one-time" payment for an existing product in your BigCommerce store.
Once complete, PayWhirl will generate an order in your BigCommerce store with all of the product information (sku, price, etc) and customer information (address, profile questions, etc) so you can fulfill the item(s) as you normally would in BigCommerce.
How to create a manual order in BigCommerce using a saved payment method within PayWhirl:
1) Navigate to Main Menu > Invoices > Manage Invoices.
2) Click the "Create Invoice" button.
3) Search for your customer by name or email address.
4) Next, search for an existing product in your Bigcommerce store.
NOTE: If you do not see this section on the create invoice page, please make sure you have the PayWhirl app installed and/or have accepted the latest app permissions in Bigcommerce.
5) Once you have selected a product from your Bigcommerce store PayWhirl will fill out the line item fields automatically using the latest information from your Bigcommerce product catalog. Once you have reviewed and/or edited click "Add" to continue.
6) When you have finished adding products / line items to the invoice click "Save & Continue"
7) Finally, review the invoice to make sure everything is correct before processing the payment. You can select which saved credit or debit card to process the payment with using the "Payment Method" dropdown in the right column under "Invoice Details."
Once the invoice has been successfully processed using the stored card on file you will see an order with a status of "awaiting fulfillment" in your BigCommerce account.
If you have any questions about storing credit cards with PayWhirl + BigCommerce please let us know!
Team PayWhirl | https://docs.paywhirl.com/PayWhirl/apps-and-integrations/bigcommerce/how-to-use-stored-credit-cards-to-create-manual-orders-in-bigcommerce | 2019-04-18T15:11:09 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://uploads.intercomcdn.com/i/o/16958006/8fd43c647157d80f1821270c/Screen+Shot+2017-01-25+at+3.06.20+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958088/bf6d12a3377d5ddf461935f1/Screen+Shot+2017-01-25+at+3.06.38+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958115/27cca49ccc5d15e4c91943fd/Screen+Shot+2017-01-25+at+3.07.24+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958160/3e8565b1d27db852c2e08abd/Screen+Shot+2017-01-25+at+3.07.48+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958294/ad7acec277cea355d5a6210a/Screen+Shot+2017-01-25+at+3.08.03+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958321/83048c5f5620831b5ae9a1b7/Screen+Shot+2017-01-25+at+3.08.20+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16958615/2a33d0f903e0cb2aa9d00f5a/Screen+Shot+2017-01-25+at+3.10.41+PM.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/16959198/ddbeadb2e08a4d62b8d17f82/Screen+Shot+2017-01-20+at+9.17.34+AM.png',
None], dtype=object) ] | docs.paywhirl.com |
Built-In Function Changes¶
Python 3 saw some changes to built-in functions. These changes are detailed in this section.
The
print() function¶
Before Python first introduced keyword arguments, and even functions with
variable numbers of arguments, it had the
print 'a + b =', print a + b print >> sys.stderr, 'Computed the sum'
In Python 3, the statement is gone. Instead, you can use the
print()
function, which has clear semantics (but requires an extra pair of
parentheses in the common case):
print('a + b =', end=' ') print(a + b) print('Computed the sum', file=sys.stderr)
The function form of
from __future__ import print_function
The recommended fixer will add the future import and rewrite all uses
of
Safe
input()¶
In Python 2, the function
input() read a line from standard input,
evaluated it as Python code, and returned the result.
This is almost never useful – most users aren’t expected to know Python syntax.
It is also a security risk, as it allows users to run arbitrary code.
Python 2 also had a sane version,
raw_input(), which read a line and
returned it as a string.
In Python 3,
input() has the sane semantics, and
raw_input was
removed.
The Compatibility library: six library includes a helper,
six.moves.input, that has the
Python 3 semantics in both versions.
The recommended fixer will import that helper as
input, replace
raw_input(...) with
input(...), and replace
input(...) with
eval(input(...)).
After running it, examine the output to determine if any
eval()
it produces is really necessary.
Removed
file()¶
In Python 2,
file() was the type of an open file. It was used in two
ways:
- To open files, i.e. as an alias for
open(). The documentation mentions that
openis more appropriate for this case.
- To check if an object is a file, as in
isinstance(f, file).
The recommended fixer addresses the first use: it will rewrite all calls to
file() to
open().
If your code uses the name
file for a different function, you will need
to revert the fixer’s change.
The fixer does not address the second case. There are many kinds of file-like
objects in Python; in most circumstances it is better to check for
a
read or
write method instead of querying the type.
This guide’s section on strings even recommends using
the
io library, whose
open function produces file-like objects that
aren’t of the
file type.
If type-checking for files is necessary, we recommend using a tuple of types
that includes
io.IOBase and, under Python 2,
file:
import io try: # Python 2: "file" is built-in file_types = file, io.IOBase except NameError: # Python 3: "file" fully replased with IOBase file_types = (io.IOBase,) ... isinstance(f, file_types)
Removed
apply()¶
In Python 2, the function
apply() was built in.
It was useful before Python added support for passing an argument list
to a function via the
* syntax.
The code:
arguments = [7, 3] apply(complex, arguments)
can be replaced with:
arguments = [7, 3] complex(*arguments)
The recommended fixer replaces all calls to
apply with the new syntax.
If the variable
apply names a different function
in some of your modules, revert the fixer’s changes in that module.
Moved
reduce()¶
In Python 2, the function
reduce() was built in.
In Python 3, in an effort to reduce the number of builtins, it was moved
to the
functools module.
The new location is also available in Python 2.6+, so this removal can be fixed by importing it for all versions of Python:
from functools import reduce
The recommended fixer will add this import automatically.
The
exec() function¶
In Python 2,
exec() was a statement. In Python 3, it is a function.
There were three cases for the statement form of
exec:
exec some_code exec some_code in globals exec some_code in globals, locals
Similarly, the function
exec takes one to three arguments:
exec(some_code) exec(some_code, globals) exec(some_code, globals, locals)
In Python 2, the syntax was extended so the first expression may be a 2- or 3-tuple. This means the function-like syntax works even in Python 2.
The recommended fixer will convert all uses of
exec to the function-like
syntax.
Removed
execfile()¶
Python 2 included the function
execfile(), which executed
a Python file by name.
The call:
execfile(filename)
was roughly equivalent to:
from io import open def compile_file(filename): with open(filename, encoding='utf-8') as f: return compile(f.read(), filename, 'exec') exec(compile_file(filename))
If your code uses
execfile, add the above
compile_file function to
an appropriate place, then change all calls to
execfile to
exec
as above.
Although Automated fixer: python-modernize has an
execfile fixer, we don’t recommend
using it, as it doesn’t close the file correctly.
Note that the above hard-codes the
utf-8 encoding (which also works if your
code uses ASCII).
If your code uses a different encoding, substitute that.
If you don’t know the encoding in advance, you will need to honor PEP 263
special comments: on Python 3 use the above with
tokenize.open()
instead of
open(), and on Python 2 fall back to the old
execfile().
The io.open() function is discussed in this guide’s section on strings.
Moved
reload()¶
The
reload() function was built-in in Python 2.
In Python 3, it is moved to the
importlib module.
Python 2.7 included an
importlib module, but without a
reload function.
Python 2.6 and below didn’t have an
importlib module.
If your code uses
reload(), import it conditionally if it doesn’t exist
(using feature detection):
try: # Python 2: "reload" is built-in reload except NameError: from importlib import reload
Moved
intern()¶
The
intern() function was built-in in Python 2.
In Python 3, it is moved to the
sys module.
If your code uses
intern(), import it conditionally if it doesn’t exist
(using feature detection):
try: # Python 2: "intern" is built-in intern except NameError: from sys import intern
Removed
coerce()¶
Python 3 removes the deprecated function
coerce(), which was only
useful in early versions of Python.
If your code uses it, modify the code to not require it.
If any of your classes defines the special method
__coerce__,
remove that as well, and test that the removal did not break semantics. | https://portingguide.readthedocs.io/en/latest/builtins.html | 2019-04-18T14:25:47 | CC-MAIN-2019-18 | 1555578517682.16 | [] | portingguide.readthedocs.io |
We have a number of communication channels that our clients can choose from to be contacted through. We outline how you can communicate with us in your SLA, therefore please ask your CSM if you are unsure about the options you can currently use.
- To ask questions without leaving the Exponea application you can use in-app chat where a member of our Support team will answer any problems you might have while using Exponea.
- You can ask questions at our Helpdesk which is available to all our clients from 9 am to 5 pm.
- You can also contact us via direct phone support. If you currently do not have this communication option and wish to utilize the service, please get in touch with your CSM.
Updated about a year ago | https://docs.exponea.com/docs/communication-interfaces | 2021-10-16T09:52:25 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.exponea.com |
Monitor
Checking the health of a Microsegmentation Console
The Microsegmentation Console provides a healthchecks endpoint that does not require authentication.
An example query follows.
curl\?quiet\=true | jq
Example response:
[ { "responseTime": "195µs", "status": "Operational", "type": "Cache" }, { "responseTime": "2ms", "status": "Operational", "type": "Database" }, { "responseTime": "346µs", "status": "Operational", "type": "MessagingSystem" }, { "responseTime": "0s", "status": "Operational", "type": "Service" }, { "responseTime": "2ms", "status": "Operational", "type": "TSDB" } ]
It also returns one of the following codes.

- 200 OK: the services and API gateway are operational
- 218: a critical service is down
- 503: the API gateway is down
You can use the endpoint as an external health check probe.
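For example, an external probe only needs the HTTP status code; a rough Python sketch (the console hostname is a placeholder) could be:

import requests

def console_is_healthy(base_url):
    # base_url is a placeholder, e.g. "https://console.example.com"
    resp = requests.get(base_url + "/healthchecks?quiet=true", timeout=5)
    if resp.status_code == 200:
        return True   # services and API gateway operational
    if resp.status_code == 218:
        return False  # a critical service is down
    return False      # 503 or anything else: API gateway down or unreachable

print(console_is_healthy("https://console.example.com"))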
If you prefer, you can also use apoctl to discover the status of the Microsegmentation Console.
For example, the following command returns status information in a table format.
apoctl api list healthchecks -o table -c type,status
A healthy Microsegmentation Console would respond with the following.
     status     |      type
----------------+------------------
  Operational   | Cache
  Operational   | Database
  Operational   | MessagingSystem
  Operational   | Service
  Operational   | TSDB
Enabling additional monitoring tools
About the additional monitoring tools
After migrating to the multi-container Microsegmentation Console you can use the following monitoring tools.
- Prometheus and Grafana to scrape and display metrics about resource usage
- Jaeger to help you trace and debug issues with the Microsegmentation Console API
These tools are not available for single container deployments.
Enabling additional monitoring tools
The first time you create your Voila environment, the installer will ask you if you want to deploy the monitoring facilities:
[Optional] Do you want to configure the monitoring stack? (y/n) default is no: y
[Optional] What will be the public url of the metrics monitoring dashboard?
> Public Monitoring URL (example:, leave it empty to not deploy this compoment (default):
[Optional] What will be the public url of the metrics API proxy endpoint (used for prometheus federation)?
> Public metric URL (example:, leave it empty to not deploy this compoment (default):
[Optional] Do you want to install the opentracing facility? (y/n) default is no: y
[Optional] Do you want to install the logging facility? (y/n) default is yes: y
You can modify the deployment of monitoring facilities after installation as explained below.

To enable the monitoring stack:
set_value global.public.monitoring override
This will deploy Prometheus to scrape service data and serve it through Grafana dashboards. It also enables alerts (see below). From there you can add more facilities.
To enable logging facility (recommended):
set_value enabled true|false loki override
This will deploy/remove Loki to scrape all service logs. This is useful for reporting crashes, if any, or troubleshooting issues.
To deploy a metrics proxy that can be used to remotely query alerts and metrics and perform Prometheus federation:
set_value global.public.metrics override
To enable tracing facility (optional):
set_value enabled true|false jaeger override set_value enabled true|false elasticsearch override
This will deploy/remove the tracing facility that will allow you to look at every Microsegmentation Console API request. In general, we recommend enabling this to help during development or to analyze slow requests. This can generate several gigabytes of data every hour, causing the storage to fill up very quickly.
To enable metrics proxy (optional):
set_value global.public.metrics override
This will create service that you can use to pull metrics from outside (see below).
You can pick which facilities you want to deploy depending on your needs. Once you have enabled the tools that you need, issue the following command to apply the changes.
doit
Monitoring dashboards
Use the following command to obtain the monitoring dashboard URL.
get_value global.public.monitoring
To access the monitoring dashboards you need to have an auditer certificate. Use the gen-auditer tool from your voila environment to generate a p12 certificate file in certs/auditers. You must import this to your workstation.
Dashboard metrics
Several metrics are collected using prometheus and shown as dashboards through grafana. An example follows.
This dashboard gives you an operational overview of the platform, the load on the nodes, the storage and the current alerts and logs. This is your go to dashboard when you want to check the health of your platform.
This dashboard gives you more details about the resource usage of nodes and services. You can use these to pinpoint any compute resource contention.
This dashboard provides an overview of the state of the Microsegmentation Console and the general state of the component and compute resources usage. This is your second go to dashboard when you want to check the health of your platform.
This dashboard provides a detailed view of all the microservices. You can use this mostly to debug issues and track leaks.
This dashboard provides a Kubernetes resources allocation view. You can use it to locate resource starvation or overuse.
This dashboard provides advanced information about MongoDB and sharding.
This dashboard is reachable via the Explore feature on Grafana, available from the compass icon on the left. Use the top bar to select the facility you wish to explore. If you enabled the tracing facility, you can select jaeger-aporeto to see the traces.
You can log in as an admin in Grafana if needed. The username is admin and you can get the password with get_value global.accounts.grafana.pass.
By default there is no data persistence on the dashboards. If you want to make persistent changes, you can enable persistence by adding storage to Grafana with:
set_value storage.class <sc> grafana override
Where <sc> is the storage class of your Kubernetes cluster.
Then update the Grafana deployment with
snap -u grafana --force
Configuring alerts
Using the gathered metrics, some alerts are defined to check the health of the Microsegmentation Console and report issues.
Through the /healthchecks API you can get a summary of the currently firing alerts:
[ { "responseTime": "1.123ms", "status": "Operational", "type": "Cache" }, { "responseTime": "6ms", "status": "Operational", "type": "Database" }, { "status": "Operational", "type": "MessagingSystem" }, { "alerts": [ "1 critical active alert reported for database type." ], "name": "Monitoring", "status": "Degraded", "type": "General" }, { "status": "Operational", "type": "Service" }, { "responseTime": "7ms", "status": "Operational", "type": "TSDB" } ]
The lack of detail is intentional as this endpoint is public.
Alerts can be sent to a Slack channel by configuring the following:
set_value global.integrations.slack.webhook ""
set_value global.integrations.slack.channel "#mychannel"
Then update the Prometheus deployment with
snap -u prometheus-aporeto --force
If you want to define your own alerting provider you can pass a custom AlertManager configuration as follows: in conf.d/prometheus-aporeto/config.yaml you can define

custom:
  alerts:
    # Your alertmanager configuration goes here
    global:
      ....
  rules:
    # your prometheus rules go here
    groups:
      ....
Configuring a metrics proxy
About the metrics proxy
The Microsegmentation Console uses Prometheus to gather statistics on all microservices and Kubernetes endpoints. The metrics proxy allows you to expose those metrics to perform Prometheus federation for instance.
Generating a client certificate
You will need to generate a client certificate that will be used to access the Prometheus federation endpoint. Generate a client certificate with the following command:
gen-colonoscope
Now you will need to configure the client that will scrape the data from the Prometheus federation endpoint with the following parameters:
- the metrics endpoint you set above (get_value global.public.metrics)
- the Prometheus endpoint certificate authority (located in certs/ca-chain-public.pem)
- the client certificate generated above (located in certs/colonoscopes/<name>-cert.pem)
- the client key associated to the client certificate (located in certs/colonoscopes/<name>-key.pem)
Once done, the alerts and federate endpoints will be available.
Pulling alerts
You can retrieve alerts from the following endpoint.
https://<fqdn>/alertmanager/api/v1/alerts
(where <fqdn> is what you configured as global.public.metrics).
This can be used to pull alerts as documented on Prometheus website.
Example of output:
"status": "success", "data": [ { "labels": { "alertname": "Backend service restarted", "color": "warning", "exported_pod": "canyon-75c9f966dc-g7rgj", "icon": ":gear:", "prometheus": "default/aporeto", "reason": "Completed", "recover": "false", "severity": "severe" }, "annotations": { "summary": "canyon-75c9f966dc-g7rgj restarted. Reason: Completed." }, "startsAt": "2020-01-22T20:59:10.521318559Z", "endsAt": "2020-01-22T21:02:10.521318559Z", "generatorURL": "", "status": { "state": "active", "silencedBy": [], "inhibitedBy": [] }, "receivers": [ "norecover" ], "fingerprint": "5a483f5586d6de87" } ] }
Federating Prometheus
You can use the following endpoint to federate Prometheus instances together.
https://<fqdn>/prometheus/federate
Example request:
curl -k https://<fqdn>/prometheus/federate --cert ./certs/colonoscopes/example-cert.pem --key ./certs/colonoscopes/example-key.pem -G --data-urlencode 'match[]={type=~"aporeto|database"}'
This request will pull all current metrics.
Subset of output:
http_requests_total{code="200",endpoint="health",instance="10.64.241.42:1080",job="health-cactuar",method="GET",namespace="default",pod="cactuar-5cdddc64c7-sfwp8",service="cactuar",type="aporeto",url="/oidcproviders",prometheus="default/aporeto",prometheus_replica="prometheus-aporeto-0"} 2="/appcredentials",prometheus="default/aporeto",prometheus_replica="prometheus-aporeto-0"} 31="/servicetoken",prometheus="default/aporeto",prometheus_replica="prometheus-aporeto-0"} 1689 1579736744156 http_requests_total{code="200",endpoint="health",instance="10.64.241.42:1080",job="health-cactuar",method="PUT",namespace="default",pod="cactuar-5cdddc64c7-sfwp8",service="cactuar",type="aporeto",url="/appcredentials/:id",prometheus="default/aporeto",prometheus_replica="prometheus-aporeto-0"} 25 1579736744156
Example of Prometheus configuration used to scrape data from the Microsegmentation Prometheus instance:
scrape_configs:
  - job_name: "federate"
    scheme: https
    scrape_interval: 15s
    tls_config:
      ca_file: path-to-ca-cert.pem
      cert_file: path-to-client-cert.pem
      key_file: path-to-client-cert-key.pem
      insecure_skip_verify: false
    honor_labels: true
    metrics_path: "/prometheus/federate"
    params:
      "match[]":
        - '{type=~"aporeto|database"}'
    static_configs:
      - targets:
          - "<fqdn>"
Checking capacity
Among all the metrics reported, some capacity metrics are also available:
# HELP aporeto_enforcers_collection_duration_seconds The enforcer count collection duration in seconds.
# TYPE aporeto_enforcers_collection_duration_seconds gauge
aporeto_enforcers_collection_duration_seconds 0.003
# HELP aporeto_enforcers_total The enforcer count metric
# TYPE aporeto_enforcers_total gauge
aporeto_enforcers_total{unreachable="false"} 0
aporeto_enforcers_total{unreachable="true"} 0
# HELP aporeto_flowreports The flowreports metric for interval
# TYPE aporeto_flowreports gauge
aporeto_flowreports{action="accept",interval="15m0s"} 0
aporeto_flowreports{action="reject",interval="15m0s"} 0
# HELP aporeto_flowreports_collection_duration_seconds The flowreports collection duration in seconds.
# TYPE aporeto_flowreports_collection_duration_seconds gauge
aporeto_flowreports_collection_duration_seconds 0.004
# HELP aporeto_namespaces_collection_duration_seconds The namespace count collection duration in seconds.
# TYPE aporeto_namespaces_collection_duration_seconds gauge
aporeto_namespaces_collection_duration_seconds 0.005
# HELP aporeto_namespaces_total The namespaces count metric
# TYPE aporeto_namespaces_total gauge
aporeto_namespaces_total 3
# HELP aporeto_policies_collection_duration_seconds The policies count collection duration in seconds.
# TYPE aporeto_policies_collection_duration_seconds gauge
aporeto_policies_collection_duration_seconds 0.005
# HELP aporeto_policies_total The policies count metric
# TYPE aporeto_policies_total gauge
aporeto_policies_total 7
# HELP aporeto_processingunits_collection_duration_seconds The processing units count collection duration in seconds.
# TYPE aporeto_processingunits_collection_duration_seconds gauge
aporeto_processingunits_collection_duration_seconds 0.004
# HELP aporeto_processingunits_total The processing units count metric
# TYPE aporeto_processingunits_total gauge
aporeto_processingunits_total 0
Those metrics are used on the operational dashboard.
KMC¶
KMC (K-mer Counter) is a utility for counting k-mers (sequences of consecutive k symbols) in a set of reads from genome sequencing projects.
KMC is available as a module on Apocrita.
Usage¶
KMC takes input files in FASTA, FASTQ or multi-FASTA format and produces KMC databases.
To run the default installed version of KMC, simply load the kmc module:
$ module load kmc
$ kmc -h
K-Mer Counter (KMC) ver. 3.0.0 (2017-01-28)
Usage:
 kmc [options] <input_file_name> <output_file_name> <working_directory>
...
then run one of the commands such as:
kmc -k31 reads.fastq 31-mers ${TMPDIR}
KMC has a number of options, allowing for k-mer length adjustment and resource limits, specific options of interest are:
-k<len> - k-mer length (k from 1 to 256; default: 25)
-m<size> - max amount of RAM in GB (from 1 to 1024); default: 12
-sm - use strict memory mode (memory limit from -m<n> switch will not be exceeded)
-f<a/q/m> - input in FASTA format (-fa), FASTQ format (-fq) or multi FASTA (-fm); default: FASTQ
-ci<value> - exclude k-mers occurring less than <value> times (default: 2)
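For instance, a run that counts 27-mers from a multi-FASTA file, caps RAM at 8 GB in strict memory mode and excludes k-mers seen fewer than 5 times could look like this (file names are illustrative):

kmc -k27 -m8 -sm -fm -ci5 genome.fa 27-mers ${TMPDIR}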
${TMPDIR} as working_directory
${TMPDIR} is set by the scheduler to a local node disk which is significantly faster than using GPFS as the <working_directory>. This will result in jobs executing faster; for example, a 3.1GiB file processed on GPFS takes an average of 16.46 seconds whilst using ${TMPDIR} only takes an average of 10.92 seconds.
Example job¶
Serial job¶
Here is an example job running on 1 core and 2GB of memory:
#!/bin/bash
#$ -cwd
#$ -j y
#$ -pe smp 1
#$ -l h_rt=1:0:0
#$ -l h_vmem=2G

module load kmc

kmc -k31 reads.fastq 31-mers ${TMPDIR}
Security Policy¶
Supported Versions¶
Reporting a Vulnerability¶
If you find a security vulnerability in the mastercomfig app, execution of mastercomfig, or something else, contact mastercoms through email directly: [email protected].
If you have a solution for the issue, attach it as a patch file to the email.
You can expect a reply within 24 hours of your report with the next steps of action regarding the vulnerability. This may include a request to create a pull request to resolve the vulnerability if applicable.
You must not disclose the vulnerability publicly unless you have not received a response after 1 month.
If the vulnerability is declined, you may post it publicly after 48 hours of its declination, unless the declination is retracted within that time period.
Once the vulnerability is fixed, you may also disclose it publicly after 1 week of the fix being deployed.
Each time you create a component on the page, even a single one, the UIManager module is initialized (though indirectly). Its main tasks are:
Webix controls (e.g. Button, Text, etc), Form and Toolbar feature two opposite methods - focus() and blur() - that allow setting/removing focus.
$$("toolbar").focus(); // the toolbar is focused $$("toolbar").blur(); // the toolbar is no longer in focus $$("form").focus(); //the first focusable element in the form is focused
When using the focus() method either with a form or with a toolbar, you can pass the name of the needed control. In this case focus will be set to this control rather than to the whole form/toolbar:
$$("form").focus("text1"); // a text input with name "text1" in this form is focused
The UI Manager module has methods that take the ID of the needed widget as an argument:
You can use setFocus() to set focus to a data component and then call the select() method of the component to mark the focus visually:
Focusing widget with "books" ID
webix.ui({ id:"books", view:"list" }); webix.UIManager.setFocus("books"); //to set focus $$("books").select($$("books").getFirstId()) //to mark focusing visually
Every Webix component features a pair of focusing events onFocus and onBlur:
$$("datatable1").attachEvent("onFocus", function(current_view, prev_view){ //current_view is the DataTable in question }); $$("datatable1").attachEvent("onBlur", function(prev_view){ //prev_view is the DataTable in question });
In addition, Webix onFocusChange (global event) is triggered each time focus is shifted from one component to another. The following code retrieves the ID of the view that is currently focused and logs it to the console:
webix.attachEvent("onFocusChange", function(current_view, prev_view) { console.log("focused: " + (!current_view ? "null" : current_view.config.id)); });
Widgets that are in focus at the moment listen to the following keys:
For Global tab navigation see the related info.
Hotkeys for navigation in data widgets like datatable and list are enabled by default via the navigation property set to true:
Data widgets respond to arrow keys in the following way:
Hierarchical data widgets - Tree, Treetable, Grouplist - have specific behavior for the "right", "left" and "enter" keys:
If no item is selected at the moment, the first visible item gets selection.
Editors of data widgets react on the following keys:
Comboboxes include combo, richselect, multiselect, multicombo, datepicker, daterangepicker and colorpicker widgets. They listen to the following keys:
Only the first tab/radiobutton is included into the tab order. To navigate within a control use the following keys:
The hotkeys are enabled if the input area or slider handle is in focus:
Carousel buttons are in the tab order. In addition, carousel icons respond to the following keys if focused:
If no date is selected at the moment, the first date of month is selected.
If calendar shows a time selection view, "left" and "right" arrows are used to change hours while "up" and "down" arrows change minutes.
If no cell is selected at the moment, the first visible cell gets selection.
Note that if Color Select is used as a suggest for an input then:
You can define a custom hotkey that will trigger its onClick event. The key name (e.g. "enter" or "space") is specified by the hotkey property:
{ view:"button", click: doOnClick, hotkey: "enter" }
Related sample: Hotkeys for Buttons
The doOnClick function will fire either on pressing "enter" or on mouse clicking.
Key combinations joined by + or - sign are as well possible:
{ view:"button", click: doOnClick, hotkey: "enter-shift" }
Note that such functionality will work with simple controls like buttons and inputs, and will not with multiple-choice ones.
The addHotKey function is called from the UIManager object and has two obligatory parameters - the key name and the event handler. Key combinations joined by a + or - sign are also possible.
You can make hot keys global, which means that they will trigger the function regardless of the component. The one in focus will be subject to hot key actions.
webix.UIManager.addHotKey("Ctrl+V", function() { webix.message("Ctrl+V for any"); });
At the same time, you can specify any instance of a Webix component that should react on this or that hot key by passing its ID into the function as a third parameter.
In case you want all the view instances react on the specified hot key, state the view name instead of ID:
//hot keys for the component with "details" ID webix.UIManager.addHotKey("Ctrl+Enter", function() { console.log("Ctrl+Enter for details"); return false; }, $$("details")); // for "details" list only //hot keys for all list instances on the page. webix.UIManager.addHotKey("Ctrl+Space", function() { console.log("Ctrl+Space is detected for list"); }, "list");
Related sample: Basic Use of Editors
The removeHotKey function is used to remove a hotkey. It takes the key name as an obligatory parameter:
//adding a hotkey webix.UIManager.addHotKey("Ctrl+Space", function() { ... }, "list"); //removing all hotkeys under the name webix.UIManager.removeHotKey("Ctrl+Space");
You can also specify the hotkey handler as the second parameter to remove only a particular hotkey:
//removing the hotkey with the specified handler webix.UIManager.removeHotKey("up", my_function);
It's also possible to remove a hotkey from a specific view. For this, you need to specify the view ID as the third parameter.
To remove the hot key from all the view instances, pass the view name instead of ID:
//removes a hotkey from the view with the "details" ID webix.UIManager.removeHotKey("up", null, $$("details")); //remove hot keys from all list instances on the page webix.UIManager.removeHotKey("up", null, "list");
In the code below the "Enter" key opens "details" accordion item. Before this, you check that it is not combined with either Ctrl, Shift or Alt key:
"Enter" key in action
$$("books").attachEvent("onKeyPress", function(code, e) { if (code === 13 && !e.ctrlKey && !e.shiftKey && !e.altKey) { $$("details").getParentView().expand("details"); return false; } });
These events require that a developer should know key codes used by UI Manager. Here they are:
You can move through your app with the Tab and Shift+Tab keys. All widgets and their clickable areas are in the tab order.
If you tab to a widget, its active area is focused. It can be the selected item, active tab or radiobutton, or the whole widget like text or button. If a data component does not have visible selection, the first visible item is focused.
Also, all clickable areas of a component (buttons, icons, text fields) are in the tab order as well.
The UIManager allows getting the next/previous widget in tab order:
All these methods take the necessary view id as an argument.
Let's consider the picture above and see how these methods will work for a text input field (its ID is "year").
var prev = webix.UIManager.getPrev($$("year")); //returns "title" input field object
var next = webix.UIManager.getNext($$("year")); //returns "rank" input field object
var top = webix.UIManager.getTop($$("year"));   //returns "layout" object
What's New in YNAB
With the release of these updates, the mobile app is getting a bit of a UI makeover, and we've completely revamped the way new YNABers are guided to set up their first budget on the mobile app to include some much-needed proverbial training wheels. In the web app, we've made some changes to both the Budget Header at the top of the web app as well as the Inspector (the right side bar). We have more details on all of those changes below! There are some wording changes that you’ll notice across all of YNAB as well, so let’s go through those first.
What Did You Call Me?
- To be Budgeted is now Ready to Assign: To be Budgeted is changing to Ready to Assign to make a clearer distinction between your budget as a plan for your money versus assigning money (a.k.a. budgeting) as an action. When you enter income, categorize it to Inflow: Ready to Assign and it will show up in Ready to Assign at the top of your budget so that you can assign that money to your categories.
- The Budgeted Column is now The Assigned column: This change affects the Budgeted column as well, which is now called the Assigned column. This column still works exactly the same way you’re used to - it just has a pretty new name. In the web app, you’ll also notice that in the inspector Total Budgeted has been changed to Assigned in [Month].
- Goals are now Targets: Goals will henceforth be referred to as Targets. They will behave the same as they have since the release of the new, smarter, Underfunded logic in the web app. Check out that link for more details.
- Quick Budget is now Auto-Assign: Quick Budget options are now called Auto-Assign options.
Web-Specific Changes
Aside from the verbiage changes, there are a few things in the web app that you'll notice are looking a little different (dare I say, more streamlined?). Here are the changes you can expect to see in the web app:
The Budget Header (the box at the top that shows what is Ready to Assign) is getting a makeover! We've streamlined the design so you can see how much you have Ready to Assign at a glance without having all of the other numbers in your face (they're still available by clicking on the Ready to Assign number in the budget header).
If you have money ready to assign, the budget header will be green, and show the amount you can assign as well as the Auto-Assign button. You can choose to either click the Auto-Assign tab and assign money to your categories based on the due dates you set up for your category targets, or you can continue to assign money to individual targets by clicking the Manual tab.
Once you've assigned all of the money in Ready to Assign, the Budget Header will turn grey and that Auto-Assign button will disappear so you don't assign more than you have. If you end up assigning more than you have, you'll see the Budget Header turn red with a negative number and a prompt to budget less into your categories until Ready to Assign goes back to zero.
The Inspector (the sidebar on the right side of the budget screen) is being re-arranged for a more streamlined look. Aside from the targets at the top (we moved those up a few months back), the Auto-Assign (formerly Quick Budget) section is getting bumped up, and the Inspector totals are taking up a bit less real estate in that sidebar.
Future Months (formerly Budgeted into Future) has moved out of the Budget header to the Inspector, along with a much anticipated Stealing from the Future alert, which lets you know if you've overassigned after assigning money in the future, and caused a negative Ready to Assign in a future month (if no categories are selected). And a collective cheer goes up from budgeters into the future everywhere! 🎉
The Category Activity, which shows up when a specific category is selected, has also been condensed into a smaller amount of real estate, though it still houses all of the activity information you know and love.
Mobile-Specific Changes
A lot of the changes to the mobile app are designed to help first time YNABers get up and budgeting more quickly, so we've completely revamped the steps that brand new YNABers go through when setting up their first budget. We have some gems for experienced YNABers as well, so if you've already been budgeting with us for awhile, check out these changes coming to the mobile app:
Progress bars - These have been around for a few months on the web app, but they’re now available on the mobile app as well! If you don’t want to see them, you can toggle them off and on in the Settings menu by toggling Expanded Categories. You can find more details on progress bars in the link above.
Edit Mode - Wouldn’t it be great if you could add, remove, and edit categories and category groups, as well as setting and editing goals, all in one window? Enter Edit Mode! You can access the Edit Mode by tapping the icon on the top right corner of the budget tab. From there you can add, edit, reorder, or delete categories or category groups, as well as create and edit the targets for those categories. Its like a one-stop shop for all budget changes in the mobile app.
Auto-Assign options will only show up when tapping on the category itself, where the Quick Budget button used to be. The lightning bolt and budget-wide quick budget options have retired, and last we heard they relocated to a sunny beach in Florida. 🏖 We know they provided some useful information, though, and we're working on getting that information added back into the mobile app - it just won't be in the form of the old lightning bolt, which was causing confusion for a lot of new users.
Multi Locus View¶
Summary¶
Multi Locus View (MLV) enables the user to visually inspect the underlying data from NGS experiments e.g. RNA-seq, ChIP-seq. Initially, the user uploads a list of genomic locations and associated data (Uploading a File), which can then intuitively filtered, sorted and viewed to drill down to regions of interest.
The view is divided into three sections. The top left shows charts (1), the bottom left a simple genome browser (2) and the right section shows a table containing all the genomic locations (3). Initially no charts are present and the browser only shows a track of the uploaded genomic regions (and possibly a RefGene track if a genome was selected). In order to make it easier to view and filter your data you can add charts (Adding Graphs/Charts) and browser tracks (Adding Tracks) . In addition analysis jobs such as finding intersections, claculating signal at each location from a bigWig track and dimension reduction can be carried out (Running Analysis Jobs). Locations can be annotated (Tagging Locations) by the user and downloaded or exported to the data visualisation tool Zegami
Creating a Project¶
To create a project click on ‘My Projects’ (1) on the top navigation bar and then the Multi Locus View panel (2). This will take you to a page where you have to fill in the name and description of the project (3). You also have to select the genome required. If the genome you want is not available select ‘other’. In this case gene information and other annotations will not be available. When you press ‘Create’ you will be taken to a page where you can upload your file (see below).
Uploading Data¶
Input File Required¶
The file format is quite flexible and can be either tab (.tsv) or comma (.csv) delimited and can also be gzipped (.gz). The only requirement is that the first three columns (in order) specify the genomic location i.e. chromosome, start and finish. Normal bed files fulfill these criteria as well as excel data that has been saved as a .csv or .tsv file.
Other file types apart from bed-like files are not supported in the initial upload but can be added later to the browser (see Adding Tracks). In addition, in subsequent analysis bigWig files can be uploaded in order to calculate peak stats at each genomic location (see Calculate Peak Stats).
Chromosome names can either be UCSC style, e.g. chr1, chr2, chrX, or Ensembl style, e.g. 1, 2, X. However, if subsequent files are added to the analysis they have to be of the same format. This is not the case for calculating intersections, where you can mix and match between Ensembl and UCSC style chromosome names.
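For illustration, a minimal comma-delimited (.csv) input file (the column names and values below are made up) could look like this:

chr,start,end,peak_score,sample
chr1,10500,11200,8.4,liver
chr2,204000,204800,3.1,liver
chrX,1500000,1500900,12.7,spleen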
Uploading a File¶
Press Choose in the displayed dialog and select your file containing genomic locations. The file will be parsed and the column (field) types will be ascertained and displayed. Ensure the correct datatype has been deduced and change if necessary using the dropdowns (2). The header names are taken from the file (1), but you can change these and you can also delete any columns that you do not want (3). If no headers were detected in the file, enter the name of the column - the value of the first row is given to help you (1). If the file contains a header, but it was not detected, check the Has Headers box (4) at the bottom of the dialog.
Once you are satisfied, press the upload button. There will be a delay as the data is uploaded and processed and if there are no problems you will be presented with an initial view
Saving A Project¶
When tracks, graphs or columns are added and you have edit permissions (see Permissions), they are permanently added to the project. Also, when an analysis job has completed, any graphs, tracks or columns produced will be permanent. However, when you alter a graph or track’s settings or resize/move them, these are not saved. Similarly, any changes you make to the table (column resizing/ordering, sorting etc.) will not automatically be saved. In order to save these changes you need to save the layout > Save Layout.
If you wish to make changes to a public project then you will have to clone the project > Save As. The project will be copied into your name and you can then make any changes you wish.
Adding Graphs/Charts¶
Charts help you get a picture of the data as a whole and also help you filter the data. By selecting regions (dragging with the mouse) on scatter plots and histograms or clicking on sections in pie charts, row charts and box plots, the data can be intuitively filtered. With each filtering step, all charts will update (as well as the table and browser) to reflect the filtered data. Filters on individual charts can be removed by clicking the reset button which appears on the chart’s title bar when a filter is applied, or filters on all charts can be removed with the ‘Reset All’ button.
Charts can be moved by dragging on the title bar, and resized by dragging on the resize icon, which appears in the bottom right hand corner of a chart when you hover over it.
Initially the only chart visible will be a row chart showing Tags (see Tagging Locations) so you need to add other charts to get a better insight into your data (see below)
Adding a Chart¶
Clicking on the ‘Add Chart’ button will show a dialog where you have to select the type of chart, the fields to use in the chart and its name. Once created you can change the chart’s settings ( icon), which differ according to the chart’s type, and with some charts color it ( icon). Charts can be moved by dragging them via the title bar and resized by the resize icon which appears in the bottom left hand corner when the mouse is over the chart. The chart can be removed by clicking the trash icon, which appears when you hover over the graph’s title. Once charts have been added and the appropriate settings/colors applied, they can be saved using the icon above the table. The following chart types are available:
Scatter Plot¶
A standard scatter plot requiring numerical fields for the x and y axis. Once created, the points can be coloured ( on the title bar). Also, by opening up the settings dialog ( icon) you can alter the point size (3). By default the graph will show all the points, but you can zoom in and out using the mouse wheel and pan by pressing shift and dragging with the mouse. However, if you want the default view to be a particular region, you can set this using the inputs in the Main Region section (4) and pressing show. The x and/or y axis can also be set to a log scale (5). After zooming and panning, the Centre Plot button (6) will restore the plot to show all the points or the region specified in (4). Normal mouse dragging (without shift) will produce a brush that filters the data; once created the brush can be dragged to different regions of the plot.
Histogram¶
Shows the distribution of a numerical field in the data. The x range is automatically set to include the largest and smallest values. However, this will often lead to the chart looking skewed due to small numbers of extreme outliers. Therefore, you can use the cog icon (1) to open up the settings dialog, where an upper and/or lower limit can be set (3). Values higher or lower than these limits will be lumped together in the outermost bin (4). The y axis can also be capped (5) in order to get a better handle on bins containing fewer counts. The number of bins can be adjusted using the appropriate spinner (6). Each bar can be coloured by categorical data using the icon (2).
Pie Chart¶
Shows categorical data. By default the maximum number of categories shown is the 8 largest ones; any remaining categories are lumped into ‘Others’. This can be changed by opening up the settings dialog (). Clicking on a segment (category) will select that category and clicking on further segments will add these to the filter. To filter again with a single category, use the reset button.
Row Chart¶
A chart showing categories on the y axis and, usually, the number of records belonging to each category on the x axis. You can also choose a numerical field for the x axis, in which case the values of this field will be summed for each category. However, a boxplot is usually more informative for this kind of information as the average and quartile ranges of the values are shown instead of the sum. As with the pie chart, the maximum number of categories shown is the 8 largest ones, but this can be changed by opening up the settings dialog ().
BoxPlot¶
A chart showing categories and the average/quartile ranges of the values of another field for each category. Box plots work best for fields that contain only a small number of categories. They are scaled to include all the datapoints, so if there are extreme outliers, the boxes will appear squashed.
Bar Chart¶
A bar chart showing the column average of any number of supplied fields. Because fields may differ in scale, to ensure differing values can fit on the same scale, the average is scaled between the median +/- 1.5*IQR (the same as the whiskers on a boxplot). The graph changes to only include those datapoints in the current filter. No selection is possible with this chart, as it would make no sense to filter on a column.
The Genome Browser¶
The browser shows the genomic location of the currently selected table row (or image). The distance either side of the region to also show can be controlled using the margin spinner (1) above the browser.
Adding Tracks¶
Initially only two tracks will be displayed: the genomic locations you uploaded and, if you didn’t select ‘other’ for the genome, a track displaying the genes. Other tracks can be added with the ‘Add Tracks’ button (2), which shows a dialog where you need to enter the url of a publicly accessible track. The hosting server of the track should allow Cross Origin Resource Sharing (CORS). The type of track will be ascertained automatically based on the url, although you can manually override this by clicking on one of the radio buttons.
Tracks that can be added are:-
- bed(tabix) - A bed file that has been gzipped and indexed with tabix
- BigBed
- BigWig
- Bam - A bam index is also required
- UCSC session - either cut and paste the url from the UCSC browser or use a session url. The latter will be more stable as the former uses a temporary id, which is only valid for a short period.
Altering Track Appearance¶
Clicking on the track label in the legend (3) will open a dialog for that track. The contents of the dialog will vary according to the type of track. The track height can be altered from this dialog
Zooming/Panning¶
There are five ways you can navigate using the browser:-
- You can zoom in and out using the mouse wheel and scroll left and right by dragging the mouse
- Use shift and drag to highlight and zoom into a region on the browser
- Use the magnifying glass icons (4), the zoom amount can be controlled by the adjacent spinner (5)
- Type the location co-ordinates (chr:<start>-<stop>) in the location text box (6)
- Click on a row or image in the right hand table to go to that feature. The margin spinner (1) shows how many bp either side of the feature will be displayed.
Feature track¶
This shows the uploaded regions (features) displayed in the right hand table. Clicking on the settings icon (7) will bring up a dialog where the following can be adjusted:-
- Set the field you wish to label the features with
- Set the field to color the feature by
- Set the field with which to position the feature on the y axis. By default the feature layout depends on the layout (Collapsed, expanded or squished) but can be a numeric field
- Choose the margins (distance either side of the feature) that will be displayed when you click on an image or a row in the table
Saving the Browser Layout¶
Use the disk icon above the table to save all settings including the current layout of the browser (tracks and track settings).
The Table/Images¶
The default table behaves as a typical spreadsheet: you can alter the column width by dragging the header’s left and right borders and move columns by dragging the column’s header. Clicking on the header will sort by that column. Clicking on a row will select it and update the browser.
Table Mode¶
If your project contains images (see Adding Images) then you can change how the table is displayed using the table icon (). Three choices are table (1), images (2) and table with thumbnails (3). In image mode, the genomic location can be selected by clicking on the image and using the arrow keys to select the next/previous image. In this mode, the data can be sorted and filtered using the icons ( ) in the menu above the table. Also in image mode you can alter image size using the slider in the table menu and also color the border around the image by a field (). This opens up a dialog where you can choose the field and the color scale to use.
Filtering Data¶
It is often more intuitive to filter using graphs (see Adding Graphs/Charts); however, data can also be filtered by clicking on the filter icon in each column header. To filter on multiple columns or when the table is only showing images, press the filter icon on the top table menu. This will bring up a dialog showing filtering options for all fields in the data. Whenever any filters are added or changed, any charts will update accordingly, but the filters are not added to the charts nor are existing filters on the charts updated, as they are completely independent.
Sorting Data¶
The data can be sorted on columns by clicking the column header (shift click to sort on multiple columns). The data can also be sorted by clicking the sort icon in the table menu. In the sort dialog, the columns to be sorted on are added using the plus icon and then either Ascending (ASC) or descending (DESC) can be chosen. The sort order can be changed by dragging the labels, or columns can be removed from the sort by clicking on the trash icon.
Tagging Locations¶
Sometimes it may be useful to categorise or tag genomic locations based on a trend that you have discovered. This can be done by opening up the tagging dialog with the tag icon (1) in the menu above the table. Initially only the none category is present. To add other ones, type a name in the text box (2) and press the add button (3). The category will then be added to the list at the top of the dialog. After selecting the radio button next to it, clicking on an image or a cell in the tagging column in the table will tag that genomic location. Multiple locations can be tagged by clicking an image/cell and then shift-clicking another one; all the images/rows in between will be tagged. The ‘Tag All’ button will tag all the currently filtered locations with the currently selected category. Another way to tag is to use the arrow keys to go to the next/previous image/row and then press the shortcut key shown in brackets next to the category to tag the currently selected items with that category. The category color can be changed by clicking on the appropriate color chooser (7). The category can be removed (which will remove all tags of this category from the data) using the trash icon next to the category (8).
N.B. To permanently save the tags press the Save button (5), which will commit the changes to the database.
Adding Images¶
Images for every genomic location can be added to the project and then displayed in the table. The icon (1) opens up a dialog where you can choose to either have images created based on the internal browser (2) or by the UCSC browser (3). with the USCS option, you can have more detailed images, but is image generation is much slower and you are limited in the number of images you can create. One option is to create a smaller subset (see Creating Subsets and then produce images from this.
MLV Images¶
Clicking on the Preview button (4) will show a preview of the image for the currently selected row (5). The image is based on the tracks and settings in the browser (6); see The Genome Browser on how to add tracks and alter their appearance. You can adjust the image width and the width of margins shown either side of the genomic location by using the appropriate spinners (7 and 8). Once you are happy with the image you can press the submit button (9) and images for all genomic locations will be created. This will take a few minutes (approx 800 images/min).
UCSC Images¶
Clicking on the UCSC radio (3) will enable the URL input (9) where you can paste a UCSC browser URL or session. Pressing preview will check the url is valid and produce an image based on the margin width (7), image width (8) and selected genomic location (5). If a preview was successfully produced then you can press the submit button (9) to generate images for all genomic locations. This will take quite a while.
An email will be sent when all images have been generated. You can then view the table in image or thumbnail mode (see Table Mode) and upload the project to Zegami
Running Analysis Jobs¶
Analysis jobs are run in the background on the server and the results, in the form of tracks, graphs and extra columns in the table are added to the project once the job is complete. The following types of analysis are possible:-
- Annotation Intersection - calculates whether each location overlaps a set of annotations or locations from another project.
- Find TSS Distances - calculates whether each location overlaps a Transcription Start Site (TSS) and if not, the distance to nearest site, either upstream (+) or downstream (-).
- Calculate Peak Stats - calculates the area and max height of the signal from a bigWig file at each location in the project.
- Cluster on Columns - carries out dimension reduction (UMAP,tSNE) on any number of given columns in the project.
Jobs are run in the background and can be viewed on the ‘My Jobs’ page (link in the top navigation bar) or in the history dialog (), which will automatically open when you send a job. You do not need to stay on the page whilst jobs are running, although if you do, when the job is finished you will be notified and the appropriate results loaded in.
Annotation Intersection¶
Intersections can be carried out between an Annotation Set, which is basically just a list of genomic locations, or another project. In the simplest case, a single column will be added, with TRUE/FALSE values, depending on whether a region in your project overlaps with a region in the query set.
Annotation Sets¶
Annotation sets are just lists of genome locations and can be created by clicking on ‘Annotations’ in the upper panel of my projects page, which will open up the following page:-
Fill in the name, description and genome in the right hand panel (1), then press next (2). A dialog will open (3), which allows you to upload a bed like file (see Uploading a File). In the left hand panel (4) is a list of all the annotation sets that you own. You can make these public (5), share (6) or delete them (7).
Another way to create an annotation set is within a project, > create annotation set. N.B. The set will be created from the currently selected (filtered) locations, shown in the top right hand corner. Again, the dialog allows you to fill in the name and description of the set and you can also check any columns that you want included in the set.
Intersections¶
> ‘Annotation Intersection’ will bring up a table with all the Annotation Sets and projects that you are able to intersect with. You can select single or multiple (ctrl or shift) sets/projects. If you select a single project/set then a dialog will ask you whether you want just a TRUE/FALSE column or whether you want extra columns, with information from the intersecting set.
Once an intersection has run, columns will be added to the table: either a single TRUE/FALSE column for each intersecting set or the data columns you selected in the previous step. In addition, pie charts showing this TRUE/FALSE distribution will be added, along with a track for each intersecting set.
Find TSS Distances¶
will bring up a simple dialog, with the only choice being whether you want to include Gene Ontology annotations. These are taken from the go-basic.obo file () and collapsed, such that there is only one term (the most frequent) at each hierarchical level. If you include GO annotations, you can choose up to which hierarchical level you want.
Once the job has run four columns will be added to the table
- TSS distance - the distance to the nearest TSS, + being upstream and - downstream
- Gene ID - the Genbank gene id of the nearest gene
- Gene Name - the common name of the nearest gene
- Overlaps TSS - either TRUE or FALSE depending on whether the region overlaps the TSS
Cluster on Columns¶
In order to get a better handle on the data it may be useful to collapse some of the fields into two (or more) dimensions, such that they can be visually clustered in a 2D scatter plot. To do this click on the cluster icon () above the table, which brings up the dialog below.
Type a name in the text box (1) and select the dimension reduction methods required (2). The number of dimensions can also be increased (default 2) using the dropdown (3). All numerical fields in the analysis are displayed (4), check all the ones to be used in the analysis and then press submit (5).
The outputs are columns for each dimension for each method, named ‘<method><number>_<analysis_name>’ e.g. tSNE1_anal1, tSNE2_anal1 etc. For each method, a scatterplot of the first two dimensions will also be added. You can change these scatter plots, e.g. color by a specific field (see Scatter Plot), or use the dimensions to add another graph (see Adding a Chart).
Calculate Peak Stats¶
If you have bigWig files and you want to find out the peak area/height in each of the genomic locations in your project, use the icon which brings up the dialog below.
Paste the url (or a list of urls) corresponding to the bigWig files you want to analyse in the text box (1) and press Add (2). If the bigWig files can be located and they are the correct format, they will be added to the "bigWig Tracks to Process" section (3) in the dialog. The name is taken from the file name, but this can be changed. When you have specified all the bigWig files required, press submit. The area, max height, width and density (area/width) in each location will be calculated. You do not have to stay on the page whilst the stats are being calculated.
When complete, columns will be added to the table with the relevant information and each bigWig track will be added to the Browser. The bigWig tracks are added with default settings, so you may need to change them to suit your needs (see Altering Track Appearance). Note it just calculates the amount of signal in each region and reports the width of the region; it does not try to call peaks and work out the width of the peak.
Creating Subsets¶
Clicking on brings up a dialog, which allows you to create a subset of the currently selected locations
You can choose to create the subset either from the currently filtered locations (1) or from a random subset (2) with a specified number (3). After filling in a name (4) and description (5), press ‘Create’ and the subset will be created. Once this has happened, you will get a link to the subset. You can create a subset of any project you have viewing rights to, including public projects, and you will be the owner of the subset. All graphs/tracks/columns are copied, although the graphs may look different as there will be fewer locations in the new project.
Exporting Data¶
Click on the download icon to download the currently filtered locations. The data is just downloaded as a text file, although you get a choice to download in either tsv or csv format.
Only the currently displayed columns will be downloaded, so expand any column groups by clicking the plus icon if you want these in your file.
If you have images in your project, you can export the data to Zegami. Click on the Zegami icon and a dialog will appear, where you have to fill in your Zegami username, project id and password. Once the project is created, you will be emailed a link to the project.
Project History¶
Every action such as adding a graph/track/column is recorded and can be viewed by opening the history dialog ()
Clicking on the eye icon (1) will toggle information about the action. The second icon (2) shows the status of the action: a tick means it is complete, a spinning circle shows that it is still processing and an exclamation mark shows there was an error whilst trying to perform the action. If you have edit permissions you are able to undo the action (N.B. there is no redo action). This will remove the action from the dialog and remove any tracks, columns or graphs that the action generated. If columns are removed, then any graphs which use these columns will also be removed, even if they were not added by the action.
Permissions¶
There are two types of permission for a project, view and edit.
If you have view permission for a project (anyone has view permission for a public project), you can open the project and add tracks and charts, as well as edit existing charts and tracks. However, you cannot save any updates or run any jobs such as finding TSS’s or creating images. If you want to do this, you will have to copy the project (you need to be logged in) - click the disk icon () and then select ‘save as’. This will clone the project in your name and then you can make any changes you wish.
If you have editing rights to a project you can make any changes you want, run any jobs and save the layout. You automatically have editing rights to a project if you own it or if you are an administrator. You can also be assigned editing rights to a project (see below).
The icon on the menu allows you to share the project with another user and assign them view or editing rights
Making a Project Public¶
Click on the share icon () above the table and select ‘share project’; you will be prompted to confirm whether you really want the project to become public. If you click OK then anyone (including non-users) will be able to view the project. You can share the project by sharing the link in the browser’s address bar.
Submitting an Issue¶
An issue or question can be asked within MLV (if you are logged in) using the help link in the top navigation bar > ‘Send Question’. You can also submit an issue to the GitHub page.
Frequently Asked Questions¶
Can MLV be viewed on a mobile/small screen device?¶
No. The whole idea is to see how each component, i.e. the graphs, tracks and images, changes as you filter the data, which would not be possible if only one component was displayed at once. All panels and individual tracks/graphs/table columns/images can be resized, to get the exact layout that the user requires, rather than relying on adaptive screen size techniques which limit viewing to a single component/panel on small screen sizes.
Can I upload a bigWig file?¶
Not initially. The only files that can be initially uploaded are bed or bed-like files with genomic locations. However, bigWig files can be added to the browser and uploaded for processing later (see Calculate Peak Stats). Another application, Lanceotron, does take bigWig files and identifies peaks based on machine learning.
Event news feed centralizes publications from the current event. You can also put a message that will be intended to all members in the same way as you would on your news feed.
Learn more on News Feed
Refer to the News Feed / Activity Stream documentation for more information
// Get event Event event = session.getEvent().blockingGet(EVENT_ID); // List News Feed from Event Iterable<Feed> newsFeedIterable = event.blockingListNewsFeed(PAGE, SIZE); // Post on the Event News Feed FeedPost feedPost = new FeedPost.Builder()...build(); Feed postedFeed = event.blockingCreateFeedPost(feedPost);
// Get event let event = try session.event.blockingGet(EVENT_ID)! // List News Feed from Event let newsFeedIterable = try event.blockingListNewsFeed(page: PAGE, size: SIZE) // Post on the Event News Feed let feedPost = try FeedPost.Builder()...build() let postedFeed = try event.blockingCreateFeedPost(feedPost)
let event = await session.event.get(EVENT_ID); let feedItems = await event.listNewsFeed(); let post = new FeedPost().setMessage("..."); let feedItem = await event.createFeedPost(post);
session.event.get(EVENT_ID).then((event)=> { event.listNewsFeed().then((feedItems)=> { }); let post = new FeedPost().setMessage("..."); event.createFeedPost(post).then((feedItem)=> { }); }); | https://docs.mysocialapp.io/reference/event-news-feed | 2021-10-16T08:43:18 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.mysocialapp.io |
Tables
Table CreationTable Creation
Once you have created a new NocoDB project you can open it, In the browser, the URL would be like
example.com/dashboard/#/nc/project_id.
Now you can start creating new tables, so let's begin the table creation by simply clicking one of the following options.
On click, it will popup a table create a modal popup, in which you can enter the table name alias and table name. Enable/disable default columns and finally click the
Submit button.
You can't disable the
idcolumn since we need a primary column for the table.
After the successful submission, the table will create and open as a new tab.
Column CreationColumn Creation
Adding a column is simple, you have to click the
+ icon on the right corner of the table.
After the click, it will show a menu and you can enter the column name and choose the column type (Abstract type) from the column type. And finally, you can click the save button to create the new column.
For more about Abstract type click here.
Finally, we have our new column as part of our table.
Row creationRow creation
For adding new values to the table we need new rows, new rows can be added in two methods.
Using FormUsing Form
- Click the
+icon in the toolbar of the table tab.
- Now it will open a modal Form to enter the values, provide the values and press the save button.
- After saving it will be there on your table.
Using Table RowUsing Table Row
Click the bottom row of the table which contains
+icon at the beginning.
Now it will add a new row in the table and you can start editing by any of the following methods
- Double click
- Click and start typing (this way it will clear the previous content)
- Click and press enter to start editing
And it will automatically save on blur event or if inactive.
Table DeletionTable Deletion
The table can be deleted using the
delete icon present in the toolbar within the table tab.
Column DeletionColumn Deletion
Column deletion can be done by using the
delete option from the column header menu.
Row DeletionRow Deletion
Right-click on anywhere in the row and then from the context menu select
Delete Row option. Bulk delete is also possible by selecting multiple rows by using the checkbox in first column and then
Delete Selected Rows options from the context menu. | https://docs.nocodb.com/setup-and-usages/tables/ | 2021-10-16T08:12:13 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['https://user-images.githubusercontent.com/61551451/126771744-063f22da-6def-43fe-b9ef-1744d104db9d.png',
'table_create'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126772859-5a301c45-d830-4df2-a05a-43b15dd77728.png',
'table_create_modal'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126773614-c945f654-cba8-4dd6-bd5e-d74890543d11.png',
'image'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126773798-4470d632-69e0-4f5f-803b-e3597715fe22.png',
'Pasted_Image_23_07_21__4_39_PM'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126774157-ae9af236-e1ad-4a54-adb7-1b96775cae57.png',
'image'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126774276-e947f510-2fe1-4595-afc1-a31d2c35a69a.png',
'Pasted_Image_23_07_21__4_43_PM'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126787235-6751cadf-3e8a-446d-9db8-0d6ec330b243.png',
'Pasted_Image_23_07_21__6_45_PM'], dtype=object)
array(['https://user-images.githubusercontent.com/61551451/126787679-562aaa22-14b3-4ff8-8057-b8219e057110.png',
'Pasted_Image_23_07_21__6_49_PM'], dtype=object) ] | docs.nocodb.com |
Analysis Parameters
Project analysis settings can be configured in multiple places. Here is the hierarchy:
- Global properties, defined in the UI, apply to all projects (From the top bar, go to Administration > Configuration > General Settings)
- Project properties, defined in the UI, override global property values (At a project level, go to Project Settings > General Settings)
- Project analysis parameters, defined in a project analysis configuration file or scanner configuration file, override the ones defined in the UI
- Analysis / Command line parameters, defined when launching an analysis (with
-Don the command line), override project analysis parameters
Note that only parameters set through the UI are stored in the database.
For example, if you override the
sonar.exclusions parameter via command line for a specific project, it will not be stored in the database. test coverage and execution, see Test Coverage & Execution.
For language-specific parameters related to external issue reports, see External Issues.
Analysis parameters are case-sensitive.
Mandatory Parameters
Server
Project Configuration
Optional Parameters
Project Identity
Authentication
By default, user authentication is required to prevent anonymous users from browsing and analyzing projects on your instance, and you need to pass these parameters when running analyses. Authentication is enforced in the global Security(/instance-administration/security/) settings.
When authentication is required or the "Anyone" pseudo-group does not have permission to perform analyses, you'll need to supply the credentials of a user with Execute Analysis permissions for the analysis to run under.
Web Services
Project Configuration
Duplications
Analysis Logging
Quality Gate
Deprecated
These parameters are listed for completeness, but are deprecated and should not be used in new analyses. | https://docs.sonarqube.org/latest/analysis/analysis-parameters/ | 2021-10-16T09:16:00 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.sonarqube.org |
JavaScript / TypeScript
Prerequisites
In order to analyze JavaScript or TypeScript code, you need to have supported version of Node.js installed on the machine running the scan. Supported versions are current LTS versions (v12, v14) and the latest version - v16. Odd (non LTS) versions might work, but are not actively tested. We recommend using the latest available LTS version (v14 as of today) for optimal stability and performance. v10 is still supported, but it already reached end-of-life and is deprecated.
If
node is not available in the PATH, you can use property
sonar.nodejs.executable to set an absolute path to
Node.js executable.
Language-Specific Properties
Discover and update the JavaScript / TypeScript properties in: Administration > General Settings > JavaScript / TypeScript.
Supported Frameworks and Versions
- ECMAScript 3, 5, 2015, 2016, 2017, 2018, 2019, and 2020
- TypeScript 4.3
- React JSX
- Vue.js
- Flow
Troubleshooting
Slow or unresponsive analysis
On a big project, more memory may need to be allocated to analyze the project. This would be manifested by analysis getting stuck and the following stacktrace might appear in the logs
ERROR: Failed to get response while analyzing [file].ts java.io.InterruptedIOException: timeout
You can use
sonar.javascript.node.maxspace property to allow the analysis to use more memory. Set this property to
4096 or
8192 for big projects. This property should be set in
sonar-project.properties file or on command line for scanner (with
-Dsonar.javascript.node.maxspace=4096).
Default exclusions
By default, analysis will exclude files from dependencies in usual directories, such as
node_modules,
bower_components,
dist,
vendor, and
external. It will also ignore
.d.ts files. If for some reason analysis of files in these directories
is desired, it can be configured by setting
sonar.javascript.exclusions property to empty value, i.e.
sonar.javascript.exclusions="", or to comma separated list of paths to be excluded. This property will exclude the
files also for other languages, similar to
sonar.exclusions property, however
sonar.exclusions property should be
preferred to configure general exclusions for the project.
Custom rules
Custom rules are not supported by the analyzer. As an alternative we suggest you to have a look at ESLint, it provides custom rules that you can then import thanks to the External Issues feature.
Related Pages
- Test Coverage & Execution (LCOV format)
- Importing External Issues (ESLint, TSLint)
- SonarJS Plugin for ESLint
- Adding Coding Rules
Issue Tracker
Check the issue tracker for this language. | https://docs.sonarqube.org/latest/analysis/languages/javascript/ | 2021-10-16T08:29:44 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.sonarqube.org |
Shapelets¶
Shapelets are defined in [1] as “subsequences that are in some sense maximally representative of a class”. Informally, if we assume a binary classification setting, a shapelet is discriminant if it is present in most series of one class and absent from series of the other class. To assess the level of presence, one uses shapelet matches:
where \(L\) is the length (number of timestamps) of shapelet \(\mathbf{s}\) and \(\mathbf{x}_{t\rightarrow t+L}\) is the subsequence extracted from time series \(\mathbf{x}\) that starts at time index \(t\) and stops at \(t+L\). If the above-defined distance is small enough, then shapelet \(\textbf{s}\) is supposed to be present in time series \(\mathbf{x}\).
The distance from a time series to a shapelet is done by sliding the shorter shapelet over the longer time series and calculating the point-wise distances. The minimal distance found is returned.
In a classification setting, the goal is then to find the most discriminant shapelets given some labeled time series data. Shapelets can be mined from the training set [1] or learned using gradient-descent.
Learning Time-series Shapelets¶
tslearn provides an implementation of “Learning Time-series Shapelets”,
introduced in [2], that is an instance of the latter category.
In Learning Shapelets,
shapelets are learned such
that time series represented in their shapelet-transform space (i.e. their
distances to each of the shapelets) are linearly separable.
A shapelet-transform representation of a time series \(\mathbf{x}\) given
a set of shapelets \(\{\mathbf{s}_i\}_{i \leq k}\) is the feature vector:
\([d(\mathbf{x}, \mathbf{s}_1), \cdots, d(\mathbf{x}, \mathbf{s}_k)]\).
This is illustrated below with a two-dimensional example.
In
tslearn, in order to learn shapelets and transform timeseries to
their corresponding shapelet-transform space, the following code can be used:
from tslearn.shapelets import LearningShapelets model = LearningShapelets(n_shapelets_per_size={3: 2}) model.fit(X_train, y_train) train_distances = model.transform(X_train) test_distances = model.transform(X_test) shapelets = model.shapelets_as_time_series_
A
tslearn.shapelets.LearningShapelets model has several
hyper-parameters, such as the maximum number of iterations and the batch size.
One important hyper-parameters is the
n_shapelets_per_size
which is a dictionary where the keys correspond to the desired lengths of the
shapelets and the values to the desired number of shapelets per length. When
set to
None, this dictionary will be determined by a
heuristic.
After creating the model, we can
fit the optimal shapelets
using our training data. After a fitting phase, the distances can be calculated
using the
transform function. Moreover, you can easily access the
learned shapelets by using the
shapelets_as_time_series_ attribute.
It is important to note that due to the fact that a technique based on
gradient-descent is used to learn the shapelets, our model can be prone
to numerical issues (e.g. exploding and vanishing gradients). For that
reason, it is important to normalize your data. This can be done before
passing the data to the
fit
and
transform
methods, by using our
tslearn.preprocessing
module but this can be done internally by the algorithm itself by setting the
scale
parameter. | https://tslearn.readthedocs.io/en/stable/user_guide/shapelets.html | 2021-10-16T08:29:42 | CC-MAIN-2021-43 | 1634323584554.98 | [] | tslearn.readthedocs.io |
.
- In the System Configuration section, click Datastore and Customizations.
- In the Extended Datastore Settings section, click Configure Extended Datastore.
- Click the name of the mount that contains the datastore you want to archive.
- In the row of that datastore, click Disconnect Extended Datastore.
- Type YES to confirm and then click OK.
The datastore is disconnected from the appliance and marked for read-only access. Wait at least ten minutes before connecting any other Discover appliances to the archive.
Connect your Discover appliances to the archived datastore
- In the System Configuration, click Datastore and Customizations.
- In the Extended Datastore Settings section, click Configure Extended Datastore.
- Click the name of the mount that contains the archived datastore.
- In the Datastore Directory field, type the path of the archived datastore directory.
- Click Archive (Read Only).
- Click Configure.
Your extended database is now a read-only archive that can be accessed by multiple Discover appliances.
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/7.9/archive_datastore/ | 2021-10-16T07:53:58 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.extrahop.com |
This option is extremely helpful to make sure no text messages are ever overlooked or slip through the cracks. In addition to routing the incoming message through the smrtPhone system, you can send a notification to another phone number.
For example, you can have a notification message to your personal mobile phone.
SMS Notification is an Option within the SMS Applets
When you set up your SMS Flow and choose either SMS Inbox or Save & Reply, you have the option to be notified on an alternative number that you provide.
Drag and drop the desired applet to start setting up your SMS Flow.
After this step, you can choose the number(s) on which a notification will be received, in order to keep track of your smrtPhone texting.
You can add multiple numbers to get a notification on, but be sure to add them on separate lines.
To enable SMS Notification, simply switch the button from No to YES, after you dragged and dropped the applet into the flow.
Even if the Inbox is set for a user or a group, the notification can be sent on a personal number in a parallel fashion, to make sure you know it is there as soon as possible. Your smrtPhone Inbox not being affected.
❗ Keep in mind that the notification will be only about the fact that you have a text message in your system, you will not be able to see the content of the text received, nor to respond to it from the personal number(s) provided.
To do so, you just get into your smrtPhone Inbox, where you can both see the content and also reply to it.
Each notification message counts as an outbound text message for billing purposes. | https://docs.smrtphone.io/en/articles/5538591-amplifying-notification-of-received-text-to-other-numbers-sms-notification-text-flow-applet-options | 2021-10-16T08:42:02 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['https://downloads.intercomcdn.com/i/o/390995182/6f3c64be83847af73c1ecca4/Webp.net-gifmaker+%282%29.gif',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/391004013/bfba15964acff3d9f16de2c2/Webp.net-gifmaker+%284%29.gif',
None], dtype=object) ] | docs.smrtphone.io |
.
Available on these boards
- ATMegaZero ESP32-S2
- Adafruit Feather M4 CAN
- Adafruit Feather STM32F405 Express
-
- PyboardV1_1
- S2Mini
- SAM E54 Xplained Pro
- STM32F4_DISCO
- Saola 1 w/Wroom
- Saola 1 w/Wrover
- SparkFun STM32 MicroMod Processor
- Targett Module Clip w/Wroom
- Targett Module Clip w/Wrover
- TinyS2
- microS2
- nanoESP32-S2 w/Wrover
- nanoESP32-S2 w/Wroom
- class
canio.
BusState¶
The state of the CAN bus
ERROR_WARNING:object¶
The bus is in the normal (active) state, but a moderate number of errors have occurred recently.
Note
Not all implementations may use
ERROR_WARNING. Do not rely on seeing
ERROR_WARNINGbefore.
- Parameters
rx (Pin) – the pin to receive with
tx (Pin) – the pin to transmit with
baudrate (int) – The bit rate of the bus in Hz. All devices on the bus must agree on this value.
loopback (bool) – When True the
rxpin’s value is ignored, and the device receives the packets it sends.
silent (bool) – When True the
txpin is always driven to the high logic level. This mode can be used to “sniff” a CAN bus without interfering.
auto_restart (bool) – If True, will restart communications after entering bus-off state) → Listener¶.
Platform specific notes:
SAM E5x supports two Listeners. Filter blocks are shared between the two listeners. There are 4 standard filter blocks and 4 extended filter blocks. Each block can either match 2 single addresses or a mask of addresses. The number of filter blocks can be increased, up to a hardware maximum, by rebuilding CircuitPython, but this decreases the CircuitPython free memory even if canio is not used.
STM32F405 supports two Listeners. Filter blocks are shared between the two listeners. There are 14 filter blocks. Each block can match 2 standard addresses with mask or 1 extended address with mask.
ESP32S2 supports one Listener. There is a single filter block, which can either match a standard address with mask or an extended address with mask.
send(self, message: Union[RemoteTransmissionRequest, Message]) → None¶
Send a message on the bus with the given data and id. If the message could not be sent due to a full fifo or a bus error condition, RuntimeError is raised.
_.) → Optional[Union[RemoteTransmissionRequest, Message]]¶
Reads a message, after waiting up to
self.timeoutseconds
If no message is received in time,
Noneis returned. Otherwise, a
Messageor
RemoteTransmissionRequestis returned.
in_waiting(self) → int¶
Returns the number of messages (including remote transmission requests) waiting
__iter__(self) → Listener¶
Returns self
This method exists so that
Listenercan be used as an iterable
__next__(self) → Union[RemoteTransmissionRequest, Message]¶
Reads a message, after waiting up to self.timeout seconds
If no message is received in time, raises StopIteration. Otherwise, a Message or is returned.
This method enables the
Listenerto be used as an iterable, for instance in a for-loop.
_..
- Parameters
-
In CAN, messages can have a length from 0 to 8 bytes.
- class
canio.
RemoteTransmissionRequest(id: int, length: int, *, extended: bool = False)¶
Construct a RemoteTransmissionRequest to send on a CAN bus.
- Parameters
-
In CAN, messages can have a length from 0 to 8 bytes. | https://circuitpython.readthedocs.io/en/7.0.x/shared-bindings/canio/index.html | 2021-10-16T08:41:57 | CC-MAIN-2021-43 | 1634323584554.98 | [] | circuitpython.readthedocs.io |
Let’s assume that you have a Windows-based server, named SERVER.
You created a Cargador catalogue with local path: x:\data\cargador.catalog. You shared this folder in the read-only mode with network path \SERVER\cargador.catalog.
You should write the following into the main.conf file (located in the same folder that contains Cargador executables):
<cargador>
<root Value="x:\data\cargador.catalog" />
<net_root Value="//SERVER/cargador.catalog" />
</cargador>
So if some file is stored in the catalogue with local path x:\data\cargador.catalog\someProject\file.foo, it means that its location for network users will be resolved by Cargador as //SERVER/cargador.catalog/someProject/file.foo (it doesn’t matter for Windows if there are forward slashes or backslashes in the path).
If you have workstations running under Linux or MacOS, this path will not work for them. To make the files accessible for Linux or MacOS users, you should do the procedure called directory mapping (see chapter “Directory Mapping”).
Note
In this example you may skip the directory mapping for Linux-based workstations. Instead of mapping, create local folders /SERVER/cargador.catalog on the workstations and mount the catalogue network path there. In this case Linux will locate the network files by the direct path //SERVER/cargador.catalog/someProject/file.foo.
Warning
Pay attention to the letter case when dealing with case sensitive operating systems. | https://docs.cerebrohq.com/en/articles/3347775-file-storage-sample-configuration | 2021-10-16T08:54:17 | CC-MAIN-2021-43 | 1634323584554.98 | [] | docs.cerebrohq.com |
Snippets; via Asset Manager, from a campaign or a scenario node directly, or get a headstart with one of our Predefined Templates.
In the Asset Manager, you can click on the New Snippet button at the top of the screen. This will open up the code editor. In a campaign or scenario node, click a (+) button at the bottom of the editor or node modal, selecting Snippet in the asset selector and choosing the “New Snippet” option. As a starting point, you can also choose from different Predefined Templates ranging from simple personalization, price formatting to Jinja syntax examples. Learn more about our Predefined Templates.
You can use the Parameters tab to add new parameters (i.e. variable values that you can specify in each individual campaign).
When adding a new parameter, you will have to define its reference, type, and default value, and choose whether the parameter is required or add a tooltip.
- Parameter reference is the reference to the parameter that is used in the code. The parameters display name is derived from it.
- Parameter type is the data type of the parameter. Available data types include string, text, number, boolean, list, object, datetime, date, time, color, enum, image.
- Default value is the default parameter value that will be used if no specific parameter value is provided upon snippet usage in a campaign.
- Required defines whether the parameter must be filled in when using the snippet. Parameters marked as required must be filled in unless the default value is also used.
- Tooltip will be displayed by the parameter name when inputting the parameters values in campaign.
Alternatively, you can define the parameters directly in the code and add them using the button “Load parameters from the code editor”.
Parameters can be used within the code as standard Jinja variables using the keyword
params and the defined reference name e.g:
{{ params.parameterName }} or
{{ params["parameter name"] }}
{% set otherVariable = params.parameterName %}
{% if param.paramName = "something" %}
…
{% endif %}
Multi-word parameter references can only be used with the
params[""] notation.
Using Jinja as a parameter
You can also use simple jinja variables/expressions as parameters of snippets. However please note that parameters can inputs in the asset picker, you need to use the
{{ }} notation so that Exponea can recognize).
The language variant of the snippet will be selected based on the selected main campaign template language version.
Insert options
After selecting and configuring the snippet, there are two ways of inserting the snippet into your campaign.
(1) Copy
- Max snippet template size per translation is 10kB
- Max 30 distinct translations per snippet
- Max 20 of distinct snippets being used in a single campaign template/block
- Max 20 distinct parameters can be created per snippet
- Max length 100 characters of a parameter name or category
- Max length 1000 characters for parameter tool-tip
Updated 18 days ago | https://docs.exponea.com/docs/snippets | 2021-10-16T09:10:58 | CC-MAIN-2021-43 | 1634323584554.98 | [array(['https://files.readme.io/50e91b9-snippets_parameters.png',
'snippets_parameters.png'], dtype=object)
array(['https://files.readme.io/50e91b9-snippets_parameters.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/93e3e23-docs-asset-params.png',
'docs-asset-params.png'], dtype=object)
array(['https://files.readme.io/93e3e23-docs-asset-params.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/d19f295-adding_snippet.png',
'adding_snippet.png'], dtype=object)
array(['https://files.readme.io/d19f295-adding_snippet.png',
'Click to close...'], dtype=object) ] | docs.exponea.com |
Joining a channel
Channels are a way for you to interact with others along a team or common interest. Here are instructions on how to join channels in your organization.
Finding and joining channels
To see a list of all channels, click View all flows in the sidebar to open the Explorer. Use the search box, or search by type. Click on any channel you want to join.
If it is an open channel, you will be able to see the posts and you can click Join Channel to become a member. If it is a closed channel, you will not be able to see any posts. You can click Request Access to join the channel. After the Channel Admin approves your request, you can make a comment, and react to posts.
Leaving a channel
To leave a channel, click Joined, and then click Leave channel.
Channel Admins who want to leave a channel must first make someone else an admin from the Members screen. Then, click the More Options button (
) next to their name, and click Remove from channel.
Changing channel notifications
Notifications let you know the latest activity in the channel. By default, you will be notified about all the activities that take place in the channel.
To change the channel notification, click Notifications and select your preference.
- Everything - You will be notified about all the activities that happen in the channel.
- Relevant - Kissflow will learn which notifications you click and will tailor notifications to your usage pattern.
- Action items - You will be notified only when something requires your action.
- Nothing - You will not receive any notifications. | https://docs.kissflow.com/article/y3f3itfqp6-joining-channels | 2021-10-16T12:21:22 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://files.helpdocs.io/vy1bn54mxh/articles/y3f3itfqp6/1563861203148/view-all-channel.gif',
None], dtype=object)
array(['https://files.helpdocs.io/vy1bn54mxh/articles/y3f3itfqp6/1562570577885/leave-channel.png',
None], dtype=object)
array(['https://files.helpdocs.io/vy1bn54mxh/articles/y3f3itfqp6/1563860465649/remove-member.gif',
None], dtype=object)
array(['https://files.helpdocs.io/vy1bn54mxh/articles/y3f3itfqp6/1586511498573/my-notifications.png',
None], dtype=object) ] | docs.kissflow.com |
FAQs¶
- What is Anaconda.org?
- What kind of packages does Anaconda.org support?
- Who can find and install my packages?
- What is Anaconda, Inc.?
- What are Anaconda.org’s Terms and Conditions?
- How much does Anaconda.org cost?
- How do I get started with Anaconda.org?
- What kind of account do I have?
- What is included in the free version of Anaconda.org?
- What is an organization account, and how is it different from an individual account?
What is Anaconda.org?¶
Anaconda.org is a package management service by Anaconda. For more information, see Anaconda.org.
What kind of packages does Anaconda.org support?¶
Anaconda.org supports any type of package. Today, it is primarily used for conda and PyPI packages, as well as notebooks and environments.
Who can find and install my packages?¶
If you have a free account, all of your packages are public. After you upload them to Anaconda.org, anyone can search for and download them.
What is Anaconda, Inc..
What are Anaconda.org’s Terms and Conditions?¶
Our Terms and Conditions are available on our website. For any additional questions, contact us by email.
How much does Anaconda.org cost?¶
Anaconda.org is free for downloading and uploading public packages.
How do I get started with Anaconda.org?¶
You can search, download and install hundreds of public packages without having an account. If you want to upload packages, you need to sign up for a Anaconda.org account. For more information, see sign up for a free Anaconda.org account.
What kind of account do I have?¶
By default your account is a personal, free account. All packages you upload to Anaconda.org are public, and you are the only person with administrative access to your account.
What is included in the free version of Anaconda.org?¶
The free plan allows you to search for, create and host public packages, and provides up to 3 GB storage space.
What is an organization account, and how is it different from an individual account?¶
An organization account allows multiple individual users to administer packages and have more control of package access by other users. An individual account is for use by one person. | https://docs.anaconda.org/anacondaorg/faq/ | 2021-10-16T12:41:32 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.anaconda.org |
Publish to Native Platforms
Click Project -> Build in the main menu of the editor to open the Build panel.
Cocos Creator supports four native platforms, which include Android, iOS, Mac and Windows. The options to release games on iOS, Mac and Windows will only appear on those operating systems. This means it isn't possible to publish, for example, a game to iOS from a Windows computer.
Environment Configuration
To publish to the native platforms you need to install and configure some necessary development environments. Please refer to the Setup Native Development Environment for details.
Build Options
For the general build options for all platforms, see General Build Options for details.
General build options for native platforms
Due to the adjustments made to the build mechanism, the processing of different platforms are injected into the Build panel as plugins.
When you select the native platform you want to build in the Platform option of the Build panel, you will see that there is a native expand option in addition to the specific native platform expand option (e.g., android, ios). The build options in native are the same for all native platforms.
Resource Server Address
When the package is too large (in size), the resource can be uploaded to a resource server and downloaded via a network request. This option is used to fill in the address of the remote server where the resource is stored. The developer needs to manually upload the
remote folder in the release package directory to the filled-in resource server address after the build. For more details, please refer to the Uploading resources to a remote server documentation.
Polyfills
Polyfills is a new feature option supported by the script system. If this option is checked at build time, the resulting release package will have the corresponding polyfills in it, which means it will increase the size of the package. Developers can choose polyfills on demand, but only
Async Functions are currently available, and more will be opened later.
Make after build immediately
If this option is checked, the Make step will be executed automatically after the build is completed, without manual operation.
Job System
This option is currently used by the internal function module of the engine, users do not need to pay attention to this option for the time being, and selecting any of the options in the drop-down box will not have any impact on the project.
However, there are version restrictions for selecting TBB or TaskFlow on the native platform, please see section Version Support below for details.
Encrypt JS
This option is used to encrypt the published script. After build, the
JSC file is generated in the
assets/ directory, which is encrypted. And the
JS file will be backed up in the
script-backup directory for debugging, and will not enter the APP when packaged.
JS Encryption Key: This secret key will be used to encrypt
JS files. The project will generate the key randomly when created.
Zip Compress: If this option is checked, you can reduce the size of your scripts.
Native Engine
This option is used to show whether the built-in engine or a custom engine is currently being used. Click the Edit button behind it to go to the Preferences -> Engine Manager panel for settings.
Build Options for the Android Platform
The build options for the Android platform are as follows:
Render BackEnd
Currently, VULKAN, GLES3 and GLES2 are supported, and GLES3 is checked by default. If more than one is checked at the same time, the rendering backend will be selected based on the actual support of the device at runtime.
Game Package Name
The Game underscore or a number.
Target API Level
Set up the Target API Level required for compiling the Android platform. Click the Set Android SDK button next to it to quickly jump to the configuration page. Refer to the Setup Native Development Environment documentation for specific configuration rules.
APP ABI
Set up the CPU types that Android needs to support, including armeabi-v7a, arm64-v8a, x86 and x86_64. You can choose one or more options.
Notes:
- When you select an ABI to build and then build another ABI without
Clean, both ABI's
sowill be packaged into the APK, which is the default behavior of Android Studio. If you import a project with Android Studio, after selecting an ABI to build, run Build -> Clean Project, then build another ABI, only the latter ABI will be packaged into the APK.
-
After the project is imported with Android Studio, it is an independent existence and does not depend on the Build panel. If you need to modify the ABI, you can directly modify the
PROP_APP_ABIproperty in
gradle.propertiesfile as shown below:
Use Debug Keystore
Android requires that all APKs be digitally signed with a certificate before they can be installed. A default keystore is provided, check the Use Debug Keystore to use the
default keystore. If you need to customize the keystore, you can remove the Use Debug Keystore checkbox. Please refer to the official Android Documentation for details.
Android requires that all APKs must be digitally signed with a certificate before they can be installed. Cocos Creator provides a default keystore by checking Use Debug Keystore to use it. If you need to customize the keystore, you can remove the Use Debug Keystore checkbox, refer to the Official Documentation for details.
Screen Orientation
The screen orientation currently includes Portrait, Landscape Left and Landscape Right.
- Portrait: the screen is placed vertically with the Home button on the bottom.
- Landscape Left: the screen is placed horizontally, with the Home button on the left side of the screen.
- Landscape Right: the screen is placed horizontally, with the Home button on the right side of the screen.
Google Play Instant
If this option is enabled, the game can be packaged and published to Google Play Instant.
Google Play Instant relies on Google Play, and it is not a new distribution channel, but closer to a game micro-end solution. It can realize the game to be played without installing, which is useful for game's trial play, sharing and conversion.
The following notes are required when using:
- The Android Studio should be v4.0 and above.
- The Android Phone should be v6.0 and above. Devices with Android SDK version between 6.0 and 7.0 need to install Google Service Framework, while those with SDK version 8.0 or higher do not need it and can install it directly.
-
If you compile for the first time, you need to open the built project with Android Studio to download Google Play Instant Development SDK (Windows) or Instant Apps Development SDK (Mac) support package. If the download fails, it is recommended to set up an HTTP proxy for Android Studio.
App Bundle (Google Play)
If this option is enabled, the game can be packaged into App Bundle format for uploading to Google Play store. Please refer to Official Documentation for details.
Build Options for the Windows Platform
The build options for the Windows platform include Render BackEnd and Target Platform.
Render BackEnd
Currently, VULKAN, GLES3 and GLES2 are supported, and GLES3 is checked by default. If more than one is checked at the same time, the rendering backend will be selected based on the actual support of the device at runtime.
Target Platform
Set the compilation architecture, both x64 and win32 are currently supported.
If x64 is selected, only x64 architecture is supported to run on.
If win32 is selected, both architectures are supported to run on.
Build Options for the iOS Platform
The build options for the iOS platform include Bundle Identifier, Orientation, Target iOS Version, Render BackEnd and Developer Team. The setting of Orientation is the same as the Android platform.
Bundle Identifier
The package name, usually arranged in the reverse order of the product's website URL, such as:
com.mycompany.myproduct.
Note: only numbers (0~9), letters (A~Z, a~z), hyphens (-) and periods (.) can be included in the package name. Besides, the last section of package name should start with a letter, but not an underscore or a number. Please refer to the A unique identifier for a bundle documentation for details.
Target iOS Version
The option specifies the version of the iOS software when publishing to the iOS platform and defaults to 12.0. The version number is recorded in the
TARGET_IOS_VERSION field of the
proj/cfg.cmake file in the release package directory after the build.
Render BackEnd
Currently, only METAL is supported for the Render BackEnd. See the official documentation Metal for details.
Developer Team
This option is used to configure the Development Team signature information when building and compiling iOS projects. If the signature information is manually configured in Xcode when compiling with Xcode, the configuration in Xcode takes precedence. When a rebuild is performed, the value of this option will override the value configured in Xcode.
Build Options for the Mac Platform
The build options for the Mac platform include Bundle Identifier, Target macOS Version, Support M1 and Render BackEnd.
Bundle Identifier
Package name, usage is consistent with the iOS platform.
Target macOS Version
This option specifies the macOS system version when publishing to the Mac platform and defaults to 10.14. The version number is recorded in the
TARGET_OSX_VERSION field of the
proj/cfg.cmake file in the release package directory after the build.
Support M1
This option is used to better flag support issues for some known engine modules on Apple M1 (Silicon) architecture devices.
Render BackEnd
This option currently uses the METAL rendering backend by default, see the official documentation Metal for details.
Version Support
The minimum version of each functional module is supported in the native platform as follows:
Creator 3.0 supports C++14. v3.1 is upgraded to C++17 since v3.1 supports the TaskFlow Job System, which relies on C++17.
However, since C++17 is only supported in iOS 12+, we dropped it back to C++14 in v3.3.2 in order to support iOS 10.0. Note that in v3.3.2, if TaskFlow Job System in used, C++17 will be automatically enabled to support compilation.
Correspondingly, the minimum version support for each version of Creator on native platforms is as follows:
The highest version is supported as follows:
- Android: API Level 31(12.x)
- iOS: 15.x
Build a Native Project
After the build options are set, you can begin the build. Click the Build button in the bottom right corner of the Build panel to start the build process.
When compiling scripts and zipping resources, a blue progress bar will display on the Build Task window. When the build completes, the progress bar reaches 100% and turns green.
After the build, we get a standard Cocos2d-x project, with the same structure as a new project created using Cocos Console. Taking the Windows platform as an example, the directory structure of the exported native project package
windows is shown below:
assets: places project resources.
proj: places the currently built native platform project, which can be used by the IDE of the corresponding platform to perform compilation tasks.
cocos.compile.config.json: place the compile option json for current build.
For more information, please refer to Build Directory -- Native.
Next, you can continue to Make and run desktop previews through the Cocos Creator editor, or manually open the built native project in the IDE of the corresponding platform for further previewing, debugging, and publishing.
Make and Run
Cocos Creator supports Make and Run Preview steps via the editor or the corresponding IDE for each platform (e.g.: Xcode, Android Studio, Visual Studio).
By the Editor
Click the Make button on the Build Task window to enter the compile process. When the compilation is successful, it will prompt:
make package YourProjectBuildPath success!
Note: after the first compilation of the Android platform or version upgrade, it is recommended to open the project via Android Studio, download the missing tools according to the prompts, and then perform the Make and Run.
Once the Make process is complete, continue to click the Run button next to it. Some compilation work may continue, so please wait patiently or check the progress through the log file. The results of the Run for each platform are as follows:
- Mac/Windows platform: run the preview directly on the desktop.
- Android platform: must connect to physical device via USB and the preview can be run after the USB debugging is enabled on the physical device.
- IOS platform: will call the simulator to run the preview. But it is recommended to connect to the physical device via Xcode to execute Make and Run, as described below.
By the IDE
Click the folder icon button in the bottom left corner of the build task window, the release path will be opened in the file manager of the operating system. The
proj folder under the release package directory contains the native platform project of the current build.
Next, open these generated native projects using the IDE corresponding to the native platform (e.g.: Xcode, Android Studio, Visual Studio) and you can make further operations like compilation, preview and release.
Android
Windows
Mac 和 iOS
For the usage instructions for native platform's IDE, please search related information on your own, which will not be discussed in detail here.
To learn how to debug on a native platform, please refer to Debugging JavaScript on Native Platforms.
Precautions WebView and related features are not needed, please ensure that the WebView module is removed from the Project -> Project Settings -> Feature Cropping to help the approval process go as smoothly as possible on iOS App Store. If WebView is needed (or the added third-party SDK comes with WebView), and therefore the game rejected by App Store, try to appeal through email.
The result of compiling the Android through the editor and Android Studio has the following differences.
After executing the Make step via the editor, the
builddirectory will be created under the release path, and the
.apkwill be generated in the
app\build\outputs\apkdirectory of the
builddirectory.
After compiling with Android Studio, the
.apkis generated in the
proj\app\build\outputs\apkdirectory.
In Cocos Creator 3.0, Android and Android Instant use the same build template, and the built native projects are in the
build\android\projdirectory.
instantapp\srcand
instantapp\libsdirectories, respectively.
For code and third-party library used in common by the Android and Android Instant, place them in the
srcand
libsdirectories, respectively.
When compiling Android in Build panel,
assembleRelease/Debugis executed by default. When compiling Android Instant,
instantapp:assembleRelease/Debugis executed by default. | https://docs.cocos.com/creator/3.3/manual/en/editor/publish/native-options.html | 2021-10-16T12:08:22 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['publish-native/native-platform.png', 'native platform'],
dtype=object)
array(['publish-native/native-options.png', 'native options'],
dtype=object)
array(['publish-native/encrypt-js.png', 'encrypt js'], dtype=object)
array(['publish-native/android-options.png', 'Android build options'],
dtype=object)
array(['publish-native/windows-options.png', 'Windows build options'],
dtype=object)
array(['publish-native/ios-options.png', 'iOS build options'],
dtype=object)
array(['publish-native/mac-options.png', 'Mac build options'],
dtype=object)
array(['publish-native/build-progress-windows.png', 'build progress'],
dtype=object)
array(['publish-native/native-directory.png', 'native directory'],
dtype=object) ] | docs.cocos.com |
Extracting a text and number using remote lookup
You can use a remote lookup field to extract data from an external API. For example, you may want to display the general weather forecast and temperature for a particular city.
- Create a new text field called City.
- Create a remote lookup field called Weather.
- In the URL field, add the API link such as one from OpenWeather.
- In this link, we’ve entered the Field ID from the first field we made.
- Choose GET as the request type.
- Header name and Body name are not required for this example.
- In the field What kind of data are you working with?, choose JSON
- In JSON path of value in result enter
$
- In the field What kind of data are you working with?, choose Text.
- In the field How should the result be chosen?, choose Autopopulate a value and click Done.
- To extract the forecast as text, create a new text field called Forecast. Enter the formula as
Weather.extractText("$.weather[*].main").get(1).
- To extract the temperature as a number, create a new number field called Temperature. Enter the formula as
Weather.extractNumber("$.main.temp", 1)
When extracting any data from a JSON, use this pattern:
Remotelookup_Fieldname.extractNumber(path, index)
Remotelookup_Fieldname.extractText(path, index)
We have deprecated the expression
Remotelookup_Fieldname.extractJson(path, index). However, we will continue supporting the fields which already have this expression. You will not be allowed to use this expression in a new field.
In the live form, when you type the city and click Get Data. The temperature and forecast will be displayed.
| https://docs.kissflow.com/article/4knp05tbad-extracting-a-text-and-number-using-remote-lookup | 2021-10-16T11:47:23 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://files.helpdocs.io/vy1bn54mxh/articles/4knp05tbad/1583988459349/extract.gif',
None], dtype=object) ] | docs.kissflow.com |
Westpac PayWay payment gateway
The Westpac PayWay payment gateway asset allows you to create an ecommerce payment integration between Matrix and the Westpac PayWay system. This integration allows users to make payments by credit card in a secure way.
To use this payment gateway, you will need an account at the Westpac PayWay website with the PayWay Net module enabled on your account. You can also create a testing account using the PayWay Test Facility to test your payment gateway implementation in Matrix.
Additional dependant assets
When you create a PayWay payment gateway, the display format and bodycopy assets are automatically created beneath it. You can use this bodycopy to define the contents and layout of the payment and cardholder verification forms.
Details screen
Account details
This section allows you to enter your account and integration details, allowing you to connect Matrix with the payment gateway.
- Merchant ID
The merchant ID of the PayWay account. If you are using the PayWay test facility to test the integration, you can enter
testinto this field.
- Publishable key
The publishable key of the PayWay account.
- Secret key
The secret key of the PayWay account. To find the publishable key and secret key values within the PayWay admin portal, follow these steps:
Click on
REST APIin the left column menu.
Click on
REST API keys.
You should see two rows on this page, one with the access type of PUBLISHABLE and one that has SECRET.
Click on each of these API keys to reveal the full key value.
Copy and paste the keys into the payment gateway screen in Matrix.
Test connection
This section allows you to test the connection between your Matrix instance and the PayWay API endpoint.
Once you have filled out and saved the account details, you can click on the Test connection button to test the integration connection. If the connection is successful, you should see a message similar to this:
Pass-through variables
This section allows you to source additional values to pass-through to the payment gateway from the ecommerce asset to which you are connecting the gateway. These keys must be configured on the Ecommerce rules screen of the Ecommerce form page.
The only field that is available here is:
- Customer number variable name
The PayWay customer number against which the payment should be made.
Display formatting screen
The Display formatting screen allows you to edit the display format bodycopy. The display format bodycopy is used to define the layout of the PayWay payment gateway page.
A list of keyword replacements is provided in the toolbar on the Edit contents screen of the display format bodycopy. The following keyword replacements are unique to the PayWay payment gateway:
%payway_form%
This will print the required code, which will generate the PayWay credit card entry form. This code will generate an iframe that holds the form fields and accepts the user input.
%payway_js%
Prints the required inline javascript that powers the PayWay payment iframe form.
%customer_number%
Prints an input field that allows the user to manually input their PayWay customer number. | https://docs.squiz.net/matrix/version/latest/features/payment-gateway-assets/payment-gateway-payway.html | 2021-10-16T11:07:07 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../_images/PayWay-Gateway-Diagram.png', 'PayWay Gateway diagram'],
dtype=object)
array(['../_images/test-connection-example.png',
'test connection example'], dtype=object)] | docs.squiz.net |
VMware Tanzu Greenplum 6.14 Release Notes
A newer version of this documentation is available. Use the version menu above to view the most up-to-date release of the Greenplum 6.x documentation.
VMware Tanzu Greenplum 6.14 Release Notes
This document contains pertinent release information about VMware Tanzu Greenplum Database 6.14.1
Release Date: 2021-2-22
VMware Tanzu Greenplum 6.14.1 is a maintenance release that resolves several issues and includes related changes
Changed Features
Resolved Issues
VMware Tanzu Greenplum 6.14.1 resolves these issues:
- 31258 - Server
- Resolves an issue where the array typecasts of operands in view definitions with operators were erroneously transformed into anyarray typecasts. This caused errors in the backup and restore of the view definitions.
- 31249 - Query Optimizer
- Resolves an issue where Greenplum Database generated a PANIC during query execution when the Query Optimizer attempted to access the argument of a function (such as random() or timeofday()), but the query did not invoke the function with an argument.
- 31242 - Server
- Optimized locking to resolve an issue with certain SELECT queries on the pg_partitions system view, which were waiting on locks taken by other operations.
- 31232 - Server
- Resolves an issue where, after an upgrade from version 5.28 to 6.12, a query execution involving external tables resulted in a query PANIC and segment failover. This issue has been resolved by optimizing the query subplans.
- 31211 - gpfdist
- When an external table was configured with a transform, gpfdist would sporadically return the error 404 Multiple reader to a pipe is forbidden. This issue is resolved.
- 176684985 - Query Optimizer
- This release improves Greenplum Database’s performance for joins with multiple join predicates.
Release 6.14.0
Release Date: 2021-2-5
VMware Tanzu Greenplum 6.14.0 is a minor release that includes changed features and resolves several issues.
Features
Greenplum Database 6.14.0 includes these new and changed features:
- CentOS/RHEL 8 and SUSE Linux Enterprise Server x86_64 12 (SLES 12) Clients packages are available with this Greenplum Database release; you can download them from the Release Download directory named Greenplum Clients on VMware Tanzu Network.
- The PXF version 5.16.1 distribution is available with this release; you can download it from the Release Download directory named Greenplum Platform Extension Framework on VMware Tanzu Network.
- The default value of the optimizer_join_order server configuration parameter is changed from exhaustive to exhaustive2. The Greenplum Database Query Optimizer (GPORCA) uses this configuration parameter to identify the join enumeration algorithm for a query. With this new default, GPORCA operates with an emphasis on generating join orders that are suitable for dynamic partition elimination. This often results in faster optimization times and/or better execution plans, especially when GPORCA evaluates large joins. The Faster Optimization of Join Queries in ORCA blog provides additional information about this feature.
- The default cost model for the optimizer_cost_model server configuration parameter, calibrated, has been enhanced; GPORCA is now more likely to choose a faster bitmap index with nested loop joins rather than hash joins.
- GPORCA boosts query execution performance by improving its partition selection algorithm to more often eliminate the default partition.
- GPORCA now generates a plan alternative for a right outer join transform from a left outer join when equivalent. GPORCA's cost model determines if/when to pick this alternative; using such a plan can greatly improve query execution performance by introducing partition selectors that reduce the number of partitions scanned.
- The output of the gprecoverseg -a -s command has been updated to show more verbose progress information. Users can now monitor the progress of the recovering segments in incremental mode.
- The gpcheckperf command has been updated to support Internet Protocol version 6 (IPv6).
Resolved Issues
VMware Tanzu Greenplum 6.14.0 resolves these issues:
- 31195 - Server: Execution
- Resolves an issue where Greenplum Database generated a PANIC when the pg_get_viewdef_name_ext() function was invoked with a non-view relation.
- 31094 - Server: Execution
- Resolves an issue where a query terminated abormally with the error Context should be init first when gp_workfile_compression=on because Greenplum Database ignored a failing return value from a ZSTD initialization function.
- 31067 - Query Optimizer
- Resolves a performance issue where GPORCA did not consistently eliminate the default partition when the filter condition in a query matched more than a single partition. GPORCA has improved its partition selection algorithm for predicates that contain only disjunctions of equal comparisons where one side is the partition key by categorizing these comparisions as equal filters.
- 31062 - Cluster Management
- Resolves a documentation and --help output issue for the gprecoverseg, gpaddmirrors, gpmovemirrors, gpinitstandby utilities, where the --hba-hostnames command line flag details were missing.
- 31044 - Query Optimizer
- Fixes a plan optimizer issue where the query would fail due to the planning time being dominated by the sort process of irrelevant indexes.
- 30974 - Server: Execution
- Greenplum Database generated a PANIC when a query run in a utility mode connection invoked the gp_toolkit.gp_param_setting() function. This issue is resolved; Greenplum now ignores a function's EXECUTE ON options when in utility mode, and executes the function only on the local node.
- 30950 - Query Optimizer
- Resolves an issue where GPORCA did not use dynamic partition elimination and spent a long time planning a query that included a mix of unions, outer joins, and subqueries. GPORCA now caches certain object pointers to avoid repeated metadata lookups, substantially decreasing planning time for such queries when optimizer_join_order is set to query or exhaustive2.
- 30947 - Query Optimizer
- Resolves an issue where Greenplum Database returned the error no hash_seq_search scan for hash table "Dynamic Table Scan Pid Index" because GPORCA generated a query plan that incorrectly rescanned a partition selector during dynamic partition elimination. GPORCA now generates a plan that does not demand such a rescan.
- 11211 - Server
- During the parallel recovery and rebalance of segment nodes after a failure, if an error occurred during segment resynchronization, the main recovery process would halt and wait indefinitely. This issue has been fixed.
- 11058 - Query Optimizer
- Resolves an optimizer issue where CTE queries with a RETURNING clause would fail with the error INSERT/UPDATE/DELETE must be executed by a writer segworker group.
- 174873438 - Planner
- Resolves an issue where an index scan generated for a query involving a system table and a replicated table could return incorrect results. Greenplum no longer generates the index scan in this situation.
Upgrading from Greenplum 6.x to Greenplum 6.14
See Upgrading from an Earlier Greenplum 6 Release to upgrade your existing Greenplum 6.x software to Greenplum 6 | https://gpdb.docs.pivotal.io/6-14/relnotes/gpdb-614-release-notes.html | 2021-10-16T12:10:51 | CC-MAIN-2021-43 | 1634323584567.81 | [] | gpdb.docs.pivotal.io |
Native Monitoring Point Setup - Linux.
Use the following steps to install a Native Monitoring Point (NMP) on a Linux system on your network. After installation is complete, the NMP software connects to back APM. Once this is done, you need to license the Monitoring Point, set its location, and then set up monitoring.
- Prerequisites
- Prepare the host
- Download and install the software
- Verify the software is running
- Assign licenses
- Set the Monitoring Point location
- Set up monitoring
- Related topics
Prerequisites
- Make sure you have administrative privileges on the system you are deploying to.
- Make sure you’re using a supported OS.
- For virtual environments, make sure you’re using a qualified hypervisor and guest OS. If not, use a Virtual Monitoring Point (on KVM or VMware) instead.
- Configure your firewall rules to enable the Monitoring Point access to APM.
- The Monitoring Point can still be installed and configured without this step, but monitoring is not possible until it is done.
Prepare the host
systemd environments
To configure a Linux system (that supports systemd) to start automatically when the system is rebooted:
Create a systemd service file (/etc/systemd/system/sequencer.service) with the following content:
[Unit] Description=Sequencer After=syslog.target time-sync.target networking.service Requires=networking.service [Service] Type=forking ExecStart=/opt/pathview/netseq-linux -b StandardOutput=null [Install] WantedBy=multi-user.target networking.service
Enable the service.
systemctl enable sequencer.service
Virtual environments
In virtual environments, the pseudo performance counter must be enabled or the NMP will return spurious results. For example, to enable the pseudo performance counter in vSphere Client 5.5:
- In the vSphere Client, navigate to View > Inventory.
- Power off the host on which the NMP will be installed.
- Select the host at the root of the inventory tree and then navigate to it’s Summary tab.
- In the Resources pane of the Summary tab, find its datastore and right-click > Browse Datastore.
- In the Datastore view, select the directory corresponding to the host.
- Download the hosts’s configuration file (the .vmx file).
- In the configuration file, search for the line
monitor_control.pseudo_perfctr = "true". If it is not present, append it to the file and save your changes.
- In the Datastore view, upload your new .vmx file.
- Reset the host.
Download and install the software
- Log in to APM.
- Select an organization (if you belong to more than one).
- If you’re setting up your first Monitoring Point, you will be taken to the first step of the Add Monitoring Point wizard.
- If your organization already has Monitoring Points, navigate to > Manage Monitoring Points > Add Monitoring Points.
- In the Platform Type field, select Linux (native).
- Click Native Monitoring Point for Linux or Native Monitoring Point for Linux 64-bit depending on your host.
- The installer is downloaded to your computer.
- Follow the in-product instructions to install the software.
Verify the software is running
From the command line, verify that the NMP is running.
[root@hostname ~]# ps -ef | grep netseq-linux root 25660 1 0 Jan29 ? 00:03:25 ./netseq-linux -b root 32522 32465 0 10:55 pts/1 00:00:00 grep netseq-linux NMP only supports Delivery monitoring. It does not support Experience or Usage monitoring. | https://docs.appneta.com/appliance-setup-sequencer-linux.html | 2021-10-16T11:32:51 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.appneta.com |
Date: Sat, 16 May 2015 10:13:56 +0100 From: Matthew Seaman <[email protected]> To: [email protected] Subject: Re: Swap partition for FreeBSD Message-ID: <[email protected]> In-Reply-To: <CAJ9BSW9cVmd8c+4E5rWAd9FPDvgpwqVKDSh7962FW3-g_W9jMQ@mail.gmail.com> References: <CAJ9BSW9cVmd8c+4E5rWAd9FPDvgpwqVKDSh7962FW3-g_W9jMQ@mail.gmail.com>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
This is an OpenPGP/MIME signed message (RFC 4880 and 3156) --pc2qNR7SocS2d5K6oXtfb001lMktgTLle Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable On 16/05/2015 08:41, Avinash Sonawane wrote: > I was trying to build www/webkit-gtk3 (a dependency for x11/gnome3) > then it abruptly exited with "Out of swap space error" so I created a > separate 8GB partition to be used as freebsd-swap. >=20 > Here is `gpart show`, `gpart show -p`, `swapinfo` and `/etc/fstab` > >=20 > Now my problem is that I see a message "May 16 12:31:59 titanic > kernel: GEOM_PART: Partition 'ada0s6' not suitable for kernel dumps > (wrong type?)" during bootup. >=20 > And I think that's because though the system is using ada0s6 as swap > partition (evident from the above pastebin) the partition has a wrong > type. More specifically it is linux-data instead of freebsd-swap (See > `gpart show` in above paste) >=20 > So how do I change the type of ada0s6 to freebsd-swap? >=20 > I tried `bsdlabel -e ada0s6` but then it said "bsdlabel: /dev/ada0s6: > no valid label found" >=20 > Then I tried `gpart modify -i 2878773 -t freebsd-swap ada0s4` then it > said "gpart: pre-check failed: Operation canceled" (Here 2878773 is > the index number of ada0s6. See `gpart show` in above pastebin) >=20 > I tried these 2 approaches after `swapoff /dev/ada0s6` too but they > throw same error messages as above. >=20 > So how do I get rid of the above mentioned error message appearing > during boot or how do I format ada0s6 to freebsd-swap from linux-data? You're correct to use gpart(8) to create your new swap partition. Don't use bsdlabel(8) -- that is for dealing with the really old style of disk partitioning and is incompatible with the gpt disk labels you have. Now, in order to fix your problem: you can't change the type of a partition. Instead, you need to delete the partition and then create a new 'freebsd-swap' partition using the same chunk of disk space. Try and quiesce the system as much as possible before you do this -- it might be worth booting from a LiveCD. Also the index numbers look pretty strange to me. Possibly the OS that created those partitions has different ideas on the layout of a gpt partition table, and it might be necessary to boot back into that OS in order to remove the partition cleanly. There's no formatting required for a swap area: the system just uses it as a blob of available space and writes what it needs to. Cheers, Matthew --pc2qNR7SocS2d5K6oXtfb001lMktgTLleJVVwpNTNBNjhCOTEzQTRFNkNGM0UxRTEzMjZC QjIzQUY1MThFMUE0MDEzAAoJELsjr1GOGkAThBAP/0LlznS7ktRyaglDrBiLaVV7 DOHvhFJcX3VFvnV4xMXh+7BXyeUgb2b2SfxrU/mKcJ62U0JP1nDM48s64Ib4UFRx L4PdVoRxcRDE3Blbr7wmpcY0DNF4NrYSewwGDqOVKfBrjC5dJzg6218u/dO2YXh6 UuhcmPSAiIG04DEK5li9WeRpql13Ni68KxYC9NDJW33zqOytahg+M0YntCRE6t8L 6PipQfY6A7M4a37W+E5C3ERxtWCLK7cqZNNvly221wJjVRMrXhyEA3tNuTLIG0iz y/WR7lGJEMOAtACx8DRQXSBh8CPMGza95MQr/FUef0uoxB7GO1gS9QHCnoWS7Ght Wq6j+ZHAv9k1ZpUrw6oip5Hp5xQsVXYEF/hXkJeIancp9cZ+bbDnlCaPB97wvLh6 /2umItglrPgTFptXinniOMDk6MDWRdCRw5R52SjnXAB8H0UeWuVR8ZlP9d0FTMm4 C6Ckhgp2pF7uvcwjOsGJabq05eG3IvGJns2JL+0tvbTqzAHft/X1VLXbIytiUGdA IQ5gww1vgttgwZWlHpEIjO1uHw/CmojUcUp/TeEE/LVdolLivU2j9j4KY3uUfC6K QkwmyrYWvDCsiaXp3NXKkVX3XtXuIOS2MYs9Vqg29vwqSxO0hyetg3A+SclKwkIv 0m2mEgwwAqPpP3SWbaZh =723A -----END PGP SIGNATURE----- --pc2qNR7SocS2d5K6oXtfb001lMktgTLle--
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=644422+0+/usr/local/www/mailindex/archive/2015/freebsd-questions/20150517.freebsd-questions | 2021-10-16T12:25:25 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.freebsd.org |
Xrm.WebApi.offline (Client API reference)
Provides methods to create and manage records in the model-driven apps mobile clients while working in the offline mode.
For information about the mobile offline feature, see Configure mobile offline synchronization to allow users to work in offline mode on their mobile device
var offlineWebApi = Xrm.WebApi.offline;
Note
Use Xrm.WebApi.offline instead of the deprecated Xrm.Mobile.Offline namespace to create and manage records in the mobile clients while working in the offline mode.
The offlineWebApi object provides the following methods. When in the offline mode, these methods will work only for tables that are enabled for mobile offline synchronization and available in current user’s mobile offline profile.
Important
While creating or updating record in the offline mode, only basic validation is performed on the input data. Basic validation includes things such as ensuring that the table column name specified is in lower case and does exist for a table, checking for data type mismatch for the specified column value, preventing records getting created with the same GUID value, checking whether the related table is offline enabled when retrieving related table records, and validating if the record that you want to retrieve, update, or delete actually exists in the offline data store. Business-level validations happen only when you are connected to the server and the data is synchronized. A record is created or updated only if the input data is completely valid. | https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/clientapi/reference/xrm-webapi/offline | 2021-10-16T13:50:16 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.microsoft.com |
Creating Formulas
Overview
This is a framework to codify your poolcare workflow so that it can be used, inspected, and remixed by anybody.
Most of our users don't really understand what a formula is; they just load one in the iOS or Android app and follow the instructions to balance their pool.
Anyone can view & edit formulas using our online editor.
What problem does this solve?
Flexibility: Most pool calculators reflect the opinions (or worse: financial incentives) of their creators. If you want to use different chemicals, target-ranges, or sanitization systems... you should be able to. With Pooldash, this is accomplished by selecting a new formula, or by remixing an existing one to suit your needs.
Transparency: Most pool calculators don't reveal the code used to calculate chemical dosages. This makes it hard for the community to help improve the app. At Pooldash, we recognize that many of our users know more about poolcare than us, so we not only encourage them to read this code, but we let the community write it by remixing one another's formulas, which are all open-source under the MIT license.
Correctness: Most pool calculators have a 1-1 relationship between readings and treatments. This is easy to understand, but it breaks down quickly as special cases are implemented to account for side effects and chemical substitutions. Pooldash formulas approach this differently -- instead of looking at every reading and asking "How do we balance this reading?", we instead look at every possible treatment and ask "How much of this treatment should we use?" Each treatment has a custom javascript function that must answer this question.
We previously tried a different approach based on "ChemicalEffect" objects, but it was insufficient to model the nonlinear relationships and complex side-effects. Our current model of treatments-with-functions is closer to how poolcare operators actually think, and it encapsulates the complexity in a way that is approachable to programmers from various backgrounds.
How do they work?
Formulas have an arbitrary list of readings (Free Chlorine, pH, whatever) and a separate list of possible treatments (Calcium Hypochlorite, Sodium Bicarbonate, "Backwash the filter", whatever). Each treatment has a custom function that must return a single number. They run like so:
- The user takes all readings specified by the formula
- The app iterates over the formula's treatments, executing the custom javascript function for each one
- The app displays all of the treatments whose function returns a non-0 value
You can inspect formulas using the online editor here. If you want to make your own, you'll need to create a free account and remix an existing formula. They're all MIT-licensed, so you can fork whichever one(s) you want.
Below is a reference for all the properties of a formula. If you have suggestions for improvement, please post it in the forum. Similarly, if you think these docs could be improved, submit a pull-request on Github.
Properties
Readings
Each reading will appear as a slider in the user's app. Users can either use the slider to set the value, or they can type into a text-field instead. View an example here.
- The slider is limited by
Slider Minand
Slider Max, but users can enter any number into the text-field.
- If a reading is skipped, the default value is supplied to treatment functions via the
robject.
- You should check the
sobject to check if a reading was skipped.
- This reduces
undefinedexceptions and confusion related to
0being falsey (and also a valid reading value).
To access the reading values in the treatment function, inspect the
robject. Reading entries will be keyed to the "Var Name" value. Assuming the reading pictured had "Var Name" set to "fc", this will assign a value of 1.4:
const readingValue = r.fc; // readingValue is now the number 1.4
Skipped readings are unusual.
s.<var>will resolve to true for all skipped readings, but the
robject will still contain the reading's default value.
const isReadingTaken = !!r.fc; // wrong const isReadingTaken = !s.fc; // correct
Target Levels
Target levels expose some parameters of your formula to end-users so they can be tweaked without remixing an entirely new formula.
These are exposed to the treatment-functions via the
cobject.
// Your formula will be useful to more people if you expose more ranges for customization. // Wrong: if (r.fc < 2.0) { return 0; } // Correct: if (r.fc < c.fc.min) { return 0; }
Target levels often have the same
var as an associated reading, but this is not a requirement (for instance, there is no reading for combined-chlorine, as it's derived). Users are also welcome to set the min & max values to the same number if no range is desired.
Your formula specifies default values for each target, and you can even customize these defaults based on the pool's wall-type.
Additional min & max overrides can be specified per wall-type. These can still be overridden by user edits on the client.
Treatments
Treatments are usually chemical additions, but can also be tasks ("backwash the filter") or even calculations (like the LSI). Each treatment defines its own function that must return a single number.
The functions are written in javascript, which is widely known as the world's best programming language :)
Treatment Functions
Here is very simple example of a treatment function to calculate the amount of Cyanuric Acid needed for this formula:
function(p, r, t, c, s) { if (r.cya >= c.cya.min) { return 0; } const target = (c.cya.min + c.cya.max) / 2.0; const delta = target - r.cya; const multiplier = .00013; return p.gallons * delta * multiplier; }
Here is more complex function from the same formula that doses Sodium Bicarbonate:
function(p, r, t, c, s) { // If the TA is already in good range, don't add any baking soda if (r.ta >= c.ta.min) { return 0; } // Otherwise, shoot for the middle of the ideal range: const target = (c.ta.min + c.ta.max) / 2.0; let taDelta = target - r.ta; // Remember, soda ash (from the previous step) also affects the TA, // so we should calculate how much (if any) the soda ash has // already moved the TA & offset our new delta accordingly: const sodaAshMultiplierForTA = .00014; const taIncreaseFromSodaAsh = t.soda_ash / (sodaAshMultiplierForTA * p.gallons); if (taIncreaseFromSodaAsh >= taDelta) { return 0; } taDelta = taDelta - taIncreaseFromSodaAsh; // Now, calculate the amount of baking soda necessary to close the remaining gap. const bakingSodaTAMultiplier = .000224; return p.gallons * taDelta * bakingSodaTAMultiplier; // NOTE: this ignores some complications. For instance, this new dose of // baking soda will also raise the pH, and could knock it above the ideal range. // If anyone wants to remix this recipe to account for this, you would be a hero. }
Finally, here is an example of a calculation-type treatment (for the LSI) that returns null if the necessary readings are skipped:
function(p, r, t, c, s) { // We need these 4 readings + temperature to calculate this: if (s.ch || s.ph || s.tds || s.ta || (s.temp_f && s.temp_c)) { return null; } // Prefer the temp_f reading (if the user took both for some reason). But, either works: const degrees_c = (s.temp_f) ? r.temp_c : ((r.temp_f - 32) / 1.8); const aa = (Math.log10(r.tds) - 1) / 10.0; const bb = (-13.12 * Math.log10(degrees_c + 273)) + 34.55; const cc = Math.log10(r.ch) - .4; const dd = Math.log10(r.ta); return r.ph - 9.3 - aa - bb + cc + dd; }
These are pure functions, so they should always return the same result given the same inputs. You shouldn't try to read from external data-sources, check the time, or use any other dependencies besides the explicit parameters.
The inputs include context about the pool & readings:
As a convenience, all available inputs are listed for each function at the bottom of the formula editor:
The return-value of the function is interpreted differently based on the "Type" selected:
Considerations
- If you remix a formula, post in the forum and tell us about it! It's more fun that way.
- When choosing a "Var Name" for readings, treatments, and target levels, you can technically pick anything you want. However, there are advantages to following the same standards as everyone else. For example, the charts will track chemical history over time based on the "Var Name", so even if a user changes formulas, the chart will remain unified over time:
| https://docs.pooldash.com/ | 2021-10-16T10:53:19 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['images/reading-429d2139.jpg',
'a slider and a text-field that both represent a free chlorine reading'],
dtype=object)
array(['images/target_level-1528572c.jpg',
'2 cards, each with a chemical name for a title, with min & max textfields'],
dtype=object)
array(['images/treatment-753d299f.jpg',
'An empty checkbox, next to the instructions to add 3.4 lbs of 67% calcium hypochlorite'],
dtype=object)
array(['images/function_editor-d4849faa.jpg',
"An IDE with a textbox to edit a function on top, and seeral columns of text below, listing properties like 'r.tc: number'"],
dtype=object)
array(['images/charts-cb4276bf.jpg',
'2 charts, one for Total Alkalinity and another for Sodium Bicarbonate, showing 3 months of history.'],
dtype=object) ] | docs.pooldash.com |
Blur-Radial-Zoom Node
T-COMP2-003-007:
Refer to the following example to connect this effect:
Properties
| https://docs.toonboom.com/help/harmony-20/premium/reference/node/filter/blur-radial-zoom-node.html | 2021-10-16T12:32:20 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../../../Resources/Images/HAR/Stage/Effects/blur_radial_zoom_intro.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Effects/HAR11/HAR11_Blur-Radial-Zoom-Network-Basic.png',
'Blur Zoom Radial Network Blur Zoom Radial Network'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Effects/HAR11/HAR11_blur_radial_zoom_tline.png',
None], dtype=object) ] | docs.toonboom.com |
Comparing ground-based observations and a large-eddy simulation of shallow cumuli by isolating the main controlling factors of the mass flux distribution
Sakradzija, Mirjana; Klingebiel, Marcus, 2019: Comparing ground-based observations and a large-eddy simulation of shallow cumuli by isolating the main controlling factors of the mass flux distribution. In: Quarterly Journal of the Royal Meteorological Society, Band 146, 254 - 266, DOI 10.1002/qj.3671.
The distribution of mass flux at the cloud base has long been thought to be independent of large-scale forcing. However, recent idealized modelling studies have revealed its dependence on some large-scale conditions. Such dependence makes it possible to isolate the observed large-scale conditions, which are similar to those in large-eddy simulations (LES), in order to compare the observed and modelled mass flux distributions. In this study, we derive for the first time the distribution of the cloud-base mass flux among individual shallow cumuli from ground-based observations at the Barbados Cloud Observatory (BCO) and compare it with the Rain In Cumulus over the Ocean (RICO) LES case study. The procedure of cloud sampling in LES mimics the pointwise measurement procedure at the BCO to provide a mass flux metric that is directly comparable with observations. We find a difference between the mass flux distribution observed during the year 2017 at the BCO and the distribution modelled by LES that is comparable to the seasonal changes in the observed distribution. This difference between the observed and modelled distributions is diminished and an extremely good match is found by subsampling the measurements under a similar horizontal wind distribution and area-averaged surface Bowen ratio to those modelled in LES. This provides confidence in our observational method and shows that LES produces realistic clouds that are comparable to those observed in nature under the same large-scale conditions. We also confirm that the stronger horizontal winds and higher Bowen ratios in our case study shift the distributions to higher mass flux values, which is coincident with clouds of larger horizontal areas and not with stronger updrafts.
Statistik:View Statistics
Collection
Subjects:cloud-base mass flux
ground-based remote sensing
large-eddy simulation (LES)
mass flux distribution
shallow convection
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. | https://e-docs.geo-leo.de/handle/11858/9228 | 2021-10-16T11:23:56 | CC-MAIN-2021-43 | 1634323584567.81 | [] | e-docs.geo-leo.de |
See the following topics for more information including how to setup and review Base and Alt. LRMs:
- LRS Data Model
Data Model
Once the alternate referencing is defined in the SETUP_LOC_REF_COLUMNS table and the changes applied to the database, the columns configured there will appear in the SETUP_LOC_IDENT table (refer to Chapter 4, for in-depth understanding and information about other uses of this table). In order to support translation between the base and Alt. LRMs, as shown in the.).
Window
There is no Alt LRM configured out of the box, but if you have created an Alt LRM, when you right-click the record for an alternate LRM in the upper pane, the system displays the following special command in a shortcut menu:
- Make Window – This command will create a new window that shows the mapping between Base and Alt. LRM. After choosing this command, a pop-up window (Form Window Maker) will display. Choose the "Alternative location Editor" and click Next. Then click the parent menu item into which a menu item for the window, which will show the cross-reference table between the Basic Referencing System (BRS) and the selected alternate system. After selecting the parent menu item, click OK to close the window. Then log off and log back on, and the menu item will be in the designated place, with the name of the menu item being what is entered in the Loc Ref Name column. Open the new window to see the cross-reference table. The BRS is shown on the left and the alternate system is on the right. User can then use the window to enter the mapping between the Base and Alt. LRM.
The steps are shown in the following charts.
Add a menu item under this parent
Example Alternate LRM Mapping Screen
SETUP_LOC_IDENT Table Data with a County-Based LRM Example | https://docs.agileassets.com/pages/?pageId=34309049&sortBy=size | 2021-10-16T12:06:07 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.agileassets.com |
...
Supported spinner indicating that a layer is loading in the GIS Explorer, continues to spin after selecting to remove a previous loading layer by using either the new map or remove layer option
... | https://docs.agileassets.com/pages/diffpagesbyversion.action?pageId=20381839&selectedPageVersions=3&selectedPageVersions=4 | 2021-10-16T11:30:00 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.agileassets.com |
MetricsFilter
Specifies a metrics configuration filter. The metrics configuration only includes objects that meet the filter's criteria. A filter must be a prefix, an object tag, an access point ARN, or a conjunction (MetricsAndOperator). For more information, see PutBucketMetricsConfiguration.
Contents
- AccessPointArn
The access point ARN used when evaluating a metrics filter.
Type: String
Required: No
- And
A conjunction (logical AND) of predicates, which is used in evaluating a metrics filter. The operator must have at least two predicates, and an object must match all of the predicates in order for the filter to apply.
Type: MetricsAndOperator data type
Required: No
- Prefix
The prefix used when evaluating a metrics filter.
Type: String
Required: No
- Tag
The tag used when evaluating a metrics filter.
Required: No
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following: | https://docs.amazonaws.cn/en_us/AmazonS3/latest/API/API_MetricsFilter.html | 2021-10-16T12:28:33 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.amazonaws.cn |
Example:
RR
#") ) ) ))
Run the following code in the interactive workbench prompt to install the Shiny package, load the library into the engine, and run the Shiny application Machine Learning web application, and select the Shiny UI, Hello Shiny!, from the dropdown. The UI will be active as long as the session is still running. | https://docs.cloudera.com/machine-learning/1.3.1/projects/topics/ml-example--a-shiny-application.html | 2021-10-16T12:46:10 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.cloudera.com |
Java Specifics in Fedora for Users and Developers
This section contains information about default Java implementation in Fedora, switching between different Java runtime environments and about few useful tools which can be used during packaging/development.is a Java compiler which translates source files to Java bytecode, which can be later interpreted by JVM.
jdbis a simple command-line debugger for Java applications.
javadocis a tool for generating Javadoc documentation.
javapcan be used for disassembling Java class files. | https://docs.fedoraproject.org/te/java-packaging-howto/fedora_java_specifics/ | 2021-10-16T10:56:52 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.fedoraproject.org |
- Features -
Alert My Relay
"Alert My Relay" is perfect for finding a lost Relay in the house or getting the attention of a Relay device user.
Check the weather, time, and battery life
Press and hold the volume button on your Relay device, say "Weather," and release to check the local weather forecast for your enabled location.
Press and hold the volume button on your Relay device, say "Time," and release to check the local time.
Press and hold the volume button on your Relay device, say "Battery," and release to check the battery life for the Relay device.
- Channels -
How to enable Channels for your Relay in the Relay app
Open the Relay App
Tap on Channels
Tap on the Channel you wish to enable
Tap on Manage or Add or Edit Members
Tap on the toggle to the right of the Relay you wish to enable access to
Tap on the X to close the window
Description of set up language translation on each of your Relay devices, personalizing which languages each Relay can speak.
Daily Joke Channel: Get new, funny content on your Relay every day, as well as listen to the previous days' jokes! Jokes are appropriate for kids of all ages.
Echo Channel: This silly channel repeats what you say back to you in a funny tone. Just turn to the echo channel, press and talk into the Relay, then hear your words repeated back to you in a variety of funny tones..
See other common articles that people are viewing
How to set up GPS and Geofencing
How to Send My First Message
How to enable and test SOS emergency alerts
How to Use and Enable "Do Not Disturb" mode
How to Add and Remove App Users on your Relay account
Overview of Buttons and LED Lights
| https://docs.relaygo.com/en/articles/3472450-additional-features-and-channels | 2021-10-16T12:41:06 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['https://downloads.intercomcdn.com/i/o/161963651/3b37d60a7026cadb4ee1c5db/image.png',
None], dtype=object) ] | docs.relaygo.com |
This section provides information on improvements to the Trifacta® type system.
If you have upgraded from a Trifacta Release 3.0 or earlier to Release 3.1 or later, you should review this page, as some type-related behaviors have changed in the platform.
General Improvements in Typecasting
Mismatched data types
Where there are mismatches between inputs and the expected input data type, mismatched values are handled as follows:
State values and custom data types are converted to string values, if they are mismatched.
Three-value logic for null values
The Trifacta Photon running environment has been augmented to use three-value logic for null values.
When values are compared, the result can be true or false in most cases.
If a null value was compared to a null value in the Trifacta Photon running environment:
- In Release 3.0 and earlier, this evaluated to true.
- In Release 3.1 and later, this evaluates to an unknown (null) value.
This change aligns the behavior of the running environment with that of SQL and Hadoop Pig.
Improved handling of null values
Assume that the column nuller contains null values and that you have the following transform:
derive value:(nuller >= 0)
Prior to Release 3.1, the above transform generated a column of true values.
In Release 3.1 and later, the transform generates a column of null values.
More consistent evaluation of null values in ternaries
In the following example, a_null_expression always evaluates to a null value.
derive value: (a_null_expression ? 'a' : 'b')
In Release 3.0, this expression generated b for all inputs on the Trifacta Photon running environment and a null value on Hadoop Pig.
In Release 3.1 and later, this expression generates a null value for all inputs on both running environments.
Tip: Beginning in Release 3.1, you can use the if function instead of ternary expressions. Ternaries may be deprecated at some point in the future. For more information, see IF Function.
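For instance, the ternary expression shown above could be rewritten with the if function. This is only a sketch that reuses the if syntax shown in the next example; the test expression is the placeholder from the ternary above:
derive value: if(a_null_expression, 'a', 'b')
Because if is an ordinary function, it can also be nested or combined with other functions such as isnull.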
For example, suppose you have a dataset with a column named MyStringCol.
You test each row for the presence of the string can't:
derive value: if(find(MyStringCol, 'can\'t',true,0) > -1, true, false) as:'MyFindResults'
The above transform results in the following behavior:
In this case, the value of false is not written to the other columns, since the find function returns a null value. This null value, in turn, nullifies the entire expression, resulting in a null value written in the new column.
You can use the following to locate the null values:
derive value:isnull(MyFindResults) as:'nullInMyFindResults'
Datetime changes
Raw date and time values must be properly formatted
NOTE: Upgraded recipes continue to function properly. However, if you edit the recipe step in an upgraded system, you are forced to fix the formatting issue before saving the change.
Before this release, you could create a transform like the following:
derive value:date(2016,2,15)
This transform generated a column of map values, like the following:
{"year":"2016","month":"2","date":"15"}
Beginning this release, the above command is invalid, as the date values must be properly formatted prior to display. The following works:
derive value:dateformat(date(2016,2,15),'yyyy-MM-dd')
This transform generates a column of Datetime values in the following format:
2016-02-15
Time:
Before this release:
derive value:time(11,34,58)
Prior release output:
{"hours":"11","minutes":"34","seconds":"58"}
This release:
derive value:dateformat(time(11,34,58), 'HH-mm-ss')
This release's output:
11-34-58
Date formatting functions support 12-hour time only if an AM/PM indicator is included
Beginning in this release, the unixtimeformat and dateformat functions require an AM/PM indicator (a) if the date formatting string uses a 12-hour time indicator (h or hh).
Valid for earlier releases:
derive value: unixtimeformat(myDate, 'yyyy-MM-dd hh:mm:ss') as:'myUnixDate'
Valid for this release and later:
derive value: unixtimeformat(myDate, 'yyyy-MM-dd hh:mm:ss a') as:'myUnixDate'
These references in recipes fail to validate in this release or later and must be fixed.
Un-inferrable formats from dateformat and unixtimeformat functions are written as strings
If a formatting string is not a datetime format recognized by the Trifacta platform, the output is generated as a string value.
This change was made to provide clarity to some ambiguous conditions.
Colon as a delimiter for date values is no longer supported
Beginning in this release, the colon (:) is no longer supported as a delimiter for date values. It is still supported for time values.
When date data that uses colons as delimiters is imported, it may not be initially recognized by the Trifacta application as Datetime type.
To fix, you might apply the following transform:
replace col:myDateValue with:'-' on:`:` global:true
The new column values are more likely to be inferred as Datetime values. If not, you can choose the appropriate Datetime format from the data type drop-down for the column. See Data Grid Panel.
WRITE (ObjectScript)
Synopsis
WRITE:pc writeargument,... W:pc writeargument,...
where writeargument can be:
expression f *integer *-integer
Arguments
pc - An optional postconditional expression.
expression - The value to write to the current output device.
f - A format control that positions the output on the target device.
*integer - An integer code specifying a character to write.
*-integer - An integer code specifying a device control operation.
Description
The WRITE command displays the specified output on the current I/O device. (To set the current I/O device, use the USE command, which sets the value of the $IO special variable.) WRITE has two forms:
WRITE without an argument
WRITE with arguments
Argumentless WRITE
Argumentless WRITE lists the names and values of all defined local variables. It does not list process-private globals, global variables, or special variables. It lists defined local variables one variable per line in the following format:
varname1=value1
varname2=value2
Argumentless WRITE displays local variable values of all types as quoted strings. The exceptions are canonical numbers and object references. A canonical number is displayed without enclosing quotes. An object reference (OREF) is displayed as follows: myoref=<OBJECT REFERENCE>[1@%SQL.Statement]; a JSON array or JSON object is displayed as an object reference (OREF). Bit string values and List values are displayed as quoted strings with the data value displayed in encoded form.
The display of numbers and numeric strings is shown in the following example:
SET str="fred"
SET num=+123.40
SET canonstr="456.7"
SET noncanon1="789.0"
SET noncanon2="+999"
WRITE
canonstr=456.7
noncanon1="789.0"
noncanon2="+999"
num=123.4
str="fred"
Argumentless WRITE displays local variables in case-sensitive string collation order, as shown in the following WRITE output example:
A="Apple"
B="Banana"
a="apple varieties"
a1="macintosh"
a10="winesap"
a19="northern spy"
a2="golden delicious"
aa="crabapple varieties"
Argumentless WRITE displays the subscripts of a local variable in subscript tree order, using numeric collation, as shown in the following WRITE output example:
a(1)="United States"
a(1,1)="Northeastern Region"
a(1,1,1)="Maine"
a(1,1,2)="New Hampshire"
a(1,2)="Southeastern Region"
a(1,2,1)="Florida"
a(2)="Canada"
a(2,1)="Maritime Provinces"
a(10)="Argentina"
Argumentless WRITE executes control characters, such as Formfeed ($CHAR(12)) and Backspace ($CHAR(8)). Therefore, local variables that define control characters would display as shown in the following example:
SET name="fred"
SET number=123
SET bell=$CHAR(7)
SET formfeed=$CHAR(10)
SET backspace=$CHAR(8)
WRITE
backspace=" bell="" formfeed=" " name="fred" number=123
Multiple backspaces display as follows, given a local variable named back: 1 backspace: back="; 2 backspaces: back""; 3 backspaces: bac"="; 4 backspaces: ba"k="; 5 backspaces: b"ck="; 6 backspaces: "ack="; 7 or more backspaces: "ack=".
An argumentless WRITE must be separated by at least two blank spaces from a command following it on the same line. If the command that follows it is a WRITE with arguments, you must provide the WRITE with arguments with the appropriate line return f format control arguments. This is shown in the following example:
SET myvar="fred"
WRITE  WRITE  ; note two spaces following argumentless WRITE
WRITE  WRITE myvar  ; formatting needed
WRITE  WRITE !,myvar ; formatting provided
Argumentless WRITE listing can be interrupted by issuing a CTRL-C, generating an <INTERRUPT> error.
You can use argumentless WRITE to display all defined local variables. You can use the $ORDER function to return a limited subset of the defined local variables.
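For example, rather than listing every defined local variable, the following sketch uses $ORDER to traverse and display only the first-level nodes of a single local array (the a array from the subscript example above); this is one possible approach, not the only one:
SET sub=""
FOR {
  SET sub=$ORDER(a(sub))
  QUIT:sub=""
  WRITE "a(",sub,")=",$GET(a(sub)),!
}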
WRITE with Arguments
WRITE can take a single writeargument or a comma-separated list of writearguments. A WRITE command can take any combination of expression, f, *integer, and *-integer arguments.
WRITE expression displays the data value corresponding to the expression argument. An expression can be the name of a variable, a literal, or any expression that evaluates to a literal value.
WRITE f provides any desired output formatting. Because the argumented form of WRITE provides no automatic formatting to separate argument values or indicate strings, expression values will display as a single string unless separated by f formatting.
WRITE *integer displays the character represented by the integer code.
WRITE *-integer provides device control operations.
WRITE arguments are separated by commas. For example:
WRITE "numbers",1,2,3
WRITE "letters","ABC"
displays as:
numbers123lettersABC
Note that WRITE does not append a line return to the end of its output string. In order to separate WRITE outputs, you must explicitly specify f argument formatting characters, such as the line return (!) character.
WRITE "numbers ",1,2,3,!
WRITE "letters ","ABC"
displays as:
numbers 123 letters ABC
Arguments
pc
An optional postconditional expression. InterSystems IRIS executes the command if the postconditional expression is true (evaluates to a nonzero numeric value). InterSystems IRIS does not execute the command if the postconditional expression is false (evaluates to zero). You can specify a postconditional expression for an argumentless WRITE or a WRITE with arguments. For further details, refer to Command Postconditional Expressions in Using ObjectScript.
expression
The value you wish to display. Most commonly this is either a literal (a quoted string or a numeric) or a variable. However, expression can be any valid ObjectScript expression, including literals, variables, arithmetic expressions, object methods, and object properties. For more information on expressions, see Using ObjectScript.
An expression can be a variable of any type, including local variables, process-private globals, global variables, and special variables. Variables can be subscripted; WRITE only displays the value of the specified subscript node.
Data values, whether specified as a literal or a variable, are displayed as follows:
Character strings display without enclosing quotes. Some non-printing characters do not display: $CHAR 0, 1, 2, 14, 15, 28, 127. Other non-printing characters display as a placeholder character: $CHAR 3, 16–26. Control characters are executed: $CHAR 7–13, 27. For example, $CHAR(8) performs a backspace, $CHAR(11) performs a vertical tab.
Numbers display in canonical form. Arithmetic operations are performed.
Extended global references display as the value of the global, without indicating the namespace in which the global variable is defined. If you specify a nonexistent namespace, InterSystems IRIS issues a <NAMESPACE> error. If you specify a namespace for which you do not have privileges, InterSystems IRIS issues a <PROTECT> error, followed by the global name and database path, such as the following: <PROTECT> ^myglobal,c:\intersystems\IRIS\mgr\.
ObjectScript List structured data displays in encoded form.
InterSystems IRIS bitstrings display in encoded form.
Object References display as the OREF value. For example, ##class(%SQL.Statement).%New() displays as the OREF 2@%SQL.Statement. A JSON dynamic object or a JSON dynamic array displays as an OREF value. For information on OREFs, see “OREF Basics” in Defining and Using Classes.
Object methods and properties display the value of the property or the value returned by the method. The value returned by a Get method is the current value of the argument; the value returned by a Set method is the prior value of the argument. You can specify a multidimensional property with subscripts; specifying a non-multidimensional property with a subscript (or empty parentheses) results in an <OBJECT DISPATCH> error.
%Status displays as either 1 (success), or a complex encoded failure status, the first character of which is 0.
f
A format control to position the output on the target device. You can specify any combination of format control characters without intervening commas, but you must use a comma to separate a format control from an expression. For example, when you issue the following WRITE to a terminal:
WRITE #!!!?6,"Hello",!,"world!"
The format controls position to the top of a new screen (#), then issue three line returns (!!!), then indent six columns (?6). The WRITE then displays the string Hello, performs a format control line return (!), then displays the string world!. Note that the line return repositions to column 1; thus in this example, Hello is displayed indented, but world! is not.
Format control characters cannot be used with an argumentless WRITE.
For further details, see Using Format Controls with WRITE .
*integer
The *integer argument allows you to use a positive integer code to write a character to the current device. It consists of an asterisk followed by any valid ObjectScript expression that evaluates to a positive integer that corresponds to a character. The *integer argument may correspond to a printable character or a control character. An integer in the range of 0 through 255 evaluates to the corresponding 8-bit ASCII character. An integer in the range of 256 through 65534 evaluates to the corresponding 16-bit Unicode character.
As shown in the following example, *integer can specify an integer code, or specify an expression that resolves to an integer code. The following examples all return the word “touché”:
WRITE !,"touch",*233
WRITE !,*67,*97,*99,*104,*233
SET accent=233
WRITE !,"touch",*accent ; variables are evaluated
WRITE !,"touch",*232+1 ; arithmetic operations are evaluated
WRITE !,"touch",*00233.999 ; fractional numbers are truncated to integers
To write the name of the composer Anton Dvorak with the proper Czech accent marks, use:
WRITE "Anton Dvo",*345,*225,"k"
The integer resulting from the expression evaluation may correspond to a control character. Such characters are interpreted according to the target device. A *integer argument can be used to insert control characters (such as the form feed: *12) which govern the appearance of the display, or special characters such as *7, which rings the bell on a terminal.
For example, if the current device is a terminal, the integers 0 through 30 are interpreted as ASCII control characters. The following commands send ASCII codes 7 and 12 to the terminal.
WRITE *7  ; Sounds the bell
WRITE *12 ; Form feed (blank line)
Here’s an example combining expression arguments with *integer specifying the form feed character:
WRITE "stepping",*12,"down",*12,"the",*12,"stairs"
*integer and $X, $Y
An integer expression does not change the $X and $Y special variables when writing to a terminal. Thus, WRITE "a" and WRITE $CHAR(97) both increment the column number value contained in $X, but WRITE *97 does not increment $X.
You can issue a backspace (ASCII 8), a line feed (ASCII 10), or other control character without changing the $X and $Y values by using *integer. The following Terminal examples demonstrate this use of integer expressions.
Backspace:
WRITE $X,"/",$CHAR(8),$X ; displays: 01
WRITE $X,"/",*8,$X ; displays: 02
Linefeed:
WRITE $Y,$CHAR(10),$Y /* displays: 1 2 */
WRITE $Y,*10,$Y /* displays: 4 4 */
For further details, see the $X and $Y special variables, and “Terminal I/O” in I/O Device Guide.
*-integer
An asterisk followed by a negative integer is a device control code. WRITE supports the following general device control codes:
Input Buffer Controls
The *-1 and *-10 controls are used for input from a terminal device. These controls clear the input buffer of any characters that have not yet been accepted by a READ command. The *-1 control clears the input buffer upon the next READ. The *-10 control clears the input buffer immediately. If there is a pending CTRL-C interrupt when WRITE *-1 or WRITE *-10 is invoked, WRITE dismisses this interrupt before clearing the input buffer.
An input buffer holds characters as they arrive from the keyboard, even those the user types before the routine executes a READ command. In this way, the user can type-ahead the answers to questions even before the prompts appear on the screen. When the READ command takes characters from the buffer, InterSystems IRIS echoes them to the terminal so that questions and answers appear together. When a routine detects errors it may use the *-1 or *-10 control to delete these type-ahead answers. For further details, see Terminal I/O in the I/O Device Guide.
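For example, the following sketch discards any type-ahead immediately before prompting for input that should not be answered in advance (the prompt text and variable name are illustrative):
WRITE *-10  ; immediately clear the input buffer, discarding any type-ahead
READ !,"Enter the confirmation code: ",code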
For use of *-1 in TCP Client/Server Communication refer to the I/O Device Guide.
Output Buffer Controls
The *-3 control is used to flush data from an output buffer, forcing a write operation on the physical device. Thus it first flushes data from the device buffer to the operating system I/O buffer, then forces the operating system to flush its I/O buffer to the physical device. This control is commonly used when forcing an immediate write to a sequential file on disk. *-3 is supported on Windows and UNIX platforms. On other operating system platforms it is a no-op.
For use of *-3 in TCP Client/Server Communication refer to the I/O Device Guide.
Examples
In the following example, the WRITE command sends the current value in variable var1 to the current output device.
SET var1="hello world" WRITE var1
In the following example, both WRITE commands display the Unicode character for pi. The first uses the $CHAR function, the second a *integer argument:
WRITE !,$CHAR(960)
WRITE !,*960
The following example writes first name and last name values along with an identifying text for each. The WRITE command combines multiple arguments on the same line. It is equivalent to the two WRITE commands in the example that follows it. The ! character is a format control that produces a line break. (Note that the ! line break character is still needed when the text is output by two different WRITE commands.)
SET fname="Bertie"
SET lname="Wooster"
WRITE "First name: ",fname,!,"Last name: ",lname
is equivalent to:
SET fname="Bertie"
SET lname="Wooster"
WRITE "First name: ",fname,!
WRITE "Last name: ",lname
In the following example, assume that the current device is the user’s terminal. The READ command prompts the user for first name and last name and stores the input values in variables fname and lname, respectively. The WRITE command displays the values in fname and lname for the user’s confirmation. The string containing a space character (" ") is included to separate the output names.
Test
  READ !,"First name: ",fname
  READ !,"Last name: ",lname
  WRITE !,fname," ",lname
  READ !,"Is this correct? (Y or N) ",check#1
  IF "Nn"[check { GOTO Test }
The following example writes the current values in the client(1,n) nodes.
SetElementValues
  SET client(1,1)="Betty Smith"
  SET client(1,2)="123 Primrose Path"
  SET client(1,3)="Johnson City"
  SET client(1,4)="TN"
DisplayElementValues
  SET n=1
  WHILE $DATA(client(1,n)) {
    WRITE client(1,n),!
    SET n=n+1 }
  RETURN
The following example writes the current value of an object instance property:
SET myoref=##class(%SYS.NLS.Format).%New()
WRITE myoref.MonthAbbr
where myoref is the object reference (OREF), and MonthAbbr is the object property name. Note that dot syntax is used in object expressions; a dot is placed between the object reference and the object property name or object method name.
The following example writes the value returned by the object method GetFormatItem():
SET myoref=##class(%SYS.NLS.Format).%New()
WRITE myoref.GetFormatItem("MonthAbbr")
The following example writes the value returned by the object method SetFormatItem(). Commonly, the value returned by a Set method is the prior value for the argument:
SET myoref=##class(%SYS.NLS.Format).%New()
SET oldval=myoref.GetFormatItem("MonthAbbr")
WRITE myoref.SetFormatItem("MonthAbbr"," J F M A M J J A S O N D")
WRITE myoref.GetFormatItem("MonthAbbr")
WRITE myoref.SetFormatItem("MonthAbbr",oldval)
WRITE myoref.GetFormatItem("MonthAbbr")
A write command for objects can take an expression with cascading dot syntax, as shown in the following example:
WRITE patient.Doctor.Hospital.Name
In this example, the patient.Doctor object property references the Hospital object, which contains the Name property. Thus, this command writes the name of the hospital affiliated with the doctor of the specified patient. The same cascading dot syntax can be used with object methods.
A write command for objects can be used with system-level methods, such as the following data type property method:
WRITE patient.AdmitDateIsValid(date)
In this example, the AdmitDateIsValid() property method returns its result for the current patient object. AdmitDateIsValid() is a boolean method for data type validation of the AdmitDate property. Thus, this command writes a 1 if the specified date is a valid date, and writes 0 if the specified date is not a valid date.
Note that any object expression can be further specified by declaring the class or superclass to which the object reference refers. Thus, the above examples could also be written:
WRITE ##class(Patient)patient.Doctor.Hospital.Name
WRITE ##class(Patient)patient.AdmitDateIsValid(date)
WRITE with $X and $Y
A WRITE displays the characters resulting from the expression evaluation one at a time in left-to-right order. InterSystems IRIS records the current output position in the $X and $Y special variables, with $X defining the current column position and $Y defining the current row position. As each character is displayed, $X is incremented by one.
In the following example, the WRITE command gives the column position after writing the 11–character string Hello world.
WRITE "Hello world"," "_$X," is the column number"
Note that writing a blank space between the displayed string and the $X value (," ",$X) would cause that blank space to increment $X before it is evaluated; but concatenating a blank space to $X (," "_$X) displays the blank space, but does not increment the value of $X before it is evaluated.
Even using a concatenated blank, the display from $X or $Y does, of course, increment $X, as shown in the following example:
WRITE $Y," "_$X
WRITE $X," "_$Y
In the first WRITE, the value of $X is incremented by the number of digits in the $Y value (which is probably not what you wanted). In the second WRITE, the value of $X is 0.
With $X you can display the current column position during a WRITE command. To control the column position during a WRITE command, you can use the ? format control character. In the following WRITE commands, the ? performs indenting:
WRITE ?5,"Hello world",!
WRITE "Hello",!?5,"world"
Using Format Controls with WRITE
The f argument allows you to include any of the following format control characters. When used with output to the terminal, these controls determine where the output data appears on the screen. You can specify any combination of format control characters.
! Format Control Character
Advances one line and positions to column 0 ($Y is incremented by 1 and $X is set to 0). The actual control code sequence is device-dependent; it generally either ASCII 13 (RETURN), or ASCII 13 and ASCII 10 (LINE FEED).
InterSystems IRIS does not perform an implicit new line sequence for WRITE with arguments. When writing to a terminal it is a good general practice to begin (or end) every WRITE command with a ! format control character.
You can specify multiple ! format controls. For example, to advance five lines, WRITE !!!!!. You can combine ! format controls with other format controls. However, note that the following combinations, though permitted, are not in most cases meaningful: !# or !,# (advance one line, then advance to the top of a new screen, resetting $Y to 0) and ?5,! (indent by 5, then advance one line, undoing the increment). The combination ?5! is not legal.
If the current device is a TCP device, ! does not output a RETURN and LINE FEED. Instead, it flushes any characters that remain in the buffer and sends them across the network to the target system.
# Format Control Character
Produces the same effect as sending the CR (ASCII 13) and FF (ASCII 12) characters to a pure ASCII device. (The exact behavior depends on the operating system type, device, and record format.) On a terminal, the # format control character clears the current screen and starts at the top of the new screen in column 0. ($Y and $X are reset to 0.)
You can combine # format controls with other format controls. However, note that the following combinations, though permitted, are not in most cases meaningful: !# or !,# (advance one line, then advance to the top of a new screen, resetting $Y to 0) and ?5,# (indent by 5, then advance to the top of a new screen, undoing the increment). The combination ?5# is not legal.
?n Format Control Character
This format control consists of a question mark (?) followed by an integer, or an expression that evaluates to an integer. It positions output at the nth column location (counting from column 0) and resets $X. If this integer is less than or equal to the current column location (n<$X), this format control has no effect. You can reference the $X special variable (current column) when setting a new column position. For example, ?$X+3.
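For example, the following sketch uses ?n to align output into simple columns on a terminal (the column positions are arbitrary):
WRITE "Name",?15,"City",?30,"State",!
WRITE "Betty Smith",?15,"Johnson City",?30,"TN",!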
/mnemonic Format Control Character
This format control consists of a slash (/) followed by a mnemonic keyword, and (optionally) a list parameters to be passed to the mnemonic.
/mnemonic(param1,param2,...)
InterSystems IRIS interprets mnemonic as an entry point name defined in the active mnemonic space. This format control is used to perform such device functions as positioning the cursor on a screen. If there is no active mnemonic space, an error results. A mnemonic may (or may not) require a parameter list.
You can establish the active mnemonic space in either of the following ways:
Go to the Management Portal, select System Administration, Configuration, Device Settings, IO Settings. View and edit the mnemonic space setting.
Include the /mnemonic space parameter in the OPEN or USE command for the device.
The following are some examples of mnemonic device functions:
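For example, if the active mnemonic space defines a CUP (cursor position) entry point — as the ANSI X3.64-based terminal mnemonic space does — a WRITE can position the cursor by invoking that entry point with coordinates. The entry points available, and the parameters they take, are determined by the active mnemonic space, not by WRITE itself:

 WRITE /CUP(10,20)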
For further details on mnemonics, see the I/O Device Guide.
Specifying a Sequence of Format Controls
InterSystems IRIS allows you to specify a sequence of format controls and to intersperse format controls and expressions. When specifying a sequence of format controls it is not necessary to include the comma separator between them (though commas are permitted). A comma separator is required to separate format controls from expressions.
In the following example, the WRITE command advances the output by two lines and positions the first output character at the column location established by the input for the READ command.
READ !,"Enter the number: ",num SET col=$X SET ans=num*num*num WRITE !!,"Its cube is: ",?col,ans
Thus, the output column varies depending on the number of characters input for the READ.
Commonly, format controls are specified as literal operands for each WRITE command. You cannot specify format controls using variables, because they will be parsed as strings rather than executable operands. If you wish to create a sequence of format controls and expressions to be used by multiple WRITE commands, you can use the #Define preprocessor directive to define a macro, as shown in the following example:
#Define WriteMacro "IF YOU ARE SEEING THIS",!,"SOMETHING HAS GONE WRONG",##Continue
$SYSTEM.Status.DisplayError($SYSTEM.Status.Error(x)),!!
  SET x=83
Module1
  /* code */
  WRITE $$$WriteMacro
Module2
  /* code */
  WRITE $$$WriteMacro
Escape Sequences with WRITE
The WRITE command, like the READ command, provides support for escape sequences. Escape sequences are typically used in format and control operations. Their interpretation is specific to the current device type.
To output an escape sequence, use the form:
WRITE *27,"char"
where *27 is the ASCII code for the escape character, and char is a literal string consisting of one or more control characters. The enclosing double quotes are required.
For example, if the current device is a VT-100 compatible terminal, the following command clears the entire screen.
WRITE *27,"[2J"
To provide device independence for a program that can run on multiple platforms, use the SET command at the start of the program to assign the necessary escape sequences to variables. In your program code, you can then reference the variables instead of the actual escape sequences. To adapt the program for a different platform, simply make the necessary changes to the escape sequences defined with the SET command.
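For example (a minimal sketch; the sequences shown are standard VT-100/ANSI control codes), you could define the sequences once and then reference the variables wherever they are needed:

 SET clearScreen=$CHAR(27)_"[2J"   ; erase entire screen
 SET boldOn=$CHAR(27)_"[1m"        ; start bold text
 SET boldOff=$CHAR(27)_"[0m"       ; return to normal text
 WRITE clearScreen
 WRITE boldOn,"Main Menu",boldOff,!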
WRITE Compared with Other Write Commands
For a comparison of WRITE with the ZWRITE, ZZDUMP, and ZZWRITE commands, refer to the Display (Write) Commands features tables in the “Commands” chapter of Using ObjectScript.
See Also
Writing escape sequences for Terminal I/O and Interprocess Communications in the I/O Device Guide
Terminal I/O in I/O Device Guide
Sequential File I/O in I/O Device Guide
The Spool Device in I/O Device Guide | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_CWRITE | 2021-10-16T13:02:27 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.intersystems.com |
I have the following result:
((System.ValueType)0) != (System.ValueType)0 && ((System.ValueType)0).Equals(0)
The former is a bit unexpected. What should I make of it?
Can you provide a small code sample showing how the code is used?
It is not used because it does not work:
Debug.Assert(new Enum[] { ConsoleColor.Black }[0] == (Enum)ConsoleColor.Black);
If you hover over the "==" of this expression in Visual Studio it might help you see what's going on:
Debug.Assert(ConsoleColor.Black == ConsoleColor.Black);
You should see this in the popup:
bool ConsoleColor.operator ==(ConsoleColor left, ConsoleColor right)
So it's using the implementation of the equality operator that's automatically available to enums.
If you hover over the double equals in your example above you should see:
bool object.operator ==(object left, object right)
This is because Enum hasn't defined an equality operator, and because it's a class it's falling back to object's implementation.
Since you've boxed the ConsoleColor on both sides of the expression inside an Enum, the implementation of object's equality operator does its default reference equality check against the two boxes and returns false.
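A minimal console program (an illustrative snippet, not from the original thread) makes the difference visible:

using System;

class Program
{
    static void Main()
    {
        // Boxing: each cast of the enum value to Enum creates a separate heap object.
        Enum a = ConsoleColor.Black;
        Enum b = ConsoleColor.Black;

        Console.WriteLine(a == b);       // False - object's == compares references
        Console.WriteLine(a.Equals(b));  // True  - Enum.Equals compares type and underlying value
        Console.WriteLine(ConsoleColor.Black == ConsoleColor.Black); // True - the enum's own ==
    }
}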
I think that since ValueType is a class, your code creates three different objects (instances), although the contents of these instances are similar.
Calling ‘!=’ on different objects (references) gives True. (It does not compare the contents of the objects in this case).
Calling Equals returns True because it is designed to compare the contents.
See also: “Boxing”.
user mails were available in online archive and but now missing from the folder
Since the pst injected to cloud user was able to access all the emails in Online archive folder.
But last three month user is unable to find some emails where the emails are injected in the folder called 2017 - Inbox & 2018 -Inbox
using the app version of outlook
we have already checked individual folder but couldn’t find the missing mail
mails are not available in recycle bin or deleted folder
The mails are not hidden | https://docs.microsoft.com/en-us/answers/questions/569697/mails-missing.html | 2021-10-16T12:26:36 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.microsoft.com |
Prev Tutorial: Out-of-focus Deblur Filter
Next Tutorial: Anisotropic image segmentation by a gradient structure tensor
In this tutorial you will learn:

- what the point spread function (PSF) of a motion blur image is
- how to restore a motion blur image
For the degradation image model theory and the Wiener filter theory you can refer to the tutorial Out-of-focus Deblur Filter. On this page only a linear motion blur distortion is considered. The motion blur image on this page is a real world image. The blur was caused by a moving subject.
The point spread function (PSF) of a linear motion blur distortion is a line segment. Such a PSF is specified by two parameters: \(LEN\) is the length of the blur and \(THETA\) is the angle of motion.
On this page the Wiener filter is used as the restoration filter, for details you can refer to the tutorial Out-of-focus Deblur Filter. In order to synthesize the Wiener filter for a motion blur case, it needs to specify the signal-to-noise ratio ( \(SNR\)), \(LEN\) and \(THETA\) of the PSF.
You can find the source code in the samples/cpp/tutorial_code/ImgProc/motion_deblur_filter/motion_deblur_filter.cpp file of the OpenCV source code library.
A motion blur image recovering algorithm consists of PSF generation, Wiener filter generation and filtering a blurred image in a frequency domain:
A function calcPSF() forms a PSF according to input parameters \(LEN\) and \(THETA\) (in degrees):
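The tutorial's full implementation is in the sample file referenced above; a sketch consistent with that description (rasterizing the motion path as a filled, zero-width ellipse of length LEN at angle THETA, then normalizing the result to sum to 1) looks like this:

#include <opencv2/imgproc.hpp>
using namespace cv;

// Build a normalized line-segment PSF of length `len` pixels at angle `theta` (degrees).
void calcPSF(Mat& outputImg, Size filterSize, int len, double theta)
{
    Mat h(filterSize, CV_32F, Scalar(0));
    Point center(filterSize.width / 2, filterSize.height / 2);
    // A filled ellipse with a zero minor axis rasterizes the blur path as a line segment.
    ellipse(h, center, Size(0, cvRound(double(len) / 2.0)), 90.0 - theta, 0, 360, Scalar(255), FILLED);
    Scalar summa = sum(h);
    outputImg = h / summa[0];   // normalize so the PSF sums to 1
}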
A function edgetaper() tapers the input image’s edges in order to reduce the ringing effect in a restored image:
The functions calcWnrFilter(), fftshift() and filter2DFreq() realize an image filtration by a specified PSF in the frequency domain. The functions are copied from the tutorial Out-of-focus Deblur Filter.
Below you can see the real-world image with motion blur distortion. The license plate is not readable on either car. The red markers show the cars' license plate locations.
Below you can see the restoration result for the black car license plate. The result has been computed with \(LEN\) = 125, \(THETA\) = 0, \(SNR\) = 700.
Below you can see the restoration result for the white car license plate. The result has been computed with \(LEN\) = 78, \(THETA\) = 15, \(SNR\) = 300.
The values of \(SNR\), \(LEN\) and \(THETA\) were selected manually to give the best possible visual result. The \(THETA\) parameter coincides with the car’s moving direction, and the \(LEN\) parameter depends on the car’s moving speed. The result is not perfect, but at least it gives us a hint of the image’s content. With some effort, the car license plate is now readable.
You can also find a quick video demonstration of this license plate recovering method on YouTube.
The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises.
Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota.
Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application.
You can access installed applications from the web console by clicking Operators > Installed Operators.
You can deploy Marketplace applications from the web console's Administrator and Developer perspectives.
Cluster administrators can access Operator installation and application usage information from the Administrator perspective.
They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list. | https://docs.openshift.com/container-platform/4.7/applications/red-hat-marketplace.html | 2021-10-16T11:27:54 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.openshift.com |
Site asset
Before you can start building your website, you need to add a site asset. Once you create a site, you can build out your website by adding the required assets beneath it. You can add as many site assets as you want to your system.
Details screen
The Details screen for a site lets you change the name of the site and apply an index and not found page.
Special page links
This section lets you configure the various assets to use the contents of various particular site pages.
- Index
Select an asset to use as the home page of the site. This page will be the first the user will see when they navigate to the URL of the site.
- Select an asset to display when a user navigates to an asset for which they do not have read permission. If selected, the contents of this asset will be shown instead of a sign-in screen.
Not found page options
This section lets you configure the layout of the 'not found' page, specified in the Special page links section.
- Override design
Select a design (or design customization) to use for the 'not found' page. If you have not selected a design, the design already applied to the site is used.
- Cache globally
When this option is enabled, the 'not found' page is cached globally and will share a common cache entry everywhere, irrespective of the URL.
By default, this field is No.
URLs screen
Web URLs
This section lets you assign URLs (domains) to the site.
No URL is assigned by default. To assign a URL, enter it into the URLs field, select HTTP or HTTPS and click Save. The URL is applied to the site and any child assets created beneath it. You can assign as many URLs as you want to a site.
Read Configuring system settings for more information on the System configuration screen.
You can use a domain name and a subpath of a domain as the URL for your site. This option can be useful when using contexts, such as when you want to have different URL versions of your site for different languages.
Base context
For each URL you add to a site, you can set a base context. This action is only applicable if you are using contexts in your system.
The system uses the base context URL to determine which context to switch to by default when logging into admin or edit mode. For example, if one of your site URLs uses a German context and you sign in to Matrix through that URL, the German context is automatically activated for your Matrix interface.
Read the Context documentation for more information about this feature.
Authentication redirects
This section lets you create a redirect to authentication by an external mechanism as an alternative to displaying a sign-in box.
The redirection will continue in a chain if more than one redirect is configured and authentication does not occur.
For example, URL-A redirects to URL-B, and URL-B redirects to URL-C. Users accessing URL-A will be redirected to URL-B to authenticate. The user will then be redirected on to URL-C if authentication does not occur at URL-B.
The user will see a sign-in prompt if they are not authenticated and reach a URL that does not have a redirect defined.
To set up a redirect:
Select the origin URL in the From field.
Put the destination URL in the To field.
Click Save.
The redirect will appear in the current redirects section.
More information
Read Creating your first site and content page for a simplified example of setting up a site asset with a standard page. | https://docs.squiz.net/matrix/version/latest/features/core-assets/site-asset.html | 2021-10-16T12:55:33 | CC-MAIN-2021-43 | 1634323584567.81 | [array(['../_images/urls.png', 'URLs'], dtype=object)] | docs.squiz.net |
WRLDBuildingHighlight Class Reference
Represents a single selected building on the map, for displaying a graphical overlay to highlight the building, or for obtaining information about the building.
+ highlightWithOptions:
+ (instancetype)highlightWithOptions:(WRLDBuildingHighlightOptions *)highlightOptions
Instantiate a highlight with highlight options.
Returns: A WRLDBuildingHighlight instance.
@property color
@property (nonatomic, copy) UIColor *color
The color for this highlight.
@property buildingInformation
@property (nonatomic, readonly, copy, nullable) WRLDBuildingInformation *buildingInformation
Returns building information for the map building associated with this highlight, if available. Returns nil if the request for building information is still pending (internally, building information may be fetched asynchronously). Also returns nil if no building information was successfully retrieved for this building highlight. This may be either because no building exists at the query location supplied in the WRLDBuildingHighlightOptions construction parameters, or because an internal web request failed.
This section explains, through an example scenario, how the Smart Proxy EIP can be implemented using WSO2 ESB. The following topics are covered:
Introduction to Smart Proxy
The Smart Proxy EIP tracks messages on a service that publishes reply messages to the Return Address specified by the requestor. It stores the Return Address supplied by the original requestor and replaces it with the address of the Smart Proxy. When the service sends the reply message, the EIP routes it to the original Return Address. For more information, see the Smart Proxy pattern in the Enterprise Integration Patterns documentation.
Figure 1: Smart Proxy EIP
Example scenario
This example scenario demonstrates a stock quote service, and a sample client sends a stock quote request to the ESB. The service that the client invokes is the
SmartProxy, but through the ESB it manages to make calls to the back-end stock quote service. Smart Proxy simulates an address to the user. When the user sends a request, it will be diverted to another back-end server that will receive the response and send it back to the client. The ESB stores and manages information on what request the response should go to.
The diagram below depicts how to simulate the example scenario using the WSO2 ESB.
Figure 2: Example Scenario of the Smart Proxy EIP
Before digging into implementation details, let's take a look at the relationship between the example scenario and the Smart Proxy EIP. The ESB configuration used to simulate the scenario is given below:

<definitions xmlns="http://ws.apache.org/ns/synapse">
  <proxy name="SmartProxy">
    <target>
      <inSequence>
        <send>
          <endpoint>
            <address uri=""/>
          </endpoint>
        </send>
      </inSequence>
      <outSequence>
        <send/>
      </outSequence>
    </target>
  </proxy>
  <sequence name="fault">
    <log level="full">
      <property name="MESSAGE" value="Executing default &quot;fault&quot; sequence"/>
      <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
      <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
    </log>
    <drop/>
  </sequence>
  <sequence name="main">
    <in/>
    <out/>
  </sequence>
</definitions>
Simulating the sample scenario
Send a request using the Stock Quote client to WSO2 ESB as follows. For information about the Stock Quote client, refer to the section Sample Clients in the WSO2 ESB documentation.
ant stockquote -Dtrpurl= -Dsymbol=foo
Note that the message is returned to the original requester.
How the implementation works
Let's investigate the elements of the ESB configuration in detail. The line numbers below refer to the ESB configuration shown above.
- proxy [line 2 in ESB config] - The proxy called SmartProxy passes incoming messages to the back-end service.
- send [line 5 in ESB config] - The Send mediator sends the message to the back-end service. The ESB automatically sets the Reply-To header of the incoming request to itself before messages are forwarded to the back-end service.
- send [line 12 in ESB config] - The Send mediator inside the outSequence of the proxy sends response messages back to the original requester.
Package metadata
Package metadata is a way of defining message headers.
func NewContext
func NewContext(ctx context.Context, md Metadata) context.Context
type Metadata
Metadata is our way of representing request headers internally. They're used at the RPC level and translate back and forth from Transport headers.
type Metadata map[string]string
func FromContext
func FromContext(ctx context.Context) (Metadata, bool) | http://docs.activestate.com/activego/1.8/pkg/github.com/micro/go-micro/metadata/ | 2018-09-18T20:01:24 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.activestate.com |
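For example (a short usage sketch; the header names are arbitrary), metadata can be attached to an outgoing context with NewContext and read back with FromContext on the receiving side:

package main

import (
    "context"
    "fmt"

    "github.com/micro/go-micro/metadata"
)

func main() {
    // Attach request headers to a context.
    md := metadata.Metadata{
        "X-Request-Id": "abc-123",
        "User-Agent":   "example-client",
    }
    ctx := metadata.NewContext(context.Background(), md)

    // Later (e.g., in a handler), read the headers back out.
    if received, ok := metadata.FromContext(ctx); ok {
        fmt.Println(received["X-Request-Id"]) // abc-123
    }
}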
About the Splunk Add-on for NGINX
The Splunk Add-on for NGINX allows a Splunk software administrator to collect Web server activities, performance metrics, and error logs using file monitoring and API inputs. After the Splunk platform indexes the events, you can analyze the data using the prebuilt panels included with the add-on.
This add-on provides the inputs and CIM-compatible knowledge to use with other Splunk apps, such as Splunk Enterprise Security, the Splunk App for PCI Compliance, and Splunk IT Service Intelligence.
Download the Splunk Add-on for NGINX from Splunkbase.
Discuss the Splunk Add-on for NGINX on Splunk Answers.