content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
ceph-detect-init – display the init system Ceph should use¶
Description¶
ceph-detect-init is a utility that prints the init system
Ceph uses. It can be one of
sysvinit,
upstart or
systemd.
The init system Ceph uses may not be the default init system of the
host operating system. For instance on Debian Jessie, Ceph may use
sysvinit although
systemd is the default.
If the init system of the host operating system is unknown, return on
error, unless
--default is specified.
Options¶
--use-rhceph
¶
When an operating system identifies itself as Red Hat, it is treated as if it was CentOS. With
--use-rhcephit is treated as RHEL instead.
--default
INIT¶
If the init system of the host operating system is unknown, return the value of INIT instead of failing with an error.
Bugs¶
ceph-detect-init is used by ceph-disk to figure out the init system to manage the mount directory of an OSD. But only following combinations are fully tested:
- systemd on Ubuntu 15.04 and up
- systemd on Debian 8 and up
- systemd on RHEL/CentOS 7 and up
- systemd on Fedora 22 and up
Availability¶
ceph-detect-init is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at for more information.
See also¶
ceph-disk(8), ceph-deploy(8) | http://docs.ceph.com/docs/master/man/8/ceph-detect-init/ | 2018-02-18T04:38:11 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.ceph.com |
Optimization¶
Optimization provides an alternative approach to marginal inference.
In this section we refer to the program for which we would like to obtain the marginal distribution as the target program.
If we take a target program and add a guide distribution to each random choice, then we can define the guide
program as the program you get when you sample from the guide
distribution at each
sample statement and ignore all
factor
statements.
If we endow this guide program with adjustable parameters, then we can optimize those parameters so as to minimize the distance between the joint distribution of the choices in the guide program and those in the target. For example:
Optimize({ steps: 10000, model: function() { var x = sample(Gaussian({ mu: 0, sigma: 1 }), { guide: function() { return Gaussian({ mu: param(), sigma: 1 }); }}); factor(-(x-2)*(x-2)) return x; }});
This general approach includes a number of well-known algorithms as special cases.
It is supported in WebPPL by a method for performing optimization, primitives for specifying parameters, and the ability to specify guides. | http://docs.webppl.org/en/master/optimization/index.html | 2018-02-18T04:54:15 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.webppl.org |
Release Date
The Experience Builder service was updated, and will be deployed to all regions over the next several days. It contains the following updates:
Fixed issues
- Filtering content lists by origin displayed incorrect results.
- When displaying the Experience Builder sidebar, the website stopped responding due to a jQuery conflict.
- The Most Viewed Content sort order for content recommendations returned incorrect data. | https://docs.acquia.com/release-note/experience-builder-service-november-7-2017 | 2018-02-18T05:16:14 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.acquia.com |
Applies To: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
Understanding Key AD FS Concepts
It is recommended that you learn about the important concepts for Active Directory Federation Services and become familiar with its feature set.
Tip
You can find additional AD FS resource links at the AD FS Content Map page on the Microsoft TechNet Wiki. This page is managed by members of the AD FS Community and is monitored on a regular basis by the AD FS Product Team.
AD FS terminology used in this guide
Overview of AD FS
AD FS. | https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/technical-reference/understanding-key-ad-fs-concepts | 2018-10-15T10:35:50 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.microsoft.com |
Azure Service Broker Release Notes
v1.9.0
Release Date: August 30, 2018
Service broker
azure-rediscache
- Support update API. You can update service plan to change Redis Cache instance tier. Updating the configuration
enableNonSslPortis also supported now. For more information, see the Azure Redis Cache Service documentation.
Service broker
azure-sqldb-failover-group
- Support configuring failover policy. For an updated example, see the Azure Redis Cache Service documentation.
v1.7.0
Release Date: July 19, 2018
- Service broker
azure-sqldb-failover-group
- Add a new optional provisioning parameter
userPermissionsto specify extra permissions to grant to the user created in binding.
v1.6.1
Release Date: June 8, 2018
Service broker
azure-rediscache
- Treat 200 as success in creation to mitigate 504 gateway timeout error from Azure. Get 504, but it actually succeeded. Retry would get 200 instead of 201. As the REST API is idempotent, it is OK to also treat 200 as success.
Service broker
azure-sqldb
- Add plans for Standard S4 - S12 and Premium P15
- The original approach can’t correctly handle the case sql server name is provided by default parameters. Fixed.
Service broker
azure-sqldb-failover-group
- Fix FQDN generation RegEx issue in binding. In v1.6.0, The binding FQDN is not correct when the primary name contains dashes.
v1.6.0
Release Date: May 18, 2018
Service broker
azure-rediscache
- Service Plans
- Move the specification of SKU from provisioning parameters to service plans. This fixes the original service plans not matching the Azure Redis Cache SKUs. It may break scripts to create new Redis instances.
Service broker
azure-sqldb-failover-group
- Service plan
SecondaryDatabaseWithFailoverGroupto support creating SQL Database failover group based on two existing servers and one existing primary database.
- TService plan ‘ExistingDatabaseInFailoverGroup’ to support registering an existing failover group as a service instance.
- For more information, see Azure SQL Database Failover Group Service.
v1.5.2
Release Date: November 27, 2017
* Service broker
* Fixes possible SQL injection issue.
* Uses timing safe comparison in basic authorization.
* Uses bundled nodejs buildpack instead of the github buildpack.
* Improves password generator. Impacted modules:
azure-sqldb,
azure-mysqldb, and
azure-postgresqldb.
* Service broker
azure-mysqldb
* Credentials
* Fixes URI encoding issue.
* Username changes: username -> username@servername, for MySQL client to use directly.
* Service broker
azure-postgresqldb
* Credentials
* Fixes URI encoding issue.
* Username changes: username -> username@servername, for PostgreSQL client to use directly.
v1.5.0
Release Date: August 22, 2017
- Service broker
- New service instance can be created in an existing resource group, with
locationdifferent from the group location.
- Service broker
azure-sqldb
- The service plan can be updated.
- Connection policy can be set if creating a new server.
- Keep binding credentials compatible with Cloud Foundry MySQL Release.
- Service broker
azure-mysqldb
- Create a database after creating server. The name of database can be specified.
- The
jdbcUrlin the binding credential is just the JDBC URL for the database.
- Keep binding credentials compatible with Cloud Foundry MySQL Release.
- Service broker
azure-postgresqldb
- Creating a database after creating server. The name of database can be specified.
- The
jdbcUrlin the binding credential is just the JDBC URL for the database.
- Keep binding credentials compatible with Cloud Foundry MySQL Release.
- Service broker
azure-eventhubs
- Fix the resource provider and separate it from the
azure-servicebusas an independent broker.
- Remove pricing tier from the parameters passed in, and expose it as service plans.
- Service broker
azure-servicebus
- Remove namespace type from the parameters passed in since
azure-eventhubsis separated.
- Remove pricing tier from the parameters passed in, and expose it as service plans.
v1.4.0
Release Date: July 26, 2017
- Service broker
- Allow to set default parameters for each service when installing or updating the broker.
- Support to create service instances on Azure USGovernment and Azure German Cloud.
- The NodeJS dependencies are vendored so that the installation will not downlod the dependencies.
- Service broker
azure-rediscache
- Add
redisUrlto support Spring apps.
- Service broker
azure-servicebus
- Update parameter style:
resource_group_name->
resourceGroup,
namespace_name->
namespaceName, and
messaging_tier->
messagingTier.
- Service broker
azure-storage
- Update parameter style:
resource_group_name->
resourceGroup,
storage_account_name->
storageAccountName, and
account_type->
accountType.
- Service broker
azure-sqldb
- Add option
Enable Transparent Data Encryptionin SQL Database Config.
- Add
Resource Group of the SQL Serverand
Location of the SQL Serveras a required part of SQL Server credentials in SQL Database Config.
v1.3.0
Release Date: June 23, 2017
- Service broker
azure-sqldb:
- Adds support to update server administrator password if the server password is changed. For information, see Updating the server instance.
- Add service plans for SQL Data Warehouse.
- Add
sqlServerFullyQualifiedDomainNamein the service credentials.
- Refine
jdbcUrlin the service credentials and add
jdbcUrlForAuditingEnabledfor SQL server with auditing enabled. For information, see Format of Credentials.
- Add a new service broker
azure-mysqldb:
- Support to create service instances for Azure Database for MySQL (preview).
- The broker document.
- Add a new service broker
azure-postgresqldb:
- Support to create service instances for Azure Database for PostgreSQL (preview).
- The broker document.
- Add a new service broker
azure-cosmosdb:
- Support to create service instances for Azure Cosmos DB.
- The broker document.
- Service broker
azure-documentdb:
- Document DB upgraded and renamed Cosmos DB. The service instances in use still work. For new instances, Microsoft recommends the new service Cosmos DB.
- Rewrite some modules using REST APIs directly.
- Change the logging library to Winston.
v1.2.3
Release Date: May 27, 2017
Regenerated from v1.2.2 using tile-generator v7.0.2.
Fixed in this release:
- The package path is unintialized before it’s used
v1.2.2
Release Date: May 12, 2017
Regenerated from v1.2.1 using tile-generator v6.0.0.
Fixed in this release:
- Security updates to address CVE-2017-4975
Features included in this release:
- Upgrades the stemcell version to 3363
v1.2.1
Release Date: March 10, 2017
Features included in this release:
Azure SQL Database Service:
- Adds a new config form SQL Database Config. It allows the operator to disable the developer from creating the SQL server. To do this, uncheck the Allow to Create Sql Server checkbox and provide SQL Server credentials by using Add in SQL Database Config. Then, the developer needs to specify the SQL server name in the module configuration.
- Provides the database-level users instead of the server-level users as the credentials. You no longer get the credentials of the admin user.
- Adds Transparent Data Encryption support.
- Adds
jdbcUrlstring property support:
- Append more options
Encrypt=true;TrustServerCertificate=false;HostNameInCertificate=*.database.windows.net;loginTimeout=30to keep consistent with Azure Portal.
- Add
jdbcUrlForAuditingEnabled. It should be used when auditing is enabled.
- Fix the issue of the allowed IP in the temporary firewall rule.
- Fixes the issue of the allowed IP in the temporary firewall rule.
Upgrades the stemcell version to 3312
v1.2.0
Release Date: December 22, 2016
Features included in this release:
- Supports Azure Storage
- Supports Azure Redis Cache
- Supports Azure DocumentDb
- Supports Azure SQL Database
- Supports Azure Service Bus and Event Hubs
- Supports PCF v1.8.x | https://docs.pivotal.io/partners/azure-sb/release-notes.html | 2018-10-15T10:13:12 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.pivotal.io |
1 Introduction
The Collaborate category supports collaboration with your team and the tracking of sprints and other tasks in the app.
This category is divided into the five pages presented below.
2 Buzz
The Buzz lets you see and share ideas as well as collaborate with your team. You will get an overview of new team members, new stories and their changes, and sprints which are added or completed.
For more details, see Buzz for App.
3 Team
The Team page shows an overview of your team members. It’s also the place to Invite Members and Manage your team.
Click Name or Role to filter in an ascending or descending order. The default sorting order is ascending by Name.
For more details, see Team.
4 Stories
The Stories page lets you add, edit, and delete stories, sprints, and labels. You can also import to and export from Excel and view the history.
For more details, see Stories.
5 Feedback
The Feedback pages show an overview of the feedback provided about the app. There are two ways of submitting feedback:
- Add feedback on the Feedback page in the Developer Portal.
- Add feedback from within the app (to go to the app, click View App).
The Feedback button in the Developer Portal is used to provide feedback on the Mendix Platform. It is intended for low priority issues, questions, and ideas on how to improve the Mendix Platform.
For more details, see Feedback and How to Provide Feedback on Mendix.
6 Documents
The Documents page lets you upload files related to the app. It is possible to replace a current file with a newer version, add labels, comment, and download files.
For more details, see Documents. | https://docs.mendix.com/developerportal/collaborate/ | 2018-10-15T10:07:49 | CC-MAIN-2018-43 | 1539583509170.2 | [array(['attachments/collaborate.png', None], dtype=object)] | docs.mendix.com |
On non-shape charts in a paginated report, Reporting Services selects a new color.
Note
You can create and modify paginated report definition (.rdl) files in Report Builder and in Report Designer in SQL Server Data Tools. Each authoring environment provides different ways to create, open, and save reports and related items.
Note
You will need to replace the "Color1" strings with your own colors. You can use named colors, for example "Red", or you can use six-digit hexadecimal value that represent the color, such as "#FFFFFF" for black. If you have more than three colors defined, you will need to extend the array of colors so that the number of colors in the array matches the number of points in your shape chart. You can add new colors to the array by specifying a comma-separated list of string values that contain named colors or hexadecimal representations of colors.
Click OK.
Right-click on the shape chart and select Series Properties.
In Fill, click the Expression (fx) button to edit the expression for the Color property.
Type the following expression, where "MyCategoryField" is the field that is displayed in the Category Groups area:
=Code.GetColor(Fields!MyCategoryField)
See Also
Formatting Series Colors on a Chart (Report Builder and SSRS)
Add Bevel, Emboss, and Texture Styles to a Chart (Report Builder and SSRS)
Define Colors on a Chart Using a Palette (Report Builder and SSRS)
Add Empty Points to a Chart (Report Builder and SSRS)
Shape Charts (Report Builder and SSRS)
Linking Multiple Data Regions to the Same Dataset (Report Builder and SSRS)
Nested Data Regions (Report Builder and SSRS)
Sparklines and Data Bars (Report Builder and SSRS) | https://docs.microsoft.com/en-us/sql/reporting-services/report-design/specify-consistent-colors-across-multiple-shape-charts-report-builder-and-ssrs | 2017-07-20T17:48:21 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.microsoft.com |
You can edit a user-defined security tag. If a security group is based on the tag you are editing, changes to the tag may affect security group membership.
Procedure
- Log in to the vSphere Web Client.
- Click Networking & Security and then click NSX Managers.
- Click an NSX Manager in the Name column and then click the Manage tab.
- Click the Security Tags tab.
- Right-click a security tag and select Edit Security Tag.
- Make the appropriate changes and click OK. | https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.3/com.vmware.nsx.admin.doc/GUID-ABC3A83E-25E1-4051-90B1-1DBD30E5C893.html | 2017-07-20T16:40:14 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
The basic concepts of ESXi networking and how to set up and configure a network in a vSphere environment are discussed. Networking Concepts OverviewA few concepts are essential for a thorough understanding of virtual networking. If you are new to ESXi, it is helpful to review these concepts. Network Services in ESXiA virtual network provides several services to the host and virtual machines. VMware ESXi Dump Collector SupportThe ESXi Dump Collector sends the state of the VMkernel memory, that is, a core dump to a network server when the system encounters a critical failure. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-8CDF29B2-ABA8-4F34-9FEF-14987BC13265.html | 2017-07-20T16:41:28 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
Scanning is the process in which attributes of a set of hosts, virtual machines, or virtual appliances are evaluated against the patches, extensions, and upgrades included in the attached baselines and baseline groups.
You can configure Update Manager to scan virtual machines, virtual appliances, and ESXi hosts by manually initiating or scheduling scans to generate compliance information. To generate compliance information and view scan results, you must attach baselines and baseline groups to the objects you scan.
To initiate or schedule scans, you must have the Scan for Applicable Patches, Extensions, and Upgrades privilege. For more information about managing users, groups, roles, and permissions, see vCenter Server and Host Management. For a list of Update Manager privileges and their descriptions, see Update Manager Privileges.
You can scan vSphere objects from the Update Manager Client Compliance view. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.update_manager.doc/GUID-728B4587-DE37-4AEF-8FDE-A202B662AD0B.html | 2017-07-20T16:41:32 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
Take snapshots of all your appliances and Windows servers. If the installation fails, you can revert to these snapshots and try to install again.
About this task
The snapshots preserve your configuration work. Be sure to include a snapshot of the vRealize Automation appliance on which you are running the wizard.
Instructions are provided for vSphere users.
Note:
Do not exit the installation wizard or cancel the installation.
Procedure
- Open another browser and log in to the vSphere Client.
- Locate your server or appliance in the vSphere Client inventory.
- Right-click the server the inventory and select Take Snapshot.
- Enter a snapshot name.
- Select Snapshot the virtual machine's memory checkbox to capture the memory of the server and click OK.
The snapshot is created.
Results
Repeat these steps to take snapshots of each of your servers or appliances. | https://docs.vmware.com/en/vRealize-Automation/7.2/com.vmware.vra.install.doc/GUID-8192CFAD-13C6-4E29-A6DC-296F9AE3C599.html | 2017-07-20T16:42:22 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
You can customize the default policy and base policies included with vRealize Operations Manager for your own environment. You can then apply your custom policy to groups of objects, such as the objects in a cluster, or virtual machines and hosts, or to a group that you create to include unique objects and specific criteria.
You must be familiar with the policies so that you can understand the data that appears in the user interface, because policies drive the results that appear in the vRealize Operations Manager dashboards, views, and reports.
To determine how to customize operational policies and apply them to your environment, you must plan ahead. For example:
Must you track CPU allocation? If you overallocate CPU, what percentage must you apply to your production and test objects?
Will you overallocate memory or storage? If you use High Availability, what buffers must you use?
How do you classify your logically defined workloads, such as production clusters, test or development clusters, and clusters used for batch workloads? Or, do you include all clusters in a single workload?
How do you capture peak use times or spikes in system activity? In some cases, you might need to reduce alerts so that they are meaningful when you apply policies.
When you have privileges applied to your user account through the roles assigned, you can create and modify policies, and apply them to objects. For example:
Create a policy from an existing base policy, inherit the base policy settings, then override specific settings to analyze and monitor your objects.
Use policies to analyze and monitor vCenter Server objects and non-vCenter Server objects.
Set custom thresholds for analysis settings on all object types to have vRealize Operations Manager report on workload, anomalies, faults, capacity, stress, and so on.
Enable specific attributes for collection, including metrics, properties, and super metrics.
Enable or disable alert definitions and symptom definitions in your custom policy settings.
Apply the custom policy to object groups.
When you use an existing policy to create a custom policy, you override the policy settings to meet your own needs. You set the allocation and demand, the overcommit ratios for CPU and memory, and the thresholds for capacity risk and buffers. To allocate and configure what your environment is actually using, you use the allocation model and the demand model together. Depending on the type of environment you monitor, such as a production environment versus a test or development environment, whether you over allocate at all and by how much depends on the workloads and environment to which the policy applies. You might be more conservative with the level of allocation in your test environment and less conservative in your production environment..
Your policies are unique to your environment. Because policies direct vRealize Operations Manager to monitor the objects in your environment, they are read-only and do not alter the state of your objects. For this reason, you can override the policy settings to fine-tune them until vRealize Operations Manager displays the results that are meaningful and that affect for your environment. For example, you can adjust the capacity buffer settings in your policy, and then view the data that appears in the dashboards to see the effect of the policy settings. | https://docs.vmware.com/en/vRealize-Operations-Manager/6.5/com.vmware.vcom.core.doc/GUID-DA4B0A20-6C39-4395-94D5-648302A4428C.html | 2017-07-20T16:43:25 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.vmware.com |
let:
Copy
CREATE INDEX GenreAndPriceIndex ON Music (genre, price);
DynamoDB
In DynamoDB, you can create and use a secondary index for similar purposes.
Indexes in DynamoDB are different from their relational counterparts. When you
create a secondary index, you must specify its key attributes – a partition key and
a
sort key. After you create the secondary index, you can
Query it or
Scan it just as you would with a table. DynamoDB does not
have a query optimizer, so a secondary index is only used when you
Query it or
Scan it.
DynamoDB supports two different kinds of indexes:
Global secondary indexes – The primary key of the index can be any two attributes from its table.
Local secondary indexes – The partition key of the index must be the same as the partition key of its table. However, the sort key can be any other attribute.
DynamoDB ensures that the data in a secondary index is eventually consistent with
its table.
You can request strongly consistent
Query or
Scan actions on a table or a local secondary index. However, global
secondary indexes only support eventual consistency.
You can add a global secondary index to an existing table, using the
UpdateTable action and specifying
GlobalSecondaryIndexUpdates:
Copy
{ the Amazon DynamoDB Getting Started Guide. | http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.Indexes.html | 2017-07-20T16:24:21 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.aws.amazon.com |
Installing and Endorsing the JDK
Before installing Mule ESB, ensure that the host machine has one of the following Java Development Kit versions installed on it:
Standard Edition 1.6.0_26 (also known as JDK SE 6 Update 26) or more recent (including SE 7)
Enterprise Edition 1.6u3 (JDK EE 6 Update 3) or more recent (including EE 7) | https://docs.mulesoft.com/mule-user-guide/v/3.3/installing-and-endorsing-the-jdk | 2017-07-20T16:34:57 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.mulesoft.com |
Panther library function interface for MTS applicationspublic interface ComFunctionsInterface
int log (String text, int code);int raise_exception (int code);int receive_args (String text);int return_args (String text);int sm_mts_CreateInstance (String text);int sm_mts_CreateProperty (String group, String prop);int sm_mts_CreatePropertyGroup (String group);int sm_mts_DisableCommit ();int sm_mts_EnableCommit ();String sm_mts_GetPropertyValue (String group, String prop);int sm_mts_IsCallerInRole (String role);int sm_mts_IsInTransaction ();int sm_mts_IsSecurityEnabled ();int sm_mts_PutPropertyValue (String group, String prop, String value);int sm_mts_SetAbort ();int sm_mts_SetComplete ();
Java only for COM/MTS
Objects that implement this interface provide access to functions that are of use in service components running under COM/MTS. Java methods that implement a service component's public methods are passed an object of type
ComFunction sInterface as a parameter.
Additional COM functions, such as
sm_obj_calland
sm_com_result, are implemented as part of the
CFunctionsInterface. | http://docs.prolifics.com/panther/html/prg_html/javafun3.htm | 2017-07-20T16:45:21 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.prolifics.com |
Overview¶
dRep is a python program which performs rapid pair-wise comparison of genome sets. One of it’s major purposes is for genome de-replication, but it can do a lot more.
The publication is available at bioRxiv.
Source code is available on GitHub.
Genome de-replication¶
De-replication is the process of identifying sets of genomes that are the “same” in a list of genomes, and removing all but the “best” genome from each redundant set. How similar genomes need to be to be considered “same”, how to determine which genome is “best”, and other important decisions are discussed in Choosing parameters
A common use for genome de-replication is the case of individual assembly of metagenomic data. If metagenomic samples are collected in a series, a common way to assemble the short reads is with a “co-assembly”. That is, combining the reads from all samples and assembling them together. The problem with this is assembling similar strains together can severely fragment assemblies, precluding recovery of a good genome bin. An alternative option is to assemble each sample separately, and then “de-replicate” the bins from each assembly to make a final genome set.
The steps to this process are:
- Assemble each sample separately using your favorite assembler. You can also perform a co-assembly to catch low-abundance microbes
- Bin each assembly (and co-assembly) separately using your favorite binner
- Pull the bins from all assemblies together and run dRep on them
- Perform downstream analysis on the de-replicated genome list
Genome comparison¶
Genome comparison is simply comparing a list of genomes in a pair-wise manner. This allows identification of groups of organisms that share similar DNA content in terms of Average Nucleotide Identity (ANI).
dRep performs this in two steps- first with a rapid primary algorithm (Mash), and second with a more sensitive algorithm (gANI). We can’t just use Mash because, while incredibly fast, it is not robust to genome incompletenss (see Choosing parameters) and only provides an “estimate” of ANI. gANI is robust to genome incompleteness and is more accurate, but too slow to perform pair-wise comparisons of longer genome lists.
dRep first compares all genomes using Mash, and then only runs the secondary algorithm (gANI or ANIm) on sets of genomes that have at least 90% Mash ANI. This results it a great decrease in the number of (slow) secondary comparisons that need to be run while maintaining the sensitivity of gANI.
| http://drep.readthedocs.io/en/latest/overview.html | 2017-07-20T16:26:26 | CC-MAIN-2017-30 | 1500549423269.5 | [array(['_images/Figure0.jpg', '_images/Figure0.jpg'], dtype=object)
array(['_images/FigureD.png', '_images/FigureD.png'], dtype=object)] | drep.readthedocs.io |
To (Report Builder and SSRS).
Connection String:
DataSource=
For more connection string examples, see Data Connections, Data Sources, and Connection Strings in Report Builder.
Credentials (Report Builder and SSRS) or Specify Credentials in Report Builder.
Queries,).
Extended Field Properties.
Note
Values exist for extended field properties only if the data source provides these values when your report runs and retrieves the data for its datasets. You can then refer to those Field property values from any expression using the syntax described below. However, because these fields are specific to this data provider and not part of the report definition language, changes that you make to these values are not saved with the report definition.).
Remarks.)
Provides in-depth information about platform and version support for each data extension.
See Also
Report Parameters (Report Builder and Report Designer)
Filter, Group, and Sort Data (Report Builder and SSRS)
Expressions (Report Builder and SSRS) | https://docs.microsoft.com/en-us/sql/reporting-services/report-data/sap-netweaver-bi-connection-type-ssrs | 2017-07-20T17:41:55 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.microsoft.com |
Steps to or not you want to auto-schedule bill runs rather than run them manually (if you plan to use monetization for billing documents). Learn more about the scheduler and scheduled jobs in Schedule monetization jobs, and about generating billing documents manually in Create billing documents.
-.
- Configure billing documents. The configuration provides some basic information such as the country in which you’re registered for tax purposes (this allows monetization to generate the applicable taxes on invoices and other billing documents).
- Enforce monetization limits on the API proxies that you will include in your API products.
- Monetize API products that you want to bundle into API packages.
-.
- Issuing credits notification setup?) | http://docs.apigee.com/monetization/content/set-monetization | 2017-07-20T16:24:28 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.apigee.com |
Create Your First Amazon S3 Import Job
With an Amazon S3 import, AWS uploads individual files from your device to objects in an Amazon S3 bucket.
We recommend encrypting your data prior to sending it to us. For import to Amazon S3, you can provide a PIN-code protected device, or you can encrypt your data by using TrueCrypt software encryption, or both. You must provide the PIN code or password in the manifest when you create the import job. For more information, see Encrypting Your Data.
If you encrypt your data, the data is decrypted using the PIN code or TrueCrypt password you provide. Your device's file system is mounted on an AWS Import/Export data loading station and each file is uploaded as an object in your Amazon S3 bucket. You can optionally manage directories and filenames as part of the load process.
After the import operation completes, AWS Import/Export erases your
You must submit a separate job request for each device.
To create an Amazon S3 import job
Create a manifest file.
Prepare your device for shipping.
Send a
CreateJobrequest.
AWS Import/Export sends a response with a
SIGNATUREfile and job information.
Copy the
SIGNATUREfile to your device.
Send a GetShippingLabel request.
Ship your device.
Tip
You can create an Amazon S3 bucket using AWS Management Console. For more information, go to.
These steps assume that you have signed up for an AWS account and created your
AWScredentials.properties file as described in the earlier
tasks.
Create a Manifest File
The manifest file is a YAML-formatted text file that tells AWS how to handle your job. It consists of a set of name-value pairs that supply required information, such as your device ID, TrueCrypt password, and return address.-import-manifest.txtfile in a text editor.Copy
manifestVersion:2.0 generator:Text editor bucket:
[Enter import bucket]deviceId:
[Enter device serial number]eraseDevice:
yesnotificationEmail:
[Optional – PIN code]trueCryptPassword:
[Optional – password]acl:private serviceLevel:standard manifest option, see Customs Manifest File Option.
In the file, replace all text in brackets with the appropriate values. Delete any unused optional lines.
Provide the name of the existing Amazon S3 bucket where you want to upload your data. AWS Import/Export loads the data to a single bucket. For the
bucketoption provide only the name of your bucket, for example
s3examplebucket.
For the
eraseDevicefield, specify
yesto acknowledge that AWS will erase your device following the import operation before shipping it. If the
eraseDevicefield is missing or if the value is not
yes, the job request will fail.
For the
notificationEmailfield, enter one or more email addresses, separated by semicolons, so we can contact you if there are any problems. If the
notificationEmailfield is missing or empty, the job request will fail.
Save the file as
MyS3ImportManifest.txtin the same folder as your
AWSCredentials.propertiesfile.
For more information about manifest file options, see Creating Import Manifests.
Prepare Your Device for Import
Next, you prepare your device for import.
Optionally encrypt your data by using one of the encryption mechanisms supported by AWS.
For added security, we strongly recommend that you encrypt your data. You can encrypt the drive or create an encrypted file container on your device. For more information, see Encrypting Your Data.
Verify that all file names are valid. File names must use standard ASCII or UTF-8 character encoding. Any file with a name that is not a valid ASCII or UTF-8 string is not imported.
Copy your data to the device. Do not ship the only copy of your data. AWS will erase your device, even if we cannot perform an import.
If your device is not properly prepared.
Send a CreateJob Request
Now that you have your credentials file and manifest file in place, you send a
CreateJob request to AWS Import/Export. You submit a separate
CreateJob request for each device.
Open a command prompt (or, on a Mac, use the Terminal application), and change to the directory where you unzipped the AWS Import/Export tool.
Enter the following
CreateJobrequest on the command line.Copy
CmdPrompt>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CreateJob Import MyS3Import.
Copy
JOB CREATED JobId: ABCDE JobType: Import **************************************** * AwsShippingAddress * **************************************** Please call GetShippingLabel API to retrieve shipping address **************************************** * SignatureFileContents * **************************************** version:2.0 signingMethod:HmacSHA1 jobId:ABCDE-VALIDATE-ONLY signature:cbfdUuhhmauYS+ABCl5R9heDK/V= Writing SignatureFileContents to cmdPrompt\.?. | http://docs.aws.amazon.com/AWSImportExport/latest/DG/GSCreateSampleS3ImportRequest.html | 2017-07-20T16:25:10 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.aws.amazon.com |
iOS: Common Reasons for Failing Push Notifications
There are many possible reasons why your iOS users are not receiving push notifications. This article lists the most common of them.
- Q: What does the "Apple notification service connection couldn't be established due to invalid key/certificate." failure message mean?
- Possible cause: Using an iOS certificate for the wrong environment
- Possible cause: Using a wrong iOS certificate
- Possible cause: Expired iOS certificate
- Possible cause: Device is not registered in the backend
- Possible cause: Device is marked as inactive in the backend
- Possible cause: You are not targeting the correct platform
- Possible cause: The user device has no network connectivity
Q: What does the "Apple notification service connection couldn't be established due to invalid key/certificate." failure message mean?
A: Apple Push Notification service (APNs) responds that the server certificate used to initiate the TLS connection is not valid.
See Possible Cause: Possible Cause: Using a Wrong iOS Certificate
Possible cause: Using an iOS certificate for the wrong environment
Detailed description
Apple issues different SSL certificates for Development and Production purposes. You need to observe which one you use at any time or your devices will not be able to receive push notifications.
Recommended solution
Ensure that the type of the certificate that is selected as Active in Telerik Platform matches the type of the iOS Provisioning Profile and the type of the iOS certificate that you use to sign the app.
For example, if you have signed your app using an iOS App Development certificate on a Development Provisioning Profile you need to upload and activate an "Apple Push Notification service SSL" Development iOS certificate in Telerik Platform. Both certificates must be issued for the same App ID.
Possible cause: Using a wrong ios certificate
Detailed description
Apple issues different SSL certificates for server deployment and app signing. You need to observe which one you upload to Telerik Platform.
Recommended solution
Ensure that the active certificate in Telerik Platform is "Apple Push Notification service SSL" (for example
Apple Production IOS Push Services - com.example.sampleapp.p12), which is a server certificate, and not a client certificate that is used to sign apps.
Possible cause: Expired iOS certificate
Detailed description
An iOS certificate cannot be used for sending push notifications after its expiration date has passed. iOS certificates can also be inactive because they were revoked or invalidated.
Recommended solution
Sign in to Apple's Developer Center and ensure that your provisioning or development profile is active and that the respective certificates have not been expired, revoked, or invalidated.
Possible cause: Device is not registered in the backend
Detailed Description
In addition to registering a device with Apple Push Notification Service (APNS), you need to register it with Telerik Platform as well.
Recommended solution
To understand the concept of sending push notifications through Telerik Platform, see Introduction to Push Notifications.
To learn how to register a device, see Initializing and Registering a Device.
Possible cause: Device is marked as inactive in the backend
Detailed description
A device will be marked as "active: false" when its token is returned as invalid after sending a push notification to it. This could happen for the following reasons:
- The server certificate (such as the "Apple Push Notification service SSL") that you are using is for a Development Provisioning Profile, but the app is using an Production Provisioning Profile and the push token has been issued for Production (or the other way around)
- The token is not a valid token issued by APNs
- The "Apple Push Notification service SSL" certificate used on the server has expired or is not valid anymore (for example, has been revoked)
- The user has uninstalled your application
Recommended solution
Double-check the client and server certificates and Provisioning Profiles and their validity in the iOS Dev Center.
You can resolve some of the cases by taking these steps: 1. Rebuild and redeploy the app with the proper certificate and Provisioning Profile. 2. Reregister the device with Telerik Platform.
Possible cause: You are not targeting the correct platform
Detailed description
Telerik Platform allows you to send a push notification to multiple platforms.
Recommended solution
- When using the portal to send a push notification: Ensure that you selected either Broadcast or Platform Specific: iOS.
- When sending a push notification programmatically: Ensure that you have included a dedicated section for IOS or at least the default-sink Message value. See Push Notification Object Field Reference for details.
Possible cause: The user device has no network connectivity
Detailed Description
In addition to having Internet access, the device must have unrestricted access to TCP port 5223 used by the Apple Push Notification service (APNs).
Recommended solution
When the device is connecting to APNs over WiFi ensure that no firewalls are blocking inbound and outbound TCP packets over port 5223. | http://docs.telerik.com/platform/backend-services/dotnet/push-notifications/troubleshooting/push-trb-ios | 2017-07-20T16:21:49 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.telerik.com |
Info Boxes
These wiki pages are used as information boxes transcluded in articles. They are designed to provide additional information on a page, usually in a collasped state, but not required. This allows the reader to instantly view the additional content without leaving the article they are currently reading while keeping the page length to a minimum.
This category should contain pages in the Main namespace using article name/subpage name format, showing it is supporting the article. It must not be used to categorize articles or pages unless the page. Use
<noinclude> ... </noinclude> tags to prevent category transclusion. Categorisation transclusion is specifically used in a navigational boxes in articles in a series or tutorials. See the Navigation boxes category for examples.
Pages in category "Info Boxes"
This category contains only the following page. | https://docs.joomla.org/Category:Info_Boxes | 2017-07-20T16:42:55 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.joomla.org |
For all Joomla 3+ templates built using the Zen Grid Framework v4 (any theme after October 2014) please refer to the Zen Grid Framework v4 documentation.
If you are using the in built javascript compression for your Zen Grid Framework file you will notice that all of the javascript that is rendered from the framework is compiled into a file called zengrid.js. This file is rendered dynamically and depending on the settings for your template it may contain compressed versions of the following files:
While it is not recommended to edit any of the files in the Zen Grid Framework, if you need to adjust the code in any of these files then you will need to edit these files directly rather than editing the zengrid.js file.
The zengrid.js file is rendered in this location: | http://docs.joomlabamboo.com/zen-grid-framework-v2/developer/what-is-the-zengridjs-file | 2017-07-20T16:32:54 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.joomlabamboo.com |
PureWeb 4.2 Upgrade Notes
Applications made using the 4.1.x version of the PureWeb SDK may require modifications to upgrade them to the current 4.2 release. These Upgrade Notes will help you identify the specific changes.
API Changes
There are no breaking API changes between the 4.1 and 4.2 releases of PureWeb. However, please note the following:
- In the iOS SDK, the PWResizeEvengArgs class contained a typo in its name, and has been replaced with PWResizeEventArgs. Although PWResizeEvengArgs is still available, it is deprecated.
- In the HTML5 SDK, the opt_asynch parameter is no longer included with the disconnect method. Since browsers do not support synchronous disconnects, this parameter was being ignored, and as such it was superfluous. This is not a breaking change, as the parameter will continue to be ignored in any existing code using it.
Developer Tool Changes
Some of the tools, or versions of tools, required for development have changed, as summarized below. For a complete list of supported platforms and tools, refer to the System Requirements.
- The supported version of JDK is now 1.8. This will impact you if you were using an earlier version for Java or Android development, or for PureWeb server deployments.
The minimum supported version of Android is now 4.1, which requires Android API level 16 or higher. Also, development with the Eclipse ADT plugin is no longer supported; you will need to switch to Android Studio 1.5.1+.
- Xcode 7 and Cocoapods 0.39 are now required for iOS development.
| http://docs.pureweb.io/SDK4.2/content/more/version_4.2_upgrade_notes.html | 2017-03-23T04:10:19 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['../resources_internal/prettify/onload.png', None], dtype=object)] | docs.pureweb.io |
Configure Cloud Storage
Overview
Effective CloudCenter 4.7.0, the CloudCenter platform allows you to account for the storage cost along with the compute cost within the job billing process. The CloudCenter platform calculates the cloud storage billing based on the allocated storage amount and based on storage properties like IOPS.
Currently, the CloudCenter platform does not have the capability to calculate the cloud storage billing based on the data transfer across storage devices.
The storage or volume information is calculated using the Multiple Volumes object. The VolumeInfoSet has the list of volumeInfo objects (has the type (volumeInfos) and other related properties) and the instance ID. The volumeInfoSet is stored in the job's application settings as a JSON string. To compute the storage cost during billing, you must use the VolumeInfoSet setting.
Cloud Support
Cloud Storage Type
The Cloud Storage Type (cloudStorageType) is a new type used to retrieve the cloud storage related information from the CCM. This type allows you to model your own storage types for private clouds and capture existing cloud storage types for public clouds.
Add Cloud Storage
Billing works based on the price information of the cloud storage types. You have the option to configure cloud storage types during the CCM server initialization.
To configure cloud storage,.
- Scroll down to the Storage Types section.
- Click Add Storage Type to add a new storage type mapping. The Add Storage Type popup displays.
- Configure the fields based on your storage configuration. See the Cloud Storage Type table above for additional details.
- Click Save.
Sync Storage Types
Administrators have the ability to sync storage type and price information from the Package Store. This feature allows administrators to sync information when they see a change in cloud provider storage type definitions and price information (see Tenant Billing for additional billing information).
The storage type must be present in the remote repository.
To perform this procedure, you must have access to the Package Store.
To sync storage types, follow this process:
- Configure the cloud region as specified in the Configure Cloud(s) page.
Register the CCO with the CCM.
When the CCO (Orchestrator) Configuration section displays the Running status, click Sync Storage Types.
The Cloud Regions page updates to display the synced status.
Storage Billing
To account for storage billing, you (users) must define the storage type just as you would define an instance type to account for the storage cost.
During a deployment when you select the instance type, the hardware specification information is used to launch the instance. In the same way, the volume information is used for storage billing in the deployment context.
As the storage type contains the cost information and other storage related properties (like throughput), the CloudCenter platform uses the storage size and type to calculate the billing.
- For a public cloud, the defined storage type must match the cloud defined storage type so that CloudCenter can account for the storage cost.
- For private clouds :
- The storage types are user defined (not cloud-defined).
- During a job deployment the volume information specified by the user-defined storage type is used to calculate the storage cost.
- For Vmware (and other private clouds), the storage type is only used for billing purposes and not for deployment purposes.
- The volumeInfos > size input is used for job deployment purposes.
See Tenant Billing > Granular Billing Control for additional context on billing.
- No labels | http://docs.cliqr.com/display/CCD46/Configure+Cloud+Storage | 2017-03-23T04:23:56 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.cliqr.com |
Data Users
From GCD
Who uses our data
Since it is provided under a free license, data from the Grand Comics Database™ is used by other web sites and applications. An incomplete list is provided below:
Web Sites
- What Were Comics? - A data-driven history of the American comic book, based in part on GCD data
- Comic Book Plus displays GCD data under each issue
- eBay is using GCD data under a formal agreement [1] [2]
- Mile High Comics is using some GCD data under an old agreement
Applications
- Comic Book Inventory (CBI) - comic collection tracking app for Android and iOS
- iCdb - iPhone comic database application [3]
Publishers
- Publishers of reprinted comic collections have used some of our credits from time to time (please add examples if known) | http://docs.comics.org/wiki/Data_Users | 2017-03-23T04:15:43 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.comics.org |
Enterprise Marketplace
Overview
The Enteprise Marketplace is a location to import, export, or publish applications or application profiles that you (or someone in your tenant) have about one hundred applications (updated periodically) that are visible to all CloudCenter users. The applications listed in the Public Marketplace are saved in JSON format and do not contain any data.
Predefined Applications for SaaS Deployments
The CloudCenter platform provides access to numerous predefined applications through the CloudCenter Marketplace (Marketplace).
Import from Marketplace to Applications Page.
- To enable a tenant to publish to the Marketplace, you need to modify the vendor.properties file for the appropriate tenant. Contact the CloudCenter Support team for additional information.
- No labels | http://docs.cliqr.com/display/CCD46/Enterprise+Marketplace | 2017-03-23T04:24:58 | CC-MAIN-2017-13 | 1490218186774.43 | [array(['/download/attachments/590134/Screen%20Shot%202015-07-22%20at%2012.11.46%20PM.png?version=1&modificationDate=1437592348000&api=v2',
None], dtype=object) ] | docs.cliqr.com |
Graph Cycle Costs
As you saw in the previous page, a graph has a direct path between its start and the end of the execution, event triggers such as Connector blocks can create new events that trigger a new graph cycle.
Easier: Every block that starts a new trigger, from event or time-based, will init a new execution cycle that will occur more blocks of nodes and can create new cycles of nodes depending on your execution path of that cycle.
Since every block type has a fixed GAS as GLQ price you can directly now calculate the estimated cost for running a specific task on a Graph.
But be careful: since you can have an infinite number of cycles regardings the starting point that you have set (events, triggers, conditions..) the estimated fixed price is not the same as the dynamic that will depend on the number of times you're event get triggered.
A more concrete example with a random block price: If you watch every new block coming from the Ethereum Blockchain then add a Twitter message every time it occurs with the base Twitter block costing 1 GLQ at each execution, your graph price will get raised by 1 GLQ every 13 seconds (average eth block time).
But now if you decide to do the same but with every new transaction, even if the Twitter block base cost is 1 GLQ, you will have to pay a higher relevant price based on the number of time the Twitter block got executed.
Be assured that we will be watching the entire graphs costs and will do the best to keep it as lower as possible, we had to put the system in place into the Protocol to prevent an overload of our network and abuse from malicious attacks. | https://docs.graphlinq.io/graph/3-cycle/ | 2021-06-12T17:30:43 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.graphlinq.io |
Once you have created an update repository as described in Add an update repository, or if you already have an update repository, you can apply the security updates.
This section instructs you on how to update Ubuntu packages manually. More specifically, the procedure includes update of the KVM nodes one by one, the VMs running on top of each KVM node, and the OpenStack compute nodes.
If you prefer the automated update, use the Apply Ubuntu security updates using Jenkins procedure.
Note
This documentation does not cover the upgrade of Ceph nodes.
To apply the Ubuntu security updates manually:
Log in to the Salt Master node.
For OpenStack Ocata with OpenContrail v3.2, update the
python-concurrent.futures package. Otherwise, skip this step.
salt -C "ntw* or nal*" cmd.run "apt-get -y --force-yes upgrade python-concurrent.futures"
Update the Virtual Control Plane and KVM nodes:
Identify the KVM nodes:
salt "kvm*" test.ping
Example of system response:
kvm01.bud.mirantis.net: True kvm03.bud.mirantis.net: True kvm02.bud.mirantis.net: True
In the example above, three KVM nodes are identified:
kvm01,
kvm02, and
kvm03.
Identify the VMs running on top of each KVM node. For example, for
kvm01:
salt "kvm01*" cmd.run "virsh list --name"
Example of system response:
kvm01.bud.mirantis.net: cfg01 prx01.<domain_name> ntw01.<domain_name> msg01.<domain_name> dbs01.<domain_name> ctl01.<domain_name> cid01.<domain_name> nal01.<domain_name>
Using the output of the previous command, upgrade the
cid,
ctl,
gtw/
ntw/
nal,
log,
mon,
msg,
mtr,
dbs, and
prx VMs of the particular KVM node. Do not upgrade
cfg01,
cmp, and
kvm.
Note
Ceph nodes are out of the scope of this procedure.
Example:
for NODE in prx01.<domain_name> \ ntw01.<domain_name> \ msg01.<domain_name> \ dbs01.<domain_name> \ ctl01.<domain_name> \ cid01.<domain_name> \ nal01.<domain_name> do salt "${NODE}*" cmd.run "export DEBIAN_FRONTEND=noninteractive && \ apt-get update && \ apt-get -y upgrade && \ apt-get -y -o Dpkg::Options::="--force-confdef" \ -o Dpkg::Options::="--force-confnew" dist-upgrade" done
Wait for all services of the cluster to be up and running.
If the KVM node hosts GlusterFS, verify the GlusterFS server and volumes statuses as described in Troubleshoot GlusterFS. Proceed with further steps only if the GlusterFS status is healthy.
If the KVM node hosts a
dbs node, verify that the Galera cluster
status is
Synced and contains at least three nodes as described in
Verify a Galera cluster status.
If the KVM node hosts a
msg node, verify the RabbitMQ cluster
status and that it contains at least three nodes as described in
Troubleshoot RabbitMQ.
If the KVM node hosts OpenContrail 4.x, verify that its services are up and running as described in Verify the OpenContrail status. If any service fails, troubleshoot it as required. For details, see: Troubleshoot OpenContrail.
Once you upgrade the VMs running on top of a KVM node, upgrade and
restart this KVM node itself. For example,
kvm01:
salt "kvm01*" cmd.run "export DEBIAN_FRONTEND=noninteractive && \ apt-get update && \ apt-get -y upgrade && \ apt-get -y -o Dpkg::Options::="--force-confdef" \ -o Dpkg::Options::="--force-confnew" dist-upgrade && \ shutdown -r 0"
Wait for all services of the cluster to be up and running. For details, see substeps of the step 3.4.
Repeat the steps 3.1-3.7 for the remaining
kvm nodes one by one.
Upgrade the OpenStack compute nodes using the example below, where
cmp10. is a set of nodes.
Caution
Before upgrading a particular set of
cmp nodes, migrate
the critical cloud environment workloads, which should not be
powered off, from these nodes to the
cmp nodes that are
not under maintenance.
Example:
# Check that the command will be applied to only needed nodes salt -E "cmp10." test.ping # Perform the upgrade salt -E "cmp10." cmd.run "export DEBIAN_FRONTEND=noninteractive && apt-get \ update && apt-get -y upgrade && apt-get -y -o \ Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confnew" \ dist-upgrade && apt-get -y install linux-headers-generic && shutdown -r 0" | https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/update-upgrade/minor-update/ubuntu-security-updates/apply-security-updates.html | 2021-06-12T18:23:57 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.mirantis.com |
Crate detexify
Version 0.4.0
See all detexify's items
Free’s a classifier
Returns the default classifier
Classifiy the sample returning scores and free’s sample
sample
Free’s scores
Returns the i-th score of scores
i
scores
Returns the i-th symbol of scores, callers responsible for calling symbol_free once finished
symbol_free
Returns the length of the list of symbols
Adds a point to the stroke
Returns the stroke and frees builder
builder
Creates a new stroke builder
Adds a stroke to the stroke sample and frees the stroke
Returns a stroke sample and free’s builder
Creates a new stroke sample builder
Frees symbol
symbol
Gets the command of the i-th score
Gets the font encoding of the i-th score
Gets the math mode of the i-th score
Gets the package of the i-th score
Gets the text mode of the i-th score
Returns the total number of symbols
Returns the i-th symbol | https://docs.rs/detexify-c/0.4.0/detexify/ | 2021-06-12T18:06:16 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.rs |
Configuring product and operational catalogs
You can add products to the product catalog, information such as operational services to the operational catalog, and other information to the generic catalog for use in various Remedy ITSM forms. Products can be any items used by an organization and are usually IT-related. Products are typically used to classify a configuration item, an incident, a problem, or a change request. The operational catalog can be used to contain a list of all the operational services that a typical help desk provides, and can also contain items that represent symptoms of incidents and problems. Generic categories for miscellaneous information, such as reasons, can be used in the generic catalog.
This section provides the following information: | https://docs.bmc.com/docs/itsm1908/configuring-product-and-operational-catalogs-877692431.html | 2021-06-12T18:10:03 | CC-MAIN-2021-25 | 1623487586239.2 | [] | docs.bmc.com |
Topics
This section provides information about instances and their basic characteristics. It also describes how to launch an instance and connect to it.
Amazon EC2 provides templates known as Amazon Machine Images (AMIs) that contains a software configuration (for example, an operating system, an application server, and applications.) You use these templates to launch an instance, which is a copy of the AMI running as a virtual server in the cloud., see Available Instance Types. You can launch multiple instances from an AMI, as shown in the following figure.
Your instance keeps running until you stop or terminate it, or until it fails. If an instance fails, you can launch a new one from the AMI.
Your AWS account has a limit on the number of instances that you can have running. For more information about these limits, and how to request an increase in your limits, see How many instances can I run in Amazon EC2 in the Amazon EC2 General FAQ.
Related Topics
Amazon Machine Images (AMI)
Amazon EC2 Instance Store | http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Instances.html | 2013-05-18T19:52:02 | CC-MAIN-2013-20 | 1368696382764 | [] | docs.aws.amazon.com |
getNumberOfChars
This page has been flagged with the following issues:
High-level issues:
Notes
Remarks
The total number of characters includes referenced characters from tref reference, regardless of whether they are rendered.
The total number of characters is effectively equivalent to the length of the textContent attribute from the DOM Level 3 Core specification (section 1.4), if that attribute also expands tref elements.
Syntax
retVal = object.getNumberOfChars(); | http://docs.webplatform.org/wiki/svg/methods/getNumberOfChars | 2013-05-18T19:39:33 | CC-MAIN-2013-20 | 1368696382764 | [] | docs.webplatform.org |
Decorator activated via a scan which treats the function being decorated as an event subscriber for the set of interfaces passed as *ifaces to the decorator constructor.
For example:
from pyramid.events import NewRequest from pyramid.events import subscriber @subscriber(NewRequest) def mysubscriber(event): event.request.foo = 1
More than one event type can be passed as a construtor argument. The decorated subscriber will be called for each event type.
from pyramid.events import NewRequest, NewResponse from pyramid.events import subscriber @subscriber(NewRequest, NewResponse) def mysubscriber(event): print event
When the subscriber decor interface.
Note
For backwards compatibility purposes, this class can also be imported as pyramid.events.WSGIApplicationCreatedEvent. This was the name of the event class before Pyramid 1.0.
An instance of this class is emitted as an event whenever Pyramid begins to process a new request. The even instance has an attribute, request, which is a request object. This event class implements the pyramid.interfaces.INewRequest interface. interface.
Note
As of Pyramid 1.0, for backwards compatibility purposes, this event may also be imported as pyramid.events.AfterTraversal. was interface.
Note
Postprocessing a response is usually better handled in a WSGI middleware component than in subscriber code that is called by a pyramid.interfaces.INewResponse event. The pyramid.interfaces.INewResponse event exists almost purely for symmetry with the pyramid.interfaces.INewRequest event.
Return the value for key k from the renderer globals dictionary, or the default if no such value exists.
Return the existing value for name in the renderers globals dictionary. If no value with name exists in the dictionary, set the default value into the renderer globals dictionary under the name passed. If a value already existed in the dictionary, return it. If a value did not exist in the dictionary, return the default
Update the renderer globals dictionary with another dictionary d.
See Using Events for more information about how to register code which subscribes to these events. | http://pyramid.readthedocs.org/en/1.1-branch/api/events.html | 2013-05-18T19:13:51 | CC-MAIN-2013-20 | 1368696382764 | [] | pyramid.readthedocs.org |
These functiona are used to retrieve resource usage information:
The elements.
The first two elements of the return value are floating point values representing the amount of time spent executing in user mode and the amount of time spent executing in system mode, respectively. The remaining values are integers. Consult the getrusage() man page for detailed information about these values. A brief summary is presented here:
This function will raise a ValueError if an invalid who parameter is specified. It may also raise a resource.error exception in unusual circumstances.
The following RUSAGE_* symbols are passed to the getrusage() function to specify which processes information should be provided for. | http://docs.python.org/release/1.5/lib/node120.html | 2013-05-18T19:30:54 | CC-MAIN-2013-20 | 1368696382764 | [] | docs.python.org |
Resolve a Missing Page Elements Error
PROBLEM
I am unable to build a Test Project (executing a test in the Standalone version also builds the entire project) that contains coded steps because of a compilation error related to missing elements:
This error can be encountered in either the Standalone version or the VS plugin.
SOLUTION
Most recorded actions are associated with an element from the application being tested. Test Studio needs to locate each element before a test step against it can be executed. To that end, all elements are stored in the Elements Repository.
If you click on a particular step within your test, a yellow arrow will point out which element in the Repository is associated with this step (unless it's not associated with an element):
It's also possible to access the elements in the Repository from a coded step. This saves you the effort of writing Find Logic from scratch. Instead, you're reusing logic Test Studio initially generated. Similarly, if you convert a regular step to code, the generated coded step will access the associated element from the Repository (e.g. Pages.Bing.BingDiv).
Compilation errors will occur if the code is trying to access an element that doesn't exist in the Repository. There are a few situations where this can occur:
- You deleted the element.
Test Studio will not let you delete an element as long as a regular step is associated with it. However, Test Studio cannot detect whether you're referencing the element in code and will let you delete it if only a coded step is referencing the element. This will lead to a compilation error. This problem might arise if you convert a specific step to code. This will erase the connection to the element. The element will not be removed from the repository, but Test Studio will let you manually delete it. Also, elements added through the Add to Project Elements feature can be removed because they're not associated with elements to begin with.
To resolve this, either add the required element in the Repository or rewrite the code to include Find Logic. For instance, let's say the following code (inteded to run against Google's main page) does not compile:
Pages.Bing.BingDiv.Click(false)
You can rewrite it like this:
Find.ById<HtmlDiv>("binglogo").Click(false)
- You pasted code between coded steps belonging to different Test Projects.
Each Test Studio test contains a resource file which hold the Elements Repository for that test. You should import only entire tests by using Import/Export from the GUI when working with more than one test. Again, you can resolve this by adding the necessary element or rewriting the code as seen above. | https://docs.telerik.com/teststudio/troubleshooting-guide/test-execution-problems-tg/missing-page-elements | 2018-10-15T19:23:02 | CC-MAIN-2018-43 | 1539583509690.35 | [array(['/teststudio/img/troubleshooting-guide/test-execution-problems-tg/missing-page-elements/fig1.png',
'Error'], dtype=object)
array(['/teststudio/img/troubleshooting-guide/test-execution-problems-tg/missing-page-elements/fig2.png',
'Element'], dtype=object) ] | docs.telerik.com |
?
Please note that as of version 2017 R3 the default installation path for new installation is C:\Program Files (x86)\Progress\Test Studio.
Where Do I Install Run-Time Edition?
Is It Possible To Install Run-Time Using A Command Line?
If installing Run-Time on multiple machines you could automate the process using the command line prompt.
msiexec /q /i TestStudio_Runtime_[version].msi
- The default Run-Time installation will add Scheduling and Storage services on the target machine. To skip their installation you could use the following arguments:
msiexec /q /i TestStudio_Runtime_[version].msi INSTALL_SCHEDULING_SERVER=False INSTALL_STORAGE_SERVICE=False
How To Update The Run-Time Version?
If running tests in a scheduling configuration it is important to keep Test Studio and Run-Time versions equal to prevent possible misbehavior. Each next version of the Run-time edition could be found for download in the product license holder's Telerik account. | https://docs.telerik.com/teststudio/general-information/test-studio-run-time | 2018-10-15T19:06:05 | CC-MAIN-2018-43 | 1539583509690.35 | [] | docs.telerik.com |
when you download the *.exe installer directly from our site. In order to access it you should download the *.msi installer from your Telerik.com account.
On this screen you can select which features to install by clicking Customize button. You can also change the installation path. After making your selections, click OK to continue.
- To use this machine as a Storage Server or Scheduling Server, you must install the appropriate services at this time.
- To add the Storage and Scheduling Services features, you must complete the installation and perform a change using the installer.
- Storage server uses MongoDb as storage database. MongoDb requires at least 4Gb hard drive space to operate normally.
- If you have installed Visual Studio on the machine Test Studio plugin for Visual Studio will be also installed.
Review all the selections and click. | https://docs.telerik.com/teststudio/getting-started/installation/install-procedure | 2018-10-15T19:55:23 | CC-MAIN-2018-43 | 1539583509690.35 | [] | docs.telerik.com |
See Also: ExpressionBuilder Members
ASP.NET automatically evaluates expressions during page parsing using the System.Web.Configuration.ExpressionBuilder class. The System.Web.Configuration.ExpressionBuilderCollection collection, which is made up of the expressionBuilders elements contained in the compilation section of the configuration. The System.Web.Configuration.ExpressionBuilder contains specific values in key/value pairs.
A value is retrieved by including an expression of the form
<%$ ExpressionPrefix: ExpressionKey %>
within the page. The ExpressionPrefix maps the type of expression to be retrieved as either a common expression-builder type (that is, System.Web.Compilation System.Web.Configuration.ExpressionBuilder class. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Web.Configuration.ExpressionBuilder | 2018-10-15T19:27:43 | CC-MAIN-2018-43 | 1539583509690.35 | [] | docs.go-mono.com |
Usage¶
Running the server¶
Once installed and configured, running Dashkiosk should be straightforward. While in the dist directory:
$ node server.js --environment production
Don’t forget to specify the environment! See Configuration for available options. They can also be specified in a configuration file.
Testing¶
The server has three browser endpoints:
- /unassigned which is the default dashboard for unassigned displays,
- /admin which is the administration interface,
- /receiver which is the receiver part that a display should load.
To test that everything is setup correctly, point your browser to /unassigned (for example if you kept the default parameters and installed Dashkiosk on your PC).
You should see the default dashboard displayed for unknown device. These are just a few photos cycling around.
Then, you should go to the administration interface located in /admin. While in the administration interface, open another tab and go to /receiver which is the URL displaying the receiver. In the /admin tab, you should see yourself as a new display in the “Unassigned” group and in the /receiver tab, you should see the default dashboard that you got by going in /unassigned.
To customize the default dashboard, see Unassigned dashboard.
Troubleshooting¶
If something goes wrong, be sure to look at the log. Either you run the server through something like supervisord and you can have a look at the log in some file or you can use --log.file to get a log file.
Administration¶
The administration interface allows to create new dashboards, see active displays and associate them to a group of dashboards. When pointing a browser to the /admin URL, you should see an interface like this:
The administration interface with a few groups. At the top, the special “Unassigned” group.
On the figure above, you can see the three main entities in Dashkiosk:
- The monitors with a 5-digit serial numbers are the displays. For each of them, the serial number is attributed on their first connection and stored locally in the display [2]. They come with a green light when they are actually alive.
- Each display is affected to a group of displays. In the above figure, we have three groups. It is possible to move a display from one group to another. Each group can have a name and a description. It is possible to create or rename any group. The group named “Unassigned” is special and new displays will be attached to it on first connection. Other than that, this is a regular group. The other special group is “Chromecast devices”. See Chromecast devices.
- Each group of displays contains an ordered list of dashboards. A dashboard is just an URL to be displayed with a bunch of parameters. You can reorder the dashboards in a group and choose how much time they should be displayed.
The first time, you will only have the special “Unassigned” group [3].
Displays¶
Clicking on a display will show a dialog box with various information about the display.
The dialog box of the APFI0S display.
First, you get the IP address of the display. This could be useful if you need to connect to it for some other purpose (like debugging a problem related to this display). If the display is offline, the IP displayed is the last known IP.
Then, on the top right corner, there are contextual icons relevant to the current display. On a display, you can execute two actions:
- force a reload of the receiver (after an update, for example),
- toggle the OSD on the receiver.
The receiver OSD is a neat feature to check if the display you are inspecting is really the one you are interested in. It will display an overlay with the display name as well as some technical information that may be useful when displaying dashboards.
Not shown on the above figure, you can destroy a display by clicking on the Delete button in the lower left corner.
You can assign a description to the display, like “In the kitchen”. You can also change the group the display is currently attached to by choosing another group in the dropdown menu. The display should immediatly display the current dashboard of the group.
The viewport will be explained in a dedicated section. See Viewport.
On a desktop browser, it is also possible to move the display to another group by dragging it to the appropriate group.
Groups¶
By default, you only get the “Unassigned” group. But you can create any number of groups you need by clicking on the “Add a new group” button.
The name and the description of a group can be changed by clicking on them. If you change the name of the “Unassigned” group, a new “Unassigned” group will be created the next time a new display comes to live.
As for displays, you can execute contextual actions on a group. There are three of them:
- for a reload of all the displays in the group,
- toggle the OSD of all the displays in the group,
- destroy the group.
The group can only be destroyed if no display is attached to it.
Each group has a list of dashboards. You can reorder them by using the up and down arrow icons on the right of each dashboard. You can add a new dashboard by using the “Add a new dashboard” button.
Dashboards¶
When creating a dashboard or modifying an existing one (by clicking on the little pen icon), you will get the following dialog box:
The dialog box to modify some random dashboard.
Currently, a dashboard has:
The timer is optional but it doesn’t make sense to omit it if you have several dashboards in a group. Without it, once the dashboard is displayed, the next one will never be displayed unless you remove or modify the current one.
You can also modify the timer and the viewport by clicking on them directly in the list of dashboards in each group.
About the dashboards¶
The dashboards to be displayed can be any URL accessible by the displays. When a new dashboard has to be displayed for a group, the server will broadcast the URL of the dashboard to each member of the group. They will load the dashboard and display it. This may seem easy but there are several limitations to the system.
Network access¶
So, the first important thing about those dashboards is that they are fetched by the displays, not by the server. You must therefore ensure that the dashboards are accessible by the displays and not protected by a password or something like that.
Processing power¶
Some dashboards may be pretty dynamic and use special effects that look cool on the average PC. However, when using a US$ 30 low-end Android stick to display it, it may become a bit laggy. Also, please note that the Android application uses a modern webview but some functionalities may be missing, like WebGL.
Viewport¶
By default, a dashboard is displayed using the native resolution of the display. If the display is a 720p screen and your dashboard can only be rendered correctly on a 1080p screen, you have a problem. There are several solutions to this problem.
-
Use a responsive dashboard that can adapt itself to any resolution.
-
Change the viewport of the display. By clicking on the display, you can specify a viewport. When empty, it means that you use the viewport matching the native resolution of the screen. By specifying another resolution, the display will render the dashboards at the given resolution and zoom in or out to fit it into its native resolution.
The support of this option depends on the ability of the browser running the receiver to exploit this information. Android devices are able to make use of it but other devices may not. If you don’t see any effect when changing the viewport, use the next option.
-
Change the viewport of the dashboard. This is quite similar to the previous option but it is a per-dashboard option and it will work on any device. It works in the same way: the rendering will be done at the given resolution and then resized to fit in the screen. Both options can be used at the same time, there is no conflict.
IFrames¶
Technically, the receiver is a simple app rendering the requested URL inside an IFrame which is like a browser inside a browser. There are some limitations to an IFrame:
- The receiver has almost no way to communicate with the IFrame [1]. It can know when an IFrame is ready but not if there is an error. The IFrame can therefore be displayed while it is not fully rendered and on the other hand, we cannot detect any error and try to reload the IFrame.
- The IFrame can refuse to be display its content if there is a special X-Frame-Options in the headers forbidding the use of an IFrame.
- If you are serving Dashkiosk from an HTTPS URL, you cannot display dashboards using HTTP. The other way is authorized. Hence, it seems just easier to serve Dashkiosk receiver on HTTP.
The second limitation can be quite annoying. Here are some workarounds:
- Find an embeddable version of the content. Youtube, Google Maps and many other sites propose a version specifically designed to be embedded into an iframe.
- Use a web proxy that will strip out the offending header. A good base for such a proxy is Node Unblocker. It should be easy to modify it to remove the X-Frame-Options header.
- Use a screenshot service. Instead of displaying the real website, just display a screenshot. There are many solutions to implement such a service with headless browsers like Phantom.JS. For example this one.
Footnotes | https://dashkiosk.readthedocs.io/en/v2.3.2/usage.html | 2018-10-15T18:59:34 | CC-MAIN-2018-43 | 1539583509690.35 | [array(['_images/administration.jpg', 'Administration interface'],
dtype=object)
array(['_images/display.jpg', 'Display details'], dtype=object)
array(['_images/dashboard.jpg', 'Dashboard details'], dtype=object)] | dashkiosk.readthedocs.io |
How do I update my profile on the SAP App Center?
To update profile information, go to Profile in the navigational dropdown.
Once the profile page will by default load onto the user profile page, but it is also possible to edit company information by any user with company admin access.
How can I update my company profile information?
Company settings can be reached, after logging in to the SAP App Center, by going to Account > Manage Company. Please note, only users with the Company Admin role can access company settings.
There are several options that are available while in company settings:
- Updating the company name.
- Choosing what access options new users have when they join the company.
- Allowing for basic users to start trials or make purchases to be turned on or off. | https://docs.sapappcenter.com/maintaining-your-app-listing/ | 2018-10-15T20:22:38 | CC-MAIN-2018-43 | 1539583509690.35 | [array(['https://i1.wp.com/docs.sapappcenter.com/wp-content/uploads/2018/01/dropdown.jpg?resize=229%2C300&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/sapappcenter.blog/wp-content/uploads/2018/01/profile-300x86.jpg?resize=565%2C162&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/sapappcenter.blog/wp-content/uploads/2018/01/companysettings-300x122.jpg?resize=418%2C170&ssl=1',
None], dtype=object) ] | docs.sapappcenter.com |
PATRIC Webinar – Genome Assembly via Command Line Interface, July 19, 2018, 3:00pm EDT¶
PATRIC provides a Genome Assembly Service for prokaryotes that allows single or multiple assemblers to be invoked to compare results. The service attempts to select the best assembly, i.e., assembly with the smallest number of contigs and the longest average contig length. Several assembly workflows or “recipes” are available.
On July 19th at 3pm ET PATRIC will host a webinar demonstrates this capability using the PATRIC Command Line Interface (CLI). Please email [email protected] if you plan to attend so that we will know approximately how many participants to expect.
Webinar connection information:¶
Time: Jul 19, 2018 3:00 PM Eastern Time (US and Canada)
Join from PC, Mac, Linux, iOS or Android:
- Or iPhone one-tap :
- US: +16699006833,,634111742# or +19294362866,,634111742#
- Or Telephone:
- Dial(for higher quality, dial a number based on your current location):
- US: +1 669 900 6833 or +1 929 436 2866
Meeting ID: 634 111: 634 111 742
SIP: [email protected] | https://docs.patricbrc.org/news/2018/20180628-assembly-cli-webinar.html | 2018-10-15T19:17:22 | CC-MAIN-2018-43 | 1539583509690.35 | [array(['../../_images/webinar_cli.png',
'PATRIC Webinar Genome Assembly via CLI'], dtype=object)] | docs.patricbrc.org |
.
Назначьте или отмените назначение задачи.
-.
Просмотр и поиск процесса.
- As an AARI user, to quickly search for a specific request created, you can filter and search for that request.
Фильтровать и искать запрос.
- As an AARI user, when you have many tasks created from multiple requests, you can filter and search for a specific task.
Фильтрация и поиск задач. | https://docs.automationanywhere.com/ru-RU/bundle/enterprise-v2019/page/enterprise-cloud/topics/hbc/aari-deploy-process.html | 2022-05-16T15:01:17 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.automationanywhere.com |
Chainprox
Search…
Introduction
Introduction
Original idea
Proof of Proxy
ROX Token
How Chainprox works
Earning potential
Connecting to Chainprox
For business customers
For individual customers
Project and team
Business model
Funding
Roadmap
Team
Advisors
Links
Website
Twitter
Telegram
Medium
GitBook
For business customers
Chainprox use cases for business
Business customers use Chainprox proxy solutions primarily for:
Ad verification.
Sophisticated hackers can put fake affiliate links, information, or viruses on your website that you cannot see via your own IP address... or sometimes even via traditional VPNs. With Chainprox proxies, you can quickly identify and stamp out this cybercrime.
Pricing intelligence
- are other retailers offering prices to compete with yours? Or do they just want you to think that they are? Know the true prices they are showing customers by automatically viewing their storefronts with the same types of IP addresses their customers use.
Market research.
Disinformation is a powerful competitive tool, and many web businesses show their real content to their customers and false information to other companies. Get the true story with residential proxies from Chainprox and make decisions based on facts - not what your competitors want you to see.
Data scraping.
Information wants to be free, but web publishers are using increasingly powerful machine learning tools to block business data seekers. You can use rotating proxies from Chainprox to beat the machines and get full access to the information you need to boost your profits.
Search Engine Optimization (SEO).
Ad sellers and search engines often know your IP address, your VPN address, and how to use both to boost their profits - not yours. You can use residential proxies to get an accurate, unbiased picture of how your online marketing campaign is actually performing.
Brand protection.
In the information economy, your brand is one of your most valuable assets. Use Chainprox proxies to protect against hijacking, malware attacks, and data theft by verifying that the rest of the world sees the version of your brand’s content you want them to.
Global publishing.
With residential proxies from around the world, you can verify that your website is accessible from anywhere on earth. It’s access to content and websites around the world without any blockages.
How Chainprox works - Previous
Connecting to Chainprox
Next - How Chainprox works
For individual customers
Last modified
6mo ago
Copy link | https://docs.chainprox.com/how-chainprox-works/for-business-customers | 2022-05-16T15:53:41 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.chainprox.com |
Package com.couchbase.client.core
Class Core
- java.lang.Object
- com.couchbase.client.core.Core
Constructor Detail
Core
protected Core(CoreEnvironment environment, Authenticator authenticator, Set<SeedNode> seedNodes)Creates a new Core.
- Parameters:
environment- the environment for this core.
Method Detail
create
public static Core create(CoreEnvironment environment, Authenticator authenticator, Set<SeedNode> seedNodes)
configurationProvider
@Internal public ConfigurationProvider configurationProvider()Returns the attached configuration provider.
Internal API, use with care!
send
public <R extends Response> void send(Request<R> request)Sends a command into the core layer and registers the request with the timeout timer.
- Parameters:
request- the request to dispatch.
send
@Internal public <R extends Response> void send(Request<R> request, boolean registerForTimeout)Sends a command into the core layer and allows to avoid timeout registration.
Usually you want to use
send(Request)instead, this method should only be used during retry situations where the request has already been registered with a timeout timer before.
- Parameters:
request- the request to dispatch.
registerForTimeout- if the request should be registered with a timeout.
context
public CoreContext context()Returns the
CoreContextof this core instance.
@Internal public CoreHttpClient target)Returns a client for issuing HTTP requests to servers in the cluster.
diagnostics
@Internal public Stream<EndpointDiagnostics> diagnostics()
serviceState
@Internal public Optional<Flux<ServiceState>> serviceState(NodeIdentifier nodeIdentifier, ServiceType type, Optional<String> bucket)If present, returns a flux that allows to monitor the state changes of a specific service.
- Parameters:
nodeIdentifier- the node identifier for the node.
type- the type of service.
bucket- the bucket, if present.
- Returns:
- if found, a flux with the service states.
initGlobalConfig
@Internal public void initGlobalConfig()Instructs the client to, if possible, load and initialize the global config.
Since global configs are an "optional" feature depending on the cluster version, if an error happens this method will not fail. Rather it will log the exception (with some logic dependent on the type of error) and will allow the higher level components to move on where possible.
clusterConfig
@Internal public ClusterConfig clusterConfig()This API provides access to the current config that is published throughout the core.
Note that this is internal API and might change at any time.
ensureServiceAt
@Internal public Mono<Void> ensureServiceAt(NodeIdentifier identifier, ServiceType serviceType, int port, Optional<String> bucket, Optional<String> alternateAddress)This method can be used by a caller to make sure a certain service is enabled at the given target node.
This is advanced, internal functionality and should only be used if the caller knows what they are doing.
- Parameters:
identifier- the node to check.
serviceType- the service type to enable if not enabled already.
port- the port where the service is listening on.
bucket- if the service is bound to a bucket, it needs to be provided.
alternateAddress- if an alternate address is present, needs to be provided since it is passed down to the node and its services.
- Returns:
- a
Monowhich completes once initiated.
responseMetric
@Internal public ValueRecorder responseMetric(Request<?> request)
createNode
protected Node createNode(NodeIdentifier identifier, Optional<String> alternateAddress)
- Parameters:
identifier- the identifier for the node.
alternateAddress- the alternate address if present.
- Returns:
- the created node instance. | https://docs.couchbase.com/sdk-api/couchbase-core-io-2.2.0/com/couchbase/client/core/Core.html | 2022-05-16T15:27:00 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.couchbase.com |
A newer version of this page is available. Switch to the current version.
RibbonGalleryGroup Class
An individual gallery group in the gallery bar (RibbonGalleryBarItem) or drop-down gallery (RibbonGalleryDropDownItem) items.
Namespace: DevExpress.Web
Assembly: DevExpress.Web.v20.2.dll
Declaration
public class RibbonGalleryGroup : CollectionItem
Public Class RibbonGalleryGroup Inherits CollectionItem
Related API Members
The following members accept/return RibbonGalleryGroup objects:
Remarks
A gallery bar and drop-down gallery items maintain collections of groups and items. Groups are represented by instances of the RibbonGalleryGroup class which can be accessed using the RibbonGalleryBarItem.Groups/RibbonGalleryDropDownItem.Groups property. These properties return the | https://docs.devexpress.com/AspNet/DevExpress.Web.RibbonGalleryGroup?v=20.2 | 2022-05-16T16:41:35 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.devexpress.com |
WA Voice Metrics
The following Table lists Workforce Advisor voice metrics.
35px|link= Filter Placeholders F1 ... F16 for Metrics
Some metrics are deployed with a filter placeholder (F1 ... F16).
To use these metrics, in the Configuration Server under the Advisors Filters folder, create a business object with a name that matches the filter placeholder name supplied in the metric (for example, F16). Within this business object, you must specify the actual filter as it is defined in your local environment. This filter will be applied to all metrics with F16 in their name. For example, F16 can represent a filter that filters out all private calls leaving only routed calls to be considered in the related metrics calculation. The following figure shows an example of the filter properties.
This page was last edited on May 30, 2018, at 21:23. | https://docs.genesys.com/Documentation/PMA/8.5.1/PMAMetric/WAVoiceMetrics | 2022-05-16T15:03:49 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.genesys.com |
Logistics.
New alert type: light exposure
The logistics dashboard integrates a new alert type for your shipments—light exposure. This parameter tracks the shipment ambient light level and can tell when and where your shipment is open.
New sensor settings: data transfer frequency
The administration panel provides a new settings type for your sensors. Administrators now can change both the data acquisition frequency and the frequency of exchanging data between a sensor and the Moeco platform.
Technical improvements
- In Authentication panel > Team, the list of team members is now filtered by active users by default.
- In the Authentication panel, now if users don’t have access rights to the Team, Org, and Roles tabs, tab links will remain inactive for them.
- In Authentication panel > Org, only organization owners can now transfer owner rights to another user.
- The Authentication panel > Org and Team tabs are optimized to load faster.
- When trying to re-use an email that is already associated with the Moeco account, you’ll now see an error message instead of getting into a time loop on the sign-up page.
- Password strength policy is enhanced.
Bug fixes
- Fixed an issue that prevented a user from logging out while being in the authentication panel or logistics dashboard.
- Fixed an issue with missing Light exposure data on the interactive map pop-ups.
- Fixed an issue that sent an invalid confirmation email during registration. | https://docs.moeco.io/logistics/rn/3.0/ | 2022-05-16T15:15:55 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.moeco.io |
Show pageOld revisionsBacklinksODT exportBack to top Media Manager Namespaces Choose namespace [root] almaligner any2utf8 bing bootcat buttons corpora help marketplace resources tutorials wiki Media Files Media Files Upload Search Upload to bing Sorry, you don't have enough rights to upload files. File View History History of buttons:previous.png 2011/12/14 15:29 buttons:previous.png (external edit) (current) Show differences between selected revisions start.txt Last modified: 2021/11/16 16:01by eros | https://docs.sslmit.unibo.it/doku.php?id=start&tab_details=history&do=media&tab_files=upload&image=buttons%3Aprevious.png&ns=bing | 2022-05-16T16:25:30 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.sslmit.unibo.it |
Upgrade
How to upgrade to the latest version of Wiki.js
Before upgradingBefore upgrading
-.
UpgradeUpgrade
- Make a backup of your
config.ymlto a secure location.
- Delete all files and folders in your Wiki.js installation folder (including the /data and /repo folders, these will be generated again automatically).
- Install the latest version.
- Copy your
config.ymlbackup file to its original location, at the root of your Wiki.js installation folder.
- Start Wiki.js:
node wiki start | https://docs-legacy.requarks.io/wiki/upgrade | 2022-05-16T15:42:39 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs-legacy.requarks.io |
About Collection rules
Use a collection to model business logic and implement common rule engine patterns. Collections define an ordered sequence of rules, the conditions under which they execute, and postprocessing steps known as response actions. The business-friendly options on this form help you to quickly develop flexible solutions that are easily understood by various audiences.
The following tabs are available on this form:
Where referenced
Collections can be called by a Collect instruction in an activity or a step in another collection rule. Collections can also be associated with specifications.
Access
Use the Application Explorer to access collections that apply to a specific class in your application. Use the Records Explorer to list all collections available to you.
Enable the Declare Collections setting to view collections output in a Tracer session.
You can view the generated Java code of a rule by clicking. You can use this code to debug your application or to examine how rules are implemented.
Category
Collection rules are instances of the Rule-Declare-Collection class. They are part of the Decision category.
- Collection rules
- Completing the Basic/Advanced Collection tab
- Completing the Preamble and Stop tab
- Completing the Specifications tab
- Using conditions in a collection
- Setting the context of a collection step
- Using response actions in a collection
- Rule type
Previous topic Connected apps landing page Next topic Collection rules | https://docs.pega.com/reference/86/about-collection-rules | 2022-05-16T16:28:55 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.pega.com |
Running other upgrade utilities
To identify and modify elements of the application that need to be changed to be compatible with the upgraded release, run the utilities on the Upgrade Tools page. The specific utilities depend on the version from which you upgraded. Run all recommended utilities.
- In the header of Dev Studio, click .
- Follow the on-screen instructions to run the tools.
Previous topic eForm accelerator Help: Completing the Review and Save form | https://docs.pega.com/reference/86/running-other-upgrade-utilities | 2022-05-16T16:10:12 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.pega.com |
General API quickstart¶
[1]:
import warnings import arviz as az import matplotlib.pyplot as plt import numpy as np import pymc3 as pm import theano.tensor as tt warnings.simplefilter(action="ignore", category=FutureWarning)
[2]:
%config InlineBackend.figure_format = 'retina' az.style.use("arviz-darkgrid") print(f"Running on PyMC3 v{pm.__version__}") print(f"Running on ArviZ v{az.__version__}")
Running on PyMC3 v3.9.0 Running on ArviZ v0.8.3
1. Model creation¶
Models in PyMC3 are centered around the
Model class. It has references to all random variables (RVs) and computes the model logp and its gradients. Usually, you would instantiate it as part of a
with context:
[3]:
with pm.Model() as model: # Model definition pass
We discuss RVs further below but let’s create a simple model to explore the
Model class.
[4]:
with pm.Model() as model: mu = pm.Normal("mu", mu=0, sigma=1) obs = pm.Normal("obs", mu=mu, sigma=1, observed=np.random.randn(100))
[5]:
model.basic_RVs
[5]:
[mu, obs]
[6]:
model.free_RVs
[6]:
[mu]
[7]:
model.observed_RVs
[7]:
[obs]
[8]:
model.logp({"mu": 0})
[8]:
array(-136.56820547)
It’s worth highlighting the design choice we made with
logp. As you can see above,
logp is being called with arguments, so it’s a method of the model instance. More precisely, it puts together a function based on the current state of the model – or on the state given as argument to
logp (see example below).
For diverse reasons, we assume that a
Model instance isn’t static. If you need to use
logp in an inner loop and it needs to be static, simply use something like
logp = model.logp. Here is an example below – note the caching effect and the speed up:
[9]:
%timeit model.logp({mu: 0.1}) logp = model.logp %timeit logp({mu: 0.1})
163 ms ± 5.89 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 46.6 µs ± 311 ns per loop (mean ± std. dev. of 7 runs, 10000:
) | Compute the log of the cumulative distribution function for Normal distribution | at the specified value. | | Parameters | ---------- | value: numeric | Value(s) for which log CDF is calculated. If the log CDF for multiple | values are desired the values must be provided in a numpy array or theano tensor. | | Returns | ------- | TensorVariable | | | | ---------------------------------------------------------------------- | Data and other attributes defined here: | | data = array([ 2.29129006, 0.35563108, 1.07011046, 1...00530838, -0... | | ---------------------------------------------------------------------- |
[11]:
dir(pm.distributions.mixture)
[11]:
[', 'warnings']
Unobserved Random Variables¶
Every unobserved RV has the following calling signature: name (str), parameter keyword arguments. Thus, a normal prior can be defined in a model context like this:
[12]:
with pm.Model(): x = pm.Normal("x", mu=0, sigma=1)
As with the model, we can evaluate its logp:
[13]:
x.logp({"x": 0})
[13]:
array(-0.91893853)
Observed Random Variables¶
Observed RVs are defined just like unobserved RVs but require data to be passed into the
observed keyword argument:
:
:
.
[17]:
with pm.Model() as model: x = pm.Uniform("x", lower=0, upper=1)
When we look at the RVs of the model, we would expect to find
x there, however:
[18]:
model.free_RVs
[18]:
):
[19]:
model.deterministics
[19]:
[x]
When displaying results, PyMC3 will usually hide transformed parameters. You can pass the
include_transformed=True parameter to many functions to see the transformed parameters that are used for sampling.
You can also turn transforms off:
[20]:
with pm.Model() as model: x = pm.Uniform("x", lower=0, upper=1, transform=None) print(model.free_RVs)
[x]
Or specify different transformation other than the default:
[21]:
import pymc3.distributions.transforms as tr with pm.Model() as model: # use the default log transformation x1 = pm.Gamma("x1", alpha=1, beta=1) # specify)\)
.0, 1.0, transform=Exp()) x2 = pm.Lognormal("x2", 0.0, 1.0) lognorm1 = model.named_vars["x1_exp__"] lognorm2 = model.named_vars["x2"] _, ax = plt.subplots(1, 1, figsize=(5, 3)) x = np.linspace(0.0, 10.0, 100) ax.plot( x, np.exp(lognorm1.distribution.logp(x).eval()), "--", alpha=0.5, label="log(y) ~ Normal(0, 1)", ) ax.plot( x, np.exp(lognorm2.distribution.logp(x).eval()), alpha=0\)
[23]:
Order = tr.Ordered() Logodd = tr.LogOdds() chain_tran = tr.Chain([Logodd, Order]) with pm.Model() as m0: x = pm.Uniform("x", 0.0, 1.0, shape=2, transform=chain_tran, testval=[0.1, 0.9]) trace = pm.sample(5000, tune=1000, progressbar=False, return_inferencedata=False)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [x] Sampling 4 chains for 1_000 tune and 5_000 draw iterations (4_000 + 20_000 draws total) took 24 seconds. There were 6 divergences after tuning. Increase `target_accept` or reparameterize. There were 5 divergences after tuning. Increase `target_accept` or reparameterize. There was 1 divergence after tuning. Increase `target_accept` or reparameterize. The number of effective samples is smaller than 25% for some parameters.
[24]:
_, ax = plt.subplots(1, 2, figsize=(10, 5)) for ivar, varname in enumerate(trace.varnames): ax[ivar].scatter(trace[varname][:, 0], trace[varname][:, 1], alpha=0.01) ax[ivar].set_xlabel(varname + "[0]") ax[ivar].set_ylabel(varname + "[1]") ax[ivar].set_title(varname) plt.tight_layout()
Lists of RVs / higher-dimensional RVs¶
Above we have seen how to create scalar RVs. In many models, you want multiple RVs. There is a tendency (mainly inherited from PyMC 2.x) to create list of RVs, like this:
[25]:
with pm.Model(): # bad: x = [pm.Normal(f"x_{i}", mu=0, sigma=1) for i in range(10)]
However, even though this works it is quite slow and not recommended. Instead, use the
shape kwarg:
[26]:
with pm.Model() as model: # good: x = pm.Normal("x", mu=0, sigma=1, shape=10)
x is now a random vector of length 10. We can index into it or do linear algebra operations on it:
[27]::
[28]:
with pm.Model(): x = pm.Normal("x", mu=0, sigma=1, shape=5) x.tag.test_value
[28]:
array([0., 0., 0., 0., 0.])
[29]:
with pm.Model(): x = pm.Normal("x", mu=0, sigma=1, shape=5, testval=np.random.randn(5)) x.tag.test_value
[29]:
array([-0.5658512 , 0.31887773, 0.15274679, 0.64807147, -1.03204502]).
With PyMC3 version >=3.9 the
return_inferencedata=True kwarg makes the
sample function return an
arviz.InferenceData object instead of a
MultiTrace.
InferenceData has many advantages, compared to a
MultiTrace: For example it can be saved/loaded from a file, and can also carry additional (meta)data such as date/version, or posterior predictive distributions. Take a look at the ArviZ Quickstart to
learn more.
[30]:
with pm.Model() as model: mu = pm.Normal("mu", mu=0, sigma=1) obs = pm.Normal("obs", mu=mu, sigma=1, observed=np.random.randn(100)) idata = pm.sample(2000, tune=1500, return_inferencedata=True)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [mu]
Sampling 4 chains for 1_500 tune and 2_000 draw iterations (6_000 + 8_000 draws total) took 14 seconds.
As you can see, on a continuous model, PyMC3 assigns the NUTS sampler, which is very efficient even for complex models. PyMC3 also runs tuning to find good starting parameters for the sampler. Here we draw 2000 samples from the posterior in each chain and allow the sampler to adjust its parameters in an additional 1500 iterations. If not set via the
cores kwarg, the number of chains is determined from the number of available CPU cores.
[31]:
idata.posterior.dims
[31]:
Frozen(SortedKeysDict({'chain': 4, 'draw': 2000}))
The tuning samples are discarded by default. With
discard_tuned_samples=False they can be kept and end up in a special property of the
InferenceData object.
You can also run multiple chains in parallel using the
chains and
cores kwargs:
[32]:
with pm.Model() as model: mu = pm.Normal("mu", mu=0, sigma=1) obs = pm.Normal("obs", mu=mu, sigma=1, observed=np.random.randn(100)) idata = pm.sample(cores=4, chains=6, return_inferencedata=True)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (6 chains in 4 jobs) NUTS: [mu]
Sampling 6 chains for 1_000 tune and 1_000 draw iterations (6_000 + 6_000 draws total) took 10 seconds.
[33]:
idata.posterior["mu"].shape
[33]:
(6, 1000)
[34]:
# get values of a single chain idata.posterior["mu"].sel(chain=1).shape
[34]:
(1000,)
PyMC3, offers a variety of other samplers, found in
pm.step_methods.
[35]:
list(filter(lambda x: x[0].isupper(), dir(pm.step_methods)))
[35]:
['BinaryGibbsMetropolis', 'BinaryMetropolis', 'CategoricalGibbsMetropolis', 'CauchyProposal', 'CompoundStep', 'DEMetropolis', 'DEMetropolisZ', 'ElemwiseCategorical', 'EllipticalSlice', 'HamiltonianMC', 'LaplaceProposal', 'Metropolis', 'MultivariateNormalProposal', 'NUTS', 'NormalProposal', 'PoissonProposal', :
[36]: for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 6 seconds. The number of effective samples is smaller than 25% for some parameters.
You can also assign variables to different step methods.
[37]:]) idata = pm.sample(10000, step=[step1, step2], cores=4, return_inferencedata=True)
Multiprocess sampling (4 chains in 4 jobs) CompoundStep >Metropolis: [mu] >Slice: [sd]
Sampling 4 chains for 1_000 tune and 10_000 draw iterations (4_000 + 40_000 draws total) took 14 seconds. The number of effective samples is smaller than 25% for some parameters.
3.2 Analyze sampling results¶
The most common used plot to analyze sampling results is the so-called trace-plot:
[38]:
az.plot_trace(idata);
Another common metric to look at is R-hat, also known as the Gelman-Rubin statistic:
[39]:
az.summary(idata)
[39]:
These are also part of the
forestplot:
[40]:
az.plot_forest(idata, r_hat=True);
Finally, for a plot of the posterior that is inspired by the book Doing Bayesian Data Analysis, you can use the:
[41]:
az.plot_posterior(idata);
For high-dimensional models it becomes cumbersome to look at all parameter’s traces. When using
NUTS we can look at the energy plot to assess problems of convergence:
[42]:
with pm.Model() as model: x = pm.Normal("x", mu=0, sigma=1, shape=100) idata = pm.sample(cores=4, return_inferencedata=True) az.plot_energy(idata);
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [x]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 5 seconds.().
[43]:
with pm.Model() as model: mu = pm.Normal("mu", mu=0, sigma=1) sd = pm.HalfNormal("sd", sigma=1) obs = pm.Normal("obs", mu=mu, sigma=sd, observed=np.random.randn(100)) approx = pm.fit()
Finished [100%]: Average Loss = 146.92
The returned
Approximation object has various capabilities, like drawing samples from the approximated posterior, which we can analyse like a regular sampling run:
[44]:
approx.sample(500)
[44]:
<MultiTrace: 1 chains, 500 iterations, 3 variables>
The
variational submodule offers a lot of flexibility in which VI to use and follows an object oriented design. For example, full-rank ADVI estimates a full covariance matrix:
[45]:
mu = pm.floatX([0.0, 0.0]) cov = pm.floatX([[1, 0.5], [0.5, 1.0]]) with pm.Model() as model: pm.MvNormal("x", mu=mu, cov=cov, shape=2) approx = pm.fit(method="fullrank_advi")
Finished [100%]: Average Loss = 0.0065707
An equivalent expression using the object-oriented interface is:
[46]:
with pm.Model() as model: pm.MvNormal("x", mu=mu, cov=cov, shape=2) approx = pm.FullRankADVI().fit()
Finished [100%]: Average Loss = 0.011343
[47]:
plt.figure() trace = approx.sample(10000) az.plot_kde(trace["x"][:, 0], trace["x"][:, 1]);
Stein Variational Gradient Descent (SVGD) uses particles to estimate the posterior:
[48]:
w = pm.floatX([0.2, 0.8]) mu = pm.floatX([-0.3, 0.5]) sd = pm.floatX([0.1, 0.1]) with pm.Model() as model: pm.NormalMixture("x", w=w, mu=mu, sigma=sd) approx = pm.fit(method=pm.SVGD(n_particles=200, jitter=1.0))
[49]:
plt.figure() trace = approx.sample(10000) az.plot_dist(trace["x"]);
For more information on variational inference, see these examples.
4. Posterior Predictive Sampling¶
The
sample_posterior_predictive() function performs prediction on hold-out data and posterior predictive checks.
[50]:
data = np.random.randn(100) with pm.Model() as model: mu = pm.Normal("mu", mu=0, sigma=1) sd = pm.HalfNormal("sd", sigma=1) obs = pm.Normal("obs", mu=mu, sigma=sd, observed=data) idata = pm.sample(return_inferencedata=True)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [sd, mu]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 7 seconds. The acceptance probability does not match the target. It is 0.8793693733942349, but should be close to 0.8. Try to increase the number of tuning steps.
[51]:
with model: post_pred = pm.sample_posterior_predictive(idata.posterior) # add posterior predictive to the InferenceData az.concat(idata, az.from_pymc3(posterior_predictive=post_pred), inplace=True)
[52]:
fig, ax = plt.subplots() az.plot_ppc(idata, ax=ax) ax.axvline(data.mean(), ls="--", color="r", label="True mean") ax.legend(fontsize=10);
/env/miniconda3/lib/python3.7/site-packages/IPython/core/pylabtools.py:132: UserWarning: Creating legend with loc="best" can be slow with large amounts of data. fig.canvas.print_figure(bytes_io, **kw)
4.1 Predicting on hold-out data¶
In many cases you want to predict on unseen / hold-out data. This is especially relevant in Probabilistic Machine Learning and Bayesian Deep Learning. We recently improved the API in this regard with the
pm.Data container. It is a wrapper around a
theano.shared variable.
[53]:
x = np.random.randn(100) y = x > 0 with pm.Model() as model: # create shared variables that can be changed later on x_shared = pm.Data("x_obs", x) y_shared = pm.Data("y_obs", y) coeff = pm.Normal("x", mu=0, sigma=1) logistic = pm.math.sigmoid(coeff * x_shared) pm.Bernoulli("obs", p=logistic, observed=y_shared) idata = pm.sample(return_inferencedata=True)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [x]
Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 6 seconds.
Now assume we want to predict on unseen data. For this we have to change the values of
x_shared and
y_shared. Theoretically we don’t need to set
y_shared as we want to predict it but it has to match the shape of
x_shared.
[54]:
with model: # change the value and shape of the data pm.set_data( { "x_obs": [-1, 0, 1.0], # use dummy values with the same shape: "y_obs": [0, 0, 0], } ) post_pred = pm.sample_posterior_predictive(idata.posterior)
[55]:
post_pred["obs"].mean(axis=0)
[55]:
array([0.02875, 0.50125, 0.97575])
[56]:
%load_ext watermark %watermark -n -u -v -iv -w
arviz 0.8.3 numpy 1.18.5 pymc3 3.9.0 last updated: Mon Jun 15 2020 CPython 3.7.7 IPython 7.15.0 watermark 2.0.2 | https://docs.pymc.io/en/v3/pymc-examples/examples/pymc3_howto/api_quickstart.html | 2022-05-16T14:15:10 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.pymc.io |
Migrate a SQL Server database to Azure
This article provides a brief outline of two options for migrating a SQL Server database to Azure. Azure has three primary options for migrating a production SQL Server database. This article focuses on the following two options:
- SQL Server on Azure VMs: A SQL Server instance installed and hosted on a Windows Virtual Machine running in Azure, also known as Infrastructure as a Service (IaaS).
- Azure SQL Database: A fully managed SQL database Azure service, also known as Platform as a Service (PaaS).
Both come with pros and cons that you will need to evaluate before migrating. The third option is Azure SQL Database managed instances.
The following migration guides will be useful, depending on which service you use:
- Migrate a SQL Server database to SQL Server in an Azure VM
- Migrate your SQL Server database to Azure SQL Database
Additionally, the following links to conceptual content will help you understand VMs better:
- High availability and disaster recovery for SQL Server in Azure Virtual Machines
- Performance best practices for SQL Server in Azure Virtual Machines
- Application Patterns and Development Strategies for SQL Server in Azure Virtual Machines
And the following links will help you understand Azure SQL Database better:
- Create and manage Azure SQL Database servers and databases
- Database Transaction Units (DTUs) and elastic Database Transaction Units (eDTUs)
- Azure SQL Database resource limits
Choosing IaaS or PaaS
When evaluating where to migrate your database, determine if IaaS or PaaS is more appropriate for you.
Choose SQL Server in Azure VMs if:
- You are looking to "lift and shift" your database and applications with minimal to no changes.
- You prefer having full control over your database server and the VM it runs on.
- You already have SQL Server and Windows Server licenses that you intend to use.
Choose Azure SQL Database if:
- You are looking to modernize your applications and are migrating to use other PaaS services in Azure.
- You do not wish to manage your database server and the VM it runs on.
- You do not have SQL Server or Windows Server licenses, or you intend to let licenses you have expire.
The following table describes differences between each service based on a set of scenarios.
To learn more about the differences between the two, see Choose the right deployment option in Azure SQL.
Can I still use tools such as SQL Server Management Studio and SQL Server Reporting Services (SSRS) with SQL Server in Azure VMs or Azure SQL Database?
Yes. All Microsoft SQL tooling works with both services. SSRS is not part of Azure SQL Database, though, and it's recommended that you run it in an Azure VM and then point it to your database instance.
I want to go PaaS but I'm not sure if my database is compatible. Are there tools to help?
Yes. The Data Migration Assistant is a tool that is used as a part of migrating to Azure SQL Database. The Azure Database Migration Service is a preview service that you can use for either IaaS or PaaS.
Can I estimate costs?
Yes. The Azure Pricing Calculator can be used for estimating costs for all Azure services, including VMs and database services. | https://docs.azure.cn/en-us/dotnet/migration/sql?view=azure-dotnet | 2022-05-16T15:53:38 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.azure.cn |
Specify a Control's Binding Expression
- 2 minutes to read
Specify a control’s binding expression to bind report controls to a report’s data source field(s). An expression can point to a data source field(s), calculated field(s), or report parameter(s), or combine static and dynamic content (for instance, append a text prefix or postfix to a value obtained from a database).
This topic describes the methods to specify a binding expression.
Design Time
After you bind a report to data, the Field List displays the data source hierarchy and gives access to the available data fields.
Use the Field List to bind report controls to data.
Drop a data field onto the report surface to create a new report control bound to this field.
Drop a data field onto a control to bind this control to the dropped field.
See the Field List topic for more information.
Specify a control’s binding expression directly.
Click a property’s ellipsis button and specify an expression in the invoked Expression Editor.
Switch to the Expressions tab in the Properties panel to access the properties where you can specify a binding expression.
After you bind a report control to data, use the Format String property to format the control’s value.
Runtime
You can specify a report control’s binding expression at runtime. Create an ExpressionBinding object, specify its settings, and add the created object to the control’s ExpressionBindings collection. Specify the following properties in the created object:
- EventName - specifies the event handler that evaluates the expression.
- PropertyName - defines the property to apply a binding expression to.
- Expression - specifies the binding expression.
The following code demonstrates how to specify an expression for a label’s Text property:
using DevExpress.XtraReports.UI; public XtraReport1() { // ... ExpressionBinding expressionBinding = new ExpressionBinding("BeforePrint", "Text", "[UnitPrice]*[UnitsInStock]"); xrLabel1.ExpressionBindings.Add(expressionBinding); } | https://docs.devexpress.com/XtraReports/1180/detailed-guide-to-devexpress-reporting/use-report-controls/bind-report-controls-to-data/specify-a-control-s-binding-expression | 2022-05-16T16:44:45 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.devexpress.com |
What is an XSMX file?
An XSMX file is a questionnaire file created with ExamSoft SofTest application that is used to conduct exams. It is used to send the exam paper to students that they take online and submit the result back as XMDX file. ExamSoft provides assessment solutions based on data-driver decisions through education technology and SofTest application available for both Windows and MacOS is one of these. The XSMX file contains all the questions and is opened in the SofTest application on student’s computer.
XSMX File Format
The XSMX files are saved and sent to exam takers as password protected files. The password protection is meant to ensure the security and secrecy of the exam questions. Password of the exam is shared with end user for opening the file and taking the test. | https://docs.fileformat.com/misc/xsmx/ | 2022-05-16T16:19:38 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.fileformat.com |
react_jucesubdirectory of the root of the React-JUCE project.
react-jucenpm package carries a template generator that you can use to boostrap a React application for your project. For this step, let's assume your JUCE project directory is at
~/MyProject, the source files are at
~/MyProject/Source, and we want to put the React application source at
~/MyProject/Source/jsui(note, you can put this wherever you want). Now, to use the template generator, we start again at the root of the React-JUCE git repository:
jsuidirectory as suggested in the example command above, fill it with a basic "Hello World!" app, and install local dependencies like React.js and Webpack. Like the [[GainPlugin Example|running-the-example]], we now need to build our output bundle.
reactjuce::ReactApplicationRoot. This class is mostly just a
juce::Component, and in that way you should think about using it the same way you might use a
juce::Sliderin your application.
MainComponentor our
AudioProcessorPluginEditorat the top of our project:
reactjuce::ReactApplicationRootis easy, and should be familiar if you've worked with
juce::Components before:
appRootwill be the point at which our React application code from earlier will "take over," and it's also important to note that you can add this
appRootwherever you need it in your application. For example, if you want to write your entire interface in React, you should mount your
appRootat the top of your application in your
MainComponentor your
AudioProcessorPluginEditor. If instead you want only to write your preset switcher in React, you can build the rest of your interface as usual with JUCE, and add the
appRootwherever the preset switcher should go within the context of your interface.
appRootwhere to find the JavaScript bundle we made! So, putting the last piece together here: | https://docs.react-juce.dev/guides/integrating_your_project | 2022-05-16T14:34:48 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.react-juce.dev |
Rules¶
Introduction¶
Along with ESP Easy R108, a new feature was enabled, named Rules. Rules can be used to create very simple flows to control devices on your ESP.
Note
To assist writing rules, one may prefer to use an editor like Notepad++ which supports user defined languages to colorize the text.
See the
Misc/Notepad++ folder for a Notepad++ language definition which can be used to colorize rules.:
Unary operator:
Equals sign:
Delimiters:
<space>
Devicename - Value separator:
An errormessage will be shown if any of these characters is used.
Special Notations¶
Quotes (single, double or back quotes) Marking begin and end of a command parameter
Note
Formulas used in tasks (thus not using the rules) may refer to
%value% for the new current value and
%pvalue% for the previous value before
PLUGIN_READ was called.
These notations cannot be used in the rules.
If a previous value is needed, one has to use variables for it.¶
<trigger>
The trigger can be an device value being changed:
DeviceName#ValueName:
Matching events¶
In rules, one can act on any event generated by ESPEasy.
Typical notation of such a rules block is:
on ... do // Code in the event handling rules block endon
An event always has an event name, with optional event values.
The event name and values are separated by an
=sign and the event values themselves are separated by a comma (
,).
Example events:
System#BootGenerated at boot, does not have any eventvalues
Rules#Timer=1Generated when a rules timer expires. Only one event value indicating which timer expired.
Clock#Time=Sun,16:29Clock event generated every minute.
bme#Temperature=21.12Event from the task called
bmesignalling its value
Temperaturewas updated. Event value shows the new measured value.
bme#All=21.12,49.23,1010.34Event from the task called
bme, which is configured to send all values in a single event. The event values show the new measured values in the order of the parameters of that task.
In the rules, such events can be handled by matching these events.
Note
When trying to match different versions of the same event, special care must be taken to make sure the correct event block is matched.
For example
on bme* do may be matched on the event
bme#All=... even when a block for
on bme#All do exists.
Matching events using wildcard¶
Added: 2022/04/17
ESPEasy does generate events which may be very similar, like when monitoring GPIO pins.
EVENT: PCF#1=0 EVENT: PCF#2=0 ...
To match such events in a single rules block, use:
on PCF* do
See
%eventname% for how to know which pin is then used.
Test¶
..
Event name (%eventname% or %eventpar%)¶
Added: 2022/04/17
%eventname% Substitutes the event name (everything before the ‘=’ sign,
or the complete event if there is no ‘=’ in the event)
This can be useful for shortening the rules by matching events using a wildcard and then by using
substring one may deduct the event origin.
For example, trying to match events triggered by monitoring a number of pins on a GPIO extender.
Typical events generated by the GPIO extenders look like this:
EVENT: PCF#1=0 EVENT: PCF#2=0 ...
Using
%eventname% :
on PCF* do logentry,"PCF pin: {substring:4:6:%eventname%} State: %eventvalue1%" endon
%eventpar% is the part of
%eventname% after the first
#character.
This allows to simplify the rules block described above:
Using
%eventpar% :
on PCF* do logentry,"PCF pin: %eventpar% State: %eventvalue1%" endon
Event value (%eventvalue%)¶
Rules engine specific:
%eventvalueN% - substitutes the N-th event value (everything that comes after
the ‘=’ sign).
For historic reasons,
%eventvalue% without a number, can also be used to access the first event value.
Thus it will be the same when using
%eventvalue1%.
There is one exception; When the event starts with an
%eventvalue% does refer to the literal event, or the part of the event after the
!Serial# followed by the received string.
!,
#character. This was introduced for the Serial Server plugin (P020) which sends events like
Changed/Added: 2022/04/20:
Removed the limit of upto 4 event values and using wildcard one may even use string eventvalues.
%eventvalue0%- will be substituded with all event values.
%eventvalueX%- will be substituded by
0if there is no X-th event value.
%eventvalueX|Y%X = event value nr > 0, Y = default value when eventvalue does not exist. N.B. default value can be a string, thus
%eventvalue3|[int#3]%should be possible as long as the default value not contains neither
Empty event values are now also possible. e.g. this event call with 6 event values:
event,MyEvent=1,,3,4,,6
Event values can now also be strings, just make sure to use the wildcard when matching the event name in the rules.
Add option to restrict which commands can be executed using the
restrictcommand prefix, to safely execute commands handed via eventvalues.
Using Event Values as command¶
Added: 2022/04/20
With the possibility to use strings as event values, one can also use it to send complete commands via events.
To execute an event value as a command, it is best to also set an empty string as default value, for when the event is called without event values.
If no empty default value is given, it will be replaced by
0 , which is not a valid command in ESPEasy.
e.g. This event:
event,eventvalues='logentry,test'
on eventvalues* do %eventvalue1|% endon
Log output:
11233271 : Info : EVENT: eventvalues='logentry,test' 11233280 : Error : Rules : Prefix command with 'restrict': restrict,logentry,test 11233283 : Info : ACT : (restricted) restrict,logentry,test 11233285 : Info : test
As can be seen, the rules parser will try to prefix lines starting with
eventvalue
with the
restrict attribute, and log an error to warn the user about this.
This
restrict attribute will not allow all commands to be executed.
By default, there are no restrictions on which commands can be executed via rules.
However, when handling events, the intentions of the sender may not always be honest.
For example,
event,myevent=%eventvalue100|factoryreset% might be considered tricky.
As there is very likely no 100-th eventvalue, so this example will evaluate to
factoryreset and that’s not a command you want to execute.
Note
Be careful when using event values as a command. Always use the
restrict attribute.
Examples¶
Matching event named
eventvalues to use more than 4 eventvalues:
on eventvalues* do logentry,"test eventvalues: 0:%eventvalue% 1:%eventvalue1% 2:%eventvalue2% 3:%eventvalue3% 4:%eventvalue4% 5:%eventvalue5% 6:%eventvalue6%" logentry,"All eventvalues: %eventvalue0%" endon
Log output of a test event:
572832 : Info : EVENT: eventvalues=1,2,3,4,5,6 572840 : Info : ACT : logentry,"test eventvalues: 0:1 1:1 2:2 3:3 4:4 5:5 6:6" 572843 : Info : test eventvalues: 0:1 1:1 2:2 3:3 4:4 5:5 6:6 572845 : Info : ACT : logentry,"All eventvalues: 1,2,3,4,5,6" 572847 : Info : All eventvalues: 1,2,3,4,5,6
Note
This can use strings as well as numericals. To match events with string values, one must include the wildcard (
*) as it will otherwise not be matched since there is a check for numerical values.
Using default value for non-existing event values:
on eventvalues* do logentry,"Not existing eventvalue: %eventvalue10|NaN%" endon
Log output for
event,eventvalues=1,2, ,4,5,6 :
1086458 : Info : EVENT: eventvalues=1,2, ,4,5,6 1086484 : Info : ACT : logentry,"Not existing eventvalue: NaN" 1086485 : Info : Not existing eventvalue: NaN
Sample rules section:
on remoteTimerControl do timerSet,1,%eventvalue% endon
Now send this command to the ESP:
.
Task value events¶
Tasks also send out events when a read was successful.
There is a number of triggers for a task to perform a read:
Periodical read. A task calls its own read function every <interval> number of seconds. (Setting per task)
TaskRuncommand. A task can be forced to take a reading via a command. This can be sent from rules, HTTP calls, etc.
Some task reschedule their own read calls right after the sensor is done collecting data. (e.g. the BME280)
Event per task value¶
By default, an event is created per read task value. For example a task called “bme” (using BMx280 plugin) may output upto 3 values:
Temperature
Humidity
Pressure
This would then generate upto 3 events:
bme#Temperature=21.12
bme#Humidity=49.23
bme#Pressure=1010.34
Single event with all values¶
(Added: 2021-01-11)
Each task may be configured to combine all task values in a single event, by checking “Single event with all values”.
This will create a single event with variable name “All” like this:
bme#All=21.12,49.23,1010.34
To access all event values in the rules:
on bme#All do LogEntry,"temp: %eventvalue1% hum: %eventvalue2% press: %eventvalue3%" endon
There is a number of reasons to combine all task values in a single event:
Less events to process, as the rules have to be parsed for each event.
All task values of the same read are present at the same time.
Especially the last reason, to have all values present when handling an event, is very useful. When you need to take an action based on 2 values of the same sensor, you must make sure they both belong to the same sample.
A typical example is to compute the dew point, which is a relation between temperature and (relative) humidity.
on bme#All do LogEntry,"Dew point: %c_dew_th%(%eventvalue1%,%eventvalue2%)" endon
Internal variables¶
A really great feature to use is the internal variables. You set them like this:
Let,<n>,<value>
Where n must be a positive integer (type
uint32_t) and the value a floating point value. To use the values in strings you can
either use the
%v7% syntax or
[var#7]. BUT for formulas you need to use the square
brackets in order for it to compute, i.e.
[var#12].
If you need to make sure the stored value is an integer value, use the
[int#n] syntax. (i.e.
[int#12])
The index
n is shared among
[var#n] and
[int#n].
On the “System Variables” page of the web interface all set values can be inspected including their values. If none is set, “No variables set” will be shown.
If a specific system variable was never set (using the
Let command), its value will be considered to be
0.0.
Special task names¶
You must not use the task names
Plugin,
var
int as these have special meaning.
Plugin can be used in a so called
PLUGIN_REQUEST, for example:
[Plugin#GPIO#Pinstate#N] to get the pin state of a GPIO pin.
[Plugin#MCP#Pinstate#N] to get the pin state of a MCP pin.
[Plugin#PCF#Pinstate#N] to get the pin state of a PCF pin.
For expanders you can use also the following:
[Plugin#MCP#PinRange#x-y] to get the pin state of a range of MCP pins from x o y.
[Plugin#PCF#PinRange#x-y] to get the pin state of a range of PCF pins from x o y..
{strtol:2:<string>}to convert BIN (base 2)
toBin / toHex¶
(Added: 2020-12-28)
Convert an integer value into a binary or hexadecimal representation.
Usage:
{toBin:<value>}Convert the number into binary representation.
{toHex:<value>}Convert the number into hexadecimal representation.
<value>The number to convert, if it is representing a valid unsigned integer value.
For example:
on myevent do let,1,%eventvalue1% let,2,{bitset:9:%eventvalue1%} LogEntry,'Values {tobin:[int#1]} {tohex:[int#1]}' LogEntry,'Values {tobin:[int#2]} {tohex:[int#2]}' endon
320528: HTTP: Event,eventname=123 320586: EVENT: eventname=123 320594: ACT : let,1,123 320603: ACT : let,2,635 320612: ACT : LogEntry,'Values 1111011 7b' 320618: Values 1111011 7b 320631: ACT : LogEntry,'Values 1001111011 27b' 320635: Values 1001111011 27b
bitread¶
(Added: 2020-12-28)
Read a specific bit of a number.
Usage:
{bitRead:<bitpos>:<string>}
<bitpos>Which bit to read,” command
bitRead()
For example:
on myevent do logentry,{bitread:0:123} // Get least significant bit of the given nr '123' => '1' logentry,{bitread:%eventvalue1%:%eventvalue1%} // Get bit nr given by 1st eventvalue from 2nd eventvalue => Either '0' or '1' endon
bitset / bitclear¶
(Added: 2020-12-28)
To set or clear a specific bit of a number to resp. ‘1’ or ‘0’.
Usage:
{bitSet:<bitpos>:<string>}Set a specific bit of a number to ‘1’.
{bitClear:<bitpos>:<string>}Set a specific bit of a number to ‘0’.
With:
<bitpos>Which bit to set,” commands
bitSet() and
bitClear()
For example:
on myevent do logentry,{bitset:0:122} // Set least significant bit of the given nr '122' to '1' => '123' logentry,{bitclear:0:123} // Set least significant bit of the given nr '123' to '0' => '122' logentry,{bitset:%eventvalue1%:%eventvalue1%} // Set bit nr given by 1st eventvalue to '1' from 2nd eventvalue endon
bitwrite¶
(Added: 2020-12-28)
To set a specific bit of a number to a given value.
Usage:
{bitWrite:<bitpos>:<string>:<bitval>}
<bitpos>Which bit to set, starting at 0 for the least-significant (rightmost) bit.
<string>The number from which to read, if it is representing a valid unsigned integer value.
<bitval>The value to set in the given number. N.B. only the last bit of this integer parameter is used. (Thus ‘0’ and ‘2’ as parameter will give the same result)
Note
Bitwise operators act on
unsigned integer types, thus negative numbers will be ignored.
Note
The order of parameters differs from the “Arduino” command bitSet()
For example:
on myevent do logentry,{bitwrite:0:122:1} // Set least significant bit of the given nr '122' to '1' => '123' endon
urlencode¶
(Added: 2021-07-22)
Replace any not-allowed characters in an url with their hex replacement (%-notation).
Usage:
{urlencode:"string to/encode"} will result in
string%20to%2fencode
XOR / AND / OR¶
(Added: 2020-12-28)
Perform bitwise logic operations XOR/AND/OR
Note
Bitwise operators act on
unsigned integer types, thus negative numbers will be ignored.
Usage:
{XOR:<uintA>:<uintB>}
{AND:<uintA>:<uintB>}
{OR:<uintA>:<uintB>}
With:
<uintA>The first number, if it is representing a valid unsigned integer value.
<uintB>The second number, if it is representing a valid unsigned integer value.
For example:
{xor:127:15}to XOR the binary values
1111111and
1111=>
1110000
{and:254:15}to AND the binary values
11111110and
1111=>
1110
{or:254:15}to OR the binary values
11111110and
1111=>
11111111
on eventname do let,1,%eventvalue1% let,2,{abs:%eventvalue2%} let,3,{and:[int#1]:[int#2]} LogEntry,'Values {tobin:[int#1]} AND {tobin:[int#2]} -> {tobin:[int#3]}' endon 1021591: EVENT: eventname=127,15 1021601: ACT : let,1,127 1021611: ACT : let,2,15.00 1021622: ACT : let,3,15 1021639: ACT : LogEntry,'Values 1111111 AND 1111 -> 1111' 1021643: Values 1111111 AND 1111 -> 1111
Abs¶
(Added: 2020-12-28)
Perform ABS on integer values.
Usage:
abs(<value>)
With:
<value>The number to convert into an absolute value, if it is representing a valid numerical value.
For example:
abs(-1)Return the absolute value => 1
Note
Bitwise operators act on
unsigned integer types, thus negative numbers will be ignored.
This makes the use of ‘’abs’’ necessary for using bitwise operators if the value may become negative.
on eventname do let,1,%eventvalue1% // Don't change the value let,2,{bitset:9:abs(%eventvalue1%)} // Convert to positive and set bit '9' LogEntry,'Values {tobin:[int#1]} {tohex:[int#1]}' LogEntry,'Values {tobin:[int#2]} {tohex:[int#2]}' endon
Called with
Event,eventname=-123 :
110443: EVENT: eventname=-123 110452: ACT : let,1,-123 110462: ACT : let,2,635 110475: ACT : LogEntry,'Values {tobin:-123} {tohex:-123}' 110484: Values {tobin:-123} {tohex:-123} 110496: ACT : LogEntry,'Values 1001111011 27b' 110500: Values 1001111011 27b
As can be seen in the logs, when calling bitwise operators with negative numbers, the value is ignored and thus the expression is still visible in the output.
Therefore make sure to use the
abs function before handing the value over to binary logical operators.
Constrain¶
(Added: 2020-12-28)
Constrains a number to be within a range.
Usage:
{constrain:<value>:<low>:<high>}
With:
<value>The number to constrain, if it is representing a valid numerical value.
<low>Lower end of range, if it is representing a valid numerical value.
<high>Higher end of range, if it is representing a valid numerical value.
Math Functions¶
(Added: 2021-01-10)
ESPEasy also supports some math functions, like trigonometric functions, but also some more basic functions.
Basic Math Functions¶
log(x)Logarithm of x to base 10.
ln(x)Natural logarithm of x.
abs(x)Absolute value of x.
exp(x)Exponential value, e^x.
sqrt(x)Square root of x. (x^0.5)
sq(x)Square of x, x^2.
round(x)Rounds to the nearest integer, but rounds halfway cases away from zero (instead of to the nearest even integer).
Rules example:
on eventname2 do let,1,sq(%eventvalue1%) let,2,sqrt([var#1]) let,3,=log(%eventvalue2%) let,4,ln(%eventvalue2%) LogEntry,'sqrt of [var#1] = [var#2]' LogEntry,'log of %eventvalue2% = [var#3]' LogEntry,'ln of %eventvalue2% = [var#4]' endon
Called with event
eventname2=1.234,100
213293 : Info : EVENT: eventname2=1.234,100 213307 : Info : ACT : let,1,sq(1.234) 213316 : Info : ACT : let,2,sqrt(1.522756) 213328 : Info : ACT : let,3,=log(100) 213337 : Info : ACT : let,4,ln(100) 213346 : Info : ACT : LogEntry,'sqrt of 1.522756 = 1.234' 213351 : Info : sqrt of 1.522756 = 1.234 213357 : Info : ACT : LogEntry,'log of 100 = 2' 213361 : Info : log of 100 = 2 213369 : Info : ACT : LogEntry,'ln of 100 = 4.60517018598809' 213374 : Info : ln of 100 = 4.60517018598809
Trigonometric Functions¶
Since the trigonometric functions add quite a bit to the compiled binary, these functions are not included in builds which have a flag defined to limit their build size.
All trigonometric functions are present in 2 versions, for angles in radian and with the
_d suffix for angles in degree.
Radian Angle:
sin(x)Sine of x (radian)
cos(x)Cosine of x (radian)
tan(x)Tangent of x (radian)
aSin(x)Arc Sine of x (radian)
aCos(x)Arc Cosine of x (radian)
aTan(x)Arc Tangent of x (radian)
Degree Angle:
sin_d(x)Sine of x (degree)
cos_d(x)Cosine of x (degree)
tan_d(x)Tangent of x (degree)
aSin_d(x)Arc Sine of x (degree)
aCos_d(x)Arc Cosine of x (degree)
aTan_d(x)Arc Tangent of x (degree)
Alternatively, TASKname and/or VARname can be used instead of TASKnr and VARnr:
TaskValueSet,TASKname,VARname,Value TaskValueSet,TASKnr,VARname,Value TaskValueSet,TASKname // Alternative for above example using TASKname/VARname on sw1#state do if [dummy#var1]=0 TaskValueSet dummy,var1,0 else TaskValueSet dummy,var1,1 endif gpio,16,[dummy#var1] gpio,13,[dummy#var1] endon on sw1a#state do if [dummy#var1]=0 TaskValueSet dummy,var1,1 else TaskValueSet dummy,var1 call¶
When you enter this first command with the correct IP address in the URL of your browser:
.)
Authentication to Domoticz via SendToHTTP¶
It is possible to use authentication in Domoticz and use it via SendToHTTP.
MkE= is the base64 encoded username (‘2A’ in this example)
OVM= is the base64 encoded password (‘9S’ in this example)
SendToHTTP xx.xx.xx.xx,8080,/json.htm?username=MkE=&password=OVM&type=command¶m=switchlight&idx=36&switchcmd=On
See also Domoticz Wiki
Iterate over lookup table¶
Sometimes you need to perform actions in a sequence. For example turn on a few LEDs in some specific order. This means you need to keep track of the current step, but also know what specific pin to turn on or off.
Here an example just showing a number of GPIO pins that could be turned on and off. For the example, the GPIO pin numbers are just sent to a log, but it is easy to convert them to a GPIO command.
on init do // Set the pin lookup sequence let,1,1 let,2,2 let,3,3 let,4,-1 let,15,0 // Used for keeping the position in the sequence asyncevent,loop // Trigger the loop endon on run do // Use %eventvalue1% as the index for the variable if [int#%eventvalue1%] >= 0 LogEntry,'Off: [int#%eventvalue1%]' endif if [int#%eventvalue2%] >= 0 LogEntry,'On : [int#%eventvalue2%]' endif endon on loop do if [int#15]<4 let,14,[int#15] // Store the previous value let,15,[int#15]+1 // Increment asyncevent,run=[int#14],[int#15] asyncevent,loop endif endon
This can be started by sending the event
init like this:
event,init
N.B. the events
run and
loop are not executed immediately, as that will cause a recursion and thus using a lot of memory to process the rules.
Therefore the
asyncevent is used to append the events to a queue.
This can be made much more dynamic as you may trigger a
taskrun, which will send an event when new values are read.
Like this it is possible to automate a complex sequence of steps as not only GPIO pins can be stored, but also task indices. | https://espeasy.readthedocs.io/en/latest/Rules/Rules.html?highlight=format | 2022-05-16T14:51:13 | CC-MAIN-2022-21 | 1652662510138.6 | [] | espeasy.readthedocs.io |
Maiar Web Wallet Extension
Popularly referred to as the "future of money," Maiar currently has a robust web wallet extension known as the Maiar DeFi Wallet Extension. It is a powerful browser extension for the Elrond Wallet that effectively automates and reduces the steps and time required for users to interact with Elrond Decentralized apps.
The Maiar DeFi Wallet can be installed on Chrome, Brave, and other chromium-based browsers. This extension is free and secure, with compelling features that allow you to create a new wallet or import existing wallets, manage multiple wallets on the Elrond mainnet, and store Elrond tokens such as EGLD, ESDT, or NFTs on the Elrond Network with easy accessibility.
Let's walk through the steps how to install and set up the Maiar DeFi Wallet extension:
PrerequisitesPrerequisites
Add Maiar DeFi Wallet to your browserAdd Maiar DeFi Wallet to your browser
On the Chrome Web Store Extension, search for Maiar DeFi Wallet. and add it to your browser.
Confirm the action in the pop-up.
You should receive a notification that the extension has been added successfully.
Set up Maiar DeFi walletSet up Maiar DeFi wallet
Once it has been successfully installed, click on the extension to get started.
You will be presented with two options: you can either Create new wallet or Import existing wallet.
Create a new walletCreate a new wallet
Step 1: To get started, first install the Maiar DeFi wallet extension.
Step 2: Open up the extension and click on ‘’Create new wallet”.
Step 3: Next, a secret phrase consisting of a set of 24 secret words will be displayed. Safely backup these secret words. We strongly recommend you write them down or copy and store them in a safe place like a password manager. These secret words are the key to your wallet account and cannot be recovered if lost.
Step 4: Before proceeding to the next step, confirm that you have safely stored your secret phrase.
Step 5: For further verification, you will be prompted to input some of the secret words.
Step 6: Create a password that will be used to access the wallets stored in the Maiar DeFi wallet extension. Ensure you keep this password safe as it will be needed to access your wallets regularly. Please note that this password cannot be recovered if lost.
Step 7: Completed! Your Maiar DeFi Wallet has been successfully created and set to be used.
Import existing walletImport existing wallet
Do you already have a wallet?
Then there is no need to create a new one. The Maiar Wallet Extension provides an option to import your existing wallet. However, to import an existing wallet you must have access to its secret (recovery) phrase.
The Maiar wallet has a set of 24-words, which serve as your wallet’s secret phrase. Using a secret phrase to import an existing wallet does not affect your wallet in any way.
To get started:
Step 1: With the Maiar DeFi wallet extension installed. Click on ‘’Import existing wallet”.
Step 2: Next, enter your 24-word secret phrase. You can either enter these words one at a time or you can simply paste in the words using the "paste" icon.
Step 3: Enter in your wallet password and confirm this password.
Step 4: Completed! Your Maiar DeFi Wallet has been successfully imported and set to be used.
Key featuresKey features
Now you have a wallet registered in the Maiar DeFi Wallet Extension and it's ready to use. Great! Here's what you can do with this wallet:
Send to a walletSend to a wallet
One of the key features of this extension is that it allows you to send funds from your wallet to another wallet. To use this feature, you will need to have some funds in your wallet before proceeding.
To get started
Step 1: Go to the Maiar Wallet extension, enter your password and click on “Send”.
Step 2: Enter the address of the wallet you intend to send to and the amount.
(Optional) Step 3: Enter the data. This is a description of the transaction or any information you wish to pass through.
Step 4: Click on the “Continue” button to complete the transaction.
Lock/unlockLock/unlock
After 60minutes of being inactive, the extension automatically locks itself. You can unlock it at any time using your password. In addition, you can lock the extension manually, by clicking the “lock” icon in the header.
Deposit to a walletDeposit to a wallet
A deposit can be made to your wallet using the wallet extension. This feature allows you to share your QR code or wallet address to receive a token deposit. To get started:
Open up your Maiar Wallet extension.
Next, click on the "deposit" and share your QR code or wallet address.
Transactions historyTransactions history
On the wallet extension dashboard, the wallet records all transactions sent and received in your wallet. If you are a new user, it says "No transactions found for this wallet" until you make your first transaction.
NetworksNetworks
In the settings section on your extension dashboard, you can connect to the different networks provided by Elrond, such as the mainnet, testnet, and devnet.
Choose either of these networks.
Connecting the Maiar DeFi Wallet to Maiar Exchange AppConnecting the Maiar DeFi Wallet to Maiar Exchange App
You can now connect Maiar Exchange to the Maiar DeFi wallet in real-time. With this connection, you will be able to log in to the Maiar exchange using the Maiar DeFi wallet extension in a few steps. Follow these steps to proceed:
Step 1: To get started, go to the Maiar Exchange page on the right section of the page, click on the “connect” button.
Step 2: Select "Maiar DeFi Wallet" from the options displayed.
Step 3: Lastly, enter your password and click on the wallet address you want to connect to.
- In a split second, the Maiar Exchange home page automatically reloads. You’ll notice your account has been added to the right section of the page.
Successful 🎉 | https://docs.elrond.com/wallet/wallet-extension/ | 2022-05-16T14:27:59 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.elrond.com |
Getting started with access control: portal
Role Based Access Control (RBAC) lets you provide limited access to your Joyent account and Triton Object Storage (formerly known as Manta) to other members of your organization.
If you haven't already, be sure to read the RBAC overview, then come back here to continue. Note that the command line interface is more flexible that the portal interface, and there are RBAC actions that are not available in the portal.
In this walkthrough, we'll use the Triton Compute Service portal to:
- create a subuser
- create a role that allows the subuser to log in to the portal
- create a minimal policy
Since we're starting with a blank slate, no subusers, no roles, no policies, it's easier to build each of these components from the bottom up. So we'll start by creating a policy, creating a role that uses that policy, the finally creating a user who assumes that role.
Subusers do not currently work with Docker instances. This functionality will be available in a future version of RBAC.
It is also possible to create users with Triton CLI. You must use Triton CLI to modify permissions for users, as that feature is not availble in the portal.
Creating the login policy
First, we'll create a policy that allows a subuser to log in to the portal and to update their user information and SSH keys
- Click Accounts in the left navigation pane.
- Click Policies.
- Click Create Policy in the upper right corner.
- Name the policy "Login".
Add the following rules. Make sure that you spell the actions like
getusercorrectly. Rules are case sensitive. The capitalized
CANis a convention.
CAN changeuserpassword
CAN createuserkey
CAN deleteuserkey
CAN getuser
CAN listuserkeys
CAN updateuser
You can enter the rules separated by commas in a single line:
CAN changeuserpassword, createuserkey, deleteuserkey, getuser, listuserkeys, and updateuser
- Click Create Policy to create the policy.
- The policy "Login" will appear in the list of policies.
You can learn more about policies in Working with policies, and you can learn about the available rules in Working with rules.
Creating a basic user role
Next, we'll create a role named "Basic User" that uses the "Login" policy.
- Click Roles in the left navigation pane.
- Click Create Role in the upper right corner.
- Create a role named "Basic User".
- Expand the list of policies by clicking on the
- Check the "Login" policy to make it part of the role.
- Click Create Role to create the role.
- The role "Basic User" will appear in the list of roles.
A role must have a name. You can create a role without policies or users.
You can learn more about roles in Working with roles.
Creating a user
Now, we'll create a user and assign them the "Basic User" role.
- Click Users in the left navigation pane.
- Click Create User in the upper right.
Fill out the form. Some of the fields will be filled with data from the main account. You must provide at least the following fields:
- Username
- Country
- You'll see the "Basic Role" you created earlier in the list of available roles. Click it to move it to the list of assigned roles.
- Click Create User at the bottom of the form.
The subuser will appear in the list of users.
You can learn more about users in Working with users.
Logging in as a subuser
We suggest using a different browser than the one logged into your Triton account. Otherwise, log out of the arYou may want to do this part in a different browser than the one you used to create the user. Otherwise log out of your Triton account.
To log in as a subuser, append
/subusername to the account owner's accountname, and use the subuser's password. For example, if your Triton account name is
thejungle and the subuser's username is
george, you would log in as
thejungle/george.
If you followed the steps above, you should be able to log in as the subuser. In this walkthrough we created a role that uses a policy that provides the minimum permissions for a subuser to log in. The subuser can view and modify their own information, such as changing the password, but nothing else. The subuser will not be able to list or create instances, work with Triton Object Storage, or create other users.
If a subuser doesn't have any roles that allow them to log in, they'll be redirected to an "Access Denied" page.
If a subuser forgets their password, they can use the reset password link, and use
<account>/<subuser> in the user name field.
In the next section we'll add policies that allow more actions and create roles that use those policies.
Adding more roles and policies
The "Basic User" role is very limited. It doesn't allow the subuser to do anything useful. In this section we'll create some new roles and the policies to go with them.
Adding a support role
Suppose we want people in our support department to be able to see all instances in the account, start and stop them, but not be able to create new instances. We already have the "Login" policy that allows subusers to log in to the portal. Now let's add policies that allow starting and stopping instances, and listing them.
The first policy is "Listing". It allows listing of anything in the datacenter.
CAN getaccount
CAN getnetwork
CAN listdatacenters
CAN listmachines
CAN listpackages
CAN listnetworks
CAN listimages
CAN listfirewallrules
Note: The
getaccount and
getnetwork actions are needed in order to allow listing in the portal.
The next policy is "Reboot". It allows starting, stopping, and rebooting instances.
CAN startmachine
CAN stopmachine
CAN rebootmachine
CAN getmachine
Note: The
getmachine action is needed in order to allow the portal to get the current state of an instance.
Adding an operator role
We want some of our subusers to be able to create instances. We'll create a new policy called "Creating" and a new role called "Operator".
The "Creating" policy looks like this.
CAN createmachine
CAN deletemachine
CAN getmachine
CAN listkeys
Note: The
listkeys action is needed by the portal.
The "Operator" role uses all of the policies we've defined.
Differences between using the portal and CLI
There are some things that you can do with the RBAC command line interface that you cannot do in the portal.
- When a user is assigned a role, the user is added to both the default members list and the members list.
- Users cannot assume roles if they are not in the default members list.
- The portal interface automatically associates roles with every resource in the account.
- The portal does not give the ability to assign permissions to users. | https://docs.joyent.com/public-cloud/rbac/portal | 2022-05-16T15:44:23 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.joyent.com |
Build multiclass classifiers with Amazon SageMaker linear learner
Amazon SageMaker is a fully managed service for scalable training and hosting of machine learning models. We’re adding multiclass classification support to the linear learner algorithm in Amazon SageMaker. Linear learner already provides convenient APIs for linear models such as logistic regression for ad click prediction, fraud detection, or other classification problems, and linear regression for forecasting sales, predicting delivery times, or other problems where you want to predict a numerical value. If you haven’t worked with linear learner before, you might want to start with the documentation or our previous example notebook on this algorithm. If it’s your first time working with Amazon SageMaker, you can get started here.
In this example notebook we’ll cover three aspects of training a multiclass classifier with linear learner: 1. Training a multiclass classifier 1. Multiclass classification metrics 1. Training with balanced class weights
Training a multiclass classifier
Multiclass classification is a machine learning task where the outputs are known to be in a finite set of labels. For example, we might classify emails by assigning each one a label from the set inbox, work, shopping, spam. Or we might try to predict what a customer will buy from the set shirt, mug, bumper_sticker, no_purchase. If we have a dataset where each example has numerical features and a known categorical label, we can train a multiclass classifier.
Related problems: binary, multiclass, and multilabel
Multiclass classification is related to two other machine learning tasks, binary classification and the multilabel problem. Binary classification is already supported by linear learner, and multiclass classification is available with linear learner starting today, but multilabel support is not yet available from linear learner.
If there are only two possible labels in your dataset, then you have a binary classification problem. Examples include predicting whether a transaction will be fraudulent or not based on transaction and customer data, or detecting whether a person is smiling or not based on features extracted from a photo. For each example in your dataset, one of the possible labels is correct and the other is incorrect. The person is smiling or not smiling.
If there are more than two possible labels in your dataset, then you have a multiclass classification problem. For example, predicting whether a transaction will be fraudulent, cancelled, returned, or completed as usual. Or detecting whether a person in a photo is smiling, frowning, surprised, or frightened. There are multiple possible labels, but only one is correct at a time.
If there are multiple labels, and a single training example can have more than one correct label, then you have a multilabel problem. For example, tagging an image with tags from a known set. An image of a dog catching a Frisbee at the park might be labeled as outdoors, dog, and park. For any given image, those three labels could all be true, or all be false, or any combination. Although we haven’t added support for multilabel problems yet, there are a couple of ways you can solve a multilabel problem with linear learner today. You can train a separate binary classifier for each label. Or you can train a multiclass classifier and predict not only the top class, but the top k classes, or all classes with probability scores above some threshold.
Linear learner uses a softmax loss function to train multiclass classifiers. The algorithm learns a set of weights for each class, and predicts a probability for each class. We might want to use these probabilities directly, for example if we’re classifying emails as inbox, work, shopping, spam and we have a policy to flag as spam only if the class probability is over 99.99%. But in many multiclass classification use cases, we’ll simply take the class with highest probability as the predicted label.
Hands-on example: predicting forest cover type
As an example of multiclass prediction, let’s take a look at the Covertype dataset (copyright Jock A. Blackard and Colorado State University). The dataset contains information collected by the US Geological Survey and the US Forest Service about wilderness areas in northern Colorado. The features are measurements like soil type, elevation, and distance to water, and the labels encode the type of trees - the forest cover type - for each location. The machine learning task is to predict the cover type in a given location using the features. We’ll download and explore the dataset, then train a multiclass classifier with linear learner using the Python SDK.
[ ]:
# import data science and visualization libraries %matplotlib inline from sklearn.model_selection import train_test_split import pandas as pd import numpy as np import seaborn as sns import boto3
[ ]:
# download the raw data s3 = boto3.client("s3") s3.download_file( f"sagemaker-sample-files", "datasets/tabular/uci_covtype/covtype.data.gz", "covtype.data.gz" )
[ ]:
# unzip the raw dataset !gunzip covtype.data.gz
[ ]:
# read the csv and extract features and labels covtype = pd.read_csv("covtype.data", delimiter=",", dtype="float32").values covtype_features, covtype_labels = covtype[:, :54], covtype[:, 54] # transform labels to 0 index covtype_labels -= 1 # shuffle and split into train and test sets np.random.seed(0) train_features, test_features, train_labels, test_labels = train_test_split( covtype_features, covtype_labels, test_size=0.2 ) # further split the test set into validation and test sets val_features, test_features, val_labels, test_labels = train_test_split( test_features, test_labels, test_size=0.5 )
Note that we transformed the labels to a zero index rather than an index starting from one. That step is important, since linear learner requires the class labels to be in the range [0, k-1], where k is the number of labels. Amazon SageMaker algorithms expect the
dtype of all feature and label values to be
float32. Also note that we shuffled the order of examples in the training set. We used the
train_test_split method from
numpy, which shuffles the rows by default. That’s
important for algorithms trained using stochastic gradient descent. Linear learner, as well as most deep learning algorithms, use stochastic gradient descent for optimization. Shuffle your training examples, unless your data have some natural ordering which needs to be preserved, such as a forecasting problem where the training examples should all have time stamps earlier than the test examples.
We split the data into training, validation, and test sets with an 80/10/10 ratio. Using a validation set will improve training, since linear learner uses the validation data to stop training once overfitting is detected. That means shorter training times and more accurate predictions. We can also provide a test set to linear learner. The test set will not affect the final model, but algorithm logs will contain metrics from the final model’s performance on the test set. Later on in this example notebook, we’ll also use the test set locally to dive a little bit deeper on model performance.
Exploring the data
Let’s take a look at the mix of class labels present in training data. We’ll add meaningful category names using the mapping provided in the dataset documentation.
[ ]:
# assign label names and count label frequencies label_map = { 0: "Spruce/Fir", 1: "Lodgepole Pine", 2: "Ponderosa Pine", 3: "Cottonwood/Willow", 4: "Aspen", 5: "Douglas-fir", 6: "Krummholz", } label_counts = ( pd.DataFrame(data=train_labels)[0] .map(label_map) .value_counts(sort=False) .sort_index(ascending=False) ) label_counts.plot(kind="barh", color="tomato", title="Label Counts")
We can see that some forest cover types are much more common than others. Lodgepole Pine and Spruce/Fir are both well represented. Some labels, such as Cottonwood/Willow, are extremely rare. Later in this example notebook, we’ll see how to fine-tune the algorithm depending on how important these rare categories are for our use case. But first we’ll train with the defaults for the best all-around model.
Training a classifier using the Amazon SageMaker Python SDK
We’ll use the high-level estimator class
LinearLearner to instantiate our training job and inference endpoint. For an example using the Python SDK’s generic
Estimator class, take a look at this previous example notebook. The generic Python SDK estimator offers some more control options, but
the high-level estimator is more succinct and has some advantages. One is that we don’t need to specify the location of the algorithm container we want to use for training. It will pick up the latest version of the linear learner algorithm. Another advantage is that some code errors will be surfaced before a training cluster is spun up, rather than after. For example, if we try to pass
n_classes=7 instead of the correct
num_classes=7, then the high-level estimator will fail immediately,
but the generic Python SDK estimator will spin up a cluster before failing.
[ ]:
import sagemaker from sagemaker.amazon.amazon_estimator import RecordSet import boto3 # instantiate the LinearLearner estimator object multiclass_estimator = sagemaker.LinearLearner( role=sagemaker.get_execution_role(), train_instance_count=1, train_instance_type="ml.m4.xlarge", predictor_type="multiclass_classifier", num_classes=7, )
Linear learner accepts training data in protobuf or csv content types, and accepts inference requests in protobuf, csv, or json content types. Training data have features and ground-truth labels, while the data in an inference request has only features. In a production pipeline, we recommend converting the data to the Amazon SageMaker protobuf format and storing it in S3. However, to get up and running quickly, we provide a convenience method
record_set for converting and uploading when the
dataset is small enough to fit in local memory. It accepts
numpy arrays like the ones we already have, so we’ll use it here. The
RecordSet object will keep track of the temporary S3 location of our data.
[ ]:
# wrap data in RecordSet objects train_records = multiclass_estimator.record_set(train_features, train_labels, channel="train") val_records = multiclass_estimator.record_set(val_features, val_labels, channel="validation") test_records = multiclass_estimator.record_set(test_features, test_labels, channel="test")
[ ]:
# start a training job multiclass_estimator.fit([train_records, val_records, test_records])
Multiclass classification metrics
Now that we have a trained model, we want to make predictions and evaluate model performance on our test set. For that we’ll need to deploy a model hosting endpoint to accept inference requests using the estimator API:
[ ]:
# deploy a model hosting endpoint multiclass_predictor = multiclass_estimator.deploy( initial_instance_count=1, instance_type="ml.m4.xlarge" )
We’ll add a convenience function for parsing predictions and evaluating model metrics. It will feed test features to the endpoint and receive predicted test labels. To evaluate the models we create, we’ll capture predicted test labels and compare them to actuals using some common multiclass classification metrics. As mentioned earlier, we’re extracting the
predicted_label from each response payload. That’s the class with the highest predicted probability. We’ll get one class label per
example. To get a vector of seven probabilities for each example (the predicted probability for each class) , we would extract the
score from the response payload. Details of linear learner’s response format are in the documentation.
[ ]:
def evaluate_metrics(predictor, test_features, test_labels): """ Evaluate a model on a test set using the given prediction endpoint. Display classification metrics. """ # split the test dataset into 100 batches and evaluate using prediction endpoint prediction_batches = [predictor.predict(batch) for batch in np.array_split(test_features, 100)] # parse protobuf responses to extract predicted labels extract_label = lambda x: x.label["predicted_label"].float32_tensor.values test_preds = np.concatenate( [np.array([extract_label(x) for x in batch]) for batch in prediction_batches] ) test_preds = test_preds.reshape((-1,)) # calculate accuracy accuracy = (test_preds == test_labels).sum() / test_labels.shape[0] # calculate recall for each class recall_per_class, classes = [], [] for target_label in np.unique(test_labels): recall_numerator = np.logical_and( test_preds == target_label, test_labels == target_label ).sum() recall_denominator = (test_labels == target_label).sum() recall_per_class.append(recall_numerator / recall_denominator) classes.append(label_map[target_label]) recall = pd.DataFrame({"recall": recall_per_class, "class_label": classes}) recall.sort_values("class_label", ascending=False, inplace=True) # calculate confusion matrix label_mapper = np.vectorize(lambda x: label_map[x]) confusion_matrix = pd.crosstab( label_mapper(test_labels), label_mapper(test_preds), rownames=["Actuals"], colnames=["Predictions"], normalize="index", ) # display results sns.heatmap(confusion_matrix, annot=True, fmt=".2f", cmap="YlGnBu").set_title( "Confusion Matrix" ) ax = recall.plot( kind="barh", x="class_label", y="recall", color="steelblue", title="Recall", legend=False ) ax.set_ylabel("") print("Accuracy: {:.3f}".format(accuracy))
[ ]:
# evaluate metrics of the model trained with default hyperparameters evaluate_metrics(multiclass_predictor, test_features, test_labels)
The first metric reported is accuracy. Accuracy for multiclass classification means the same thing as it does for binary classification: the percent of predicted labels which match ground-truth labels. Our model predicts the right type of forest cover over 72% of the time.
Next we see the confusion matrix and a plot of class recall for each label. Recall is a binary classification metric which is also useful in the multiclass setting. It measures the model’s accuracy when the true label belongs to the first class, the second class, and so on. If we average the recall values across all classes, we get a metric called macro recall, which you can find reported in the algorithm logs. You’ll also find macro precision and macro f-score, which are constructed the same way.
The recall achieved by our model varies widely among the classes. Recall is high for the most common labels, but is very poor for the rarer labels like Aspen or Cottonwood/Willow. Our predictions are right most of the time, but when the true cover type is a rare one like Aspen or Cottonwood/Willow, our model tends to predict wrong.
A confusion matrix is a tool for visualizing the performance of a multiclass model. It has entries for all possible combinations of correct and incorrect predictions, and shows how often each one was made by our model. It has been row-normalized: each row sums to one, so that entries along the diagonal correspond to recall. For example, the first row shows that when the true label is Aspen, the model predicts correctly only 1% of the time, and incorrectly predicts Lodgepole Pine 95% of the time. The second row shows that when the true forest cover type is Cottonwood/Willow, the model has 27% recall, and incorrectly predicts Ponderosa Pine 65% of the time. If our model had 100% accuracy, and therefore 100% recall in every class, then all of the predictions would fall along the diagonal of the confusion matrix.
It’s normal that the model performs poorly on very rare classes. It doesn’t have much data to learn about them, and it was optimized for global performance. By default, linear learner uses the softmax loss function, which optimizes the likelihood of a multinomial distribution. It’s similar in principle to optimizing global accuracy.
But what if one of the rare class labels is especially important to our use case? For example, maybe we’re predicting customer outcomes, and one of the potential outcomes is a dissatisfied customer. Hopefully that’s a rare outcome, but it might be one that’s especially important to predict and act on quickly. In that case, we might be able to sacrifice a bit of overall accuracy in exchange for much improved recall on rare classes. Let’s see how.
Training with balanced class weights
Class weights alter the loss function optimized by the linear learner algorithm. They put more weight on rarer classes so that the importance of each class is equal. Without class weights, each example in the training set is treated equally. If 80% of those examples have labels from one overrepresented class, that class will get 80% of the attention during model training. With balanced class weights, each class has the same amount of influence during training.
With balanced class weights turned on, linear learner will count label frequencies in your training set. This is done efficiently using a sample of the training set. The weights will be the inverses of the frequencies. A label that’s present in 1/3 of the sampled training examples will get a weight of 3, and a rare label that’s present in only 0.001% of the examples will get a weight of 100,000. A label that’s not present at all in the sampled training examples will get a weight of 1,000,000 by
default. To turn on class weights, use the
balance_multiclass_weights hyperparameter:
[ ]:
# instantiate the LinearLearner estimator object balanced_multiclass_estimator = sagemaker.LinearLearner( role=sagemaker.get_execution_role(), train_instance_count=1, train_instance_type="ml.m4.xlarge", predictor_type="multiclass_classifier", num_classes=7, balance_multiclass_weights=True, )
[ ]:
# start a training job balanced_multiclass_estimator.fit([train_records, val_records, test_records])
[ ]:
# deploy a model hosting endpoint balanced_multiclass_predictor = balanced_multiclass_estimator.deploy( initial_instance_count=1, instance_type="ml.m4.xlarge" )
[ ]:
# evaluate metrics of the model trained with balanced class weights evaluate_metrics(balanced_multiclass_predictor, test_features, test_labels)
The difference made by class weights is immediately clear from the confusion matrix. The predictions now line up nicely along the diagonal of the matrix, meaning predicted labels match actual labels. Recall for the rare Aspen class was only 1%, but now recall for every class is above 50%. That’s a huge improvement in our ability to predict rare labels correctly.
But remember that the confusion matrix has each row normalized to sum to 1. Visually, we’ve given each class equal weight in our diagnostic tool. That emphasizes the gains we’ve made in rare classes, but it de-emphasizes the price we’ll pay in terms of predicting more common classes. Recall for the most common class, Lodgepole Pine, has gone from 81% to 52%. For that reason, overall accuracy also decreased from 72% to 59%. To decide whether to use balanced class weights for your application, consider the business impact of making errors in common cases and how it compares to the impact of making errors in rare cases.
Finally, we’ll delete the hosting endpoints. The machines used for training spin down automatically, but the hosting endpoints remain active until you shut them down.
[ ]:
# delete endpoints multiclass_predictor.delete_endpoint() balanced_multiclass_predictor.delete_endpoint()
Conclusion
In this example notebook, we introduced the new multiclass classification feature of the Amazon SageMaker linear learner algorithm. We showed how to fit a multiclass model using the convenient high-level estimator API, and how to evaluate and interpret model metrics. We also showed how to achieve higher recall for rare classes using linear learner’s automatic class weights calculation. Try Amazon SageMaker and linear learner on your classification problems today!
[ ]: | https://sagemaker-examples.readthedocs.io/en/latest/scientific_details_of_algorithms/linear_learner_multiclass_classification/linear_learner_multiclass_classification.html | 2022-05-16T16:19:36 | CC-MAIN-2022-21 | 1652662510138.6 | [] | sagemaker-examples.readthedocs.io |
3. The internal filesystem¶
If your devices has 1Mbyte or more of storage then it will be set up (upon first boot) to contain a filesystem. This filesystem uses the FAT format and is stored in the flash after the MicroPython firmware.
3.1. Creating and reading files¶
MicroPython on the ESP8266 supports the standard way of accessing files in
Python, using the built-in
open() function.
To create a file try:
>>> f = open('data.txt', 'w') >>> f.write('some data') 9 >>> f.close()
The “9” is the number of bytes that were written with the
write() method.
Then you can read back the contents of this new file using:
>>> f = open('data.txt') >>> f.read() 'some data' >>> f.close()
Note that the default mode when opening a file is to open it in read-only mode,
and as a text file. Specify
'wb' as the second argument to
open for writing in binary mode, and
'rb' to open for reading in binary
mode.
3.2. Listing file and more¶
The os module can be used for further control over the filesystem. First import the module:
>>> import os
Then try listing the contents of the filesystem:
>>> os.listdir() ['boot.py', 'port_config.py', 'data.txt']
You can make directories:
>>> os.mkdir('dir')
And remove entries:
>>> os.remove('data.txt')
3.3. Start up scripts¶
There are two files that are treated specially by the ESP8266 when it starts up: boot.py and main.py. The boot.py script is executed first (if it exists) and then once it completes the main.py script is executed. You can create these files yourself and populate them with the code that you want to run when the device starts up.
3.4. Accessing the filesystem via WebREPL¶
You can access the filesystem over WebREPL using the web client in a browser or via the command-line tool. Please refer to Quick Reference and Tutorial sections for more information about WebREPL. | https://docs.micropython.org/en/latest/esp8266/tutorial/filesystem.html | 2022-05-16T16:24:23 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.micropython.org |
Property-Map-DecisionTree method
Use the Property-Map-DecisionTree method in to evaluate a decision tree ( Rule-Declare-DecisionTree rule type) and store the result as the value of a property.
In the Diagram tab of a flow rule, the Decision shape can reference a decision tree.
Parameters
This method has four parameters:
Results
The system forms a decision tree key using the second parameter and the class of the step page or primary page. It uses rule resolution to locate the decision tree to be evaluated.
It then takes as input to the decision tree evaluation either the value of the third parameter, or the value of the Property field on the Input tab of the decision tree. It evaluates the decision tree in the context of this input value and the current clipboard.
It stores the results in the property you identify in the first parameter. Typically, the flow referencing this decision tree chooses which connector to follow from the decision shape based on this property value.
Trapping not found conditions
If the AllowMissingProperties parameter is not selected and a needed property is not present on the clipboard, the Property-Map-DecisionTree method places an output parameter DecisionTreeInvalidProperty on the parameter page of the current activity. This output parameter identifies the name and class of the missing property.
This facility can be useful in those situations where a user can be prompted for the missing property. The Pega Community article cited below provides an example.
Checking the method status
This method updates the pxMethodStatus property. See How to test method results using a transition.
Pega Community noteSee the Pega Community article How to evaluate a decision tree and handle errors.
Previous topic Property-Map-DecisionTable method Next topic Property-Map-Value method | https://docs.pega.com/reference/86/property-map-decisiontree-method | 2022-05-16T16:17:27 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.pega.com |
Cash balanceTime to read: 2 minutes.
Cash balance count
We indicate the balance in the cash register here. A difference is automatically counted, which you can book.
For example, the cash register is posted when you deposit.
For example, a withdrawal can be booked for the withdrawal or the tip.
After the count, the closing balance appears in the module administration. Further actions can be carried out here:
For example, a withdrawal is now created:
All transactions can be displayed in the journal and also exported.
Installation
Glarotech GmbH hosting customers will have the module installed after ordering.
System requirements
To be able to use the DPD module, a PepperShop v.8.0 Professional is required.
Support
A detailed help text can be found directly in the module itself by clicking on the help button. For questions or the forums are available at | https://docs.peppershop.com/latest/en/cash-balance/ | 2022-05-16T15:22:57 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.peppershop.com |
Windows Internet Explorer Application Compatibility
This document is designed to be a companion to the Microsoft Application Compatibility Toolkit (ACT) (which is part of the Internet Explorer Compatibility Test Tool). The Windows Internet Explorer Compatibility Test Tool logs information about your browsing session in Internet Explorer.
Introduction
As you browse webpages, Internet Explorer logs events that indicate potential application compatibility issues for Windows Internet Explorer 8. The Internet Explorer Compatibility Test Tool logs the name of each event along with a short description. Each description also contains a link to this documentation. The intention is that users of the Internet Explorer Compatibility Test Tool use this document to find out more about each event and what they can do to remediate the identified compatibility issue.
The remainder of this section contains a topic page for each of the events that can be logged by the Internet Explorer Compatibility Test Tool. After the user has navigated to this document, the intention is that the table of contents will be used to link directly to the event that the user is interested in. The section for each event contains the following information:
- Logged Message– This is a copy of the event description that you will see in the Internet Explorer Compatibility Test Tool.
- What is it?– This is an elaboration of the logged message explaining what the event is. Additional references are provided when available.
- When is this event logged?– This is a short description of what has to happen in your webpage for this event to be logged in the Internet Explorer Compatibility Test Tool.
- Example– Most events include examples that demonstrate how to make the corresponding event create a log entry in the compatibility tool. These examples help make the description of the event more concrete.
- Remediation– Guidance on what you can do to eliminate the incompatibility from your website.
The Remediation section for each event has been written to be as complete as possible. Be aware that sometimes this guidance is short; there is not always a workaround. Therefore, some guidance is intended to simply educate you about the issue so that you can design your site appropriately.
In many cases remediation guidance includes steps to disable a particular feature. It's important to understand that the first and best option is always to redesign your application to eliminate the compatibility issue. Disabling a feature (many of which are security related) may fix a particular compatibility issue, but it may also open a vulnerability in your browser. Disabling a feature is mainly useful during troubleshooting to observe behavior in an enabled versus disabled state. But on an on-going basis, disabling features should only be used as a last resort—and even then only as a short term solution.
When security issues come up they will be called out with the label Security Alert. Be sure to pay special attention to these warnings.
Internet Explorer Policy Settings
Starting with Windows XP Service Pack 2 (SP2), Windows Internet Explorer provides enhanced management capabilities through Group Policy. Prior to Windows XP SP2, many of the Internet Explorer security-related settings could only be managed by setting user preferences. This approach provided limited manageability because users could change their preference settings by using the Internet Explorer user interface or the registry.
Starting with Windows XP SP2, Internet Explorer settings can be managed by using .adm policy settings. These are referred to as "true policies." In Windows XP SP2 or later, you can manage all Internet Explorer security settings for both computer and user configurations with these new policy settings, making true policies secure and set only by an administrator.
Some of the events in this document include a discussion of how to enable or disable feature using either Group Policy or a registry setting. It's important to understand that there is a precedent in making a setting in one place or another. Internet Explorer looks for a policy setting in the following order:
- HKEY_LOCAL_MACHINE policy hive
- HKEY_CURRENT_USER policy hive
- HKEY_CURRENT_USER preference hive
- HKEY_LOCAL_MACHINE preference hive
The settings are applied as follows:
- Computer policies are applied when the computer starts.
- After computer policies, the user policies are applied when the user logs on.
- If neither computer nor user policy settings have been specified, user preferences are applied.
Generally, user policy settings override computer policy settings. And a particular setting on the local machine (set in the registry) is only applied if the same setting is not set in Group Policy.
For a more complete discussion of this issue, please see Internet Explorer Policy Settings.
Events
- Event 1021 - MIME Handling Restrictions
- Event 1022 - Windows Restrictions
- Event 1023 - Zone Elevation Restrictions
- Event 1024 - Binary Behaviors Restrictions
- Event 1025 - Object Caching Protection
- Event 1026 - ActiveX Blocking
- Event 1027 - Pop-Up Blocking
- Event 1028 - Automatic Download Blocking
- Event 1030 - Local Machine Zone Lockdown (LMZL)
- Event 1031 - Centralized URL Parsing
- Event 1032 - Internationalized Domain Names (IDN) Support
- Event 1033 - Secure Sockets Layer (SSL)
- Event 1034 - Cross-Domain Barrier and Script URL Mitigation
- Event 1035 - Anti-Phishing
- Event 1036 - Manage Add-ons
- Event 1037 - Protected Mode
- Event 1040 - Cascading Style Sheet (CSS Fixes)
- Event 1041 - UIPI Extension Blocked
- Event 1042 - UIPI Cross Process Window Message
- Event 1046 - Cross-Site Scripting Filter
- Event 1047 - Intranet at Medium Integrity Level
- Event 1048 - DEP/NX Crash Recovery
- Event 1049 - Standards Mode
- Event 1056 - File Name Restriction
- Event 1058 - Codepage Sniffing
- Event 1059 - Ajax Navigation
- Event 1061 - Application Protocol
- Event 1062 - Windows Reuse Navigation Restriction
- Event 1063 - MIME Restrictions - Authoritative Content Type Handling
- Event 1064 - MIME Sniffing Restrictions - No Image Elevation to HTML
- Event 1065 - Web Proxy Error Handling Changes
- Event 1073 - Certificate Filtering | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/dd565632(v=vs.85)?redirectedfrom=MSDN | 2019-09-15T14:15:26 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.microsoft.com |
Adding Existing SAML or LDAP Users to a PCF Deployment
This topic describes the procedure for adding existing SAML or LDAP users to a Pivotal Cloud Foundry (PCF) deployment enabled with SAML or LDAP.
The following two ways exist to add existing SAML or LDAP users to your PCF deployment:
Prerequisites
You must have the following to perform the procedures in this topic:
- Admin access to the Ops Manager Installation Dashboard for your PCF deployment
- The Cloud Foundry Command Line Interface (cf CLI) v6.23.0 or later
Option 1: Import Users in Bulk
You can import SAML or LDAP users in bulk by using the UAA Bulk Import Tool. See the UAA Users Import README for instructions about installing and using the tool.
Option 2: Add Users Manually
Perform the procedures below to add existing SAML or LDAP users to your PCF deployment manually.
Step 1: Create User
Perform the following steps to add a SAML or LDAP user:
- Run
cf target target the API endpoint for your PCF deployment. Replace
YOUR-SYSTEM-DOMAINwith your system domain. For example:
$ cf target
- Run
cf loginand provide credentials for an account with the Admin user role:
$ cf login
- Run
cf create-user EXAMPLE-USERNAME --origin YOUR-PROVIDER-NAMEto create the user in UAA. Replace
EXAMPLE-USERNAMEwith the username of the SAML or LDAP user you wish to add, and select one of the options below:
- For LDAP, replace
YOUR-PROVIDER-NAMEwith
ldap. For example:
$ cf create-user [email protected] --origin ldap
- For SAML, replace
YOUR-PROVIDER-NAMEwith the name of the SAML provider you provided when configuring Ops Manager. For example:
$ cf create-user [email protected] --origin example-saml-provider
Step 2: Associate User with Org or Space Role
After creating the SAML or LDAP user, you must associate the user with either an Org or Space role.
For more information about roles, see the Roles and Permissions section of the Orgs, Spaces, Roles, and Permissions topic.
Associate User with Org Role
Run
cf set-org-role USERNAME YOUR-ORG ROLE to associate the SAML or LDAP user with an Org role. Replace
USERNAME with the name of the SAML or LDAP user, and replace
YOUR-ORG with the name of your Org.
For
ROLE, enter one of the following:
OrgManager: Org Managers can invite and manage users, select and change plans, and set spending limits.
BillingManager: Billing Managers can create and manage the billing account and payment information.
OrgAuditor: Org Auditors have read-only access to Org information and reports.
Example:
$ cf set-org-role [email protected] my-org OrgManager
Associate User with Space Role
Run
cf set-space-role USERNAME YOUR-ORG YOUR-SPACE ROLE to associate the SAML or LDAP user with a Space role. Replace
USERNAME with the name of the SAML or LDAP user, replace
YOUR-ORG with the name of your Org, and
YOUR-SPACE with the name of a Space in your Org.
For
ROLE, enter one of the following:
SpaceManager: Space Managers can invite and manage users, and enable features for a given Space.
SpaceDeveloper: Space Developers can create and manage apps and services, and see logs and reports.
SpaceAuditor: Space Auditors can view logs, reports, and settings on this Space.
Example:
$ cf set-space-role [email protected] my-org my-space SpaceDeveloper | https://docs.pivotal.io/pivotalcf/2-6/opsguide/external-user-management.html | 2019-09-15T14:23:07 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
Configuring Spring Boot Actuator Endpoints for Apps Manager. below") }.17" } | https://docs.pivotal.io/pivotalcf/2-6/console/spring-boot-actuators.html | 2019-09-15T14:10:36 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
numpy.lexsort¶
numpy.
lexsort(keys, axis=-1)¶
Perform an indirect stable sort using a sequence of keys.
Given multiple sorting keys, which can be interpreted as columns in a spreadsheet, lexsort returns an array of integer indices that describes the sort order by multiple columns. The last key in the sequence is used for the primary sort order, the second-to-last key for the secondary sort order, and so on. The keys argument must be a sequence of objects that can be converted to arrays of the same shape. If a 2D array is provided for the keys argument, it’s rows are interpreted as the sorting keys and sorting is according to the last row, second last row etc.
See also
argsort
- Indirect sort.
ndarray.sort
- In-place sort.
sort
- Return a sorted copy of an array. a, then by b >>> ind array(would]) | https://docs.scipy.org/doc/numpy-1.17.0/reference/generated/numpy.lexsort.html | 2019-09-15T14:10:54 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.scipy.org |
pointpats.PoissonPointProcess¶
- class
pointpats.
PoissonPointProcess(window, n, samples, conditioning=False, asPP=False)[source]¶
Poisson point process including \(N\)-conditioned CSR process and \(\lambda\)-conditioned CSR process.
- Parameters
- window
Window
Bounding geometric object to contain point process realizations.
- nint
Size of each realization.
- sampleslist
Number of realizations.
- conditioningbool
If True, use the \(\lambda\)-conditioned CSR process, number of events would vary across realizations; if False, use the \(N\)-conditioned CSR process.
- asPPbool
Control the data type of value in the “realizations” dictionary. If True, the data type is point pattern as defined in pointpattern.py; if False, the data type is an two-dimensional array. \(N\)-conditioned csr process in the same window (10 points, 2 realizations)
>>> np.random.seed(5) >>> samples1 = PoissonPointProcess(window, 10, 2, conditioning=False, asPP=False) >>> samples1.realizations[0] # the first realized event points array([[-81.80326547, 36.77687577], [-78.5166233 , 37.34055832], [-77.21660795, 37.7491503 ], [-79.30361037, 37.40467853], [-78.61625258, 36.61234487], [-81.43369537, 37.13784646], [-80.91302108, 36.60834063], [-76.90806444, 37.95525903], [-76.33475868, 36.62635347], [-79.71621808, 37.27396618]])
2. Simulate a \(\lambda\)-conditioned csr process in the same window (10 points, 2 realizations)
>>> np.random.seed(5) >>> samples2 = PoissonPointProcess(window, 10, 2, conditioning=True, asPP=True) >>> samples2.realizations[0].n # the size of first realized point pattern 10 >>> samples2.realizations[1].n # the size of second realized point pattern 13
-: 1. | https://pointpats.readthedocs.io/en/v2.1.0/generated/pointpats.PoissonPointProcess.html | 2019-09-15T14:32:05 | CC-MAIN-2019-39 | 1568514571506.61 | [] | pointpats.readthedocs.io |
infix :
Documentation for infix
: assembled from the following types:
language documentation Operators
(Operators) infix :
Used as an argument separator just like infix
, and marks the argument to its left as the invocant. That turns what would otherwise be a function call into a method call.
substr('abc': 1); # same as 'abc'.substr(1)
Infix
: is only allowed after the first argument of a non-method call. In other positions, it's a syntax error. | http://docs.perl6.org/routine/: | 2019-09-15T14:17:53 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.perl6.org |
AWS Flow Framework for Ruby Developer Guide. click Create a Free Account. | https://docs.aws.amazon.com/amazonswf/latest/awsrbflowguide/ | 2019-09-15T14:54:02 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.aws.amazon.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Lambda::Types::Environment
Overview
Note:
When passing Environment as input to an Aws::Client method, you can use a vanilla Hash:
{ variables: { "EnvironmentVariableName" => "EnvironmentVariableValue", }, }
A function's environment variable settings.
Returned by:
Instance Attribute Summary collapse
- #variables ⇒ Hash<String,String>
Environment variable key-value pairs.
Instance Attribute Details
#variables ⇒ Hash<String,String>
Environment variable key-value pairs. | https://docs.aws.amazon.com/sdkforruby/api/Aws/Lambda/Types/Environment.html | 2019-09-15T14:50:38 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.aws.amazon.com |
plugin:static:*
For each
.js and
.css file in your plugin’s
static directory, a template is generated named after the file. For example,
plugin:static:example.js and
plugin:static:example.css.
These templates always render as an empty string, but as a side effect, set the CSS or client side JavaScript resource to be included in the generated HTML page.
This only works when you’re using the default HTML layout. If you’re generating the entire page yourself, you’ll have to write your own tags.
This is an alternative to calling the
useStaticResource() function on the
Response object, but can be used when you’re not generating the entire response, for example, responding to a hook.
If you want to include generated CSS and JavaScript files, use the
std:resources template.
View
These templates do not use the
view argument to the
render() function. This allows them to be used to concisely mark that a template requires one of the static resources, without adding unnecessary boilerplate to your plugin.
A
<script> or
<link> tag will be generated for the resource, using the file extension to determine what kind of resource you’re including.
Example
{{>plugin:static:example.js}}{{>plugin:static:example.css}} <h2>A title</h2> <p> ... </p> | https://docs.haplo.org/plugin/misc/handlebars/std-template/plugin-static | 2019-09-15T14:51:47 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.haplo.org |
Step 2: Checking for Previously Referenced Assemblies.
Note
To revert to the behavior of the .NET Framework versions 1.0 and 1.1, which did not cache binding failures, include the <disableCachingBindingFailures> Element in your configuration file.
See Also
Reference
<disableCachingBindingFailures> Element
Concepts
How the Runtime Locates Assemblies
Step 1: Examining the Configuration Files
Step 3: Checking the Global Assembly Cache
Step 4: Locating the Assembly through Codebases or Probing
Partial Assembly References | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/aa98tba8%28v%3Dvs.100%29 | 2019-09-15T14:10:51 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.microsoft.com |
Contents Now Platform Capabilities Previous Topic Next Topic Roles installed with Notify Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Roles installed with Notify Notify adds the following roles. Role title [name] Description Contains roles Notify administrator[notify_admin] Administrator with privileges for Notify 2 functionality. workflow_admin workflow_creator workflow_publisher Notify viewer[notify_view] Can view notify content. This role has read-only access to the Notify Conference Calls table [notify_conference_call], Notify Conference Call Participants table [notify_participant], Notify Conference Call Participant Session table [notify_participant_session] and Notify Call table [notify_call]. The itil role inherits the notify_view role when the Incident Alert Management and the Notify plugins are activated. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/product/notify2/reference/r_NotifyRoles.html | 2019-09-15T14:50:45 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.servicenow.com |
routine permutations
Documentation for routine
permutations assembled from the following types:
class Any
(Any) method permutations
Defined as:
method permutations(|c)
Coerces the invocant to a
list by applying its
.list method and uses
List.permutations on it.
say <a b c>.permutations;# OUTPUT: «((a b c) (a c b) (b a c) (b c a) (c a b) (c b a))»say set(1,2).permutations;# OUTPUT: «((2 => True 1 => True) (1 => True 2 => True)) | http://docs.perl6.org/routine/permutations | 2019-09-15T14:15:35 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.perl6.org |
Setup Symfony¶
This example will use
symfony to install Symfony from within the Devilbox PHP container.
After completing the below listed steps, you will have a working Symfony setup ready to be served via http and https.
See also
Official Symfony
- Install Symfony
- Symlink webroot directory
- Enable Symfony prod (
app.php)
--symfony
See also
3. Install Symfony¶
Navigate into your newly created vhost directory and install Symfony with
symfony cli.
[email protected] in /shared/httpd $ cd my-symfony [email protected] in /shared/httpd/my-symfony $ symfony new symfony
How does the directory structure look after installation:
[email protected] in /shared/httpd/my-symfony $ tree -L 1 . └── symf-symfony $ ln -s symfony/web/ htdocs
How does the directory structure look after symlinking:
[email protected] in /shared/httpd/my-sw $ tree -L 1 . ├── symfony └── htdocs -> symfony/web 2 directories, 0 files
As you can see from the above directory structure,
htdocs is available in its expected
path and points to the frameworks entrypoint.
5. Enable Symfony prod (
app.php)¶
[email protected] in /shared/httpd/my-symfony $ cd symfony/web [email protected] in /shared/httpd/my-symfony/symfony/web $ ln -s app.php index.php | https://devilbox.readthedocs.io/en/latest/examples/setup-symfony.html | 2019-09-15T15:24:05 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['https://raw.githubusercontent.com/cytopia/icons/master/11x11/ext-link.png',
None], dtype=object) ] | devilbox.readthedocs.io |
Message-ID: <1637603262.20094.1568558368661.JavaMail.confluence@localhost.localdomain> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_20093_1263931599.1568558368660" ------=_Part_20093_1263931599.1568558368660 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Release Notes and a complete User Manual.Welcome to the = Online Documentation home for MyPMS, our premier Cloud-based property manag= ement system. Here you will find information on how to setup, configu= re, manage and optimize your BookingCenter MyPMS system as well as
Begin with the MyPMS System Overview and for properties just getti= ng started, continue to the Getting Started with MyPMS. Current users will find useful topic= s in the MyPMS Training Gui= de section with frequently used functions.
For a complete step-by-step guide to MyPMS, = go to MyPMS User Manual. T= o see details and instructions on our selection of Interfaces and Modules, = go to Interfaces and Modu= les. | https://docs.bookingcenter.com/exportword?pageId=1376264 | 2019-09-15T14:39:29 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.bookingcenter.com |
Editing Client Area Menus
WHMCS version 6 introduces a programmatic way to interact with the client area navigation and sidebars through hooks and modules.
Contents
- 1 Menu structure
- 2 Menu layout
- 3 Menu items
- 4 Menu item arrangement
- 5 Interacting with menus
- 6 Examples
- 6.1 Add a social media panel to the end of the sidebar
- 6.2 Add a special offer image and link to the top of the topmost sidebar
- 6.3 Move the “Contact Us” link to the secondary navigation bar and add more contact options
- 6.4 Add support hours and a custom message to the sidebar on the submit ticket page
Menu structure
The client area's navigation and sidebars are defined in a tree structure of menu item objects. Each menu item has one parent item and can have many child items. The only menu item with no parent an invisible root item that is not displayed on the page.
Navigation bars consist of the invisible menu root with children representing every item displayed in the navigation bar. Each of these items may have their own child items. These child items are rendered as that navigation bar item's dropdown menu.
- Navigation bar root Item
- navigation item
- navigation item
- dropdown item
- navigation item
- dropdown item
- dropdown item
- dropdown item
- navigation item
Navigation bars are displayed on every page in the client area, but their contents may change if a client is logged in or not. For instance, the navigation bar may show login and password recovery links if a user isn't logged in.
Sidebars
Like the navigation bar, the sidebars in WHMCS begin with an invisible menu root, but each child represents an individual panel in the side bar. Each panel item is rendered as an item within the panel in the WHMCS client area.
- Sidebar root Item
- panel
- panel item
- panel item
- panel item
- panel
- panel item
- panel item
- panel
- panel item
- panel item
- panel item
Sidebars help provide context for the data displayed on the page. Different pages in the client area may have different sidebar items. For example, a page to view an account may contain sidebar links to view that client’s open tickets or unpaid invoices.
Menu layout
Desktop mode
Likewise there are two sidebars on every client area page. The primary sidebar is displayed above the secondary side bar on the left side of the page. Sidebar content varies per page, though the primary sidebar typically displays information directly relevant to the page content while the secondary sidebar usually contains more general links.
Responsive mode
In responsive mode the primary and secondary navigation bars are displayed above the primary sidebar, followed by the page’s content with the secondary sidebar displayed at the bottom of the page.
Menu items
Menu items are modeled in code by the \WHMCS\View\Menu\Item class. These objects contain all of the information needed to render that menu item within a template, including their parent and child menu item relationships. Menu items can have the following aspects:
- A single parent item
- Multiple optional child items
- A name used to internally refer to the menu item
- A label to display when the item is rendered to the page. If no label is defined, then WHMCS render's the menu item's name
- A URI to link to when the user clicks or taps on a menu item
- An optional icon displayed to the left of the label. WHMCS has access to both the Glyphicons and Font Awesome libraries.
- An optional badge displayed to the right of the label, usually used for contextual information, such as the number of tickets in a status next to that statuses' name
- The order that a menu item is displayed in its parent's list of children.
Menu item arrangement
Sidebars
Hooks
WHMCS 6.0 introduces a number of hooks to allow menu interaction before they’re sent to the template renderer. Use WHMCS’ add_hook() function to call custom code when WHMCS reaches these hook points during page generation.
- ClientAreaPrimaryNavbar
- Called prior to rendering navigation bars.
- Passes the primary navigation bar object to the hook function.
- ClientAreaSecondaryNavbar
- Called prior to rendering navigation bars.
- Passes the secondary navigation bar object to the hook function.
- ClientAreaNavbars
- Called prior to rendering navigation bars.
- Passes no parameters to the hook function.
- ClientAreaPrimarySidebar
- Called prior to rendering sidebars.
- Passes the primary side bar object to the hook function.
- ClientAreaSecondarySidebar
- Called prior to rendering sidebars.
- Passes the secondary side bar object to the hook function.
- ClientAreaSidebars
- Called prior to rendering sidebars.
- Passes no parameters to the hook function
Direct Access
WHMCS allows direct manipulation of menu objects outside the hooks system for modules and other custom code that don’t use the hooks system. The built-in Menu class is an alias to an object repository that can retrieve all of WHMCS’ menu objects. Please note that if the menu isn’t generated by the page yet then an empty menu structure may exist that is overwritten by normal page generation. WHMCS recommends using the hooks system to interact with menus.
The Menu class has four static methods to retrieve menus:
- Item Menu::primaryNavbar()
- Retrieve the primary navigation bar.
- Item Menu::secondaryNavbar()
- Retrieve the secondary navigation bar.
- Item Menu::primarySidebar()
- Retrieve the primary sidebar.
- Item Menu::secondarySidebar()
- Retrieve the secondary sidebar.
WHMCS employs a number of pre-built sidebars in its built-in pages. These side bars are available to hook and module developers through the Menu::PrimarySidebar() and Menu::SecondarySidebar() methods. Call either of these methods with the name of the sidebar as the first parameter to retrieve the pre-built sidebar. WHMCS will build the side bar if it isn't already defined. See the table below for the currently defined pre-built sidebars and which pages they are used on.
Context
WHMCS’ menus, especially the sidebars, render information specific to the page in the client area that’s being accessed by the user. For instance, client information is passed to the “my account” page and ticket information is passed to the “view ticket page”. This data is passed to menu item objects as context. Context can be any PHP object or data type. The Menu class has two static methods for setting and retrieving context items:
- void Menu::addContext(string $key, mixed $value)
- Add $value to the menu context at the key $key, overriding existing values.
- mixed|null Menu::context(string $key)
- Retrieve the menu context at $key or null if no context exists at the key.
Examples
This example uses the ClientAreaSecondarySidebar hook and the menu item’s addChild()and moveToBack() methods. To add panel with links to the sidebar we must:
- Create a panel at the end of the sidebar.
- Retrieve the panel we just created.
- Add social media links to the panel.
Fortunately the Font Awesome library already has icons for all of these services. Create the includes/hooks/socialMediaPanel.php file in your WHMCS installation and enter the code below. Save the file and reload your WHMCS installation’s client area. WHMCS automatically loads all hooks the includes/hooks directory. The SecondarySidebar hook is registered with add_hook() and is consequently loaded every time WHMCS renders the secondary sidebar on page load.
<?php use WHMCS\View\Menu\Item as MenuItem; // Add social media links to the end of all secondary sidebars. add_hook('ClientAreaSecondarySidebar', 1, function (MenuItem $secondarySidebar) { // Add a panel to the end of the secondary sidebar for social media links. // Declare it with the name "social-media" so we can easily retrieve it // later. $secondarySidebar->addChild('social-media', array( 'label' => 'Social Media', 'uri' => '#', 'icon' => 'fas fa-thumbs-up', )); // Retrieve the panel we just created. $socialMediaPanel = $secondarySidebar->getChild('social-media'); // Move the panel to the end of the sorting order so it's always displayed // as the last panel in the sidebar. $socialMediaPanel->moveToBack(); // Add a Facebook link to the panel. $socialMediaPanel->addChild('facebook-link', array( 'uri' => '', 'label' => 'Like us on Facebook!', 'order' => 1, 'icon' => 'fab fa-facebook-f', )); // Add a Twitter link to the panel after the Facebook link. $socialMediaPanel->addChild('twitter-link', array( 'uri' => '', 'label' => 'Follow us on Twitter!', 'order' => 2, 'icon' => 'fab fa-twitter', )); // Add a Google+ link to the panel after the Twitter link. $socialMediaPanel->addChild('google-plus-link', array( 'uri' => '', 'label' => 'Add us to your circles!', 'order' => 3, 'icon' => 'fab fa-google-plus-g', )); });
This example uses the ClientAreaPrimarySidebar hook and the menu item’s getFirstChild() and setBodyHtml() methods. To add panel with links to the sidebar we must:
- Get the first panel from the primary sidebar.
- Set the panel’s body HTML.
Create the includes/hooks/specialOfferInSidebar.php file in your WHMCS installation and enter the code below. As with the previous example the new file and hook within the file is run on page load and adds the image and link to the topmost panel in the primary sidebar, no matter which page the client is accessing.
<?php use WHMCS\View\Menu\Item as MenuItem; add_hook('ClientAreaPrimarySidebar', 1, function (MenuItem $primarySidebar) { // The HTML for the link to the the special offer. $specialOfferHtml = <<<EOT <a href="//myawesomecompany.com/special-offer/"> <img src="/assets/img/catdeals.png" alt="Click here for amazing deals!"> Kitten says <strong><em>thanks</em></strong> for making us the best web host! </a> EOT; // Add a link to the special to the first panel's body HTML. It will render // above the panel's menu item list. $firstSidebar = $primarySidebar->getFirstChild(); if ($firstSidebar) { $firstSidebar->setBodyHtml($specialOfferHtml); } });
This example requires manipulating more than one menu bar. To do that we’ll use the ClientAreaNavbars hook and the Menu class to retrieve the primary and secondary navigation bars. We’ll use the menu item’s getChild(), removeChild(), addChild(), and moveToFront() methods and the static Menu::primaryNavbar() and Menu::secondaryNavbar() methods. Here’s what we’ll do:
- Save the “Contact Us” link from the primary navigation bar.
- Remove the “Contact Us” link from the primary navigation bar.
- Add an email link child to the “Contact Us” link.
- Add a phone link child to the “Contact Us” link.
- Add a map link child to the “Contact Us” link.
- Add the “Contact Us” link to the secondary navigation bar then move it to the beginning of the menu.
Create the includes/hooks/moveContactUsLink.php file in your WHMCS installation and enter the code below. As with the other examples this hook file is picked up on page load and rearranges the navigation bars with the appropriate dropdown items.
<?php add_hook('ClientAreaNavbars', 1, function () { // Get the current navigation bars. $primaryNavbar = Menu::primaryNavbar(); $secondaryNavbar = Menu::secondaryNavbar(); if (!is_null($primaryNavbar->getChild('Contact Us'))){ // Save the "Contact Us" link and remove it from the primary navigation bar. $contactUsLink = $primaryNavbar->getChild('Contact Us'); $primaryNavbar->removeChild('Contact Us'); // Add the email sales link to the link's drop-down menu. $contactUsLink->addChild('email-sales', array( 'label' => 'Email our sales team', 'uri' => '[email protected]', 'order' => 1, 'icon' => 'far fa-gem', )); // Add the call us link to the link's drop-down menu. $contactUsLink->addChild('call-us', array( 'label' => 'Call us', 'uri' => 'tel:+18005551212', 'order' => 2, 'icon' => 'fas fa-mobile-alt', )); // Add the map to the company to the link's drop-down menu. $contactUsLink->addChild('map', array( 'label' => '123 Main St. AnyTown, TX 11223, USA', 'uri' => 'https:\/\/maps.google.com/maps/place/some-map-data', 'order' => 3, 'icon' => 'fas fa-map-marker-alt', )); // Add the link and its drop-down children to the secondary navigation bar. $secondaryNavbar->addChild($contactUsLink); // Make sure the contact us link appears as the first item in the // secondary navigation bar. $contactUsLink->moveToFront(); }});
Since the powers that be want this to appear at the top of the sidebars we'll manipulate the primary sidebar via the ClientAreaPrimarySidebar hook. We’ll use the menu item’s addChild(), moveToFront(), and setBodyHtml() methods to add the new panel. The special message to the user addresses the logged in user by first name. Every sidebar has a "client" context available which contains the record of the client that is logged in or null if no client is logged in. The Menu::context() method will retrieve the client record for us. If the client is logged in then we'll use the client object's firstName property to address the user by name. Version 6.0 uses the very helpful Carbon date library internally. Carbon is available to third party developers, so we'll use it to determine if support is currently open. Here’s what we’ll do:
- Determine if the user is visiting submitticket.php.
- Add a "Support Hours" panel to the primary sidebar and move it to the front so it displays at the top.
- Create child items in the support hours panel saying when support is open and closed.
- Determine if support is currently open.
- If there is a user logged in then determine their first name.
- Assign the support hours' panel body HTML to a special message depending on the logged in user's first name and whether support is currently open.
Create the includes/hooks/addSupportHours.php file in your WHMCS installation and enter the code below. As with the other examples this hook file is picked up on page load and adds the custom panel and message to the primary sidebar before the submit ticket page renders.
<?php use Carbon\Carbon; use WHMCS\View\Menu\Item as MenuItem; // Add a helpful support hours notice to the top of the sidebar on the submit // ticket page. if (App::getCurrentFilename() == 'submitticket') { add_hook('ClientAreaPrimarySidebar', 1, function (MenuItem $primarySidebar) { // Create the support hours panel and make sure it's the first one // displayed. /** @var MenuItem $supportHours */ $supportHours = $primarySidebar->addChild('Support Hours'); $supportHours->moveToFront(); // Add hours to the panel. $supportHours->addChild( '<strong>Open</strong> 08:00-17:00 M-F', array( 'icon' => 'far fa-smile', 'order' => 1, ) ); $supportHours->addChild( '<strong>Closed</strong> Weekends', array( 'icon' => 'far fa-frown', 'order' => 2, ) ); // Add a custom notice to the support hours panel with the logged in // client's first name and a different message depending on whether // support is open. /** @var \WHMCS\User\Client $client */ $client = Menu::context('client'); $greeting = is_null($client) ? '' : ", <strong>{$client->firstName}</strong>"; $now = Carbon::now(); $supportIsOpen = $now->isWeekday() && $now->hour >= 8 && $now->hour <= 17; $supportHours->setBodyHtml( $supportIsOpen ? "Hi{$greeting}! We're open and will respond to your ticket soon!" : "Don't worry{$greeting}! We will respond on the next business day. Sit tight!" ); }); } | https://docs.whmcs.com/Editing_Client_Area_Menus | 2019-09-15T14:27:10 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.whmcs.com |
Listing variable products on Amazon
Requirements
Creating new variable listings on Amazon is not much different from creating simple products, but there are some things you should know:
- You will need to provide individual SKUs and UPCs (or EANs) for each single variation.
- You need to use global product attributes for your variations - read more about variation attributes
- Your variation attributes need to be allowed for the specific category you are using.
Variation Attributes
Not all categories (feed templates) allow you to list variable products - and the ones that do only allow selected variation attributes.
This means you can use attributes like Size and Color when you list clothes, but you won’t be able to use variation attributes like “Inseam” for example.
You may also need to select the right variation attributes in your listing profile. If your variation attribute is called "Shirt Size" in WooCommerce you need to tell WP-Lister that the size value should go to the column size_name. Visit your listing profile, search for the field size_name, click the magnifier icon next to the fields and select your product attribute Shirt Size.
If your attribute names match Amazon's requirements already, WP-Lister will fill in the values on its own. So if you’re using "Size" and "Color" it will populate the fields size_name and color_name automatically, even when they are left empty in the listing profile.
Once you have taken care of the above, you should be able to list variable products the same way as non-variable products.
FYI: Some feed templates allow different versions of the same attribute, like "color" and "ColorName" or "size" and "SizeName". To our knowledge, these versions are treated equally. Even Amazon Seller Support wasn't able to shed any light on why those different versions exist, so it doesn't matter which one you use.
Variable listings in WP-Lister
Because of how variations work on Amazon, where each variation is a “listing” with its own SKU and ASIN, you will see all variations as separate listings in WP-Lister as well – plus the parent variation, which “wraps” the child variations and has its own SKU and ASIN too.
So if you have a product that comes in 3 sizes and 4 colors, you’ll see not 12 but 13 listings in WP-Lister – which correspond to 13 rows that will be created in the product data feed. These are not duplicates so please don't delete them. Instead, check out the two view buttons on the top right next to the pagination - clicking the right one will only show parent variations and simple products.
This article is about listing new products which currently don’t exist on Amazon. For existing products which already have an ASIN, the workflow is different and explained here. | https://docs.wplab.com/article/15-listing-variable-products-on-amazon | 2019-09-15T14:25:15 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.wplab.com |
Configuring BOSH Director on OpenStack
- Step 1: Access Ops Manager
- Step 2: OpenStack Configs Page
- Step 3: (Optional) Advanced Config Page
- Step 4: Director Config Page
- Step 5: Create Availability Zones Page
- Step 6: Create Networks Page
- Step 7: Assign AZs and Networks Page
- Step 8: Security Page
- Step 9: BOSH DNS Config Page
- Step 10: Syslog Page
- Step 11: Resource Config Page
- Step 12: (Optional) Add Custom VM Extensions
- Step 13: Complete BOSH Director Installation
This topic describes how to configure BOSH Director after deploying Pivotal Cloud Foundry (PCF) on OpenStack. Use this topic when Installing Pivotal Cloud Foundry on OpenStack.
Before beginning this procedure, ensure that you have successfully completed all steps in Provisioning the OpenStack Infrastructure.
After you complete this procedure, follow the instructions in Configuring PAS.
Note: You can also perform the procedures in this topic using the Ops Manager API. For more information, see Using the Ops Manager API.
Step 1: Access Ops Manager
In a web browser, navigate to the fully qualified domain you created in the Create a DNS Entry step of Provisioning the OpenStack Infrastructure.: OpenStack Configs BOSH Director tile.
Select OpenStack Configs.
Complete the OpenStack Management Console Config page with the following information:
- Name: Enter a unique name for the OpenStack config.
- Authentication URL: Enter the Service Endpoint for the Identity service that you recorded in a previous step.
Keystone Version: Choose a Keystone version, either v2 or v3.
- If you choose v3, enter the OpenStack Keystone domain to authenticate against in the Domain field. For more information about Keystone domains in OpenStack, see Domains in the OpenStack documentation.
Username: Enter your OpenStack Horizon username. The
PrimaryProjectfor the user must be the project you are using to deploy PCF. For more information, see Manage projects and users in the OpenStack documentation.
Password: Enter your OpenStack Horizon password.
Tenant: Enter your OpenStack tenant name.
Region: Enter
RegionOne, or another region if recommended by your OpenStack administrator.
Select OpenStack Network Type: Select either Nova, the legacy OpenStack networking model, or Neutron, the newer networking model.
Ignore Server Availability Zone: Do not select the checkbox.
Security Group Name: Enter
opsmanager. You created this Security Group in the Configure Security step of Provisioning the OpenStack Infrastructure.
Key Pair Name: Enter the name of the key pair that you created in the Configure Security step of Provisioning the OpenStack Infrastructure.
SSH Private Key: In a text editor, open the key pair file that you downloaded in the Configure Security step of Provisioning the OpenStack Infrastructure. Copy and paste the contents of the key pair file into the field.
(Optional) API SSL Certificate: If you configured API SSL termination in your OpenStack Dashboard, enter your API SSL Certificate.
Disable DHCP: Do not select the checkbox unless your configuration requires it.
Boot From Volume: Enable to boot VMs from a Cinder volume.
Click Save.
(Optional) Click Add OpenStack Config to configure additional data centers. Click Save for each additional OpenStack config to add it successfully. For more information, see Managing Multiple Data Centers.
Step 3: (Optional) Advanced Config Page
Note: This is an advanced option. Most users leave this field blank.
In Ops Manager, select Advanced Infrastructure Config.
If your OpenStack environment requires specific connection options, enter them in the Connection Options field in JSON format. For example:
'connection_options' => { 'read_timeout' => 200 }
Note: Your connection options apply to all of your OpenStack configs.
Click Save.
Step 4: Director Config Page
In Ops Manager, select Director Config.
Enter one or more NTP servers in the NTP Servers (comma delimited) field. For example,
us.
Select a Database Location. By default, Ops Manager deploys and manages an Internal database for you. If you choose to use an External MySQL Database, complete the associated fields with information obtained from your external MySQL Database provider: Host, Port, Username, Password, and Database.
In addition, if you selected the Enable TLS for Director Database checkbox, you can complete the following optional fields:
- Enable TLS: Select 5: Create Availability Zones Page
In Ops Manager, select Create Availability Zones.
Enter the name of the availability zone that you selected in the Launch Ops Manager VM step of Provisioning the OpenStack Infrastructure.
(Optional) Select an OpenStack config name from the IaaS Configuration dropdown. The default is set to your first OpenStack config.
Enter the OpenStack Availability Zone of your OpenStack environment. Many OpenStack environments default to
nova.
Click Add for each additional OpenStack config you created in Step 2: OpenStack Configs Page. Give each AZ a unique Name and an IaaS Configuration with a different OpenStack config.
Click Save.
Step 6: step of Deploying BOSH and Ops Manager to OpenStack to ensure you have configured ICMP in your Security Group.
Use the following steps to create one or more Ops Manager networks using information from your OpenStack network:
- Click Add Network.
- Enter a unique Name for the network.
- Click Add Subnet to create one or more subnets for the Provisioning the OpenStack Infrastructure.
- For DNS, enter one or more Domain Name Servers.
- For Gateway, use the Gateway IP from the OpenStack page.
- For Availability Zones, select which Availability Zones to use with the network.
Click Save.
Note: After you deploy Ops Manager, you add subnets with overlapping Availability Zones to expand your network. For more information about configuring additional subnets, see Expanding Your Network with Additional Subnets.
Step 7: Assign AZs and Networks Page
Select Assign Availability Zones.
From the Singleton Availability Zone dropdown, select the availability zone that you created in a previous step. The BOSH Director installs in this Availability Zone.
Use the dropdown to select the Network that you created in a previous step. BOSH Director installs in this network.
Click Save.
Step 8: 11: Resource Config Page
Select Resource Config.
Adjust any values as necessary for your deployment, such as increasing the persistent disk size. Select Automatic from the dropdown to provision the amount of persistent disk predefined by the job. If the persistent disk field reads None, the job does not require persistent disk space.
Note: Ops Manager requires a Director VM with at least 8 GB memory.
Note: If you set a field to Automatic and the recommended resource allocation changes in a future version, Ops Manager automatically uses the updated recommended allocation.
Click Save.
Step 12: (Optional) Add Custom VM Extensions
Use the Ops Manager API to add custom properties to your VMs such as associated security groups and load balancers. For more information, see Managing Custom VM Extensions.
Step 13: Complete BOSH Director Installation
Click the Installation Dashboard link to return to the Installation Dashboard.
Click Review Pending Changes, then Apply Changes. If the following ICMP error message appears, click Ignore errors and start the install.
BOSH Director installs. The image shows the Changes Applied message that Ops Manager displays when the installation process successfully completes.
After you complete this procedure, follow the instructions in Configuring PAS.
Return to Installing Pivotal Cloud Foundry on OpenStack. | https://docs.pivotal.io/pivotalcf/2-6/om/openstack/config.html | 2019-09-15T13:51:59 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
NBIS Metagenomics Workflow¶
Setup
Preprocessing
De-novo assembly
Overview¶
This is a snakemake workflow for rocessing and analysis of metagenomic datasets. It can handle single- and paired-end data and can run on a local laptop with either Linux or OSX, or in a cluster environment.
The source code is available at BitBucket and is being developed as part of the NBIS bioinformatics infrastructure.
Installation¶
1. Clone the repository Checkout the latest version of this repository (to your current directory):
git clone cd nbis-meta
Change directory:
cd nbis-meta
2. Install the required software All the software needed to run this workflow is included as a Conda environment file. See the conda installation instructions for how to install conda on your system.
To create the environment
nbis-meta use the supplied
envs/environment.yaml file:
mkdir envs/nbis-meta conda env create -f envs/environment.yaml -p envs/nbis-meta
Next, add this directory to the envs_dirs in your conda config (this is to simplify activation of the environment and so that the full path of the environment installation isn’t shown in your bash prompt):
conda config --add envs_dirs $(pwd)/envs/
Activate the environment using:
conda activate envs/nbis-meta
You are now ready to start using the workflow!
Note
If you plan on using the workflow in a cluster environment running the SLURM workload manager (such as Uppmax) you should configure the workflow with the SLURM snakemake profile. See the documentation. | https://nbis-metagenomic-workflow.readthedocs.io/en/latest/ | 2019-09-15T14:36:11 | CC-MAIN-2019-39 | 1568514571506.61 | [] | nbis-metagenomic-workflow.readthedocs.io |
8.5.050.38 displays buffer overflow warnings and recording errors during MSML file-based call recording of MP3 files and GIR recording with MP3 files. (GVP-21268)
Calls no longer end prematurely after this warning appears: VGDTMFRecognitionThread.C:1349 Recognition session 4612 does not exist. Previously, after the warning displayed, recognition and transfer attemps failed and the call ended. (GVP-21212)
MCP now generates log messages with the correct log IDs (22022 for ASR and 22027 for TTS) when it is not able to connect to the MRCP ASR or TTS servers. (GVP-21149)
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RN/latest/gvp-mcp85rn/gvp-mcp8505038 | 2019-09-15T13:58:01 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.genesys.com |
iWD Runtime Node 8.5.x Release Note
This Release Note applies to all 8.5.x releases of iWD Runtime Node. Links in the Available Releases section enable you to access information regarding a specific release.
For information about 8.1.x releases of iWD Runtime Node,.
You can find Release Notes for particular releases of iWD Runtime Node Guide.
- Support for AIX operating system.
- Support for Solaris operating system.
Discontinued as of: 8.5.104.03
- Support for WebSphere Application Server 6.
- Support for Solaris/SPARC operating system version 9.
- Support for Red Hat Enterprise Linux 4 operating system.
- Support for MS Windows Server 2003 operating system.
- Support for IBM AIX operating system 5.3.
- Support for Oracle 10g and 10g RAC databases.
- Support for all versions of MySQL.
Discontinued as of: 8.5.000.15 iWD Runtime Node, including the issues that are specific to Localized (International) releases, at the following links:
Additional Information
Additional information on Genesys Telecommunications Laboratories, Inc. is available on our Customer Care website.
The following documentation also contains information about this software. Please consult the Deployment Guide first.
- The iWD Deployment Guide provides details about installing and configuring iWD Runtime Node.
-: | https://docs.genesys.com/Documentation/RN/latest/iwd-rt-node85rn/iwd-rt-node85rn | 2019-09-15T14:50:10 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.genesys.com |
Date
Time
Date Offset. Add Milliseconds(Double) Time
Date Offset. Add Milliseconds(Double) Time
Date Offset. Add Milliseconds(Double) Time
Method
Offset. Add Milliseconds(Double)
Definition
Returns a new DateTimeOffset object that adds a specified number of milliseconds to the value of this instance.
public: DateTimeOffset AddMilliseconds(double milliseconds);
public DateTimeOffset AddMilliseconds (double milliseconds);
member this.AddMilliseconds : double -> DateTimeOffset
Public Function AddMilliseconds (milliseconds As Double) As DateTimeOffset
Parameters
A number of whole and fractional milliseconds. The number can be negative or positive.
Returns
An object whose value is the sum of the date and time represented by the current DateTimeOffset object and the number of whole milliseconds represented by
milliseconds.
Exceptions
The resulting DateTimeOffset value is less than MinValue.
-or-
The resulting DateTimeOffset value is greater than MaxValue.
Remarks.
Note
This method returns a new DateTimeOffset object. It does not modify the value of the current object by adding
milliseconds to its date and time.
Because a DateTimeOffset object does not represent the date and time in a specific time zone, the AddMilliseconds method does not consider a particular time zone's adjustment rules when it performs date and time arithmetic. | https://docs.microsoft.com/en-gb/dotnet/api/system.datetimeoffset.addmilliseconds?view=netframework-4.8 | 2019-09-15T15:00:47 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.microsoft.com |
Configuring Service Connections for Grails
Cloud Foundry provides extensive support for connecting a Grails application to services such as MySQL, Postgres, MongoDB, Redis, and RabbitMQ. In many cases, a Grails application running on Cloud Foundry can automatically detect and configure connections to services. For more advanced cases, you can control service connection parameters yourself.
Auto-Configuration
Grails provides plugins for accessing SQL (using
Hibernate),
MongoDB, and
Redis services.
If you install any of these plugins and configure them in your
Config.groovy
or
DataSource.groovy file, Cloud Foundry reconfigures the plugin with
connection information when your app starts.
If you use all three types of services, your configuration might look like this:
environments { production { dataSource { url = 'jdbc:mysql://localhost/db?useUnicode=true&characterEncoding=utf8' dialect = org.hibernate.dialect.MySQLInnoDBDialect driverClassName = 'com.mysql.jdbc.Driver' username = 'user' password = "password" } grails { mongo { host = 'localhost' port = 27107 databaseName = "foo" username = 'user' password = 'password' } redis { host = 'localhost' port = 6379 password = 'password' timeout = 2000 } } } }
The
url,
host,
port,
databaseName,
username, and
password fields in this configuration will be overriden by the Cloud Foundry auto-reconfiguration if it detects that the application is running in a Cloud Foundry environment. If you want to test the application locally against your own services, you can put real values in these fields. If the application will only be run against Cloud Foundry services, you can put placeholder values as shown here, but the fields must exist in the configuration.
Manual Configuration
If you do not want to use auto-configuration, you can configure the Cloud Foundry service connections manually.
Follow the steps below to manually configure a service connection.
Add the
spring-cloudlibrary to the
dependenciessection of your
BuildConfig.groovyfile.
repositories { grailsHome() mavenCentral() grailsCentral() mavenRepo "" } dependencies { compile "org.springframework.cloud:spring-cloud-cloudfoundry-connector:1.0.0.RELEASE" compile "org.springframework.cloud:spring-cloud-spring-service-connector:1.0.0.RELEASE" }
Adding the
spring-cloudlibrary allows you to disable auto-configuration and use the
spring-cloudAPI in your
DataSource.groovyfile to set the connection parameters.
Add the following to your
grails-app/conf/spring/resources.groovyfile to disable auto-configuration:
beans = { cloudFactory(org.springframework.cloud.CloudFactory) }
Add the following
importsto your
DataSource.groovyfile to allow
spring-cloudAPI commands:
import org.springframework.cloud.CloudFactory import org.springframework.cloud.CloudException
Add the following code to your
DataSource.groovyfile to enable Cloud Foundry’s
getCloudmethod to function locally or in other environments outside of a cloud.
def cloud = null try { cloud = new CloudFactory().cloud } catch (CloudException) {}
Use code like the following to access the cloud object:
def dbInfo = cloud?.getServiceInfo('myapp-mysql') url = dbInfo?.jdbcUrl username = dbInfo?.userName password = dbInfo?.password
myapp-mysqlis the name of the service as it appears in the
namecolumn of the output from
cf services. For example,
mysqlor
rabbitmq.
The example
DataSource.groovy file below contains the following:
- The
importsthat allow
spring-cloudAPI commands
- The code that enables the
getCloudmethod to function locally or in other environments outside of a cloud
- Code to access the cloud object for SQL, MongoDB, and Redis services
import org.springframework.cloud.CloudFactory import org.springframework.cloud.CloudException def cloud = null try { cloud = new CloudFactory().cloud } catch (CloudException) {} dataSource { pooled = true dbCreate = 'update' driverClassName = 'com.mysql.jdbc.Driver' } environments { production { dataSource { def dbInfo = cloud?.getServiceInfo('myapp-mysql') url = dbInfo?.jdbcUrl username = dbInfo?.userName password = dbInfo?.password } grails { mongo { def mongoInfo = cloud?.getServiceInfo('myapp-mongodb') host = mongoInfo?.host port = mongoInfo?.port databaseName = mongoInfo?.database username = mongoInfo?.userName password = mongoInfo?.password } redis { def redisInfo = cloud?.getServiceInfo('myapp-redis') host = redisInfo?.host port = redisInfo?.port password = redisInfo?.password } } } development { dataSource { url = 'jdbc:mysql://localhost:5432/myapp' username = 'sa' password = '' } grails { mongo { host = 'localhost' port = 27107 databaseName = 'foo' username = 'user' password = 'password' } redis { host = 'localhost' port = 6379 password = 'password' } } } } | https://docs.pivotal.io/pivotalcf/2-6/buildpacks/java/configuring-service-connections/grails-service-bindings.html | 2019-09-15T14:52:47 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
What Happens During PAS Upgrades
This topic explains what happens to Pivotal Application Service (PAS) components and apps during a PAS upgrade.
BOSH Drains Diego Cell VMs
During a PAS the Specific Guidance for Diego Cells section of the Configuring PAS for Upgrades topic.
cf push Can Become Unavailable
cf push is mostly available for the duration of a PAS upgrade. However,
cf push can become unavailable when a single VM is in use or during BOSH Backup and Restore (BBR).
For more information, see cf push Availability During Pivotal Application Service Upgrades.
PAS Components Upgrade
This section describes the order in which Ops Manager upgrades components and runs tasks during a full platform upgrade. It also explains how the scale of different Pivotal Application Service (PAS) components affects uptime during upgrades, and which components are scalable.
When performing an upgrade, Ops Manager first upgrades individual components, and then runs one-time tasks.
The Components section describes how Ops Manager upgrades PAS components and explains how individual component upgrades affect broader PAS capabilities.
The One-Time Tasks section lists the tasks that Ops Manager runs after it upgrades the PAS components.
Components
Ops Manager upgrades PAS components in a fixed order that honors component dependencies and minimizes downtime and other system limitations during the upgrade process.
The type and duration of downtime and other limitations that you can expect during a PAS upgrade reflect the following:
Component instance scaling. See How Single-Component Scaling Affects Upgrades
Component upgrade order. See Component Upgrade Order and Behavior
How Single-Component Scaling Affects Upgrades
In Pivotal Cloud Foundry (PCF) Ops Manager, the Pivotal Application Service (PAS) tile > Resource Config pane the Component Upgrade Order and Behavior table below.
Note: A full Ops Manager upgrade may take close to two hours, and you will have limited ability to deploy an application during this time.
Component Upgrade Order and Behavior
The table below lists components in the order that Ops Manager upgrades each. It also lists which components are scalable and explains how component downtime affects PAS app and control availability. The table includes the following columns:
Scalable: Indicates whether the component is scalable above a single instance.
Note: For components marked with a checkmark in this column, we recommend that you change the preconfigured instance value of
1to.
Downtime Affects…: Indicates the plane of the PAS the following information:
- Component availability, behavior, and usage during an upgrade
- Guidance on disabling the component before an upgrade
One-Time Tasks
After Ops Manager upgrades components, it performs system checks and launches UI apps and other PAS components as Cloud Foundry apps. These tasks run in the following order:
Upgrading Installation Example
For sample performance measurements of an upgrading Cloud Foundry installation, see Upgrade Load Example: Pivotal Web Services. | https://docs.pivotal.io/pivotalcf/2-6/upgrading/understanding-pas.html | 2019-09-15T13:54:08 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.pivotal.io |
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here.
Class: Aws::QuickSight::Types::AccessDeniedException
- Defined in:
- gems/aws-sdk-quicksight/lib/aws-sdk-quicksight/types.rb
Overview.
Instance Attribute Summary collapse
- #message ⇒ String
- #request_id ⇒ String
The AWS request id for this request.
Instance Attribute Details
#message ⇒ String
#request_id ⇒ String
The AWS request id for this request. | https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/QuickSight/Types/AccessDeniedException.html | 2019-09-15T15:06:33 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.aws.amazon.com |
Entity replacements
This feature builds on top of the Entities system to allow you to specify alternatives to those entities. This provides per-instance customisation for workflows, giving some extra flexibility where required.
Usually this functionality will not be required, as the
Entities system will be sufficient. It’s worth avoiding the extra layer of abstraction if possible.
Using replacement entities
myWorkflow.use("std:entities:entity_replacement", { replacements: { // replacement entity definitions "editorOrNominee": { entity: "editor", assignableWhen: {flags: ["canAssignReplacements"]}, replacementTypes: [T.Researcher] }, "adminOrNominee": { entity: "admin", assignableWhen: {flags: ["canAssignReplacements"]}, selectableWhen: {flags: ["canAssignReplacements", "adminNotApproved"]}, // optional listAll: true, // optional replacementTypes: [T.Staff] } onRenderUI: function(M, ui) { ui.addSidebarButton("/do/example-path"); ui.addDeferredRender(deferredRenderObject); } } });
In use throughout the consuming workflow plugin, the usage is the same as any non-replaced entity (so from the above definition, using
M.entities.editorOrNominee_ref is valid).
Replacements are stored as one-to-one mappings between object Refs, so replacement is performed by a simple lookup operation.
If no replacement is specified, the original entity is returned.
Configuring entity replacements
use() the
std:entities:entity_replacement feature, passing in the specification object, which has keys:
key replacements
This is an object, specifying a mapping from new entity name to an object specifying which entity to replace, when the replacement can be specified, and what type of object can be used as a replacement.
The
entity string must be an entity that is already defined for the workflow.
assignableWhen is a Selector specifiying when the nominees for that role can be changed.
selectableWhen is an optional Selector which allows the user to choose whether that entity is selected for this workflow. This sets a flag on the workflow instance object
M, which follows the naming pattern
entity-selected_ENTITYNAME_ORIGINALREF. The ref used is that of the original entity, not the replacement, for uniqueness.
These flags can then be used for any custom logic the consuming workflow requires. If
selectableWhen is not used then the entity is always selected.
listAll is an optional boolean. When
true the underlying entity
_list is used, allowing a replacement to be chosen for each of those entities, if required.
replacementTypes is an array of schema types. The replacement StoreObject must have one of these types.
function onRenderUI
Optional. Passes in an object containing functions to add custom UI into the entity replacements overview page. The functions implemented are:
addDeferredRender(deferred)
Adds the
deferredRender() object passed in to be rendered at the bottom of the entity replacement page.
addSidebarButton(link, label, indicator)
Uses the url passed in as
link as the path for an action button rendered into the right-hand sidebar.
label and
indicator are optional, and have sensible default values.
User interface
The feature supplies a full functional user interface for viewing and selecting the replacement entities for a workflow. A link is added to the workflow obect’s Action Panel, linking to the Overview page.
An
entityReplacementUserInterfaceURL() method (taking no arguments) is added to the
M workflow object if the ui is needed to be accessed from elsewhere.
Overview
This gives a table of the entities that have been changed, their original value, and the replacement. If the
assignableWhen selector is
true, then a
set or
change link will be displayed to the user.
Form
A very simple form page, displaying the original entity and a lookup field for replacing it. The lookup is restricted to suggesting objects of the replacement’s
replacementTypes.
Text
The text for use in the ui should be added to the workflow Text interface.
The following names are searched:
Displayable entity names
- entity-replacement:display-name:editorOrNominee
- entity-replacement:display-name
Either a generic displayable value can be given, or a user-readable name supplied for each of your replacement entitites
Page titles
- entity-replacement:page-title:overview
- entity-replacement:page-title:form
- entity-replacement:page-title
These are optional, with sensible defaults used if left undefined | https://docs.haplo.org/standard/workflow/definition/std-features/entity-replacements | 2019-09-15T14:45:31 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.haplo.org |
Information architecture in modern SharePoint
Having a solid information architecture is an important prerequisite for realizing a well-maintained and well-performing portal. Designing the optimal structure requires good planning. Even with a good plan, information architecture is a continuous process. Over time, organizations change, people change, and projects change. Plus, the more you learn about your users, the more discoverable you can make your content. The modern experience in SharePoint makes it easier and faster to evolve your structure and navigation because it is more flexible with the new, flat structure of Hub sites.
Hub sites
Classic SharePoint architecture is typically built using a hierarchical system of site collections and sub-sites, with inherited navigation, permissions, and site designs. Once built, this structure can be inflexible and difficult to maintain. In the modern SharePoint experience, every site is a site collection, and all can be associated to a hub site which is a flat structure of sites that share navigation, branding, and other elements. This type of structure is far more flexible and adaptive to the changing needs of your organization. Learn about how to plan for Hub sites.
Navigation
The most effective SharePoint sites (and web sites differs.
No matter which framework you are using, you can use the guidance in Plan navigation in the modern experience to help make good decisions for navigation.
Feedback | https://docs.microsoft.com/en-us/sharepoint/information-architecture-modern-experience | 2019-09-15T15:19:55 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['sharepointonline/media/5f386901-5347-4dce-94db-9ec35b5746d5.png',
'HR hub'], dtype=object) ] | docs.microsoft.com |
>>. ® Enterprise: 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.3.0, 7.3.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/7.2.7/SearchReference/Bin | 2019-09-15T14:21:05 | CC-MAIN-2019-39 | 1568514571506.61 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Set a block of pixel colors.
This function takes a color,
RGB24 and
Alpha8 texture formats.
For other formats
SetPixels is ignored.
The texture also has to have read/write enabled flag set in the texture import settings.
Using
SetPixels can be much faster than calling SetPixel repeatedly, especially
for large textures. In addition,
SetPixels can access individual mipmap levels. For an even faster pixel data
access, use GetRawTextureData that returns a
NativeArray.
See Also: GetPixels, SetPixels32, Apply, GetRawTextureData, LoadRawTextureData, mipmapCount.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void Start() { Renderer rend = GetComponent<Renderer>();
// duplicate the original texture and assign to the material Texture2D texture = Instantiate(rend.material.mainTexture) as Texture2D; rend.material.mainTexture = texture;
// colors used to tint the first 3 mip levels Color[] colors = new Color[3]; colors[0] = Color.red; colors[1] = Color.green; colors[2] = Color.blue; int mipCount = Mathf.Min(3, texture.mipmapCount);
// tint each mip level for (int mip = 0; mip < mipCount; ++mip) { Color[] cols = texture.GetPixels(mip); for (int i = 0; i < cols.Length; ++i) { cols[i] = Color.Lerp(cols[i], colors[mip], 0.33f); } texture.SetPixels(cols, mip); } // actually apply all SetPixels, don't recalculate mip levels texture.Apply(false); } }
Set a block of pixel colors.
This function is an extended version of
SetPixels above; it does not modify the whole
mip level but modifies only
blockWidth by
blockHeight region starting at x,y.
The
colors array must be blockWidth*blockHeight size, and the modified block
must fit into the used mip level. | https://docs.unity3d.com/kr/current/ScriptReference/Texture2D.SetPixels.html | 2019-09-15T14:25:10 | CC-MAIN-2019-39 | 1568514571506.61 | [] | docs.unity3d.com |
WebExtensions API Development¶
This documentation covers the implementation of WebExtensions inside Firefox. Documentation about existing WebExtension APIs and how to use them to develop WebExtensions is available on MDN.
To use this documentation, you should already be familiar with WebExtensions, including the anatomy of a WebExtension and permissions. You should also be familiar with concepts from Firefox development including e10s in particular.
WebExtension API Developers Guide
- Background
- API Implementation Basics
- API Schemas
- Implementing a function
- Implementing an event
- Implementing a manifest property
- Managing the Extension Lifecycle
- Incognito Implementation
- Utilities for implementing APIs
- WebExtensions Javascript Component Reference | https://firefox-source-docs.mozilla.org/toolkit/components/extensions/webextensions/index.html | 2019-09-15T15:11:45 | CC-MAIN-2019-39 | 1568514571506.61 | [] | firefox-source-docs.mozilla.org |
pointpats.Lenv¶
- class
pointpats.
Lenv(pp, intervals=10, dmin=0.0, dmax=None, d=None, pct=0.05, realizations=None)[source]¶
Simulation envelope for Len) >>> lenv = Lenv(pp, realizations=csrs) >>> lenv.plot()
- Attributes
- namestring
Name of the function. (“G”, “F”, “J”, “K” or “L”)
- observedarray
A 2-dimensional numpy array of 2 columns. The first column is the distance domain sequence for the observed point pattern. The second column is, intervals=10, dmin=0.0, dmax=None, d=None, pct=0.05, realizations=None)[source]¶
Initialize self. See help(type(self)) for accurate signature.
Methods | https://pointpats.readthedocs.io/en/v2.1.0/generated/pointpats.Lenv.html | 2019-09-15T14:00:29 | CC-MAIN-2019-39 | 1568514571506.61 | [] | pointpats.readthedocs.io |
Revision history of "JHTML::script"::script (cleaning up content namespace and removing duplicated API references) | https://docs.joomla.org/index.php?title=JHTML::script&action=history | 2015-04-18T06:06:06 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
Difference between revisions of "What jdoc include types are available?" From Joomla! Documentation Redirect page Revision as of 08:37, 22 June 2013 (view source)Tom Hutchison (Talk | contribs)m (added Category:Template Development using HotCat)← Older edit Latest revision as of 10:48, 25 September 2013 (view source) Wilsonge (Talk | contribs) (Redirect to better page) [[Category:Template Development]]+ −</noinclude>+ Latest revision as of 10:48, 25 September 2013 Jdoc statements Retrieved from ‘’ | https://docs.joomla.org/index.php?title=What_jdoc_include_types_are_available%3F&diff=103962&oldid=100796 | 2015-04-18T05:26:33 | CC-MAIN-2015-18 | 1429246633799.48 | [] | docs.joomla.org |
4.1
LDAP
For this mode, Lenses relies on the LDAP server to handle the user authentication. The groups that a user belongs to (authorization) may come either from LDAP (automatic mapping), or via manually mapping an LDAP user to a set of Lenses groups.
Since the authentication is deferred, this means users are stored by LDAP and not Lenses. Once the authentication is successful, the next step involves querying LDAP for the user’s groups. All the user’s groups are then matched by name (case sensitive) with the groups stored in Lenses. All the matching groups' permissions are combined. If a user has been assigned manually a set of Lenses groups, then the groups coming from LDAP are ignored.
Active Directory (AD) and OpenLDAP (with the memberOf overlay if LDAP group mapping is required) servers are tested and supported in general. Due to the LDAP standard ambiguity, it is impossible to support all the configurations in the wild. The most usual pain point is LDAP group mapping. If the default class that extracts and maps LDAP groups to Lenses groups does not work, it is possible to implement your own.
Before setting up an LDAP connection, we advise to familiarize with LDAP and/or have access to your LDAP and/or Active Directory administrators.
An LDAP setup example with LDAP group mapping is shown below:
# LDAP connection details lenses.security.ldap.url="ldaps://example.com:636" ## For the LDAP user please use the distinguished name (DN). ## The LDAP user must be able to list users and their groups. lenses.security.ldap.user="cn=lenses,ou=Services,dc=example,dc=com" lenses.security.ldap.password="[PASSWORD]" ## When set to true, it uses the lenses.security.ldap.user to read the user's groups ## lenses.security.ldap.use.service.user.search=false # LDAP user search settings lenses.security.ldap.base="ou=Users,dc=example,dc=com" lenses.security.ldap.filter="(&(objectClass=person)(sAMAccountName=<user>))" # LDAP group search and mapping settings lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin" lenses.security.ldap.plugin.group.extract.regex="(?i)CN=(\\w+),ou=Groups.*" lenses.security.ldap.plugin.memberof.key="memberOf" lenses.security.ldap.plugin.person.name.key = "sn"
In the example above you can distinguish three key sections for LDAP:
- the connection settings,
- the user search settings,
- and the group search settings.
Lenses uses the connection settings to connect to your LDAP server. The provided account should be able to list users under the base path and their groups. The default group plugin only needs access to the memberOf attributes for each user, but your custom implementation may need different permissions.
When a user tries to login, a query is sent to the LDAP server for all accounts that are under the lenses.security.ldap.base and match the lenses.security.ldap.filter. The result needs to be unique; a distinguished name (DN) —the user that will login to Lenses.
In the example, the application would query the LDAP server for all entities
under ou=Users,dc=example,dc=com that satisfy the LDAP filter
(&(objectClass=person)(sAMAccountName=
Once the user has been verified, Lenses queries the user groups and maps them to Lenses groups. For every LDAP group that matches a Lenses group, the user is granted the selected permissions.
Depending on the LDAP setup, only one of the user or the Lenses service user may be able to retrieve the group memberships. This can be controlled by the option lenses.security.ldap.use.service.user.search.
The default value (false) uses the user itself to query for groups. Groups be can created in the admin section of the web interface, or in the command line via the lenses-cli application.
LDAP user group mapping
If mapping LDAP groups to Lenses groups is not desired. Manually map LDAP users to Lenses groups, using the web interface, or the lenses-cli.
- Create a user of type LDAP inside Lenses and assign them groups.
LDAP still provides the authentication, but all LDAP groups for this user are ignored.
When you create an LDAP user in Lenses, the username will be used in the search expression set in lenses.security.ldap.filter to authenticate them. If no user should be allowed to use the groups coming from LDAP, then this functionality should be disabled.
Additionaly you can set lenses.security.ldap.plugin.memberof.key or lenses.security.ldap.plugin.group.extract.regex to a bogus entry, rendering it unusable.
An example would be:
lenses.security.ldap.plugin.memberof.key = "notaKey"
Group extract plugin
The group extract plugin is a class that implements an LDAP query that retrieves a user’s groups and makes any necessary transformation to match the LDAP group to a Lenses group name.
The default class implementation that comes with Lenses is io.lenses.security.ldap.LdapMemberOfUserGroupPlugin.
If your LDAP server supports the memberOf functionality, where each user has his/her group memberships added as attributes to his/her entity, you can use it by setting the lenses.security.ldap.plugin.class option to this class:
lenses.security.ldap.plugin.class=io.lenses.security.ldap.LdapMemberOfUserGroupPlugin
Below you will see a brief example of its setup.
# Set the full classpath that implements the group extraction lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin" # The plugin uses the 'memberOf' attribute. If this attribute has a different # name in your LDAP set it here. lenses.security.ldap.plugin.memberof.key="memberOf" # This regular expression should return the group common name. If it matches # a Lenses group name, the user is granted its permissions. # As an example if there is a 'memberOf' attribute with value: # cn=LensesAdmins,ou=Groups,dn=example,dn=com # The regular expression will return 'LensesAdmins'. # Group names are case sensitive. lenses.security.ldap.plugin.group.extract.regex="(?i)cn=(\\w+),ou=Groups.*" # This is the LDAP attribute that holds the user's full name. It's optional. lenses.security.ldap.plugin.person.name.key = "sn"
As an example, the memberOf search may return two attributes for user Mark:
attribute value --------- ------------------------------------------ memberOf cn=LensesAdmin,ou=Groups,dc=example,dc=com memberOf cn=RandomGroup,ou=Groups,dc=example,dc=com
The regular expression (?i)cn=(\w+),ou=Groups.* will return these two regex group matches:
LensesAdmin RandomGroup
If any of these groups exist in Lenses, Mark will be granted the permissions of the matching groups.
Custom LDAP plugin
If your LDAP does not offer the memberOf functionality, or uses a complex setup, you can provide your
own implementation. Start with the code on
GitHub
, create
a JAR, add it to the
plugins/ folder and set your implementation’s full classpath:
# Set the full classpath that implements the group extraction lenses.security.ldap.plugin.class="io.lenses.security.ldap.LdapMemberOfUserGroupPlugin"
Do not forget to grant to the account any permissions it may need for your plugin to work.
Manage permissions
To learn how to use data centric permissions for users and service accounts check the help center .
LDAP Configuration Options
See configuration settings.
Notes
The following configuration entries are specific to the default group plugin.
A custom LDAP plugin might require different entries under
lenses.security.ldap.plugin:
lenses.security.ldap.plugin.memberof.key lenses.security.ldap.plugin.person.name.key lenses.security.ldap.plugin.group.extract.regex lenses.security.ldap.plugin.person.name.key | https://docs.lenses.io/4.1/integrations/authentication/ldap/ | 2021-09-16T16:17:59 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.lenses.io |
To install Red Hat Advanced Cluster Security for Kubernetes, you must have:
OpenShift Container Platform version 4.5 or later.
Cluster nodes with a supported operating system. See the Red Hat Advanced Cluster Security for Kubernetes Support Policy for additional information.
Operating system: Amazon Linux, CentOS, Container-Optimized OS from Google, CoreOS, Debian, Red Hat Enterprise Linux, or Ubuntu.
Processor and memory: 2 CPU cores and at least 3GiB of RAM.
Persistent storage by using persistent volume claim (PVC).
Use Solid-State Drives (SSDs) for best performance. However, you can use another storage type if you do not have SSDs available.
Helm command-line interface (CLI) v3.2 or newer.
Use the
helm version command to verify the version of Helm you have installed.
The OpenShift Container Platform CLI (
oc).
You must have the required permissions to configure deployments in the Central cluster.
You must have access to the Red Hat Container Registry. See Red Hat Container Registry Authentication for information about downloading images from
registry.redhat.io.
A single containerized service called Central handles data persistence, API interactions, and user interface (Portal) access.
Central requires persistent storage:
You can provide storage with a persistent volume claim (PVC).
Use Solid-State Drives (SSD) for best performance. However, you can use another storage type if you do not have SSDs available.
The following table lists the minimum memory and storage values required to install and run Central.
Use the following compute resources and storage values depending upon the number of nodes in your cluster.
Red Hat Advanced Cluster Security for Kubernetes includes an image vulnerability scanner called Scanner. This service scans images that are not already scanned by scanners integrated into image registries.
Sensor monitors your Kubernetes and OpenShift Container Platform clusters. These services currently deploy in a single deployment, which handles interactions with the Kubernetes API and coordinates with Collector.
The Admission controller prevents users from creating workloads that violate policies you configure.
By default, the admission control service runs 3 replicas. The following table lists the request and limits for each replica. | https://docs.openshift.com/acs/installing/prerequisites.html | 2021-09-16T15:32:51 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.openshift.com |
Have I been pwned? integration The Security Operations Have I been pwned? integration enables you to submit lookups on domain names and email addresses to determine whether user personal data has been compromised by data breaches. Explore Security Incident Response integrations Set up Security Operations Have I been pwned? integration setup Activate the Security Operations Have I been pwned? integration Use Perform lookups on observables Threat Lookup - Have I been pwned? workflow Develop ServiceNow Security Operations integration development guidelines Tips for writing integrations Developer training Developer documentation Find components installed with an application Troubleshoot and get help Integration troubleshooting Ask or answer questions in the Security Operations community Search the Known Error Portal for known error articles Contact Customer Service and Support | https://docs.servicenow.com/bundle/rome-security-management/page/product/secops-integration-sir/secops-integration-haveibeenpwned/reference/haveibeenpwned-landing-page.html | 2021-09-16T16:25:25 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.servicenow.com |
API Structure and LibrariesAPI Structure and Libraries
gRPCgRPC
We use gRPC as our messaging protocol for all our APIs, except the
Subscription API and
Provision API.
Our
.proto files can be found at github.com/working-group-two/wgtwoapis
under the
wgtwo folder.
REST-likeREST-like
The
Subscription API and
Provision API has a REST-like API.
Our OpenAPI specification can be found at github.com/working-group-two/wgtwoapis
under the
openapi folder.
LibrariesLibraries
Using our
.proto files and OpenAPI specification, you may generate code for most languages.
In addition, we offer generated code for Go and Java.
GoGo
Add import for the API you would like to use. Path is the same as its
.proto file:
import ( wgtwoEvents "github.com/working-group-two/wgtwoapis/wgtwo/events/v0" )
Java / Kotlin using MavenJava / Kotlin using Maven
To add the dependencies, first you need to add the repository:
<repositories> <repository> <id>jitpack.io</id> <url></url> </repository> </repositories>
Then you can add the dependencies:
<dependencies> <dependency> <groupId>com.github.working-group-two.wgtwoapis</groupId> <artifactId>event-grpc</artifactId> <version>cca7093</version> </dependency> </depenencies>
The specific package to include is included in the documentation of each API.
The version used is the commit SHA from our repository.
Latest version should match the output of:
git ls-remote master | cut -f1 | https://docs.wgtwo.com/intro/api-structure-and-libraries/ | 2021-09-16T16:50:11 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.wgtwo.com |
Developers can choose to interact directly with Ethereum clients via the JSON-RPC API, however there are often easier options for dApp developers depending on their language preferences.
Polkadot clients have different libraries.
Many API libraries exist to provide wrappers on top of the JSON-RPC API. With these libraries, developers can write one-line methods in the programming language of their choice to initialize JSON-RPC requests (under the hood) that interact with Ethereum.
The web3.js is a collection of libraries that allow you to interact with a local or remote ethereum node using HTTP, IPC or WebSocket.
The ethers.js library aims to be a complete and compact library for interacting with the Ethereum Blockchain and its ecosystem. It was originally designed for use with ethers.io and has since expanded into a more general-purpose library.
The Nethereum library is the .Net integration library for Ethereum, simplifying smart contract management and interaction with Ethereum nodes whether they are public, like Geth , Parity or private, like Quorum and Besu.
Polkadot nodes can use libraries such as @polkadot/api, go-substrate-rpc-client, etc. | https://docs.ankr.com/blockchain-developer-api-services/untitled-1 | 2021-09-16T16:35:07 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.ankr.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.