openstack.cloud.security_group module – Add/Delete security groups from an OpenStack cloud
Synopsis
Add or Remove security groups from an OpenStack cloud.
# Create a security group
- openstack.cloud.security_group:
    cloud: mordred
    state: present
    name: foo
    description: security group for foo servers

# Update the existing 'foo' security group description
- openstack.cloud.security_group:
    cloud: mordred
    state: present
    name: foo
    description: updated description for the foo security group

# Create a security group for a given project
- openstack.cloud.security_group:
    cloud: mordred
    state: present
    name: foo
    project: myproj
Collection links
Issue Tracker
Repository (Sources)
Gradle.
Gradle is quickly gaining popularity as a flexible Java build system which can be used in situations requiring custom build logic. Flexibility means that Gradle build scripts can differ significantly across projects and there is no universal way of packaging Gradle projects. However, in most common situations the %gradle_build macro is sufficient. It is very similar to %mvn_build and takes the same arguments. It is a wrapper for the gradle command which enables so-called "local mode" of artifact resolution - dependency artifacts are resolved from the local system before trying other repositories (system Maven repositories are used through XMvn). By default %gradle_build invokes Gradle with the build goal, but when tests are skipped (option -f) then the assemble goal is invoked instead.
In most simple cases calling %gradle_build should be enough to resolve all dependencies, but sometimes additional patching of the build script may be necessary. The following example patch adds the XMvn resolver to make Gradle correctly resolve dependencies of the build script itself.
diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle
index a3cb553..50dd2a4 100644
--- a/buildSrc/build.gradle
+++ b/buildSrc/build.gradle
@@ -21,6 +21,7 @@ apply plugin: 'idea'
 apply plugin: 'eclipse'
 
 repositories {
+    xmvn()
     maven { url '' }
     mavenCentral()
 }
Once the Gradle build completes, all Maven artifacts produced by the build will be marked for installation. The %mvn_install macro, which should be called from the %install section of the spec file, automatically handles installation of all marked artifacts and generates .mfiles file lists, which are used to define package contents through %files sections. If you are building Javadocs (recommended) then you should call %mvn_install with the -J argument pointing to the directory containing the generated Javadoc documentation.
Most of the Maven macros starting with %mvn_ can also be used when building packages with Gradle. This includes the macros that control which artifacts should be installed where and how (%mvn_package, %mvn_alias, %mvn_file, %mvn_compat_version), %mvn_artifact which is used to mark additional artifacts as installable, and %mvn_config which can be used to add extra configuration for XMvn.
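To illustrate how these macros fit together, a minimal, hypothetical spec file excerpt might look like this (the Javadoc output directory and file-list names are assumptions based on common Gradle and Fedora Java packaging defaults, not taken from this page):

%build
# Build the project with Gradle in XMvn local mode, skipping tests (-f)
%gradle_build -f

%install
# Install all marked Maven artifacts and generate the .mfiles file lists
%mvn_install -J build/docs/javadoc

%files -f .mfiles
%license LICENSE

%files javadoc -f .mfiles-javadoc
%license LICENSE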
Presets
KingComposer's Preset feature is the best way to save your element settings and reuse them later in other places. Presets are not limited to elements: you can also create preset data for rows and columns. Here we show how to save preset data; it is very easy and takes only a few seconds.
1. Save current settings to new preset:
2. Use presets:
3. Load a preset into an existing element:
How to author an attestation policy
Attestation policy is a file uploaded to Microsoft Azure Attestation. Azure Attestation offers the flexibility to upload a policy in an attestation-specific policy format. Alternatively, an encoded version of the policy, in JSON Web Signature, can also be uploaded. The policy administrator is responsible for writing the attestation policy. In most attestation scenarios, the relying party acts as the policy administrator. The client making the attestation call sends attestation evidence, which the service parses and converts into incoming claims (set of properties, value). The service then processes the claims, based on what is defined in the policy, and returns the computed result.
The policy contains rules that determine the authorization criteria, properties, and the contents of the attestation token. A sample policy file looks as below:
version=1.0;
authorizationrules
{
    c:[type="secureBootEnabled", issuer=="AttestationService"]=> permit()
};
issuancerules
{
    c:[type="secureBootEnabled", issuer=="AttestationService"]=> issue(claim=c)
    c:[type="notSafeMode", issuer=="AttestationService"]=> issue(claim=c)
};
A policy file has three segments, as seen above:
version: The version is the version number of the grammar that is followed.
version=MajorVersion.MinorVersion
Currently the only version supported is version 1.0.
authorizationrules: A collection of claim rules that will be checked first, to determine if Azure Attestation should proceed to issuancerules. The claim rules apply in the order they are defined.
issuancerules: A collection of claim rules that will be evaluated to add additional information to the attestation result as defined in the policy. The claim rules apply in the order they are defined and are also optional.
See claim and claim rules for more information.
Drafting the policy file
- Create a new file.
- Add version to the file.
- Add sections for authorizationrules and issuancerules.
version=1.0;
authorizationrules
{
    =>deny();
};
issuancerules
{
};
The authorization rules contain the deny() action without any condition, to ensure no issuance rules are processed. Alternatively, the authorization rule can also contain permit() action, to allow processing of issuance rules.
- Add claim rules to the authorization rules
version=1.0;
authorizationrules
{
    [type=="secureBootEnabled", value==true, issuer=="AttestationService"]=>permit();
};
issuancerules
{
};
If the incoming claim set contains a claim matching the type, value, and issuer, the permit() action will tell the policy engine to process the issuancerules.
- Add claim rules to issuancerules.
version=1.0;
authorizationrules
{
    [type=="secureBootEnabled", value==true, issuer=="AttestationService"]=>permit();
};
issuancerules
{
    => issue(type="SecurityLevelValue", value=100);
};
The outgoing claim set will contain a claim with:
[type="SecurityLevelValue", value=100, valueType="Integer", issuer="AttestationPolicy"]
Complex policies can be crafted in a similar manner. For more information, see attestation policy examples.
- Save the file.
Creating the policy file in JSON Web Signature format
After creating a policy file, to upload a policy in JWS format, follow the below steps.
Generate the JWS (RFC 7515) with the policy (UTF-8 encoded) as the payload; a minimal encoding sketch is shown after these steps.
- The payload identifier for the Base64Url encoded policy should be "AttestationPolicy".
Sample JWT:
Header: {"alg":"none"}
Payload: {"AttestationPolicy": "Base64Url(policy)"}
Signature: {}
JWS format: eyJhbGciOiJub25lIn0.XXXXXXXXX.
(Optional) Sign the policy. Azure Attestation supports the following algorithms:
- None: Don't sign the policy payload.
- RS256: Supported algorithm to sign the policy payload
Upload the JWS and validate the policy.
- If the policy file is free of syntax errors, the policy file is accepted by the service.
- If the policy file contains syntax errors, the policy file is rejected by the service.
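To illustrate steps 1 and 2 above, here is a minimal Python sketch that Base64Url-encodes a policy file into an unsigned ("alg": "none") JWS. The policy file name is a placeholder, and signing with RS256 would require an additional signature step not shown here.

import base64, json

def b64url(data: bytes) -> str:
    # Base64Url encoding without padding, as used for JWS segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Read the attestation policy text (hypothetical file name)
with open("policy.txt", "r", encoding="utf-8") as f:
    policy_text = f.read()

header = b64url(json.dumps({"alg": "none"}).encode("utf-8"))
payload = b64url(json.dumps({"AttestationPolicy": b64url(policy_text.encode("utf-8"))}).encode("utf-8"))

# Unsigned JWS: header.payload. (empty signature segment)
print(header + "." + payload + ".")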
Notes
A new version of the agent has been released. Follow standard procedures to update your Infrastructure agent.
Added
- Agent version is now logged in non-verbose mode, too.
Fixed
- A missing parsers configuration file for the log forwarder has been restored to the Windows installer.
- The mountInfo/diskstats mapping for fetching LVM volumes' I/O stats has been fixed; it can still fail for older systems that lack a mountInfo file and use custom names for their LVM volumes.
- A bug in the log forwarder that prevented some tcp and syslog URIs from being parsed correctly has been fixed.
Changed
- The built-in Flex integration has been updated to version 1.1.2. For more information, see the Flex changelog.
- The built-in Docker integration has been updated to version 1.2.1. For more information, see the Docker integration changelog.
$ oc new-project vault
You can install Helm charts on an OKD cluster using the following methods:
The CLI.
The Developer perspective of the web console.
The Developer Catalog, in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file.
As a cluster administrator, you can add multiple Helm chart repositories, apart from the default one. You can use either the Developer perspective in the web console or the CLI to select and install a chart from the Helm charts listed in the Developer Catalog. You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console.
You have logged in to the web console and have switched to the Developer perspective.
To create Helm releases from the Helm charts provided in the Developer Catalog:
In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart to see all the Helm charts in the Developer Catalog, and select a chart.
Select the required chart version from the Chart Version drop-down list.
Configure your Helm chart by using the Form View or the YAML View.
Click Install to create a Helm release. You will be redirected to the Topology view where the release is displayed. If the Helm chart has release notes, the chart is pre-selected and the right panel displays the release notes for that release.
You can upgrade, rollback, or uninstall a Helm release by using the Actions button on the side panel or by right-clicking a Helm release.
You can use Helm by initializing the web terminal in the Developer perspective of the web console. For more information, see Using the web terminal.
Create a new project:
$ oc new-project nodejs-ex-k
Download an example Node.js chart that contains OKD objects:
$ git clone
Go to the directory with the sample chart:
$ cd redhat-helm-charts/alpha/nodejs-ex-k/
Edit the Chart.yaml file and add a description of your chart:
apiVersion: v2 (1)
name: nodejs-ex-k (2)
description: A Helm chart for OpenShift (3)
icon: (4)
version: 0.2.1 (5)
Verify that the chart is formatted properly:
$ helm lint
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
Navigate to the previous directory level:
$ cd ..
Install the chart:
$ helm install nodejs-chart nodejs-ex-k
Verify that the chart has installed successfully:
$ helm list
NAME          NAMESPACE    REVISION  UPDATED                                  STATUS    CHART         APP VERSION
nodejs-chart  nodejs-ex-k  1         2019-12-05 15:06:51.379134163 -0500 EST  deployed  nodejs-0.1.0  1.16.0
As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog.
To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: <name>
spec:
  # optional name that might be used by console
  # name: <chart-display-name>
  connectionConfig:
    url: <helm-chart-repository-url>
For example, to add an Azure sample chart repository, run:
$ cat <<EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: azure-sample-repo
spec:
  name: azure-sample-repo
  connectionConfig:
    url:
EOF
Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed.
For example, use the Chart repositories filter to search for a Helm chart from the repository.
Accept Giropay with UI components
Use Unzer UI component to add Unzer Giropay payment to your checkout page.
Overview
Using UI components for Giropay, you create a payment type resource that is used to make the payment. You do not need any form fields for this payment method.
Because no additional data is required for this payment method, you just need to create a simple HTML button on the client side:
Give the form element the ID "payment-form"; it must match the ID used in the script below: "payment-form".
Create a payment type resource
Call the method unzerInstance.Giropay() to create an instance of the payment type Giropay.
// Creating a Giropay instance
var giropay = unzerInstance.Giropay()
Add an event listener for the submit action of the form. Inside, create a Promise on the Giropay object. The Promise gets either resolved or rejected:
let form = document.getElementById('payment-form');
form.addEventListener('submit', function(event) {
    event.preventDefault();
    giropay
On your server side, make a charge transaction with the Giropay typeId that you previously created (for example, "s-gro-jldsmlmiprwe").
$unzer = new Unzer('s-priv-xxxxxxxxxxx');
$charge = $unzer->charge(12.99, 'EUR', 's-gro-fqaycgblsq0y', '');
Unzer unzer = new Unzer("s-priv-xxxxxxxxxx");
Charge charge = unzer.charge(BigDecimal.ONE, Currency.getInstance("EUR"), "s-gro-gro
After charging the Giropay resource, implement the following flow:
- Redirect the customer to the redirectUrl returned to the initial request.
- The customer is forwarded to the Giropay payment page.
- After a successful payment or an abort on the Giropay page, the customer is redirected back to your shop. For more details on managing Giropay payments, see the Manage Giropay payments page.
Integrate IPAM with AWS Organizations
Optionally, you can follow the steps in this section to integrate IPAM with AWS Organizations and delegate a member account as the IPAM account.
The IPAM account is responsible for creating an IPAM and using it to manage and monitor IP address usage.
Integrating IPAM with AWS Organizations and delegating an IPAM admin has the following benefits:
Share your IPAM pools with your organization: When you delegate an IPAM account, IPAM enables other AWS Organizations member accounts in the organization to allocate CIDRs from IPAM pools that are shared using AWS Resource Access Manager (RAM). For more information on setting up an organization, see What is AWS Organizations? in the AWS Organizations User Guide.
Monitor IP address usage in your organization: When you delegate an IPAM account, you give IPAM permission to monitor IP usage across all of your accounts. As a result, IPAM automatically imports CIDRs that are used by existing VPCs across other AWS Organizations member accounts into IPAM.
If you do not delegate an AWS Organizations member account as an IPAM account, IPAM will monitor resources only in the AWS account that you use to create the IPAM.
You must enable integration with AWS Organizations by using IPAM in the AWS management console or the enable-ipam-organization-admin-account AWS CLI command. This ensures that the AWSServiceRoleForIPAM service-linked role is created. If you enable trusted access with AWS Organizations by using the AWS Organizations console or the register-delegated-administrator AWS CLI command, the AWSServiceRoleForIPAM service-linked role isn't created, and you can't manage or monitor resources within your organization.
When integrating with AWS Organizations:
You cannot use IPAM to manage IP addresses across multiple AWS Organizations.
IPAM charges you for each active IP address that it monitors in your organization's member accounts. For more information about pricing, see IPAM pricing.
You must have an account in AWS Organizations and a management account set up with one or more member accounts. For more information about account types, see Terminology and concepts in the AWS Organizations User Guide. For more information on setting up an organization, see Getting started with AWS Organizations.
The IPAM account must be an AWS Organizations member account. You cannot use the AWS Organizations management account as the IPAM account.
The IPAM account must have an IAM policy attached to it that permits the iam:CreateServiceLinkedRole action. When you create the IPAM, you automatically create the AWSServiceRoleForIPAM service-linked role.
The IAM user account associated with the AWS Organizations management account must have the following IAM policy actions attached:
ec2:EnableIpamOrganizationAdminAccount
organizations:EnableAwsServiceAccess
organizations:RegisterDelegatedAdministrator
iam:CreateServiceLinkedRole
For more information on managing IAM policies, see Editing IAM policies in the IAM User Guide.
- AWS Management Console
To select an IPAM account
Open the IPAM console.
In the AWS Management Console, choose the AWS Region in which you want to work with IPAM.
In the navigation pane, choose Settings.
Enter the AWS account ID for an IPAM account. The IPAM administrator must be an AWS Organizations member account.
Choose Delegate.
- Command line
The commands in this section link to the AWS CLI Reference documentation. The documentation provides detailed descriptions of the options that you can use when you run the commands.
To delegate an IPAM admin account using AWS CLI, use the following command: enable-ipam-organization-admin-account
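For example (the account ID is a placeholder for the member account you want to delegate):

$ aws ec2 enable-ipam-organization-admin-account --delegated-admin-account-id 111122223333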
When you delegate an Organizations member account as an IPAM account, IPAM automatically creates a service-linked IAM role in all member accounts in your organization. IPAM monitors the IP address usage in these accounts by assuming the service-linked IAM role in each member account, discovering the resources and their CIDRs, and integrating them with IPAM. The resources within all member accounts will be discoverable by IPAM regardless of their Organizational Unit. If there are member accounts that have created a VPC, for example, you’ll see the VPC and its CIDR in the Resources section of the IPAM console.
The role of the AWS Organizations management account that delegated the IPAM admin is now complete. To continue using IPAM, the IPAM admin account must log into Amazon VPC IPAM and create an IPAM.
Using Azure Stack HCI on a single server
Applies to: Azure Stack HCI, version 21H2
This article provides an overview of running Azure Stack HCI on a single server, also known as a single-node cluster. Using a single server minimizes hardware and software costs in locations that can tolerate lower resiliency. A single server can also allow for a smaller initial deployment that you can add servers to later (scaling out).
Along with the benefits mentioned, there are some initial limitations to recognize.
- You must use PowerShell to create the single-node cluster and enable Storage Spaces Direct (see the sketch after this list).
- Single servers must use only a single drive type: Non-volatile Memory Express (NVMe) or Solid-State (SSD) drives.
- Stretched (dual-site) clusters aren't supported with individual servers (stretched clusters require a minimum of two servers in each site).
- To install updates using Windows Admin Center, use the single-server Server Manager > Updates tool. Or use PowerShell or the Server Configuration tool (SConfig). For solution updates (such as driver and firmware updates), see your solution vendor. You can't use the Cluster Manager > Updates tool to update single-node clusters at this time.
- Operating system or other updates that require a restart cause downtime to running virtual machines (VMs) because there isn't another running cluster node to move the VMs to. We recommend manually shutting down the VMs before restarting to ensure that the VMs have enough time to shut down prior to the restart.
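The following is a minimal PowerShell sketch of the single-node flow referenced in the first limitation above. The server and cluster names are placeholders, and the full deployment documentation covers validation, networking, and witness considerations.

# Create a single-node cluster with no shared storage
New-Cluster -Name "cluster1" -Node "server1" -NoStorage

# Enable Storage Spaces Direct on the single-node cluster
Enable-ClusterStorageSpacesDirect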
Prerequisites
- A server from the Azure Stack HCI Catalog that's certified for use as a single-node cluster and configured with all NVMe or all SSD drives.
- An Azure Subscription.
For hardware, software, and network requirements see What you need for Azure Stack HCI.
Comparing single-node and multi-node clusters
The following table compares attributes of a single-node cluster to multi-node clusters.
1 Limited support, AKS on Azure Stack HCI is in preview on single-node clusters.
Known issues
The following table describes currently known issues for single-node clusters. This list is subject to change as other items are identified: check back for updates.
Camera Reference Design
The Camera Reference Design hardware options include multiple cameras with support for various Opal Kelly FPGA Development Modules:
- EVB1005
  - XEM3010
  - XEM3050
  - XEM6010
  - XEM6110
  - XEM6310
  - XEM7010
  - XEM7310
- EVB1006
  - XEM6006
  - XEM7350
  - Other FMC carriers (untested)
- EVB1007
  - ZEM4310
  - Other HSMC carriers (untested)
- SZG-CAMERA
  - Brain-1
  - XEM7320
  - XEM8320
  - BRK8350
  - BRK1900
  - Other SYZYGY-compliant carriers (untested)
- SZG-MIPI-8320
  - XEM8320
The EVB1005/6/7 modules include a Micron MT9P031I12STC 5 Mpx color image sensor and necessary power supply circuitry. Designed as evaluation boards for Opal Kelly integration modules, the modules provide an excellent platform for getting accustomed to the FrontPanel SDK.
The SZG-CAMERA module is also compatible with the Opal Kelly Camera Reference Design and includes an ON Semiconductor AR0330CM1C00SHAA0 3.4 Mpx color image sensor.
Linux is a registered trademark of Linus Torvalds. Microsoft and Windows are both registered trademarks of Microsoft Corporation. All other trademarks referenced herein are the property of their respective owners and no trademark rights are claimed.
Category: Operations Database: DBC View Column Data Type Format Comment CreateDate DATE NOT NULL YY/MM/DD Returns the date when the event took place. CreateTime FLOAT NOT NULL 99:99:99.99 Returns the time when the event took place. EventNum INTEGER NOT NULL --,---,---,--9 Returns the client system event number of the restore operation. EventType CHAR(30) LATIN NOT CASESPECIFIC NOT NULL X(30) Returns the type of event that occurred. UserName VARCHAR(128) UNICODE NOT CASESPECIFIC NOT NULL X(128) Returns the username associated with the event. LogProcessor SMALLINT -(5)9 Returns the logical processor ID for an AMP not affected by the event. PhyProcessor SMALLINT -(5)9 Returns the physical processor ID for an AMP not affected by the event. Vproc SMALLINT -(5)9 Identifies the virtual processor for which an event was logged. ProcessorState CHAR(1) LATIN NOT CASESPECIFIC NOT NULL X(1) Returns D (the event was for all AMPs and the processor was down) or U (the event was for specific AMPs). RestartSeqNum SMALLINT ---,--9 Returns an integer (0 through n) to indicate the number of times that the Teradata Database had to be restarted during the event. 0 indicates that no restarts took place. | https://docs.teradata.com/r/Teradata-VantageTM-Data-Dictionary/July-2021/Views-Reference/Events_ConfigurationV-X | 2022-06-25T14:29:23 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.teradata.com |
# Backup Seed Phrase
Open Waves.Exchange app and login to your account. Click on the account avatar and navigate to Settings > Security
Click Show in the Seed phrase box.
You will be asked to enter your account password. After that the seed phrase will be displayed in the box.
Note: Never provide your seed phrase to anyone except the official Waves.Exchange app. We recommend writing the seed phrase on a piece of paper and storing it in a secure location. Do not store the backup phrase unencrypted on any electronic device. We strongly recommend backing up the seed phrase, since this is the only way to restore access to your seed account in case of loss or theft of the device.
See more articles in the Account Management chapter.
If you have difficulties with Waves.Exchange, please create a support ticket or write a question on our forum.
Boylesque
Directed By: Bogna Kowalczyk
International Spectrum 2022 | Poland, Czech Republic | Polish, Full Subtitles | World Premiere | 70 minutes
Winner – Emerging International Filmmaker Award
At 82 years old, Lula is every inch the rebel. An openly gay man in communist Poland, he organized underground parties and after-curfew salons of men inside private apartments. He enthusiastically took up drag, despite a fiercely homophobic culture, to free himself from the stifling correctness of the 80s. But now, he's an old, single man in a youth-obsessed world. His friend was crushed by depression and killed himself, but somehow Lula, now Poland's oldest drag queen, remains buoyant. Is he escaping loneliness with his constant clubbing, looking for love yet again to insulate himself against what he knows is coming? Lula isn't waiting for approval. Filmmaker Bogna Kowalczyk's energetic portrait pairs with her subject's kinetic drive, right down to the stellar soundtrack and nimble camerawork. Whether it's meeting fans at Pride or selecting an artist to sculpt his specialty crematorium urn, try to keep up with a man who knows life is to be lived out loud. Myrocia Watamaniuk
Credits: Director(s): Bogna Kowalczyk; Producer(s): Tomasz Morawski, Katarzyna Kuczynska, Vratislav Šlajer, Hanka Kastelicova; Featuring: Andrzej Szwan aka Lulla La Polaca; Writer(s): Bogna Kowalczyk; Editor(s): Krzysztof Komander, Kacper Plawgo, Aleksandra Gowin; Cinematography: Milosz Kasiura; Composer: Wojciech Frycz; Sound: Lukáš Moudrý, Blazej Kanclerz
Community Partner: Toronto Queer Film Festival
Multi-Storey Shear Walls
Saving time by designing each element along the height from one place
The biggest addition to MASS Version 4.0 is a new module which coordinates the design of several shear wall elements to come up with one quick, thorough, and unified design along the entire height of a structure.
The embedded video below gives an introduction to the multi-storey module with the details covered in the linked pages further below:
The Multi-Storey Shear wall section of the MASS user documentation is comprised of multiple pages:
Introduction to Multi-Storey Shear Walls
Covering the basic scope of the multi-storey shear wall module. This should be read first for anyone new to MASS Version 4.0
Design Steps
This page goes through the entire process from start to finish of designing a multi-storey shear wall. All of the inputs and design possibilities are covered here.
Load Distribution
The Load Distribution page explains the manner in which MASS takes loads applied to a full multi-storey assemblage and determines the unfactored loads applied to each individual storey module (run using a shear wall element tab).
Design Strategy
Wondering how MASS goes about arriving at the results that you see after a design? This page walks through the entire complicated process used by the software to arrive at a result that satisfies the design requirements.
How MASS Calculates Drift
This article is a second part to the individual shear wall element deflection article, which explains the added aspects of rotations coming into the base of each storey. An example is included that can be followed along with at home.
Security
Understanding security requirements is the first step to set up the security model in Sitecore Content Hub. Security requirements are a set of governance rules that define the permissions structure of your organization. Each department or division can have its own user group membership and corresponding policies with refined access roles.
This section provides best practices and generic steps for you to best define your security model.
Note
For more detailed procedures, refer to the user groups and policies documentation.
Access roles
Common access roles are content readers, content creators, and content approvers.
Readers should have the following permissions:
- Read and Download rights to the Assets search page.
- Access to their user Profile page.
- Read access to the Collections page.
- ViewNotWatermarked rights to view renditions without watermarking.
Creators should have the following permissions:
- Create and Submit rights on assets to upload and submit assets for review.
- Update rights to their own assets that are yet unapproved.
Content approvers should have the following permissions:
- Approve rights on assets under review (the Approve permission includes Reject rights).
- Read and CreateAnnotations rights on assets.
Note
These roles are typically refined based on the metadata, such as brand, product, and campaign linked to the assets (or products). For more information about permissions, refer to Permissions
Define user groups
The following process is a best practice to define your user groups.
To define user groups:
- Define the roles you need as described in the previous section.
- Create a new user group per role.
- Assign the modules relevant to this user group.
- Define the pages that each user group needs to access.
- Define access for Asset and File definitions:
- Create one rule for Asset and File when the definitions share identical permissions.
- Set conditions to limit the assets available for this user group, according to your domain model design.
- Define user group permissions to other entity definitions.
- Define which definitions the users need to access, update, or delete:
- Review the taxonomy definitions.
- Review any custom entity definition.
- Define which permissions the users need for these definitions.
Note
For more information about permissions, refer to Permissions
SSO configuration
Disable register by default.
Always enable reCAPTCHA, as described on Enabling reCAPTCHA.
Do not give the Everyone user group any meaningful permissions.
Enable auto-lockout to increase the security.
More best practices
When translating the security requirements to user groups and policies, keep the following highlights in mind:
Keep the number of user groups small. Having hundreds of user groups requires maintenance effort with every change in the domain model.
You should not assign a user to more than ten user groups. Security checks are performed before loading certain operations or when running background processes. Setting more than ten user groups per user has a performance impact. Consider grouping user groups to avoid it.
Do not define duplicate rules and permissions. When you assign several user groups to one user, identical permissions might be granted by more than one user group. Review how the user groups share the permissions on certain entities. You can use Security Diagnostics to detect duplicated policies that grant the same permission on the same entity.
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
This version of the Yocto Project Quick Start is for the 2.4.4 release of the Yocto Project. Check out the poky repository using the corresponding release tag (yocto-2.4.4):
$ git checkout tags/yocto-2.4.4 -b poky_2.4.4
The previous Git checkout command creates a local branch named poky_2.4.4. The files available to you in that branch exactly match the repository's files in the rocko development branch at the time of the Yocto Project 2.4.4 release.
Because you used the yocto-2.4.4 tag when you checked out the poky repository by tag, you should use a meta-intel tag that corresponds with the release you used for poky.
Consequently, you need to check out the "8.1-rocko-2.4.4" branch after cloning meta-intel:
$ cd $HOME/poky/meta-intel
$ git checkout tags/8.1-rocko-2.4.4 -b meta-intel-rocko-2.4.4
Switched to a new branch 'meta-intel-rocko-2.4.4'
The previous Git checkout command creates a local branch named meta-intel-rocko-2.4.4. You have the option to name your local branch whatever you want by providing any name you like for "meta-intel-rocko-2.4.4".
Repair is important to make sure that data across the nodes is consistent. To learn more about repairs please consult this Scylla University lesson.
Scylla Manager automates the repair process and allows you to configure how and when repair occurs. When you create a cluster a repair task is automatically scheduled. This task is set to occur each week by default, but you can change it to another time, change its parameters or add additional repair tasks if needed.
Glob patterns to select keyspaces or tables to repair
Parallel repairs
Control over repair intensity and parallelism even for ongoing repairs
Resilience to schema changes
Retries
Pause and resume
Scylla Manager can repair distinct replica sets in a token ring in parallel. This is beneficial for big clusters. For example, a 9 node cluster and a keyspace with replication factor 3 can be repaired up to 3 times faster in parallel. The following diagram presents benchmark results comparing different parallel flag values. In the benchmark we ran 9 Scylla 2020.1 nodes on AWS i3.2xlarge machines under 50% load; for details check this blog post.
By default Scylla Manager runs repairs with full parallelism; you can change that using the sctool repair --parallel flag.
Intensity specifies how many token ranges per shard can be repaired in a Scylla node at any given time. It can be a decimal between (0,1); in that case the number of token ranges is a fraction of the number of shards. The default intensity is one; you can change that using the sctool repair --intensity flag.
Scylla Manager 2.2 adds support for intensity value zero.
In that case the number of token ranges is calculated based on node memory and adjusted to the Scylla maximal number of ranges that can be repaired in parallel (see max_repair_ranges_in_parallel in Scylla logs).
If you want to repair faster, try using intensity zero. Note that increasing intensity on an already heavily loaded cluster will have little impact.
Repair speed is controlled by two parameters: --parallel and --intensity.
Those parameters can be set when you:
Schedule a repair with sctool repair
Update a repair specification with sctool repair update
Update a running repair task with sctool repair control
More on the topic of repair speed can be found in Repair faster and Repair slower articles. | https://manager.docs.scylladb.com/master/repair/index.html | 2022-06-25T13:10:57 | CC-MAIN-2022-27 | 1656103035636.10 | [] | manager.docs.scylladb.com |
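For illustration, a hedged example of setting these parameters from the command line; the cluster name and values are placeholders, and the sctool reference documents the full flag list:

# Adjust the repair that is currently running
sctool repair control -c prod-cluster --intensity 0 --parallel 2

# Schedule a new repair task with explicit speed settings
sctool repair -c prod-cluster --intensity 0.5 --parallel 1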
Understanding and configuring a Dynamic MPD
This page explains the most relevant attributes in dynamic MPD (MPEG_DASH). Quite some of these attributes are unique to Live streaming scenarios and won't be present in a static MPD. None of them have to be configured (Origin will take care of this automatically), but some can be.
Attention
In general, we recommend against manually configuring any of the below options, and to rely on the default logic of Origin instead.
MPD@type
MPD@type is set to 'dynamic', meaning that this is an MPD for a Live stream and that new segments will become available over time.
MPD@profile
By default the DASH 'ISO Media Live profile' is used (so the @profile is urn:mpeg:dash:profile:isoff-live:2011). For certain devices that require the DVB-DASH MPD profile, you may want to use --mpd.profile to specify this profile.
MPD@availabilityStartTime
MPD@availabilityStartTime indicates the zero point on the MPD timeline of a dynamic MPD. It defaults to Unix epoch. The benefit of this becomes evident when Origin and encoder redundancy is required (see Should Fix: add Origin and encoder redundancy): when both encoders are set up to use UTC timing they can start independently of each other, as the Unix epoch is always the same. Please see: Must Fix: use of UTC timestamps.
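For orientation, a dynamic MPD header carrying the attributes discussed on this page might look like the following; the values are illustrative only and are not output copied from Origin:

<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     availabilityStartTime="1970-01-01T00:00:00Z"
     publishTime="2022-06-25T13:00:00Z"
     minimumUpdatePeriod="PT2S"
     timeShiftBufferDepth="PT30S"
     maxSegmentDuration="PT4S"
     minBufferTime="PT10S">
  <!-- Period, AdaptationSet and Representation elements omitted -->
</MPD>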
Offsetting availabilityStartTime
Adding for instance 10 seconds will offset the player's Live edge with that amount. An example:
--mpd.availability_start_time=10
MPD@publishTime
MPD@publishTime specifies the wall-clock time when the MPD was generated and served by Origin.
MPD@minimumUpdatePeriod
MPD@timeShiftBufferDepth
MPD@timeShiftBufferDepth signals the length of the DVR window (i.e., how much a client can scrub back, to start playback at an earlier point in the stream). This length is specified with the --dvr_window_length option.
MPD@maxSegmentDuration
MPD@maxSegmentDuration is set to the maximum segment duration within the MPD.
MPD@minimumBufferTime
MPD@minimumBufferTime prescribes how much buffer a client should keep under ideal network conditions. This value can be changed with --mpd.min_buffer_time. It is set to 10 seconds by default.
MPD@suggestedPresentationDelay
MPD@suggestedPresentationDelay by default is not present in the MPD (except when you use --mpd.profile to specify the DVB-DASH profile).
@presentationTimeOffset
@presentationTimeOffset is a key component in establishing the relationship between the MPD timeline and the actual media timeline, also referred to as the sample timeline. Origin uses it in scenarios when a virtual subclip is requested from a Live archive, if that subclip has an end time in the past.
As all of the content for the subclip will be available in the Live archive in such a case (assuming the start time is not too far in the past and still part of the archive), this will result in a VOD clip being served, of which the media timeline starts at zero.
In such scenarios the presentation time offset is calculated automatically to represent the time between the start of Unix epoch and the start of the media, so that the media can be addressed using the original timestamps used for the livestream.
Configuring presentation time offset
Example usage
Once you've installed the Objective-C SDK, you can use the client as shown below to evaluate a feature flag and a feature variable, activate an A/B test, and track an event.
// Evaluate a feature flag and a variable
bool enabled = [client isFeatureEnabled:@"price_filter" userId: userId];
int min_price = [[client getFeatureVariableInteger:@"price_filter" userId: userId] integerValue];

// Activate an A/B test
OPTLYVariation *variation = [client activate:@"app_redesign" userId:userId];
if ([variation.variationKey isEqualToString:@"control"]) {
    // Execute code for variation A
} else if ([variation.variationKey isEqualToString:@"treatment"]) {
    // Execute code for variation B
} else {
    // Execute code for users who don't qualify for the experiment
}

// Track an event
[client track:@"purchased" userId:userId];
This reference is provided to help guide you through the design process of a mating peripheral to the ECM1900. It is not intended to be a comprehensive instruction manual. While we put forth great effort to reduce the effort required to build an FPGA-enabled platform, there are hundreds of pages of product documentation from Xilinx that should be considered. Use this guide as a roadmap and starting point for your design effort.
Useful References
Electrical Design Guide
Input Power Supply Connection
Input power to the ECM1900 must be applied either through the barrel jack or through mezzanine header MC3. For information on the barrel jack dimensions and polarity, see Powering the ECM1900. For information on mezzanine header pin assignments, see the ECM1900 Pins Reference.
Total Power Budget
The total operating power budget is an important system consideration. The power budget for the ECM1900 is highly dependent on the FPGA’s operating parameters and this can only be determined in the context of an actual target design.
The onboard ECM1900 power supply regulators provide power for all on-board systems, including the user-adjustable VIO rails provided to the mezzanine headers. The Power Budget table on the Powering the ECM1900 page indicates the total current available for each supply rail. This table may be used to estimate the total amount of input power required for your design.
FPGA I/O Bank Selection and I/O Standard
Details on the available standards can be found in the following Xilinx documentation:
- Zynq UltraScale+ MPSoC Data Sheet: DC and AC Switching Characteristics (DS925)
- UltraScale Architecture SelectIO Resources (UG571)
FPGA I/O Bank Selection and Voltage
Voltage supply rails VCCO_28, VCCO_67, VCCO_68, VCCO_87_88 power the FPGA I/O banks indicated in the supply rail net names. For information on configuring these voltages using FrontPanel, see the Device Settings page in the ECM1900 documentation. See the ECM1900 Pins Reference for details about FPGA bank power assignments.
Mechanical Design Guide
Mezzanine Connector Placement
Refer to the ECM1900 mating board diagram for placement locations of the QTH connectors, mounting holes, and jack screw standoffs. This diagram can be found on the Specifications page of the ECM1900 documentation.
Confirm the Connector Footprint
For recommended PCB layout of the QTH connector, refer to the QTH footprint drawing.
Confirm Mounting Hole Locations
Refer to the ECM1900 Specifications for a comprehensive mechanical drawing. Also refer to the BRK1900 as a reference platform. The BRK1900 design files can be found in the Downloads section of the Pins website.
Refer to the Samtec jack screw standoff instructions for information on using the jack screws to mate and unmate the ECM1900 module to the carrier board. For our version of these instructions, please visit Jack Screw Instructions.
Confirm Other Mechanical Placements
Refer to the ECM1900 mechanical drawing for locations of the two vertical-launch USB jacks. This drawing is available on the Specifications page of the ECM1900 documentation.
Thermal Dissipation Requirements
Thermal dissipation for the ECM1900 is highly dependent on the FPGA’s operating parameters and this can only be determined in the context of an actual target design.
An active FPGA cooling solution is recommended for any design with high power consumption. Opal Kelly provides an optional fansink designed to clip onto the ECM1900. See the Powering the ECM1900 page for more information. Some designs may require a different cooling solution. Thermal analysis and simulation may be required.
Determine the Mated Board Stacking Height
The Samtec QSH-series connectors on the ECM1900 mate with QTH-series connectors on the carrier board. The QTH series is available in several stacking height options from 5 to 25 mm. The stack height is determined by the “lead style” of the QTH connector.
Note that increased stack height can lead to decreased high-speed channel performance. Information on 3-dB insertion loss point is available at the Samtec product page link above. | https://docs.opalkelly.com/ecm1900/hardware-design-guide/ | 2022-06-25T14:27:17 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.opalkelly.com |
To generate an extensible platform for the Vitis environment, the AI Engine IP must be instantiated and connected to the rest of the design. A base platform hardware design includes a block design that contains a minimally configured AI Engine IP block with any memory-mapped slave AXI connections enabled and connected to the dedicated NoC bus masters.
All other configuration of the AI Engine is performed through compilation of the user ADF graph and AI Engine kernels through the Vitis aiecompiler and through Vitis v++ linking of the aiecompiler libadf.a with the extensible platform design. Connections from the AI Engine to the base platform include AXI4-Stream master and slave connections, memory-mapped AXI bus connections to NoC, and clocks for the bus interfaces. AI Engine events triggered within the IP are transferred to memory and via XSDB through the AXI4 connections.
For more information, see the AI Engine LogiCORE IP Product Guide (PG358) and Versal ACAP AI Engine Programming Environment User Guide (UG1076). | https://docs.xilinx.com/r/en-US/ug1273-versal-acap-design/AI-Engine-IP | 2022-06-25T13:10:08 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.xilinx.com |
Crimes Without Honour
Directed By: Raymonde Provencher
Focus On Raymonde Provencher | 2012 | Canada | English, Swedish, German, Kurdish | 69 minutes
Every winter in a cemetery near Stockholm, activists gather to keep the memory of Fadime Sahindal alive. A Kurdish immigrant to Sweden who was murdered by her father in 2002, Fadime has become an international symbol of the debate over cultural traditions that accept the use of violence to control women's behaviour. In Crimes Without Honour, four extraordinary activists risk everything to publicly challenge these traditions and tell their own stories of physical and emotional violence. While they practice different faiths, hail from different parts of the world and have immigrated to different countries, all make it crystal clear that the justification for these crimes is an entrenched family power structure of male supremacy—one that crosses borders, cultures and religions. Raymonde Provencher has crafted a vital addition to a growing body of films about crimes related to patriarchal traditions of family honour. Lynne Fernie
Credits: Director(s): Raymonde Provencher; Producer(s): Raymonde Provencher; Writer(s): Raymonde Provencher; Editor(s): Andrea Henriquez; Cinematography: François Beauchemin; Music: Robert Marcel Lepage; Sound: Marcel Fraser
Focus On Program supported by Focus On Raymonde Provencher
How to Detect a File Format and Check if the File is Encrypted
Sometimes you need to detect a file's format before opening it because the file extension does not guarantee that the file content is appropriate. The file might be encrypted (a password-protected file) so it can't be read directly, or should not be read. Aspose.Cells provides the FileFormatUtil.DetectFileFormat() static method and some related APIs that you can use to process such documents.
The following sample code illustrates how to detect a file format (using the file path) and check its extension. You can also determine whether the file is encrypted. | https://docs.aspose.com/cells/net/how-to-detect-a-file-format-and-check-if-the-file-is-encrypted/ | 2022-06-25T14:49:33 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.aspose.com |
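The original sample is not reproduced here; a minimal C# sketch along these lines, using the documented FileFormatUtil and FileFormatInfo members and a placeholder input file, could look like this:

using System;
using Aspose.Cells;

class DetectFormatExample
{
    static void Main()
    {
        string filePath = "Book1.xlsx"; // hypothetical input file

        // Detect the file format from the file path
        FileFormatInfo info = FileFormatUtil.DetectFileFormat(filePath);

        // Map the detected load format to a conventional file extension
        Console.WriteLine("Detected extension: " + FileFormatUtil.LoadFormatToExtension(info.LoadFormat));

        // Check whether the file is encrypted (password-protected)
        Console.WriteLine("Is encrypted: " + info.IsEncrypted);
    }
}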
improved
Expanded Scope of Async Webhook Events
By Andriy Mysyk
Now, webhooks for auto_designer_job_completed, irradiance_analysis_job_completed, and performance_simulation_job_completed events fire regardless of how the corresponding job was started (e.g., via Sales Mode, Pro Mode, or API). Previously, only jobs started via the API would fire the webhooks. With the expanded coverage of the performance simulation event, for example, you can keep your CRM in sync with any design changes made from the app, as simulation jobs are a strong proxy for when Aurora designs have been modified.
Tutorial: Build your own package
This tutorial aims to guide you through the process of creating a package that can then be used to create a deployment.
Prerequisites
- You have to be a registered partner of Elastycloud.
- You need to have an account and be able to sign in to the Elastycloud Console.
- You need to have experience in developing for Magento 2.
- You must have git and composer installed, and experience using them.
- You need a Bitbucket account, and permission to create new repositories.
- You need to get a personal Composer access key.
Tutorial
Before building the package, you have to select a name for it, and set the name in your elastycloud.yaml file.
- Go to Packages in the menu.
- Press the + button (Add package) near the top right corner of the page.
- Fill in the form according to the Package details, using the Package key chosen earlier.
- Create a new Git repository in your Bitbucket account, following the Create Bitbucket repository instructions.
- Get the Elastycloud project skeleton, and push to your repository, following the Prepare and push repository instructions.
- Add and push a Git tag according to the Tag and trigger a build instructions.
- Go to the Package Versions page from the main menu and monitor the build status. Once it has been successfully built it can be deployed to production.
Additional information
Package details
Create Bitbucket repository
Add a Bitbucket webhook in that repository (Bitbucket ‣ [repo] ‣ Settings ‣ Webhooks ‣ Add webhook) with the title "Elastycloud" and the URL. Check Enable request history collection and leave the rest as default.
This will trigger a build on every push.
Note
Make sure that the Bitbucket user ElastycloudCICD has READ access to your repository.
Prepare and push repository
Tag and trigger a build
In order to trigger a build of your code, you must first tag it with a SemVer tag, e.g. 0.0.0-alpha.1, and then push that tag with Git.
git tag 0.0.0-alpha.1
git push --tags
Except for the build number, you can use all aspects of SemVer. This is because the builder automatically adds a build number. | https://docs.elastycloud.io/tutorials/build-your-own-package/ | 2022-06-25T13:52:45 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.elastycloud.io |
12.6 The Task Format
The Task format is MOSEK‘s native binary format. It contains a complete image of a MOSEK task, i.e.
Problem data: Linear, conic, semidefinite and quadratic data
Problem item names: Variable names, constraints names, cone names etc.
Parameter settings
Solutions
There are a few things to be aware of:
Status of a solution read from a file will always be unknown.
Parameter settings in a task file always override any parameters set on the command line or in a parameter file.
The format is based on the TAR (USTar) file format. This means that the individual pieces of data in a .task file can be examined by unpacking it as a TAR file. Please note that the inverse may not work: creating a file using TAR will most probably not create a valid MOSEK Task file since the order of the entries is important.
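For example, on a system with a tar utility the entries of a task file can be listed directly (the file name is a placeholder):

tar -tvf model.task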
New in this release
- Add support for the upcoming Android D8/R8 toolset
- Discontinue the New Relic Maven plugin
- Disable CPU sampling for apps running on Android 8 and higher
Fixed in this release
- Correct the mismatched build ID shared between the stamped and uploaded Proguard/Dexguard map file and submitted crashes
- Fix an issue that caused a Facebook login API failure
- Update agent plugin to address Gradle 5 warnings
Notes
Dexguard users must upgrade to version 8.2.22 to be compatible with this release. | https://docs.newrelic.com/jp/docs/release-notes/mobile-release-notes/android-release-notes/android-5210/?q= | 2022-06-25T14:26:36 | CC-MAIN-2022-27 | 1656103035636.10 | [] | docs.newrelic.com |
Getting Started
Contents
Your shipment should have arrived with the following items. If anything is missing, please contact us as soon as possible so we may replace the item.
- Xilinx or Intel FPGA development module
- USB Cable (USB 2.0 or USB 3.0 as appropriate)
Downloading the FrontPanel SDK
The FrontPanel SDK is available for free download to registered customers through our Pins Support Portal online. Note that you will need to sign up for a Pins account and request download approval by supplying your order information.
Follow the instructions at the top of the download page. If you are not already registered with Pins you will need to register first. Once you are registered, you will need to request access to the FrontPanel SDK downloads; instructions for this are at the top of the page. We need to get enough information from you to associate you with a purchase of one of our products.
Installation and Setup
This section will guide you through the software installation. The XEM is a USB device which requires a special driver to be installed on your system. You should install this driver before connecting the XEM to your system. After installation, you should attach your device. Windows performs additional device driver installation after the device is attached, so you should do this before starting FrontPanel for the first time.
Note that the FrontPanel SDK is also available for Mac and Linux platforms. This guide only covers the Windows release.
System Requirements
- Microsoft Windows 7 or later (32- or 64-bit)
- USB (version 1.1, 2.0, or 3.0)
- 512 MB RAM
- Approximately 50 MB free disk space
Software Installation
The FrontPanel SDK includes the following components:
- Universal USB driver for all Opal Kelly integration modules
- Opal Kelly’s FrontPanel® desktop application
- FrontPanel API for C, C++, Python, and Java
- Sample Verilog and VHDL projects
- Sample FrontPanel projects
The installation wizard will guide you through the installation and should only take a minute or two. We suggest installing everything available — it won’t take up much space. By default, the installation procedure will copy files to the following directory:
C:\Program Files\Opal Kelly\FrontPanelUSB
When installation is complete, two new entries will be made in your Start Menu:
Start Menu → All Programs → Opal Kelly
The folder (Opal Kelly) contains shortcuts to the documentation as well as a shortcut to the Samples folder. The shortcut (Opal Kelly FrontPanel) will start FrontPanel.
Powering the Device
Some integration modules are USB bus powered (e.g. XEM6001 and XEM6002). Others will require external power to be applied. Please refer to the corresponding User’s Manual for information on powering the device. Be sure to use a clean, well-regulated supply within the specified voltage range for the device input.
Connecting the Device
Once the software is installed on your system, connect the module to a USB port which can provide bus power. Windows will recognize that a new device has been attached and will display a notice near the taskbar. Windows will then display the Found New Hardware dialog.
Allow Windows to install the software automatically and click Next. Windows should then properly locate the driver installed previously and complete the installation. Your module is now ready to be used and will be recognized in FrontPanel.
Note that different modules have different USB serial numbers. Windows recognizes them as different devices and therefore performs the “Found New Hardware” procedure for each board. If you have multiple devices, you’ll need to go through this short procedure for each one. Each time, Windows should find the appropriate driver automatically.
An Introductory Project
A few examples have been installed with the FrontPanel software. These can be found at the following location under the Start Menu:
Opal Kelly → FrontPanel → Samples
Browse to this location and open the First example to find the following:
The Samples folder contains a README file that describes the process of building the sample bitfiles. You can download pre-built bitfiles from our website.
Download the Configuration File
Start FrontPanel and click the download button to open a file selector dialog. Browse to the appropriate configuration file (e.g. first-xem3005-1200.bit) and click OK. FrontPanel will then download the configuration file to the FPGA. Download time typically varies between 50 ms and 800 ms depending on the USB connection speed and configuration file size.
Load the FrontPanel Profile
When the download is complete, click the “Load Profile” button to open another file selector dialog. This time, select the First.xfp file and click OK. FrontPanel will load the profile and open the first (and only) panel in the profile. It should look like this:
The panel contains eight okToggleButtons which are connected to the physical LEDs on the XEM3001 as well as four okLEDs which are connected to the physical pushbuttons. You can click on the toggle buttons and observe that the LEDs change state. You can also press the pushbuttons and observe the virtual LEDs in FrontPanel echo their state. At the bottom are the inputs (A and B) and the output (SUM) of a 16-bit adder designed into the FPGA. Using the mouse wheel or keyboard, you can change the inputs and observe the output.
Note that the functionality varies slightly depending on what LEDs and buttons are available on your specific hardware. For example, the XEM3010 only has two pushbuttons.
Congratulations! You’ve just been introduced to the simplicity and power of FrontPanel. More elaborate examples are available in the Samples directory. Please see the FrontPanel User’s Manual for a short tutorial on FrontPanel.
What’s Next?
You’re ready to start using your module. A few documents will help get you going quickly. These documents are available on our website.
FrontPanel User’s Manual – This manual describes FrontPanel and describes how to add FrontPanel support to new or existing FPGA designs. You’re looking at it now!
Device User’s Manual – This manual describes the design and function of the module. Look here to find specification and application information about the module.
Pins – Module pin mappings (how each expansion pin is connected to the FPGA) are provided online through Opal Kelly Pins. You can review this information online and create your own Peripherals to automatically generate constraint files for the FPGA design tools.
FrontPanel API Documentation – This HTML documentation is a browsable description of the FrontPanel programmer’s interface. Although specifically written for the C++ API, it readily translates to the Python API except in noted areas.
Samples & Tools – Several sample projects provide a great starting point to developing your own project. These are installed with FrontPanel in the installation directory.
Additional Resources
The following additional tools are not available for download from Opal Kelly but may be useful.
Latest FPGA Development Software
Developing new FPGA applications requires design suite software from the vendor of the FPGA on the Opal Kelly module, either Intel (ZEM boards) or Xilinx (XEM boards). Both vendors offer free versions of their tools that can be used with low density FPGAs found on lower end FPGA modules from Opal Kelly. Please refer to the FPGA vendor documentation for the FPGA on your module for the development suite requirements. Note that with 7-series and newer modules (XEM7xxx, XEM8xxx, and beyond) Xilinx requires use of the Vivado Design Suite.
Python
Python is an object-oriented, multi-platform interpreted language with a simple syntax. Together with extensive libraries, application development with Python is quick, easy, and robust. FrontPanel Python extensions allow you to configure and communicate with the XEM within any Python program. More information on Python can be found at the official Python website (python.org). Be sure to download a supported version of Python.
wxWidgets
wxWidgets is an Open Source C++ class library designed to make GUI design fast and portable. Applications designed with wxWidgets can easily be ported between Windows, Linux, and Apple operating systems. wxWidgets is often favorably compared to the Microsoft Foundation Classes (MFC). More information can be found on the wxWidgets website.
wxPython
This is an implementation of the wxWidgets library for use under Python. This diverse library makes it very easy to build graphical applications quickly. In addition to the FrontPanel Python extensions, a complete GUI interface to your project can be built quickly and easily. More information can be found on the official wxPython website.
FreeMat
FreeMat is a free environment for rapid engineering, scientific prototyping, and data processing. It is similar to commercial systems such as MATLAB from Mathworks but is Open Source. It is available as a free download from the FreeMat website.
Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-Share Alike 2.0 UK: England & Wales as published by Creative Commons.
This version of the Toaster User Manual is for the 2.4.2 release of the Yocto Project.
Toaster's manage.py management interface provides several useful commands, which are described later in this manual:
- buildslist
- builddelete
- perf
- checksettings
- runbuilds
For information on preparing your build host and obtaining a Yocto Project release, see the "The Build Host Packages" and "Yocto Project Release" sections in the Yocto Project Quick Start, and the "Preparing to Use Toaster" and "Setting Up the Basic System Requirements" sections of this manual. Then, from the root of your poky checkout, run:
$ ./bitbake/lib/toaster/manage.py migrate
$ TOASTER_DIR=`pwd` TOASTER_CONF=./meta-poky/conf/toasterconf.json \
  ./bitbake/lib/toaster/manage.py checksettings
$ ./bitbake/lib/toaster/manage.py collectstatic
For the above set of commands, after moving to the
poky directory,
the
migrate
command ensures the database
schema has had changes propagated correctly (i.e.
migrations).
The next line sets the Toaster root directory
TOASTER_DIR and the location of
the Toaster configuration file
TOASTER_CONF, which is
relative to the Toaster root directory
TOASTER_DIR.
For more information on the Toaster configuration file,
see the
Configuring Toaster
chapter.
Metadata does exist across the Web, and the information in this manual by no means attempts to provide a comprehensive reference.
To use your own layer source, you need to set up the layer source and then tie it into Toaster. This section describes how to tie into a layer index in a manner similar to the way Toaster ties into the OpenEmbedded Metadata Index. "BSP Layers" and "Using the Yocto Project's BSP Tools" sections in the Yocto Project Board Support Package (BSP) Developer's Guide. 2.4.2 "Rocko" or OpenEmbedded "Rocko": This release causes your Toaster projects to build against the head of the rocko following.. 'pk' values in the example above should start at "1" and increment uniquely. You can use the same layer name in multiple releases.">rocko<..
Sometimes it is useful to determine the status of a specific build. To get the status of a specific build, use the following call, where ID is the build's numeric identifier:
http://host:port/toastergui/api/build/ID
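For example, assuming a Toaster instance running at localhost on port 8000 and a build with ID 1 (both are placeholders for your own host, port, and build ID), the status could be fetched with:
$ curl http://localhost:8000/toastergui/api/build/1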
In addition to the web user interface and the scripts that start
and stop Toaster, command-line commands exist.
perf command measures Toaster
performance.
Access the command as follows:
$ bitbake/lib/toaster/manage.py perf
The command is a sanity check that returns page loading times in order to identify performance problems.
Table of Contents
Product Index
Nature is wonderful! What better example than flowers and butterflies!
This product consists of a Generic Swallowtail Butterfly that can be morphed into other 6 different species, a Morphing Flower, a Flowers Vase and a Butterflies Display Dome.
The Butterfly comes completely rigged to facilitate its use in animation sequences and for greater flexibility creating poses. The rigging includes wings, legs, head, thorax (equivalent to the hip), abdomen, antennas, proboscis (feeding tube), etc. It also comes with 10 pose presets for each species.
The Flower comes with several morphs and material presets to allow you to change its appearance. A rigged stem allows you to pose the flower any way you want. Also included are 5 pose presets.
A simple; but beautiful Glass Vase with two translucency options is also included.
Several versions of a Display Dome are included. Two of them come with Butterflies included and posed. Other comes without Butterflies to allow you to add the species and poses that you prefer. A fourth version only includes a Glass Dome without the base.
Special Pose presets that allow you to re-size the Butterfly, Flower and Glass Vase are part of the content.
The quality of the included textures is suitable for real-life renders and close-ups. Many Iray material presets are included for all of the items.
About AMIDST¶
What is AMIDST?¶
AMIDST is an open source Java toolbox for scalable probabilistic machine learning with a special focus on (massive) streaming data. The toolbox allows specifying probabilistic graphical models with latent variables and temporal dependencies.
The main features of the tooblox are listed below:
- Probabilistic Graphical Models: Specify your model using probabilistic graphical models with latent variables and temporal dependencies. AMIDST contains a large list of predefined latent variable models:
- Scalable inference: Perform inference on your probabilistic models with powerful approximate and scalable algorithms.
- Data Streams: Update your models when new data is available. This makes our toolbox appropriate for learning from (massive) data streams.
- Large-scale Data: Use your defined models to process massive data sets in a distributed computer cluster using Apache Flink or (soon) Apache Spark.
General Information
Examples
First steps
Contributing to AMIDST
Views contain all the HTML that you’re application will use to render to the user. Unlike Django, views in Masonite are your HTML templates. All views are located inside the
resources/templates directory.
All views are rendered with Jinja2 so we can use all the Jinja2 code you are used to. An example view looks like:
resources/templates/helloworld.html
<html>
<body>
<h1> Hello {{ 'world' }}</h1>
</body>
</html>
Since all views are located in
resources/templates, we can use simply create all of our views manually here or use our
craft command tool. To make a view just run:
terminal$ craft view hello
This will create a template under
resources/templates/hello.html.
The
View class is loaded into the container so we can retrieve it in our controller methods like so:
app/http/controllers/YourController.py
from masonite.view import View

def show(self, view: View):
    return view.render('dashboard')
This is exactly the same as using the helper function above. So if you choose to code more explicitly, the option is there for you.
If this looks weird to you or you are not sure how the container integrates with Masonite, make sure you read the Service Container documentation
Some views may not reside in the
resources/templates directory and may even reside in third party packages such as a dashboard package. We can locate these views by passing a
/ in front of our view.
For example as a use case we might pip install a package:
terminal$ pip install package-dashboard
and then be directed or required to return one of their views:
app/http/controllers/YourController.pydef show(self, view: View):return view.render('/package/views/dashboard')
This will look inside the
dashboard.views package for a
dashboard.html file and return that. You can obviously pass in data as usual.
Caveats
It's important to note that if you are building a third party package that integrates with Masonite that you place any
.html files inside a Python package instead of directly inside a module. For example, you should place .html files inside a file structure that looks like:
package/
    views/
        __init__.py
        index.html
        dashboard.html
setup.py
MANIFEST.in
...
ensuring there is a
__init__.py file. This is a Jinja2 limitation that says that all templates should be located in packages.
Accessing a global view such as:
app/http/controllers/YourController.pydef show(self, view: View):return view.render('/package/dashboard')
will perform an absolute import for your Masonite project. For example it will locate:
app/config/databases/...package/dashboard.html
Or find the package in a completely separate third part module. So if you are making a package for Masonite then keep this in mind of where you should put your templates.
Most of the time we’ll need to pass in data to our views. This data is passed in with a dictionary that contains a key which is the variable with the corresponding value. We can pass data to the function like so:
app/http/controllers/YourController.py
def show(self, view: View, request: Request):
    return view.render('dashboard', {'id': request.param('id')})
Remember that by passing in parameters like
Request to the controller method, we can retrieve objects from the IOC container. Read more about the IOC container in the Service Container documentation.
This will send a variable named
id to the view which can then be rendered like:
resources/templates/dashboard.html<html><body><h1> {{ id }} </h1></body></html>
Views use Jinja2 for it's template rendering. You can read about Jinja2 at the official documentation here.
Masonite also enables Jinja2 Line Statements by default which allows you to write syntax the normal way:
{% extends 'nav/base.html' %}

{% block content %}

    {% for element in variables %}
        {{ element }}
    {% endfor %}

    {% if some_variable %}
        {{ some_variable }}
    {% endif %}

{% endblock %}
Or using line statements with the
@ character:
@extends 'nav/base.html'

@block content

    @for element in variables
        {{ element }}
    @endfor

    @if some_variable
        {{ some_variable }}
    @endif

@endblock
The choice is yours on what you would like to use but keep in mind that line statements need to use only that line. Nothing can be after after or before the line.
This section requires knowledge of Service Providers and how the Service Container works. Be sure to read those documentation articles.
You can also add Jinja2 environments to the container which will be available for use in your views. This is typically done for third party packages such as Masonite Dashboard. You can extend views in a Service Provider in the boot method. Make sure the Service Provider has the
wsgi attribute set to
False. This way the specific Service Provider will not keep adding the environment on every request.
from masonite.view import Viewwsgi = False...def boot(self, view: View):view.add_environment('dashboard/templates')
By default the environment will be added using the PackageLoader Jinja2 loader but you can explicitly set which loader you want to use:
from jinja2 import FileSystemLoaderfrom masonite.view import View...wsgi = False...def boot(self, view: View):view.add_environment('dashboard/templates', loader=FileSystemLoader)
The default loader of
PackageLoader will work for most cases but if it doesn't work for your use case, you may need to change the Jinja2 loader type.
If using a
/ doesn't seem as clean to you, you can also optionally use dots:
def show(self, view: View):view.render('dashboard.user.show')
if you want to use a global view you still need to use the first
/:
def show(self, view: View):view.render('/dashboard.user.show')
There are quite a few built in helpers in your views. Here is an extensive list of all view helpers:
You can get the request class:
<p> Path: {{ request().path }} </p>
You can get the location of static assets:
If you have a configuration file like this:
config/storage.py....'s3': {'s3_client': 'sIS8shn...'...'location': ''},....
...<img src="{{ static('s3', 'profile.jpg') }}" alt="profile">...
this will render:
<img src="" alt="profile">
You can create a CSRF token hidden field to be used with forms:
<form action="/some/url" method="POST">{{ csrf_field }}<input ..></form>
You can get only the token that generates. This is useful for JS frontends where you need to pass a CSRF token to the backend for an AJAX call
<p> Token: {{ csrf_token }} </p>
You can also get the current authenticated user. This is the same as doing
request.user().
<p> User: {{ auth().email }} </p>
On forms you can typically only have either a GET or a POST because of the nature of html. With Masonite you can use a helper to submit forms with PUT or DELETE
<form action="/some/url" method="POST">{{ request_method('PUT') }}<input ..></form>
This will now submit this form as a PUT request.
You can get a route by it's name by using this method:
<form action="{{ route('route.name') }}" method="POST">..</form>
If your route contains variables you need to pass then you can supply a dictionary as the second argument.
<form action="{{ route('route.name', {'id': 1}) }}" method="POST">..</form>
or a list:
<form action="{{ route('route.name', [1]) }}" method="POST">..</form>
Another cool feature is that if the current route already contains the correct dictionary then you do not have to pass a second parameter. For example if you have a 2 routes like:
Get('/dashboard/@id', '[email protected]').name('dashboard.show'),Get('/dashhboard/@id/users', '[email protected]').name('dashhboard.users')
If you are accessing these 2 routes then the @id parameter will be stored on the user object. So instead of doing this:
<form action="{{ route('dashboard.users', {'id': 1}) }}" method="POST">..</form>
You can just leave it out completely since the
id key is already stored on the request object:
<form action="{{ route('dashboard.users') }}" method="POST">..</form>
This is useful for redirecting back to the previous page. If you supply this helper then the request.back() method will go to this endpoint. It's typically good to use this to go back to a page after a form is submitted with errors:
<form action="/some/url" method="POST">{{ back(request().path) }}</form>
Now when a form is submitted and you want to send the user back then in your controller you just have to do:
def show(self, request: Request):# Some failed validationreturn request.back()
The
request.back() method will also flash the current inputs to the session so you can get them when you land back on your template. You can get these values by using the
old() method:
<form><input type="text" name="email" value="{{ old('email') }}">...</form>
You can access the session here:
<p> Error: {{ session().get('error') }} </p>
You can sign things using your secret token:
<p> Signed: {{ sign('token') }} </p>
You can also unsign already signed string:
<p> Signed: {{ unsign('signed_token') }} </p>
This is just an alias for sign
This is just an alias for unsign
This allows you to easily fetch configuration values in your templates:
<h2> App Name: {{ config('application.name') }}</h2>
Allows you to fetch values from objects that may or may not be None. Instead of doing something like:
{% if auth() and auth().name == 'Joe' %}<p>Hello!</p>{% endif %}
You can use this helper:
{% if optional(auth()).name == 'Joe' %}<p>Hello!</p>{% endif %}
This is the normal dd helper you use in your controllers
You can use this helper to quickly add a hidden field
<form action="/" method="POST">{{ hidden('secret' name='secret-value') }}</form>
Check if a template exists
{% if exists('auth/base') %}{% extends 'auth/base.html' %}{% else %}{% extends 'base.html' %}{% endif %}
Gets a cookie:
<h2> Token: {{ cookie('token') }}</h2>
Get the URL to a location:
<form action="{{ url('/about', full=True) }}" method="POST"></form>
Below are some examples of the Jinja2 syntax which Masonite uses to build views.
It's important to note that Jinja2 statements can be rewritten with line statements and line statements are preferred in Masonite. In comparison to Jinja2 line statements evaluate the whole line, thus the name line statement.
So Jinja2 syntax looks like this:
{% if expression %}<p>do something</p>{% endif %}
This can be rewritten like this with line statement syntax:
@if expression<p>do something</p>@endif
It's important to note though that these are line statements. Meaning nothing else can be on the line when doing these. For example you CANNOT do this:
<form action="@if expression: 'something' @endif"></form>
But you could achieve that with the regular formatting:
<form action="{% if expression %} 'something' {% endif %}"></form>
Whichever syntax you choose is up to you.
You can show variable text by using
{{ }} characters:
<p>{{ variable }}</p><p>{{ 'hello world' }}</p>
If statements are similar to python but require an endif!
Line Statements:
@if expression<p>do something</p>@elif expression<p>do something else</p>@else<p>above all are false</p>@endif
Using alternative Jinja2 syntax:
{% if expression %}<p>do something</p>{% elif %}<p>do something else</p>{% else %}<p>above all are false</p>{% endif %}
For loop look similar to the regular python syntax.
Line Statements:
@for item in items<p>{{ item }}</p>@endfor
Using alternative Jinja2 syntax:
{% for item in items %}<p>{{ item }}</p>{% endfor %}
An include statement is useful for including other templates.
Line Statements:
@include 'components/errors.html'<form action="/"></form>
Using alternative Jinja2 syntax:
{% include 'components/errors.html' %}<form action="/"></form>
Any place you have repeating code you can break out and put it into an include template. These templates will have access to all variables in the current template.
This is useful for having a child template extend a parent template. There can only be 1 extends per template:
Line Statements:
@extends 'components/base.html'@block content<p> read below to find out what a block is </p>@endblock
Using alternative Jinja2 syntax:
{% extends 'components/base.html' %}{% block content %}<p> read below to find out what a block is </p>{% endblock %}
Blocks are sections of code that can be used as placeholders for a parent template. These are only useful when used with the
extends above. The "base.html" template is the parent template and contains blocks, which are defined in the child template "blocks.html".
Line Statements:
<!-- base.html -->
<html>
<body>
@block content
@endblock
</body>
</html>

<!-- blocks.html -->
@extends 'base.html'

@block content
    <p> This is content </p>
@endblock
Using alternative Jinja2 syntax:
<!-- base.html -->
<html>
<body>
{% block content %}
{% endblock %}
</body>
</html>

<!-- blocks.html -->
{% extends 'base.html' %}

{% block content %}
    <p> This is content </p>
{% endblock %}
As you see blocks are fundamental and can be defined with Jinja2 and line statements. It allows you to structure your templates and have less repeating code.
The blocks defined in the child template will be passed to the parent template.
OpenXR app best practices
You can see an example of the best practices below in BasicXrApp's OpenXRProgram.cpp file. The Run() function at the beginning captures a typical OpenXR app code flow from initialization to the event and rendering loop.
Best practices for visual quality and stability
The best practices in this section describe how to get the best visual quality and stability in any OpenXR application.
For further performance recommendations specific to HoloLens 2, see the Best practices for performance on HoloLens 2 section below.
Gamma-correct rendering
Care must be taken to ensure that your rendering pipeline is gamma-correct. When rendering to a swapchain, the render-target view format should match the swapchain format (e.g.
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB for both the swapchain format and the render-target view).
The exception is if the app's rendering pipeline does a manual sRGB conversion in shader code, in which case the app should request an sRGB swapchain format but use the linear format for the render-target view (e.g. request
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB as the swapchain format but use
DXGI_FORMAT_B8G8R8A8_UNORM as the render-target view) to prevent content from being double-gamma corrected.
Submit depth buffer for projection layers
Always use
XR_KHR_composition_layer_depth extension and submit the depth buffer together with the projection layer when submitting a frame to
xrEndFrame.
This improves hologram stability by enabling hardware depth reprojection on HoloLens 2.
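For illustration, attaching depth information to a projection view might look roughly like the following C++ sketch. The depth swapchain handle, view dimensions, and eyeIndex variable are assumptions standing in for your own frame-loop state, not part of the official sample:
// XR_KHR_composition_layer_depth: chain depth info onto each projection view.
XrCompositionLayerDepthInfoKHR depthInfo{XR_TYPE_COMPOSITION_LAYER_DEPTH_INFO_KHR};
depthInfo.subImage.swapchain = depthSwapchain;      // assumed: your depth swapchain
depthInfo.subImage.imageRect.offset = {0, 0};
depthInfo.subImage.imageRect.extent = {(int32_t)width, (int32_t)height};
depthInfo.subImage.imageArrayIndex = eyeIndex;      // slice 0 = left, 1 = right with texture arrays
depthInfo.minDepth = 0.0f;
depthInfo.maxDepth = 1.0f;
depthInfo.nearZ = 20.0f;   // nearZ > farZ signals a reversed-Z depth buffer
depthInfo.farZ = 0.1f;
projectionViews[eyeIndex].next = &depthInfo;        // submitted with the projection layer in xrEndFrame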
Choose a reasonable depth range
Prefer a narrower depth range to scope the virtual content to help hologram stability on HoloLens.
For example, the OpenXrProgram.cpp sample is using 0.1 to 20 meters.
Use reversed-Z for a more uniform depth resolution.
Note that, on HoloLens 2, using the preferred
DXGI_FORMAT_D16_UNORM depth format will help achieve better frame rate and performance, although 16-bit depth buffers provide less depth resolution than 24-bit depth buffers.
Therefore, following these best practices to make best use of the depth resolution becomes more important.
Prepare for different environment blend modes
If your application will also run on immersive headsets that completely block out the world, be sure to enumerate supported environment blend modes using
xrEnumerateEnvironmentBlendModes API, and prepare your rendering content accordingly.
For example, for a system with
XR_ENVIRONMENT_BLEND_MODE_ADDITIVE such as the HoloLens, the app should use transparent as the clear color, while for a system with
XR_ENVIRONMENT_BLEND_MODE_OPAQUE, the app should render some opaque color or some virtual room in the background.
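A minimal C++ sketch of that enumeration follows; the instance, systemId, and stereo view configuration are assumed to come from your initialization code, and includes and error checking are omitted:
// Two-call idiom; the runtime returns blend modes in its order of preference.
uint32_t count = 0;
xrEnumerateEnvironmentBlendModes(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 0, &count, nullptr);
std::vector<XrEnvironmentBlendMode> blendModes(count);
xrEnumerateEnvironmentBlendModes(instance, systemId,
    XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, count, &count, blendModes.data());

// Transparent black for additive systems (HoloLens), an opaque color otherwise.
float clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f};
if (blendModes[0] == XR_ENVIRONMENT_BLEND_MODE_OPAQUE) {
    clearColor[2] = 0.25f;  // e.g. a dark blue backdrop
    clearColor[3] = 1.0f;
}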
Choose unbounded reference space as application's root space
Applications typically establish some root world coordinate space to connect views, actions and holograms together.
Use
XR_REFERENCE_SPACE_TYPE_UNBOUNDED_MSFT when the extension is supported to establish a world-scale coordinate system, enabling your app to avoid undesired hologram drift when the user moves far (e.g. 5 meters away) from where the app starts.
Use
XR_REFERENCE_SPACE_TYPE_LOCAL as a fallback if the unbounded space extension doesn't exist.
Associate hologram with spatial anchor
When using an unbounded reference space, holograms you place directly in that reference space may drift as the user walks to distant rooms and then comes back.
For hologram users place at a discrete location in the world, create a spatial anchor using the
xrCreateSpatialAnchorSpaceMSFT extension function and position the hologram at its origin.
That will keep that hologram independently stable over time.
Support mixed reality capture
Although HoloLens 2's primary display uses additive environment blending, when the user initiates mixed reality capture, the app's rendering content will be alpha-blended with the environment video stream.
To achieve the best visual quality in mixed reality capture videos, it's best to set the
XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT in your projection layer's
layerFlags.
Best practices for performance on HoloLens 2
As a mobile device with hardware reprojection support, HoloLens 2 has stricter requirements to obtain optimal performance. There are a number of ways to submit composition data through
xrEndFrame which will result in post-processing that will have a noticeable performance penalty.
Select a swapchain format
Always enumerate supported pixel formats using
xrEnumerateSwapchainFormats, and choose the first color and depth pixel format from the runtime that the app supports, because that's what the runtime prefers for optimal performance. Note, on HoloLens 2,
DXGI_FORMAT_B8G8R8A8_UNORM_SRGB and
DXGI_FORMAT_D16_UNORM is typically the first choice to achieve better rendering performance. This preference can be different on VR headsets running on a Desktop PC, where 24-bit depth buffers have less of a performance impact.
Performance Warning: Using a format other than the primary swapchain color format will result in runtime post-processing which comes at a significant performance penalty.
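A sketch of that selection logic in C++ (the session handle and the app's preferred-format list are assumptions; includes and error checking are omitted):
uint32_t formatCount = 0;
xrEnumerateSwapchainFormats(session, 0, &formatCount, nullptr);
std::vector<int64_t> runtimeFormats(formatCount);
xrEnumerateSwapchainFormats(session, formatCount, &formatCount, runtimeFormats.data());

// Runtime order is preference order: take the first runtime format the app can render to.
const std::vector<int64_t> appFormats = {DXGI_FORMAT_B8G8R8A8_UNORM_SRGB,
                                         DXGI_FORMAT_R8G8B8A8_UNORM_SRGB};
int64_t colorFormat = 0;
for (int64_t candidate : runtimeFormats) {
    if (std::find(appFormats.begin(), appFormats.end(), candidate) != appFormats.end()) {
        colorFormat = candidate;
        break;
    }
}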
Render with recommended rendering parameters and frame timing
Always render with the recommended view configuration width/height (
recommendedImageRectWidth and
recommendedImageRectHeight from
XrViewConfigurationView), and always use
xrLocateViews API to query for the recommended view pose, fov, and other rendering parameters before rendering.
Always use the
XrFrameEndInfo.predictedDisplayTime from the latest
xrWaitFrame call when querying for poses and views.
This allows HoloLens to adjust rendering and optimize visual quality for the person who is wearing the HoloLens.
Use a single projection layer
HoloLens 2 has limited GPU power for applications to render content and a hardware compositor optimized for a single projection layer. Always using a single projection layer can help the application's framerate, hologram stability and visual quality.
Performance Warning: Submitting anything but a single protection layer will result in runtime post-processing which comes at a significant performance penalty.
Render with texture array and VPRT
Create one
xrSwapchain for both left and right eye using
arraySize=2 for color swapchain, and one for depth.
Render the left eye into slice 0 and the right eye into slice 1.
Use a shader with VPRT and instanced draw calls for stereoscopic rendering to minimize GPU load.
This also enables the runtime's optimization to achieve the best performance on HoloLens 2.
Alternatives to using a texture array, such as double-wide rendering or a separate swapchain per eye, will result in runtime post-processing which comes at a significant performance penalty.
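A sketch of the corresponding swapchain creation; the chosen format and the recommended width/height are assumed to come from xrEnumerateSwapchainFormats and XrViewConfigurationView as described above:
XrSwapchainCreateInfo createInfo{XR_TYPE_SWAPCHAIN_CREATE_INFO};
createInfo.usageFlags = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT |
                        XR_SWAPCHAIN_USAGE_SAMPLED_BIT;
createInfo.format = colorFormat;            // e.g. DXGI_FORMAT_B8G8R8A8_UNORM_SRGB
createInfo.sampleCount = 1;
createInfo.width = recommendedImageRectWidth;
createInfo.height = recommendedImageRectHeight;
createInfo.faceCount = 1;
createInfo.arraySize = 2;                   // slice 0 = left eye, slice 1 = right eye
createInfo.mipCount = 1;

XrSwapchain colorSwapchain = XR_NULL_HANDLE;
xrCreateSwapchain(session, &createInfo, &colorSwapchain);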
Avoid quad layers
Rather than submitting quad layers as composition layers with
XrCompositionLayerQuad, render the quad content directly into the projection swapchain.
Performance Warning: Providing additional layers beyond a single projection layer, such as quad layers, will result in runtime post-processing which comes at a significant performance penalty.
Mark Groves and Top-down Design
Mark just posted a blog about top-down design here:
This is so typical of Microsoft. Mark came in and said, "Show me the value!" and after throwing various TechNotes/Documentation at Mark, including those on Shadow Applications and Agile Development, Mark was asked to make it better so that others like him get it.
Therefore, please help Mark by answering the questions on his blog.
Cheers,
Ali
The Guidance module combines data from all other modules to make recommendations on the future plans for your Domino applications. The recommendation is based on a combination of the application usage, the design complexity and the business value.
The recommendation can take one of four values:
- Retain: The application is important to your organization and needs to be retained.
Guidance is updated automatically whenever any of the data on which it depends is updated. As it involves examining data for all of the databases in the system it may take a little while to run, so it runs as a background task. See Jobs for further details. The only exception is when you update the business value for a single database: in this case Adviser can immediately update the recommendation for the affected database.
Guidance Overview.
Viewing Database.
The down arrow next to the databases button allows you to list databases categorized by server, template, or guidance result.
Overriding the Automatic Recommendation.
Improving Transcription Accuracy
To get the most from Voci's Automatic Speech Recognition (ASR) solutions, it helps to understand the variables that affect transcription accuracy. Audio is the primary input to an ASR system; therefore, the quality of the audio files have a significant impact on transcription accuracy.
In general, the best audio recording practices are to:
The pages in this section elaborate on these audio recording practices.
Data access for OnmsEvent objects. The DAO provides methods to:
- Retrieve the API version from the currently configured server.
- Extract the count (or totalCount) values from response data (a convenience method for implementers).
- Return the Promise for an IFilterProcessor.
- Create an OnmsHTTPOptions object for DAO calls, given an optional filter (the filter to use).
- Return or create the PropertiesCache for this DAO; it is created if it does not exist.
- Fetch the data from a result and verify that the dataFieldName exists in the data property; if it does not exist, an exception is thrown. Arguments include the result to fetch the data from, the property name (basically result.data[dataFieldName]), the path where the result was fetched from (used for error handling), and a callback function to convert each entry from result.data[dataFieldName].
- Get the list of properties that can be used in queries.
- Get the property identified by an id if it exists (the id to search the property by), using the path to the event search properties endpoint.
- Convert a given value to a date, or undefined if it cannot be converted.
- Convert a given value to a number, or undefined if it cannot be converted.
- Control whether or not to use JSON when making ReST requests.
Use Cases and How ElastiCache Can Help
Whether serving the latest news, a Top-10 leaderboard, or a product catalog, speed is the name of the game.
Topics
In-Memory Data Store
The primary purpose of an in-memory key-value store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Most data stores have areas of data that are frequently accessed but seldom updated. Additionally, querying a database is always slower and more expensive than locating a key in a key-value pair cache. Some database queries are especially expensive to perform; even if the data requires only a relatively quick and simple query, it might still be a candidate for caching, depending on other factors.
Data and Access Pattern – Determining what to cache also involves understanding the data itself and its access patterns. For example, it doesn't make sense to cache data that changes quickly or is seldom accessed.
Gaming Leaderboards (Redis Sorted Sets)
Redis sorted sets move the computational complexity associated with leaderboards from your application to your Redis cluster.
Leaderboards, such as the Top 10 scores for a game, are computationally complex, especially with a large number of concurrent players and continually changing scores. Redis sorted sets guarantee both uniqueness and element ordering; each time a new element is added, it is reranked in real time and placed in its appropriate numeric position.
In the following diagram, you can see how an ElastiCache for Redis gaming leaderboard works.
Example - Redis Leaderboard
In this example, four gamers and their scores are entered into a sorted list using
ZADD. The command
ZREVRANGEBYSCORE lists the players
by their score, high to low. Next,
ZADD is used to update June's score
by overwriting the existing entry. Finally,
ZREVRANGEBYSCORE list the
players by their score, high to low, showing that June has moved up in the
rankings.
ZADD leaderboard 132 Robert ZADD leaderboard 231 Sandra ZADD leaderboard 32 June ZADD leaderboard 381 Adam ZREVRANGEBYSCORE leaderboard +inf -inf 1) Adam 2) Sandra 3) Robert 4) June ZADD leaderboard 232 June ZREVRANGEBYSCORE leaderboard +inf -inf 1) Adam 2) June 3) Sandra 4) Robert
The following command lets June know where she ranks among all the players. Because ranking is zero-based, ZREVRANK returns a 1 for June, who is in second position.
ZREVRANK leaderboard June 1
For more information, see the Redis Documentation on sorted sets.
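The same commands can be issued from application code. The following Python sketch uses the redis-py client; the endpoint hostname is a placeholder for your own cluster endpoint:
import redis

# Connect to the cluster endpoint (placeholder hostname).
r = redis.Redis(host="my-redis-cluster.example.cache.amazonaws.com", port=6379)

r.zadd("leaderboard", {"Robert": 132, "Sandra": 231, "June": 32, "Adam": 381})
r.zadd("leaderboard", {"June": 232})                       # overwrite June's score

print(r.zrevrange("leaderboard", 0, -1, withscores=True))  # scores from high to low
print(r.zrevrank("leaderboard", "June"))                   # 1 (zero-based: second place)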
Messaging (Redis Pub/Sub)
When you send an email message, you send it to one or more specified recipients. In the pub/sub paradigm, you send a message to a specific channel not knowing who, if anyone, receives it. Recipients of the message are those who are subscribed to the channel. For example, suppose that you subscribe to the news.sports.golf channel. You and all others subscribed to the news.sports.golf channel receive any messages published to news.sports.golf.
Redis pub/sub functionality has no relation to any key space. Therefore, it doesn't interfere on any level. In the following diagram, you can find an illustration of ElastiCache for Redis messaging.
Subscribing
To receive messages on a channel, you must subscribe to the channel. You can subscribe to a single channel, multiple specified channels, or all channels that match a pattern. To cancel a subscription, you unsubscribe from the channel specified when you subscribed to it or the same pattern you used if you subscribed using pattern matching.
Example - Subscription to a Single Channel
To subscribe to a single channel, use the SUBSCRIBE command specifying the channel you want to subscribe to. In the following example, a client subscribes to the news.sports.golf channel.
SUBSCRIBE news.sports.golf
After a while, the client cancels their subscription to the channel using the UNSUBSCRIBE command specifying the channel to unsubscribe from.
UNSUBSCRIBE news.sports.golf
Example - Subscriptions to Multiple Specified Channels
To subscribe to multiple specific channels, list the channels with the SUBSCRIBE command. In the following example, a client subscribes to both the news.sports.golf, news.sports.soccer and news.sports.skiing channels.
SUBSCRIBE news.sports.golf news.sports.soccer news.sports.skiing
To cancel a subscription to a specific channel, use the UNSUBSCRIBE command specifying the channel to unsubscribe from.
UNSUBSCRIBE news.sports.golf
To cancel subscriptions to multiple channels, use the UNSUBSCRIBE command specifying the channels to unsubscribe from.
UNSUBSCRIBE news.sports.golf news.sports.soccer
To cancel all subscriptions, use
UNSUBSCRIBE and specify each channel or
UNSUBSCRIBE without specifying any channel.
UNSUBSCRIBE news.sports.golf news.sports.soccer news.sports.skiing
or
UNSUBSCRIBE
Example - Subscriptions Using Pattern Matching
Clients can subscribe to all channels that match a pattern by using the PSUBSCRIBE command.
In the following example, a client subscribes to all sports channels. Rather than
listing
all the sports channels individually, as you do using
SUBSCRIBE, pattern
matching is used with the
PSUBSCRIBE command.
PSUBSCRIBE news.sports.*
Example Canceling Subscriptions
To cancel subscriptions to these channels, use the
PUNSUBSCRIBE command.
PUNSUBSCRIBE news.sports.*
Important
The channel string sent to a [P]SUBSCRIBE command and to the [P]UNSUBSCRIBE command
must match.
You cannot
PSUBSCRIBE to news.* and
PUNSUBSCRIBE from news.sports.* or
UNSUBSCRIBE from news.sports.golf.
Publishing
To send a message to all subscribers to a channel, use the PUBLISH command, specifying the channel and the message. The following example publishes the message, “It’s Saturday and sunny. I’m headed to the links.” to the news.sports.golf channel.
PUBLISH news.sports.golf "It's Saturday and sunny. I'm headed to the links."
A client cannot publish to a channel to which it is subscribed.
For more information, see Pub/Sub in the Redis documentation.
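From application code, the subscribe loop and the publisher look like the following Python sketch using redis-py. The hostname is a placeholder, and the publisher would normally run in a separate client or process:
import redis

r = redis.Redis(host="my-redis-cluster.example.cache.amazonaws.com", port=6379)

# Subscriber: listen on every sports channel via pattern matching.
p = r.pubsub()
p.psubscribe("news.sports.*")

# Publisher.
r.publish("news.sports.golf", "It's Saturday and sunny. I'm headed to the links.")

for message in p.listen():                  # yields confirmations, then messages
    if message["type"] == "pmessage":
        print(message["channel"], message["data"])
        break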
Recommendation Data (Redis Hashes)
Using INCR or DECR in Redis makes compiling recommendations simple. Each time a user "likes" a product, you increment an item:productID:like counter. Each time a user "dislikes" a product, you increment an item:productID:dislike counter. Using Redis hashes, you can also maintain a list of everyone who has liked or disliked a product. The following diagram illustrates an ElastiCache for Redis real-time analytics store.
Example - Likes and Dislikes
INCR item:38923:likes HSET item:38923:ratings Susan 1 INCR item:38923:dislikes HSET item:38923:ratings Tommy -1
Other Redis Uses
The blog post How to take advantage of Redis just adding it to your stack by Salvatore Sanfilippo discusses a number of common database uses and how they can be easily solved using Redis. This approach removes load from your database and improves performance.
OneBook Cheque Management helps you manage your bank transactions conveniently with the following features:
Receive Cheque:
-- also known as Receipt Book or Money Receipt Book
Menu: Voucher > Receipt
-- Single entry for receive from Multiple Customer
-- Cheque Status from Receive cheque list
-- Payment Voucher with Cheque Method
-- Single Payment Voucher with Multiple cheque Voucher to a Party
-- Single Voucher with multiple cheque for a single Bank Account
-- Cheque Status from issued cheque list
Bank Reconciliations
OneBook provides a bank reconciliation solution with statements.
Office 365 Protected Message Viewer Portal privacy statement
This privacy statement governs the Office 365 Protected Message Viewer Portal (the “Portal”) which enables you to view protected (encrypted) messages on your devices. It does not apply to other online or offline Microsoft sites, products, or services. Other privacy statements may also apply to the data you process through the Portal, such as the privacy statement for Microsoft Account (if it is used for authentication) or the privacy statement associated with your device.
Collection, processing, and use of your information
The Portal enables you to view protected messages from Office 365 from a variety of end points (e.g. desktop computers or mobile devices). An email message will arrive in your mailbox with a hyperlink to view the protected message. When you click on that hyperlink, you will be given options to sign into the Portal using O365, Microsoft Account, Gmail, or Yahoo accounts, to view that message. You also have an option to use a one-time passcode, depending on the sender’s tenant configuration. The Portal will redirect you to your email provider to authenticate you. After successful authentication, the message will be decrypted and displayed via the Portal.
Sign-in credential information for your email account, as well as the one-time passcode, will be used solely for the purpose of authentication; it will not be stored in the Portal, or used for any other purposes.
During the decryption process, the encrypted mail you receive will not be stored by the Portal; it will not be transmitted outside the Portal at any time.
For more information about privacy
Microsoft Privacy – Information Protection Microsoft Corporation One Microsoft Way Redmond, Washington 98052 USA
Changes to this statement
If we post changes to this statement, we will revise the “last updated” date at the top of the statement. Consult with the organization that provides you access to their services to learn more about changes to the privacy practices.
Running Test Software¶
To test our input stream device, we want to write an application that uses the device to write data into memory, then reads the data and prints it out.
In project-template, test software is placed in the
tests/ directory,
which includes a Makefile and library code for developing a baremetal program.
We’ll create a new file at
tests/input-stream.c with the following code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include "mmio.h"

#define N 4

#define INPUTSTREAM_BASE 0x10017000L
#define INPUTSTREAM_ADDR     (INPUTSTREAM_BASE + 0x00)
#define INPUTSTREAM_LEN      (INPUTSTREAM_BASE + 0x08)
#define INPUTSTREAM_RUNNING  (INPUTSTREAM_BASE + 0x10)
#define INPUTSTREAM_COMPLETE (INPUTSTREAM_BASE + 0x18)

uint64_t values[N];

int main(void)
{
    reg_write64(INPUTSTREAM_ADDR, (uint64_t) values);
    reg_write64(INPUTSTREAM_LEN, N * sizeof(uint64_t));
    asm volatile ("fence");
    reg_write64(INPUTSTREAM_RUNNING, 1);

    while (reg_read64(INPUTSTREAM_COMPLETE) == 0) {}
    reg_write64(INPUTSTREAM_COMPLETE, 0);

    for (int i = 0; i < N; i++)
        printf("%016lx\n", values[i]);

    return 0;
}
This program statically allocates an array for the data to be written to.
It then sets the
addr and
len registers, executes a
fence
instruction to make sure they are committed, and then sets the
running
register. It then continuously polls the
complete register until it sees
a non-zero value, at which point it knows the data has been written to memory
and is safe to read back.
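The reg_write64 and reg_read64 helpers come from the mmio.h header included by the test. They are essentially volatile loads and stores to the device's physical register addresses. A sketch of what such helpers typically look like is shown below; the actual header in project-template may differ slightly:
#include <stdint.h>

static inline void reg_write64(uintptr_t addr, uint64_t data)
{
    /* volatile forces a real store to the MMIO address on every call */
    volatile uint64_t *ptr = (volatile uint64_t *) addr;
    *ptr = data;
}

static inline uint64_t reg_read64(uintptr_t addr)
{
    /* volatile forces a real load, so polling COMPLETE re-reads the device */
    volatile uint64_t *ptr = (volatile uint64_t *) addr;
    return *ptr;
}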
To compile this program, add “input-stream” to the
PROGRAMS list in
tests/Makefile and run
make from the tests directory.
To run the program, return to the
vsim/ directory and run the simulator
executable, passing the newly compiled
input-stream.riscv executable
as an argument.
$ cd vsim
$ ./simv-example-FixedInputStreamConfig ../tests/input-stream.riscv
The program should print out
000000001002abcd
0000000034510204
0000000010329999
0000000092101222
For verilator, the command is the following:
$ cd verisim
$ ./simulator-example-FixedInputStreamConfig ../tests/input-stream.riscv
Debugging Verilog Simulation¶
If there is a bug in your hardware, one way to diagnose the issue is to generate a waveform from the simulation so that you can introspect into the design and see what values signals take over time.
In VCS, you can accomplish this with the
+vcdplusfile flag, which will
generate a VPD file that can be viewed in DVE. To use this flag, you will
need to build the debug version of the simulator executable.
$ cd vsim
$ make CONFIG=FixedInputStreamConfig debug
$ ./simv-example-FixedInputStreamConfig-debug +max-cycles=50000 +vcdplusfile=input-stream.vpd ../tests/input-stream.riscv
$ dve -full64 -vpd input-stream.vpd
The
+max-cycles flag is used to set a timeout for the simulation. This is
useful in the case the program hangs without completing.
If you are using verilator, you can generate a VCD file that can be viewed in an open source waveform viewer like GTKwave.
$ cd verisim $ make CONFIG=FixedInputStreamConfig debug $ ./simulator-example-FixedInputStreamConfig-debug +max-cycles=50000 -vinput-stream.vcd ../tests/input-stream.riscv $ gtkwave -o input-stream.vcd | https://docs.fires.im/en/latest/Developing-New-Devices/Running-Test-Software.html | 2019-02-16T06:16:52 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.fires.im |
TcpMaxConnectResponseRetransmissions
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Description
Determines how many times TCP retransmits an unanswered SYN-ACK (connection request acknowledgment). TCP retransmits acknowledgments until they are answered or until this value expires. This entry is designed to minimize the effect of denial-of-service attacks (also known as SYN flooding ) on the server.
This entry also determines, in part, whether the SYN attack protection feature of TCP is enabled. This feature detects symptoms of SYN flooding and responds by reducing the time the server spends on connection requests that it cannot acknowledge. SYN attack protection is enabled when the value of the SynAttackProtect entry is 1 and the value of this entry is at least 2.
TCP/IP adjusts the frequency of retransmissions over time. The delay between the first and second retransmission is three seconds. This delay doubles after each attempt. After the final attempt, TCP/IP waits for an interval equal to double the last delay, and then it closes the connection request.
Note
Windows 2000 does not add this entry to the registry. You can add it by editing the registry or by using a program that edits the registry.
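If you do add the entry manually, it is a REG_DWORD value under the key shown above. As an illustration only, a .reg file that sets the value to 2 (the minimum that, together with SynAttackProtect, enables SYN attack protection) might look like this; treat the exact value as an example, not a tuning recommendation:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpMaxConnectResponseRetransmissions"=dword:00000002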
This topic reviews how to install and write extensions for the CLI. Usually
called plug-ins or binary extensions, this feature allows you to extend the
default set of
oc commands available and, therefore, allows you to perform new
tasks.
A plug-in is a set of files: typically at least one plugin.yaml descriptor and one or more binary, script, or assets files.
CLI plug-ins are currently only available under the
oc plugin subcommand.
You must have:
Copy the plug-in’s plugin.yaml descriptor, binaries, scripts, and assets
files to one of the locations in the file system where
oc searches for
plug-ins.
Currently, OKD does not provide a package manager for plug-ins. Therefore, it is your responsibility to place the plug-in files in the correct location. It is recommended that each plug-in is located on its own directory.
To install a plug-in that is distributed as a compressed file, extract it to one of the locations specified in The Plug-in Loader section.
The plug-in loader is responsible for searching plug-in files, and checking if the plug-in provides the minimum amount of information required for it to run. Files placed in the correct location that do not provide the minimum amount of information (for example, an incomplete plugin.yaml descriptor) are ignored.
The loader searches the following locations, in order:
1. ${KUBECTL_PLUGINS_PATH} — if this environment variable is set, the loader uses it as the only location to look for plug-ins. If it is not present, the loader begins to search the additional locations.
2. ${XDG_DATA_DIRS}/kubectl/plugins — the plug-in loader searches one or more directories specified according to the XDG System Directory Structure specification, using the value of the XDG_DATA_DIRS environment variable.
3. The plugins subdirectory under the user's kubeconfig directory. In most cases, this is ~/.kube/plugins.
For example, KUBECTL_PLUGINS_PATH can point to several directories at once:
# Loads plugins from both /path/to/dir1 and /path/to/dir2
$ KUBECTL_PLUGINS_PATH=/path/to/dir1:/path/to/dir2 kubectl plugin -h
You can write a plug-in in any programming language or script that allows you to
write CLI commands. A plug-in does not necessarily need to have a binary
component. It could rely entirely on operating system utilities like
echo,
sed, or
grep. Alternatively, it could rely on the
oc binary.
The only strong requirement for an
oc plug-in is the plugin.yaml
descriptor file. This file is responsible for declaring at least the minimum
attributes required to register a plug-in and must be located under one of the
locations specified in the Search Order
section.
The descriptor file supports the following attributes:
name: "great-plugin" # REQUIRED: the plug-in command name, to be invoked under 'kubectl' shortDesc: "great-plugin plug-in" # REQUIRED: the command short description, for help longDesc: "" # the command long description, for help example: "" # command example(s), for help command: "./example" # REQUIRED: the command, binary, or script to invoke when running the plug-in flags: # flags supported by the plug-in - name: "flag-name" # REQUIRED for each flag: flag name shorthand: "f" # short version of the flag name desc: "example flag" # REQUIRED for each flag: flag description defValue: "extreme" # default value of the flag tree: # allows the declaration of subcommands - ... # subcommands support the same set of attributes
The preceding descriptor declares the
great-plugin plug-in, which has
one flag named
-f | --flag-name. It could be invoked as:
$ oc plugin great-plugin -f value
When the plug-in is invoked, it calls the
example binary or script, which is
located in the same directory as the descriptor file, passing a number of
arguments and environment variables. The
Accessing Runtime
Attributes section describes how the
example command accesses the flag value
and other runtime context.
It is recommended that each plug-in has its own subdirectory in the file system, preferably with the same name as the plug-in command. The directory must contain the plugin.yaml descriptor and any binary, script, asset, or other dependency it might require.
For example, the directory structure for the
great-plugin plug-in could look like
this:
~/.kube/plugins/
└── great-plugin
    ├── plugin.yaml
    └── example
In most use cases, the binary or script file you write to support the plug-in must have access to some contextual information provided by the plug-in framework. For example, if you declared flags in the descriptor file, your plug-in must have access to the user-provided flag values at runtime.
The same is true for global flags. The plug-in framework is responsible for
doing that, so plug-in writers do not need to worry about parsing arguments.
This also ensures the best level of consistency between plug-ins and regular
oc commands.
Plug-ins have access to runtime context attributes through environment variables. To access the value provided through a flag, for example, look for the value of the proper environment variable using the appropriate function call for your binary or script.
The supported environment variables are:
KUBECTL_PLUGINS_CALLER: The full path to the
oc binary that was used in the
current command invocation. As a plug-in writer, you do not have to implement
logic to authenticate and access the Kubernetes API. Instead, you can use the
value provided by this environment variable to invoke
oc and obtain the
information you need, using for example
oc get --raw=/apis.
KUBECTL_PLUGINS_CURRENT_NAMESPACE: The current namespace that is the context for this call. This is the actual namespace to be considered in namespaced operations, meaning it was already processed in terms of the precedence between what was provided through the kubeconfig, the --namespace global flag, environment variables, and so on.
KUBECTL_PLUGINS_DESCRIPTOR_*: One environment variable for every attribute declared in the plugin.yaml descriptor. For example, KUBECTL_PLUGINS_DESCRIPTOR_NAME and KUBECTL_PLUGINS_DESCRIPTOR_COMMAND.
KUBECTL_PLUGINS_GLOBAL_FLAG_*: One environment variable for every global flag supported by oc. For example, KUBECTL_PLUGINS_GLOBAL_FLAG_NAMESPACE and KUBECTL_PLUGINS_GLOBAL_FLAG_LOGLEVEL.
KUBECTL_PLUGINS_LOCAL_FLAG_*: One environment variable for every local flag declared in the plugin.yaml descriptor. For example, KUBECTL_PLUGINS_LOCAL_FLAG_FLAG_NAME for the -f | --flag-name flag declared in the preceding great-plugin example.
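Putting these together, the example script of the great-plugin plug-in could read its runtime context as sketched below. This is only an illustration: the variable name KUBECTL_PLUGINS_LOCAL_FLAG_FLAG_NAME is assumed to follow the naming pattern above for the flag-name flag, and oc get --raw=/apis is just one possible use of the caller binary:

#!/bin/bash
# Read the value of the local --flag-name flag supplied by the user.
flag_value="${KUBECTL_PLUGINS_LOCAL_FLAG_FLAG_NAME}"

# Read the namespace already resolved by the plug-in framework.
namespace="${KUBECTL_PLUGINS_CURRENT_NAMESPACE}"

echo "flag-name was set to: ${flag_value}"
echo "current namespace is: ${namespace}"

# Reuse the already-authenticated oc binary to query the API server.
"${KUBECTL_PLUGINS_CALLER}" get --raw=/apis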
The OKD source contains some plug-in examples. | https://docs.okd.io/latest/cli_reference/extend_cli.html | 2019-02-16T06:13:16 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.okd.io |
Plugins
Do you know how you can make your own Apisearch instance much more amazing that it is by default? By using Apisearch Plugins you can expand as much as you want the software by only following some simple instructions.
You will find some base plugins in the main server repository, and some other official plugins developed by our partners.
Redis Metadata Fields
Save your read-only fields in a key-value technology instead of storing it inside your search infrastructure. Populate your items with these values depending on your needs every moment.
New Relic
Integrate Apisearch with New Relic and improve your control over Apisearch.
Callbacks
Make additional callbacks before or after your calls. By doing them before you will be able to modify the input data, and by doing them after, you will be able to change the result value.
Static Tokens
Store your tokens as static values, and make their access something completely local. Avoid any external access.
Most Relevant Words
Simplify some of your searchable metadata fields by applying some filters. Reduce the size of the fields by only keeping the most valuable words.
ELK
Send your domain events to a Redis queue, in order to be consumed by an ELK framework.
Enable a plugin
By default, Apisearch comes with only the basic plugins. That means that, if you don’t configure your installation, only the Elasticsearch and Redis plugins will be enabled.
There are many ways to enable plugins in Apisearch.
Enable plugins in your app.yml
A plugin is a simple Symfony Bundle, so enabling a plugin means enabling it inside the Symfony kernel.
We use a simple
app.yml configuration file in order to create the application,
so, if you check the first part of the file you will find an array of enabled
plugins.
Add yours there.
bundles: - Apisearch\Server\ApisearchServerBundle - Apisearch\Server\ApisearchPluginsBundle - My\Own\Plugin\Namespace - My\Other\Plugin\Namespace
Enable plugins in your .env file
If you’re using the
.env to configure your installation, you can manage your
plugins by using the variable
APISEARCH_ENABLED_PLUGINS as a concatenation of
namespaces, splitted by a simple comma.
APISEARCH_GOD_TOKEN=xxx APISEARCH_PING_TOKEN=xxx APISEARCH_ENABLED_PLUGINS="My\Own\Plugin\Namespace, My\Other\Plugin\Namespace"
You can use a better format in order to make sure your environment file is easy to read and understand.
APISEARCH_GOD_TOKEN=xxx APISEARCH_PING_TOKEN=xxx APISEARCH_ENABLED_PLUGINS=" My\Own\Plugin\Namespace, My\Other\Plugin\Namespace "
Enable plugins in your docker run
If you use docker and you need to enable specific plugins when deploying a container, then you can pass these environment variables in the docker run
docker run -d \ --name "apisearch_server" \ --link "apisearch_redis" \ --link "apisearch_elasticsearch" \ -e "APISEARCH_ENABLED_PLUGINS=My\Own\Plugin\Namespace, My\Other\Plugin\Namespace" apisearch/server/new-relic
Make your own plugin
Apisearch is implemented by doing CQRS. This means that each time anyone uses Apisearch, a new command is created as a Value Object. This value should not be changed by anyone during it’s life. This objects is taken by one and only one handler, and some actions are done by this handler.
For example.
You make a query. Inside the controller, the more external layer of the
application, we create a ValueObject called
Query and we add this object into
an engine called Command Bus. This bus is like a tube, with an start and a
finale. Inside our controller we have the start, and the handler would be the
finale of the tube.
The magic part of this tube is that, between the start and the finale, we have several (as many as we want) holes, where we can intercept all the Commands that we want, read them and even change them.
By default, the CQRS pattern would say that the command should’nt be changeable, but by adding this new plugins layer, some commands can be replaced by new ValueObjects of the same class.
The project uses a specific CQRS pattern implementation, and is not to be a perfect pattern implementation project.
And there’s where Apisearch Plugins take their effect. A plugin can basically change the behavior of all actions by creating Middlewares. Let’s see an example of a Plugin with one mission. Each time a query is done, we want to add a new Filter that would allow us to only serve those items inside a group. For example, very useful for adding an extra layer of security.
Creating the plugin base
An Apisearch, by default, is a Symfony plugin. You can see some examples of how these plugins are designed in our default set of Plugins, but let’s talk about a simple bundle architecture
plugin/ | |-- DependencyInjection/ | |-- CompilerPass/ .. |-- MyPluginExtension.php |-- MyPluginConfiguration.php |-- Domain | |-- Model/ .. | |-- Middleware/ | |-- QueryMiddleware.php | |-- IndexItemsMiddleware.php | |-- DeleteItemsMiddleware.php | |-- Repository/ | |-- MyRepository.php | |-- MyInMemoryRepository.php |-- Resources/ .. |-- Redis/ | |-- MyRedisRepository.php |-- MyPluginBundle.php
As you can see, anything different than other simple bundles inside the Symfony environment.
The difference between a simple Bundle and a Apisearch plugin is an interface. As simple as it sounds.
/** * Class CallbacksPluginBundle. */ class CallbacksPluginBundle extends BaseBundle implements Plugin { /** * Get plugin name. * * @return string */ public function getPluginName(): string { return 'callbacks'; } }
The method that any plugin must implement is the
getPluginName. It will be
used mainly for enabling desired plugins when using an specific token. For
example, this one we’re building now, we could configure some tokens where this
filter will be applied by enabling this plugin. But we could have other regular
tokens with all plugins disabled.
Enabling the plugin
Of course, we need to enable the plugin. Again, same strategy that is used inside Symfony environment. Enable the Bundle in our kernel.
bundles: - Apisearch\Server\ApisearchServerBundle - Apisearch\Plugin\Callbacks\CallbacksPluginBundle
Creating a Middleware
Let’s add some action in our plugin. And to do that, we are going to create a
new middleware called
QueryMiddleware. We’re going to configure the middleware
in order to make some action ONLY when a new Query is done and the command
Query is passed through the command bus.
/** * Class QueryApplySomeFiltersMiddleware. */ class QueryApplySomeFiltersMiddleware implements PluginMiddleware { /** * Execute middleware. * * @param CommandWithRepositoryReferenceAndToken $command * @param callable $next * * @return mixed */ public function execute( CommandWithRepositoryReferenceAndToken $command, $next ) { // Do some action before the Command handler is executed. We would place // the filters here $result = $next($command); // Do some action after the Command handler is executed, and before the // value is returned to the previous middleware return $result; } /** * Events subscribed namespace. * * @return string[] */ public function getSubscribedEvents(): array { return [Query::class]; } }
As you can see, the method
getSubscribedEvents allow us to work with different
commands in the same class. But remember that different actions related to
different commands should be placed in several middleware classes.
After defined our class, we need to create the middleware service and tell
Apisearch that this is a middleware that should be executed inside a plugin. For
such action, let’s create a service definition with the tag
apisearch_plugin.middleware.
services: # # Middlewares # apisearch_plugin.my_plugin.query_apply_some_filters: class: Apisearch\Plugin\MetadataFields\Domain\Middleware\IndexItemsMiddleware arguments: - "@apisearch_plugin.metadata_fields.repository" tags: - { name: apisearch_plugin.middleware }
Apisearch will not ensure you the order between middleware classes, so you should only work with main Apisearch specification, and never have a hard dependency with other plugins.
Apisearch Commands and Queries
But what is a Command and a Query? And the most important question, what Commands and Queries can we find here in Apisearch? This project uses
A Command is this Value Object that we generate in all these first layers that we can find around Apisearch. For example, the first layer when we come from the HTTP layer is called Controller, and the first layer when we come from the console is called command. A Command is like an imperative action, and should follow the same format always, no matter the interface we’re using, for example.
So, the difference between a Command and a Query? Well, a Command is an action that will change the internal status of the application, a write operation. The status before and after applying this command could be barely different.
And can a Command have a response? Not at all. Only changes the state, and because this operation could be asynchronous, we must assume that we will not receive any response.
So, the question is, how can we retrieve information from Apisearch? Well, by using Queries. A Query is a type of Command that instead of changing the status of the application, only gets information from it and returns it.
- So, if I need to make a change into Apisearch and return it’s new value, can I do it by using Commands?
- Nopes. You can’t. A single action means a single action, and this can be a write action, what could be executed asynchronously sometime from a queue, or a read action, which in this case you would return the value instantly.
Commands
This is the main list of Apisaerch Commands. As you will see, all these actions could be applied by using a queue, in an asynchronous way, and don’t really need to return any value.
Some of these Commands application action is reduced to an specific app_id, an index, and always previous verification of a valid token.
- AddInteraction - A new interaction has been inserted in the system
- AddToken - Adds a new token
- ConfigureIndex - Configures an existing and previously created index. This action may require to stop the index some time
- CreateIndex - Creates a new index
- DeleteAllInteractions - Deletes all available interactions
- DeleteIndex - Deletes an existing index
- DeleteItems - Deletes a set of Items defined by their identifiers
- DeleteToken - Deletes an existing token
- DeleteTokens - Deletes all existing tokens
- IndexItems - Indexes a set of items.
- ResetIndex - Resets the index. Removed all existing information.
- UpdateItems - Applies some updates over the existing items
Queries
This is the main list of Apisearch Queries. As you will see, all these actions don’t change the status of the application, and only ask for some information from inside the system
Some of these Query application action is reduced to an specific app_id, an index, and always previous verification of a valid token.
- CheckHealth - Checks the cluster health
- CheckIndex - Checks if an index is available
- GetTokens - Return all the existing tokens
- Ping - Makes a simple ping
- Query - Makes a query over all the existing Items
Events in Apisearch
By using Middlewares what we do is to change the compositions of the Commands themselves or the results that the associated handlers produce. But this is only one part of the Plugins environment. What happens if we want to add an specific action when Items are indexed in the system? Here some options for you.
- Add a middleware before these items are really indexed, and pray for not to have an exception. Otherwise you’ll have inconsistent data.
- Add a middleware after these items are really indexed, but this point is specially designed for response manipulation. What if this action creates several related actions, and you need one of them? For sure, you’ll not be able to access to this data
In order to take some actions at the same moment these happen, we introduce you the Apisearch Domain Events. These are specially created to make sure you have all needed information of what happen inside the project. And Event Subscribers are the right way of subscribing to these events.
If you define Apisearch to consume events, instead of a default inline mode, in an asynchronous way, you will not recieve these events once they happen, but when they are taken from the queue
Creating an Event Subscriber
Let’s see a simple event subscriber implementation.
/** * Class MyEventSubscriber. */ class MyEventSubscriber implements EventSubscriber { /** * Subscriber should handle event. * * @param DomainEventWithRepositoryReference $domainEventWithRepositoryReference * * @return bool */ public function shouldHandleEvent(DomainEventWithRepositoryReference $domainEventWithRepositoryReference): bool { return Query::class; } /** * Handle event. * * @param DomainEventWithRepositoryReference $domainEventWithRepositoryReference */ public function handle(DomainEventWithRepositoryReference $domainEventWithRepositoryReference) { // Do something when a new Query is handled by Apisearch $repositoryReference = $domainEventWithRepositoryReference->getRepositoryReference(); $domainEvent = $domainEventWithRepositoryReference->getDomainEvent(); } }
The first method would return if, given a Domain Event, the subscribe is valid. One Domain Event, one subscriber. That will help us the split between files all our domain logic.
After we created the class, we need to publish it as an event subscriber.
services: my_service: class: My\Event\Subscriber tags: - { name: apisearch_server.domain_event_subscriber }
Domain Events
These are the available Domain Events inside Apisearch.
- IndexWasConfigured - An existing Index has been configured
- IndexWasReset - Index has been reset and all Items inside were removed
- InteractionWasAdded - A new interaction has been added
- ItemsWereDeleted - Some items have been deleted from the engine
- ItemsWereAdded - Some items have been added in the engine
- ItemsWereUpdated - Some items have been updated
- QueryWasMade - A new Query has been done
- TokensWereDeleted - All tokens were deleted
- TokenWasAdded - A new token has been added
- TokenWasDeleted - An existing token was deleted
Edit this page! | http://docs.apisearch.io/plugins.html | 2019-02-16T06:00:03 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.apisearch.io |
procedure Create_Temp (File : out Temporary_File);
Create a new temporary file. Exception Temporary_File_Error is raised if the procedure cannot create a new temp file.
procedure Redirect_Output (To_File : in out Temporary_File);
Redirect the standard output to the file. To_File must be opened using Create_Temp.
procedure Remove_Temp (File : in out Temporary_File);
Remove the temporary file. File can be either open or closed. | http://docs.ahven-framework.com/2.3/api-ahven-temporary_output.html | 2019-02-16T05:35:02 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.ahven-framework.com |
This page mentions some tips and tricks to get the most out of Virtaal. Some hidden features mentioned here might just make you a little more productive, or help you to customise things to be exactly the way you want.
Some of these features mention how to make changes to a configuration file for Virtaal. Depending on the version of Virtaal, you might be able to do the same inside Virtaal in the Preferences dialog. You can use any text editor to make these changes. These files are stored in the following directory on your system:
Most features are available via easy shortcuts.
Close Virtaal. Then find the directory with all your settings and the file
tm.db (your translation memory database). Copy the contents of the whole
directory to the corresponding directory on the other account/computer.
To disable some functionality like autocorrect, go to the Preferences and deselect it from the list of plugins.
Alternatively, you can edit
virtaal.ini:
[plugin_state] autocorrect = disabled
You can modify the information put into the PO headers using Virtaal’s Preferences window.
For older versions, edit
virtaal.ini and look for the settings
name,
team. The field for the team can contain a description of
your team and a URL or a mailing list address – anything really.
The best way to change the language of the Virtaal interface, is to change the locale of your system. For Windows, this is done in the Control Center under the Regional Settings, for example.
Changed in version 0.7.
You can specify a language for the interface that is different from the
language of the system. To do this, first ensure that Virtaal is closed
entirely. Then open the file
virtaal.ini and edit the setting
uilang
under the
[language] heading. Note that native window dialogs
only work when this is not set, or set to the system’s language.
[language] uilang = fr
You can specify your own font settings to use in the translation area. These can be edited inside Virtaal in the Preferences.
For older versions, edit
virtaal.ini, and look for the settings
sourcefont and
targetfont. You can therefore set the font and font size
separately for the source and target language if you wish to do so. Valid
settings could look something like this:
targetfont = monospace 11 targetfont = georgia 12 targetfont = dejavu serif,italic 9
If you want to receive suggestions more often, even if they are not very
similar to what you are translating, edit
plugins.ini and lower the setting
for
min_quality.
If you want to receive more suggestions at a time, edit
plugins.ini and
increase the setting for
max_matches.
Note that you can specify these same parameters for most of the individual
sources of TM suggestions. Try adding or editing these settings in
tm.ini
under a heading corresponding to the name of the plugin.
Do you wish the suggestions from Open-Tran.eu could come faster? The speed of this translation memory plugin depends a lot on your network connection. One way that could help to make it faster, is to avoid DNS lookups. You can do that by adding open-tran.eu into your hosts file.
85.214.16.47 open-tran.eu
(or whatever the IP address of open-tran.eu is)
If you don’t know about hosts files and their syntax, it might be best not to play with this setting. | https://virtaal.readthedocs.io/en/latest/tips.html | 2019-02-16T05:13:11 | CC-MAIN-2019-09 | 1550247479885.8 | [] | virtaal.readthedocs.io |
Module math
nannou
A mathematical foundation for nannou including point and vector types and a range of
helper/utility functions.
pub extern crate approx;
pub extern crate cgmath;
Numeric traits for generic mathematics
This module contains the most common traits used in cgmath. By
glob-importing this module, you can avoid the need to import each trait
individually, while still being selective about what types you import.
cgmath
A two-dimensional rotation matrix.
A three-dimensional rotation matrix.
A generic transformation consisting of a rotation,
displacement vector and scale amount.
An angle, in degrees.
A set of Euler angles representing a rotation in three-dimensional space.
A 2 x 2, column major matrix
A 3 x 3, column major matrix
A 4 x 4, column major matrix
An orthographic projection with arbitrary left/right/bottom/top distances
A perspective projection with arbitrary left/right/bottom/top distances
A perspective projection based on a vertical field-of-view angle.
A quaternion in scalar/vector
form.
An angle, in radians.
Angles and their associated trigonometric functions.
An array containing elements of type Element
Element
Base floating point types
Base numeric types with partial ordering
Numbers which have upper and lower bounds
Element-wise arithmetic operations. These are supplied for pragmatic
reasons, but will usually fall outside of traditional algebraic properties.
Points in a Euclidean space
with an associated space of displacement vectors.
Generic trait for floating point numbers
Vectors that also have a dot
(or inner) product.
A column-major matrix of arbitrary dimensions.
A type with a distance function between values.
An interface for casting between machine scalars.
Defines a multiplicative identity element for Self.
Self
A trait for a generic rotation. A rotation is a transformation that
creates a circular motion, and preserves at least one point in the space.
A two-dimensional rotation.
A three-dimensional rotation.
A column-major major matrix where the rows and column vectors are of the same dimensions.
A trait representing an affine
transformation that
can be applied to points or vectors. An affine transformation is one which
Vectors that can be added
together and multiplied
by scalars.
Defines an additive identity element for Self.
Clamp a value between some range.
Convert the given angle in degrees to the same angle in radians.
Models the C++ fmod function.
Map a value from a given range to a new given range.
The max between two partially ordered values.
The min between two partially ordered values.
Convert the given angle in radians to the same angle in degrees.
Convert the given value in radians to the equivalent value as a number of turns.
Convert the given value as a number of "turns" into the equivalent angle in radians. | https://docs.rs/nannou/0.8.0/nannou/math/index.html | 2019-02-16T05:40:08 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.rs |
This function returns the Suppression State (True or False) of the specified Node in the current 3D Preview Document being displayed in the given 3D Preview Control.
PreviewGetNodeSuppressionState( [Preview Control Name], [Target Node Address] )
Where:
Preview Control Name is the name of the 3D Preview Control from which to get the nodes suppression state. The 3D Preview Control name must be entered as a string (within quotes " ") and not include any suffix (Return, Enabled, etc.).
Target Node Address is the address of the node for which to find the suppression state. | http://docs.driveworkspro.com/Topic/PreviewGetNodeSuppressionState | 2019-02-16T06:25:45 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.driveworkspro.com |
Contents Now Platform Capabilities Previous Topic Next Topic Running Help the Help Desk Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Running Help the Help Desk Help. About this task To detect all system software successfully on a 64-bit machine, make sure to run the Help the Help Desk script from a 64-bit browser. A 64-bit browser can detect both 64-bit and 32-bit software, but a 32-bit browser cannot detect 64-bit software. Procedure On your instance, navigate to Self Service > Help the Help Desk. Click Start the Scan to Help the Help Desk. You are prompted to run or save the discovery.hta script. If your browser is done, the data is sent back to your ServiceNow instance and used to populate the configuration database (CMDB). On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/employee-self-service/task/t_RunningHelpTheHelpDesk.html | 2019-02-16T05:52:47 | CC-MAIN-2019-09 | 1550247479885.8 | [] | docs.servicenow.com |
Kafka Security¶
In the 0.9.0.0 release, the Kafka community added a number of features that, used either separately or together, increase.
The Schema Registry and REST Proxy do not support Kafka’s security features yet. This is planned for a future release.
The guides below explain how to configure and use the security features in both clients and brokers.
- Encryption and Authentication using SSL
- Authentication using SASL
- Authorization and ACLs
- Adding Security to a Running Cluster
- ZooKeeper Authentication | https://docs.confluent.io/2.0.0/kafka/security.html | 2018-02-17T21:33:42 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.confluent.io |
Due by noon on Thursday, Feb 13th.
Merge hw4 into your master. Please don’t delete the ‘hw4’ branch :)
Hand in this homework on branch ‘hw5’ on your github account, and set up a pull request between hw4 and your master branch. Don’t merge it, just set up the PR.
Move all of your content-creation code (anything in server.py after reading in the request) into a new file, ‘app.py’, and refactor it to look like a WSGI application.
More specifically, follow the WSGI app specification.
Your app should work in the ref-server.py available here – you should be able to merge this into your repo by doing:
git pull hw5-wsgi
This will give you a basic ‘app’ file and a demonstration of how to run it in ref-server.py.
Once you’ve refactored your app.py code, you should get the same Web page results as from hw4, but through the WSGI server running in ref-server.py instead of with your own socket handling code.
Refactor the handle_connection function to run WSGI apps; see the WSGI server info for an example, although you’ll need to ignore some of the CGI-specific details...
Basically, what you need to do is separate out the actual HTTP request parsing from the code that generates a response (which, in any case, is now a WSGI app in app.py, right?). A few tips:
- ‘environ’ is a dictionary that you create from the HTTP request; see environ variables.
- you should fill in ‘REQUEST_METHOD’, ‘PATH_INFO’ (leave SCRIPT_NAME blank), ‘QUERY_STRING’, ‘CONTENT_TYPE’, ‘CONTENT_LENGTH’, and ‘wsgi.input’ in the environ dictionary;
- wsgi.input is that StringIO object you used in hw4 – this contains the POST data, if any; empty otherwise.
- ‘start_response’ should probably be defined within your handle connection_function (although there are other ways to do it). It is responsible for storing the status code and headers returned by the app, until the time comes to create the HTTP response.
Your logic could look something like this:
- read in entire request
- parse request headers, build environ dictionary
- define start_response
- call WSGI app with start_response and environ
- build HTTP response from results of start_response and return value of WSGI app.
The ‘simple_app’ in app.py on my hw5-wsgi branch (see app.py) is potentially a useful debugging tool; your server should work with it, as well as with the app in your app.py
Template inheritance and proper HTML.
Create a file base.html and use it as a “base template” to return proper-ish HTML, as in the jinja2 example from last year.
(Each of your HTML files should inherit from this base.html and fill in only their specific content.)
Please do specify an HTML title (<title> tag) for each page.
Your tests still pass, right?
This file can be edited directly through the Web. Anyone can update and fix errors in this document with few clicks -- no downloads needed.
For an introduction to the documentation format please see the reST primer. | http://msu-web-dev.readthedocs.io/en/latest/hw5.html | 2018-02-17T21:39:43 | CC-MAIN-2018-09 | 1518891807825.38 | [] | msu-web-dev.readthedocs.io |
Tcl8.6.7/Tk8.6.7 Documentation > Tcl Commands, version 8.6.7 >
- open — Open a file-based or command pipeline channel
- SYNOPSIS
- DESCRIPTION
- COMMAND PIPELINES
- SERIAL COMMUNICATIONS
- -mode baud,parity,data,stop
- -handshake type
- -queue
- -timeout msec
- -ttycontrol {signal boolean signal boolean ...}
- -ttystatus
- -xchar {xonChar xoffChar}
- -pollinterval msec
- -sysbuffer inSize
- -sysbuffer {inSize outSize}
- -lasterror
- SERIAL PORT SIGNALS
- ERROR CODES (Windows only)
- PORTABILITY ISSUES
- EXAMPLE
- SEE ALSO
- KEYWORDS
NAMEopen — Open a file-based or command pipeline channel
SYNOPSISopen fileName
open fileName access
open fileName access permissions
DESCRIPTIONIf.
SERIAL COMMUNICATIONSIf):
- handshake control. Note that not all handshake types maybe supported by your operating system. The type parameter is case.
- .
- When running Tcl interactively, there may be some strange interactions between the real console, if one is present, and a command.
EXAMPLEOpen a command pipeline and catch any errors:
set fl [open "| ls this_file_does_not_exist"] set data [read $fl] if {[catch {close $fl} err]} { puts "ls command failed: $err" }
SEE ALSOfile, close, filename, fconfigure, gets, read, puts, exec, pid, fopen
KEYWORDSaccess mode, append, create, file, non-blocking, open, permissions, pipeline, process, serial | http://docs.activestate.com/activetcl/8.6/tcl/TclCmd/open.html | 2018-02-17T21:46:20 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.activestate.com |
EDA simpler access to EDA tools and libraries.
For settings and options documentation, see Tools & Simulators Options
Available tools and simulators are below. EDA Playground can support many different tools. Contact us to add your EDA tool to EDA Playground.
- Commercial simulator for VHDL and SystemVerilog (VHDL simulation not yet implemented on EDA Playground)
NOTE: The synthesis tools will only process code in the right Design pane. The code in the left Testbench pane will be ignored.
For settings and options documentation, see Languages & Libraries Options
Available libraries and methodologies:
OVL Library Reference Manual
OVL Quick Reference
svlib User Guide
OVL Library Reference Manual
OVL Quick Reference
“This is a really useful web-based utility for anyone who is discussing/sharing/debugging a code segment with a colleague or a support person. Also, a very useful follow-up tool for post-training help among students or between instructor and students. Simple, easy, useful.”
—Hemendra Talesara, Verification Technologist at Synapse Design Automation Inc.
“I think EDA Playground is awesome! Great resource to learn without the hassle of setting up tools!”
—Alan Langman, Engineering Consultant
“I’ve used it a few times now to just check out some issues related to SV syntax and it’s been a big timesaver!”
—Eric White, MTS Design Engineer at AMD
“EDA Playground is sooo useful for interviews. I got a lot more feedback from being able to watch someone compile and debug errors. I would highly recommend others to use it if they are asking SV related questions.”
—Ricardo Goto, Verification Engineer
“I have recommended to use EDAPlayground.com to my team and am also trying to use it more for my debug. I find EDAPlayground.com is much easier than logging into my Unix machines.”
—Subhash Bhogadi, Verification Consultant
“I just wanted to thank you a lot for creating EDA Playground. I’ve been using it a lot lately together with StackOverflow and it makes asking and answering questions much easier.”
—Tudor Timisescu, System Verification Engineer at Infineon Technologies
New features are frequently being added to EDA Playground. Follow the updates on your favorite social media site: | http://eda-playground.readthedocs.io/en/latest/intro.html | 2018-02-17T21:30:13 | CC-MAIN-2018-09 | 1518891807825.38 | [] | eda-playground.readthedocs.io |
Delete an appointment
When you delete an appointment from a group, it no longer appears in your Calendar application.
- In a group, click Calendar > View Groups Calendar.
- Highlight an appointment.
- Press the
key > Delete > Delete.
- If you created the appointment or if you are an administrator of the group, to mark the appointment as cancelled in all members' calendars, click Yes.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/48765/839279.jsp | 2014-03-07T12:12:27 | CC-MAIN-2014-10 | 1393999642307 | [] | docs.blackberry.com |
Updated Documentation: The Online Manual has moved. All updates to the documentation are now taking place in the new Online Manual. Please see the new Online Manual, and learn how you can help, as it continues to grow and improve.
Page created in 0.24 seconds with 11 queries.
Page served by: 10.0.100.112 | http://docs.simplemachines.org/index.php?P=d1fcee4ca64b492b1a8ffe6e1c553f6a | 2014-03-07T12:08:14 | CC-MAIN-2014-10 | 1393999642307 | [] | docs.simplemachines.org |
You are viewing an older version of this section. View current production version.
AGENT-UPGRADE
MemSQL Ops has been deprecated
Please follow this guide to learn how to migrate to SingleStore tools.
Upgrades all MemSQL Ops agents to the latest version. All MemSQL Ops agents are upgraded with this command.
Usage
usage: memsql-ops agent-upgrade [--settings-file SETTINGS_FILE] [--async] [--file-path FILE_PATH | -v VERSION] [--timeout TIMEOUT] [--no-prompt] Upgrade MemSQL Ops to the newest version. optional arguments: --settings-file SETTINGS_FILE A path to a MemSQL Ops settings.conf file. If not set, we will use the file in the same directory as the MemSQL Ops binary. --async If this option is true, we will exit without waiting for the agents to be fully upgraded. --file-path FILE_PATH A .tar.gz file that contains a MemSQL Ops binary to use during the upgrade. -v VERSION, --version VERSION The version of MemSQL Ops to use. By default, we will use the latest available. --timeout TIMEOUT Number of seconds to wait for each agent to upgrade. The default is 300 seconds --no-prompt If this option is specified, we will upgrade MemSQL Ops without interactive prompt. | https://archived.docs.singlestore.com/v6.8/reference/memsql-ops-cli-reference/remote-memsql-ops-agent-management/agent-upgrade/ | 2021-11-27T11:12:19 | CC-MAIN-2021-49 | 1637964358180.42 | [] | archived.docs.singlestore.com |
Conditionals
In a playbook, you may want to execute different tasks, or have different goals, depending on the value of a fact (data about the remote system), a variable, or the result of a previous task. You may want the value of some variables to depend on the value of other variables. Or you may want to create additional groups of hosts based on whether the hosts match other criteria. You can do all of these things with conditionals.
Ansible uses Jinja2 tests and filters in conditionals. Ansible supports all the standard tests and filters, and adds some unique ones as well.
Note
There are many options to control execution flow in Ansible. You can find more examples of supported conditionals at.
Basic conditionals with
when
Conditionals based on ansible_facts
Conditions based on registered variables
Conditionals based on variables
Using conditionals in loops
-
-
Selecting variables, files, or templates based on facts
-
Basic conditionals with
when
The simplest conditional statement applies to a single task. Create the task, then add a
when statement that applies a test. The
when clause is a raw Jinja2 expression without double curly braces (see group_by_module). When you run the task or playbook, Ansible evaluates the test for all hosts. On any host where the test passes (returns a value of True), Ansible runs that task. For example, if you are installing mysql on multiple machines, some of which have SELinux enabled, you might have a task to configure SELinux to allow mysql to run. You would only want that task to run on machines that have SELinux enabled:
tasks: - name: Configure SELinux to start mysql on any port ansible.posix.seboolean: name: mysql_connect_any state: true persistent: yes when: ansible_selinux.status == "enabled" # all variables can be used directly in conditionals without double curly braces
Conditionals based on ansible_facts
Often you want to execute or skip a task based on facts. Facts are attributes of individual hosts, including IP address, operating system, the status of a filesystem, and many more. With conditionals based on facts:
-
You can install a certain package only when the operating system is a particular version.
-
You can skip configuring a firewall on hosts with internal IP addresses.
-
You can perform cleanup tasks only when a filesystem is getting full.
See Commonly-used facts for a list of facts that frequently appear in conditional statements. Not all facts exist for all hosts. For example, the ‘lsb_major_release’ fact used in an example below only exists when the lsb_release package is installed on the target host. To see what facts are available on your systems, add a debug task to your playbook:
- name: Show facts available on the system ansible.builtin.debug: var: ansible_facts
Here is a sample conditional based on a fact:
tasks: - name: Shut down Debian flavored systems ansible.builtin.command: /sbin/shutdown -t now when: ansible_facts['os_family'] == "Debian"
If you have multiple conditions, you can group them with parentheses:
tasks: - name: Shut down CentOS 6 and Debian 7 systems ansible.builtin.command: /sbin/shutdown -t now when: (ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_major_version'] == "6") or (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "7")
You can use logical operators to combine conditions. When you have multiple conditions that all need to be true (that is, a logical
and), you can specify them as a list:
tasks: - name: Shut down CentOS 6 systems ansible.builtin.command: /sbin/shutdown -t now when: - ansible_facts['distribution'] == "CentOS" - ansible_facts['distribution_major_version'] == "6"
If a fact or variable is a string, and you need to run a mathematical comparison on it, use a filter to ensure that Ansible reads the value as an integer:
tasks: - ansible.builtin.shell: echo "only on Red Hat 6, derivatives, and later" when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
Conditions based on registered variables
Often in a playbook you want to execute or skip a task based on the outcome of an earlier task. For example, you might want to configure a service after it is upgraded by an earlier task. To create a conditional based on a registered variable:
-
Register the outcome of the earlier task as a variable.
-
Create a conditional test based on the registered variable.
You create the name of the registered variable using the
register keyword. A registered variable always contains the status of the task that created it as well as any output that task generated. You can use registered variables in templates and action lines as well as in conditional
when statements. You can access the string contents of the registered variable using
variable.stdout. For example:
- name: Test play hosts: all tasks: - name: Register a variable ansible.builtin.shell: cat /etc/motd register: motd_contents - name: Use the variable in conditional statement ansible.builtin.shell: echo "motd contains the word hi" when: motd_contents.stdout.find('hi') != -1
You can use registered results in the loop of a task if the variable is a list. If the variable is not a list, you can convert it into a list, with either
stdout_lines or with
variable.stdout.split(). You can also split the lines by other fields:
- name: Registered variable usage as a loop list hosts: all tasks: - name: Retrieve the list of home directories ansible.builtin.command: ls /home register: home_dirs - name: Add home dirs to the backup spooler ansible.builtin.file: path: /mnt/bkspool/{{ item }} src: /home/{{ item }} state: link loop: "{{ home_dirs.stdout_lines }}" # same as loop: "{{ home_dirs.stdout.split() }}"
The string content of a registered variable can be empty. If you want to run another task only on hosts where the stdout of your registered variable is empty, check the registered variable’s string contents for emptiness:
- name: check registered variable for emptiness hosts: all tasks: - name: List contents of directory ansible.builtin.command: ls mydir register: contents - name: Check contents for emptiness ansible.builtin.debug: msg: "Directory is empty" when: contents.stdout == ""
Ansible always registers something in a registered variable for every host, even on hosts where a task fails or Ansible skips a task because a condition is not met. To run a follow-up task on these hosts, query the registered variable for
is skipped (not for “undefined” or “default”). See Registering variables for more information. Here are sample conditionals based on the success or failure of a task. Remember to ignore errors if you want Ansible to continue executing on a host when a failure occurs:
tasks: - name: Register a variable, ignore errors and continue ansible.builtin.command: /bin/false register: result ignore_errors: true - name: Run only if the task that registered the "result" variable fails ansible.builtin.command: /bin/something when: result is failed - name: Run only if the task that registered the "result" variable succeeds ansible.builtin.command: /bin/something_else when: result is succeeded - name: Run only if the task that registered the "result" variable is skipped ansible.builtin.command: /bin/still/something_else when: result is skipped
Note
Older versions of Ansible used
success and
fail, but
succeeded and
failed use the correct tense. All of these options are now valid.
Conditionals based on variables
You can also create conditionals based on variables defined in the playbooks or inventory. Because conditionals require boolean input (a test must evaluate as True to trigger the condition), you must apply the
| bool filter to non boolean variables, such as string variables with content like ‘yes’, ‘on’, ‘1’, or ‘true’. You can define variables like this:
vars: epic: true monumental: "yes"
With the variables above, Ansible would run one of these tasks and skip the other:
tasks: - name: Run the command if "epic" or "monumental" is true ansible.builtin.shell: echo "This certainly is epic!" when: epic or monumental | bool - name: Run the command if "epic" is false ansible.builtin.shell: echo "This certainly isn't epic!" when: not epic
If a required variable has not been set, you can skip or fail using Jinja2’s defined test. For example:
tasks: - name: Run the command if "foo" is defined ansible.builtin.shell: echo "I've got '{{ foo }}' and am not afraid to use it!" when: foo is defined - name: Fail if "bar" is undefined ansible.builtin.fail: msg="Bailing out. This play requires 'bar'" when: bar is undefined
This is especially useful in combination with the conditional import of vars files (see below). As the examples show, you do not need to use {{ }} to use variables inside conditionals, as these are already implied.
Using conditionals in loops
If you combine a
when statement with a loop, Ansible processes the condition separately for each item. This is by design, so you can execute the task on some items in the loop and skip it on other items. For example:
tasks: - name: Run with items greater than 5 ansible.builtin.command: echo {{ item }} loop: [ 0, 2, 4, 6, 8, 10 ] when: item > 5
If you need to skip the whole task when the loop variable is undefined, use the |default filter to provide an empty iterator. For example, when looping over a list:
- name: Skip the whole task when a loop variable is undefined ansible.builtin.command: echo {{ item }} loop: "{{ mylist|default([]) }}" when: item > 5
You can do the same thing when looping over a dict:
- name: The same as above using a dict ansible.builtin.command: echo {{ item.key }} loop: "{{ query('dict', mydict|default({})) }}" when: item.value > 5
You can provide your own facts, as described in Should you develop a module?. To run them, just make a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be accessible to future tasks:
tasks: - name: Gather site specific fact data action: site_facts - name: Use a custom fact ansible.builtin.command: /usr/bin/thingy when: my_custom_fact_just_retrieved_from_the_remote_system == '1234'
Conditionals with re-use
You can use conditionals with re-usable tasks files, playbooks, or roles. Ansible executes these conditional statements differently for dynamic re-use (includes) and for static re-use (imports). See Re-using Ansible artifacts for more information on re-use in Ansible.
Conditionals with imports
When you add a conditional to an import statement, Ansible applies the condition to all tasks within the imported file. This behavior is the equivalent of Tag inheritance: adding tags to multiple tasks. Ansible applies the condition to every task, and evaluates each task separately. For example, you might have a playbook called
main.yml and a tasks file called
other_tasks.yml:
# all tasks within an imported file inherit the condition from the import statement # main.yml - import_tasks: other_tasks.yml # note "import" when: x is not defined # other_tasks.yml - name: Set a variable ansible.builtin.set_fact: x: foo - name: Print a variable ansible.builtin.debug: var: x
Ansible expands this at execution time to the equivalent of:
- name: Set a variable if not defined ansible.builtin.set_fact: x: foo when: x is not defined # this task sets a value for x - name: Do the task if "x" is not defined ansible.builtin.debug: var: x when: x is not defined # Ansible skips this task, because x is now defined
Thus if
x is initially undefined, the
debug task will be skipped. If this is not the behavior you want, use an
include_* statement to apply a condition only to that statement itself.
You can apply conditions to
import_playbook as well as to the other
import_* statements. When you use this approach, Ansible returns a ‘skipped’ message for every task on every host that does not match the criteria, creating repetitive output. In many cases the group_by module can be a more streamlined way to accomplish the same objective; see Handling OS and distro differences.
Conditionals with includes
When you use a conditional on an
include_* statement, the condition is applied only to the include task itself and not to any other tasks within the included file(s). To contrast with the example used for conditionals on imports above, look at the same playbook and tasks file, but using an include instead of an import:
# Includes let you re-use a file to define a variable when it is not already defined # main.yml - include_tasks: other_tasks.yml when: x is not defined # other_tasks.yml - name: Set a variable ansible.builtin.set_fact: x: foo - name: Print a variable ansible.builtin.debug: var: x
Ansible expands this at execution time to the equivalent of:
# main.yml - include_tasks: other_tasks.yml when: x is not defined # if condition is met, Ansible includes other_tasks.yml # other_tasks.yml - name: Set a variable ansible.builtin.set_fact: x: foo # no condition applied to this task, Ansible sets the value of x to foo - name: Print a variable ansible.builtin.debug: var: x # no condition applied to this task, Ansible prints the debug statement
By using
include_tasks instead of
import_tasks, both tasks from
other_tasks.yml will be executed as expected. For more information on the differences between
include v
import see Re-using Ansible artifacts.
Conditionals with roles
There are three ways to apply conditions to roles:
-
Add the same condition or conditions to all tasks in the role by placing your
whenstatement under the
roleskeyword. See the example in this section.
-
Add the same condition or conditions to all tasks in the role by placing your
whenstatement on a static
import_rolein your playbook.
-
Add a condition or conditions to individual tasks or blocks within the role itself. This is the only approach that allows you to select or skip some tasks within the role based on your
whenstatement. To select or skip tasks within the role, you must have conditions set on individual tasks or blocks, use the dynamic
include_rolein your playbook, and add the condition or conditions to the include. When you use this approach, Ansible applies the condition to the include itself plus any tasks in the role that also have that
whenstatement.
When you incorporate a role in your playbook statically with the
roles keyword, Ansible adds the conditions you define to all the tasks in the role. For example:
- hosts: webservers roles: - role: debian_stock_config when: ansible_facts['os_family'] == 'Debian'
Selecting variables, files, or templates based on facts
Sometimes the facts about a host determine the values you want to use for certain variables or even the file or template you want to select for that host. For example, the names of packages are different on CentOS and on Debian. The configuration files for common services are also different on different OS flavors and versions. To load different variables file, templates, or other files based on a fact about the hosts:
-
name your vars files, templates, or files to match the Ansible fact that differentiates them
-
select the correct vars file, template, or file for each host with a variable based on that Ansible fact
Ansible separates variables from tasks, keeping your playbooks from turning into arbitrary code with nested conditionals. This approach results in more streamlined and auditable configuration rules because there are fewer decision points to track.
Selecting variables files based on facts
You can create a playbook that works on multiple platforms and OS versions with a minimum of syntax by placing your variable values in vars files and conditionally importing them. If you want to install Apache on some CentOS and some Debian servers, create variables files with YAML keys and values. For example:
--- # for vars/RedHat.yml apache: httpd somethingelse: 42
Then import those variables files based on the facts you gather on the hosts in your playbook:
--- - hosts: webservers remote_user: root vars_files: - "vars/common.yml" - [ "vars/{{ ansible_facts['os_family'] }}.yml", "vars/os_defaults.yml" ] tasks: - name: Make sure apache is started ansible.builtin.service: name: '{{ apache }}' state: started
Ansible gathers facts on the hosts in the webservers group, then interpolates the variable “ansible_facts[‘os_family’]” into a list of filenames. If you have hosts with Red Hat operating systems (CentOS, for example), Ansible looks for ‘vars/RedHat.yml’. If that file does not exist, Ansible attempts to load ‘vars/os_defaults.yml’. For Debian hosts, Ansible first looks for ‘vars/Debian.yml’, before falling back on ‘vars/os_defaults.yml’. If no files in the list are found, Ansible raises an error.
Selecting files and templates based on facts
You can use the same approach when different OS flavors or versions require different configuration files or templates. Select the appropriate file or template based on the variables assigned to each host. This approach is often much cleaner than putting a lot of conditionals into a single template to cover multiple OS or package versions.
For example, you can template out a configuration file that is very different between, say, CentOS and Debian:
- name: Template a file ansible.builtin.template: src: "{{ item }}" dest: /etc/myapp/foo.conf loop: "{{ query('first_found', { 'files': myfiles, 'paths': mypaths}) }}" vars: myfiles: - "{{ ansible_facts['distribution'] }}.conf" - default.conf mypaths: ['search_location_one/somedir/', '/opt/other_location/somedir/']
Commonly-used facts
The following Ansible facts are frequently used in conditionals.
ansible_facts[‘distribution’]
Possible values (sample, not complete list):
Alpine Altlinux Amazon Archlinux ClearLinux Coreos CentOS Debian Fedora Gentoo Mandriva NA OpenWrt OracleLinux RedHat Slackware SLES SMGL SUSE Ubuntu VMwareESX
ansible_facts[‘distribution_major_version’]
The major version of the operating system. For example, the value is 16 for Ubuntu 16.04.
ansible_facts[‘os_family’]
- Tips and tricks
Tips and tricks for playbooks
- Using Variables
All about variables
- User Mailing List
Have a question? Stop by the google group!
- irc.libera.chat
#ansible IRC chat channel | https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html | 2021-11-27T11:42:44 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.ansible.com |
This is home of Links Explorer for Cloud, a cloud version of popular Links Explorer Plugin for Jira Server and Data Centre. This version replicates functionality of Links Explorer for cloud version of Jira.
Links explorer is an excellent enhancement over traditional Jira issue links. This plugin enables quick navigation of issues and links from single window and thus greatly enhances efficiency of users. | https://docs.optimizory.com/plugins/viewsource/viewpagesrc.action?pageId=25494962 | 2021-11-27T11:10:05 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.optimizory.com |
Upgrading VMware NSX-T Data Center from 2.5.1 to 3.0.2 involves upgrading its components in the following order: Edges, Hosts, and Management Nodes.
Prerequisites
Perform the Pre-Upgrade Tasks.Important:
Ensure that you back up the NSX Manager before upgrading any of the NSX-T components. For more information, see Backing Up and Restoring NSX Manager.
Procedure
- Review the NSX-T Data Center Upgrade Checklist to track your work on the NSX-T Data Center upgrade process.
- Prepare your infrastructure for the NSX-T Data Center upgrade.
- Upgrade NSX-T Data Center.
- Perform Post-Upgrade Tasks to verify whether the upgrade is successful. | https://docs.vmware.com/en/VMware-Telco-Cloud-Infrastructure---Cloud-Director-Edition/1.0/telco-cloud-infrastructure-cloud-director-edition-platform-upgrade-guide-10/GUID-6C79C71A-840D-4A76-81C1-5BEB29FB06C9.html | 2021-11-27T11:42:30 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.vmware.com |
What is a Content Type
Content Types define sets of default fields for editors to add content on a Backdrop site and are the building blocks for structured authoring and content. In Backdrop, a Content Type is a pre-defined collection of data types (Fields).
One way to understand content types is to imagine needing to create a form to store information about your friends and contacts. If this were on paper, you would probably print a single long line for name, a block of several lines to write the address, a short line for the telephone number, and so on. Then you would make several copies of this form to reuse for each new contact.
If you were to do this on a Backdrop site you would create a Content Type. You would name your new Content Type (e.g. My Contacts), define the information that you wanted to store about each contact (called Fields in Backdrop), and then add those Fields (name, address, and telephone number, etc.) to that Content Type. You would then use this "My Contacts" Content Type to enter the information for each of the contacts you needed to store.
Note that you would use different types of fields for each piece of information: a field that could hold numbers for the telephone, a field which held single line data for the name and a field capable of holding multiple lines for the address.
As an example, this would be what the simple "My Contacts" form would look like to an editor:
(Read more about modifying Display Settings for Content Types) | https://docs.backdropcms.org/documentation/content-types | 2021-11-27T12:14:58 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['https://docs.backdropcms.org/files/inline-images/Screen_Shot_2016-11-03_at_2.46.57_PM.png',
None], dtype=object)
array(['https://docs.backdropcms.org/files/inline-images/Screen_Shot_2016-11-03_at_3.00.13_PM.png',
None], dtype=object)
array(['https://docs.backdropcms.org/files/inline-images/Screen_Shot_2016-11-03_at_3.02.12_PM.png',
None], dtype=object) ] | docs.backdropcms.org |
Saving a Chart
After you edit a chart, you can save it to a new or existing custom dashboard.
Minimum Required Role: Configurator (also provided by Cluster Administrator, Limited Cluster Administrator , and Full Administrator)
Users with Viewer, Limited Operator, or Operator user roles can edit charts and view the results, but cannot save them to a. | https://docs.cloudera.com/cloudera-manager/7.5.2/monitoring-and-diagnostics/topics/cm-saving-a-chart.html | 2021-11-27T11:59:59 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.cloudera.com |
Configuring API Gateway API Autodiscovery in a Mule 4 Application
To configure API Autodiscovery in your Mule 4.x application, you must meet the following requirements:
Making your API Available in API Manager
The API to which you want to pair your Mule application must be available in API Manager.
You can either:
Publish your API in Exchange and import it from there to API Manager.
See To Manage an API from Exchange.
Import it directly from your machine.
See To Import an API Instance for more details.
API Manager generates an "API ID" identifier for every imported API:
This API ID is required by the Autodiscovery element to identify the API to which you want to pair your application.
Configuring API Manager
This step is necessary to instruct API Manager to use your deployed Mule application as the API proxy itself.
For Managing type, select Basic Endpoint.
Add the Implementation URI that points to the deployed Mule application.
Select the checkmark
Check this box if you are managing this API in Mule 4 or above.
Configuring Anypoint Platform Organization Credentials in Mule Runtime
To use Autodiscovery in a Mule Application, the Runtime must start with Anypoint Platform credentials already configured.
Learn how to configure your organization credentials based on your deployment target in Configuring Organization Credentials in Mule Runtime 4.
Configuring the Autodiscovery Element in Your Mule Application
Within the code of your Mule Application, you must configure the
api-gateway:Autodiscovery element.
The Autodiscovery element points to the specific API in API Manager to which you want to pair.
<api-gateway:autodiscovery apiId="${apiId}" flowRef="your-flow-name" />
You can either replace
${apiId} with your specific value, or you can create a
config.properties file where you define it in a key-value fashion:
apiId=[your-specific-value] and reference it as
${apiId} in this configuration.
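A minimal sketch of that approach (the property value and flow name are placeholders, and the configuration-properties element is one common way in Mule 4 to load such a file; adjust to your project layout):

# config.properties
apiId=1234567

<!-- in the Mule configuration XML -->
<configuration-properties file="config.properties" />
<api-gateway:autodiscovery apiId="${apiId}" flowRef="your-flow-name" />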
You can also configure this using Anypoint Studio’s UI:
In your existing flow, select the Global Elements tab in your Mule Configuration File editor.
Click the Create button, and look for the API Autodiscovery global element.
Set the API ID and Flow Reference.
You can also choose to use the ${apiId} value and reference it from a config.properties file.
After the element is defined in the application, and the runtime is configured with your Anypoint Platform credentials, Mule Runtime will automatically track and keep up to date with the API configuration. defined in API Manager.
Changes from Mule 3.x Configuration
API Autodiscovery element has syntactically changed from Mule 3.x but its purpose remains the same. The element in a Mule Application has the following format:
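A minimal sketch of that format, using the attributes discussed on this page (the flow name is a placeholder):

<api-gateway:autodiscovery apiId="${apiId}" flowRef="your-flow-name" />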
In Mule 4, the API is identified by the API Id and a reference to a Flow where the HTTP listener is defined. Mule 4’s apiId replaces the apiName and apiVersion used to specify the Autodiscovery element in Mule 3.x and prior. | https://docs.mulesoft.com/api-manager/2.x/configure-autodiscovery-4-task | 2021-11-27T11:06:57 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['_images/api-id.png', 'api id'], dtype=object)
f5networks.f5_modules.bigip_profile_dns – Manage DNS profiles on a BIG-IP
New in version 1.0.0: of f5networks.f5_modules
Synopsis
Manage DNS profiles on a BIG-IP. There are many DNS profile options, each with their own adjustments to the standard dns profile. Users of this module should be aware that many of the configurable options have no module default. Instead, the default is assigned by the BIG-IP system itself which, in most cases, is acceptable.

Examples

- name: Create a DNS profile
  bigip_profile_dns:
    name: foo
    enable_dns_express: no
    enable_dnssec: no
    enable_gtm: no
    process_recursion_desired: no
    use_local_bind: no
    enable_dns_firewall: yes
    provider:
      password: secret
      server: lb.mydomain.com
      user: admin
  delegate_to: localhost
Return Values
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/bigip_profile_dns_module.html | 2021-11-27T12:47:22 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.ansible.com |
The following NHibernate configuration file should be implemented:
<?xml version="1.0" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="dialect">NHibernate.Dialect.MySQLDialect</property>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="connection.driver_class">NHibernate.Driver.MySQLDataDriver</property>
    <property name="connection.connection_string">Server=localhost;Database=dotnetpersistency;User ID=root;CharSet=utf8</property>
    <!-- Disable the writing of all the SQL statements to the console -->
    <property name="show_SQL">false</property>
    <!-- Disable the validation of your persistent classes, allows using .NET properties and not getters and setters on your fields -->
    <property name="use_proxy_validator">false</property>
    <!-- This will create the tables in the database for your persistent classes according to the mapping file. -->
    <!-- If the tables are already created this will recreate them and clear the data -->
    <property name="hbm2ddl.auto">create</property>
  </session-factory>
</hibernate-configuration>
Standalone Mode
The standalone processing unit container allows you to run a processing unit in standalone mode, which means that the processing unit constructs its own dedicated classloader and automatically includes in its classpath all of the jar files located under the processing unit's lib directory.
It’s implementation class is StandaloneProcessingUnitContainer.
The standalone processing unit container is built around Spring's ApplicationContext with several extensions relevant to GigaSpaces, such as ClusterInfo. It contains a main() method and can be started from an external script or programmatically by using the ProcessingUnitContainerProvider abstraction.
It should be used when you would like to run your processing unit in a non-managed environment (outside of the service grid) or from within your own code, and still benefit from the automatic classpath creation as with the managed mode.
Executable StandaloneProcessingUnitContainer
The StandaloneProcessingUnitContainer provides an executable main() method that allows it to be run directly. The main() method uses the StandaloneProcessingUnitContainerProvider and command-line conventions in order to create the StandaloneProcessingUnitContainer. A required parameter is the processing unit location, pointing to the file-system location of the processing unit directory. The following is a list of all the possible command-line parameters available:
The StandaloneProcessingUnitContainer class provides an executable main() method, allowing you to run it directly via a shell script for example. The main() method uses the StandaloneProcessingUnitContainerProvider class and program arguments in order to create the StandaloneProcessingUnitContainer. The following is a list of all the possible program arguments that can be specified to the StandaloneProcessingUnitContainer:
Starting the Standalone Processing Unit Container via the puInstance Shell Script
GigaSpaces comes with the
puInstance shell script, which uses the
StandaloneProcessingUnitContainer in order to run a processing unit directly from the command line.
Here are some examples of using the
puInstance script in order to run a processing unit:
puInstance.sh -cluster schema=partitioned total_members=2 id=1 data-processor.jar
puInstance.bat -cluster schema=partitioned total_members=2 id=1 data-processor.jar
The above example starts a processing unit (which includes an embedded space) in a partitioned cluster schema, with two members and
id=1. In order to run the full cluster, another
puInstance has to be started with
id=2.
puInstance.sh -cluster schema=partitioned total_members=1,1 id=1 backup_id=1 -properties runtime.properties data-processor.jar
puInstance.bat -cluster schema=partitioned total_members=1,1 id=1 backup_id=1 -properties runtime.properties data-processor.jar
The above example starts a processing unit instance (with an embedded space) in a partitioned cluster schema, with one primary and one backup. It also uses an external properties file to inject property values at startup time.
Starting a StandaloneProcessingUnitContainer Programmatically
Here is an example of using a
ProcessingUnitContainerProvider in order to create a standalone processing unit container programmatically with two partitions:
StandaloneProcessingUnitContainerProvider provider = new StandaloneProcessingUnitContainerProvider("/usr/gigaspaces/data-processor.jar");
// provide cluster information for the specific PU instance
ClusterInfo clusterInfo = new ClusterInfo();
clusterInfo.setSchema("partitioned");
clusterInfo.setNumberOfInstances(2);
clusterInfo.setNumberOfBackups(1); // backup count truncated in the source; 1 is shown here for illustration
provider.setClusterInfo(clusterInfo);
// create and start the container
ProcessingUnitContainer container = provider.createContainer();
The StandaloneProcessingUnitContainerProvider is constructed with a file-system path to the processing unit jar file. It constructs a new classloader and adds all the jar files in the processing unit's lib directory to it automatically.
Disabling Embedded Lookup Service
The StandaloneProcessingUnitContainer automatically starts an embedded Lookup service. If you intend to use a separate Lookup service, you can disable the embedded Lookup service by setting the com.j_spaces.core.container.directory_services.jini_lus.enabled system property to false. This property can also be set within the Space definition:
<os-core:embedded-space id="space" space-
  <os-core:properties>
    <props>
      <prop key="com.j_spaces.core.container.directory_services.jini_lus.start-embedded-lus">false</prop>
    </props>
  </os-core:properties>
</os-core:embedded-space>
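When creating the container programmatically (as in the example above), the same property can also be set on the JVM before the container is created. A minimal sketch using the property name given on this page:

// Disable the embedded Lookup service for this JVM before creating the container.
System.setProperty("com.j_spaces.core.container.directory_services.jini_lus.enabled", "false");
StandaloneProcessingUnitContainerProvider provider = new StandaloneProcessingUnitContainerProvider("/usr/gigaspaces/data-processor.jar");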
The optional list of users to which the logging rules in this BEGIN LOGGING statement apply.
For logging of users authorized privileges by Vantage, the user name must be an existing user.
For logging of users authorized privileges in an LDAP directory (that is, AuthorizationSupported=yes in the authentication mechanism):
- If the user is mapped to a permanent database user object in the directory, specify the mapped database user name.
- If the user is not mapped to a permanent database user object in the directory, logging is not supported.
Absence of the BY user_name option specifies all users (those already defined to the system as well as any defined in the future while this logging directive is in effect).
If neither BY user_name nor ON keyword object_name are specified, then the specified action is logged for all users and objects throughout the system. | https://docs.teradata.com/r/UG7kfQnbU2ZiX41~Mu75kQ/5i3KJrGNhsp4LjYsWDgAOQ | 2021-11-27T12:07:21 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.teradata.com |
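For illustration only (the privilege keywords, user names, and object names below are placeholders; the exact actions you log depend on your own BEGIN LOGGING statement):

-- Log denied GRANT attempts by a specific user on one database
BEGIN LOGGING DENIALS ON EACH GRANT BY jane ON DATABASE payroll;

-- Omitting BY user_name applies the same logging rule to all users
BEGIN LOGGING DENIALS ON EACH GRANT ON DATABASE payroll;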
1.1.2 Stability Index
Chapter 3 of this document indicates the stability of each ETP protocol, using values described in the table below. This concept of assigning a stability index was borrowed from the Node.js API and is intended to allow evolutionary development of the specification, while allowing implementers to use with confidence the portions that are stable. ETP is still changing, and as it matures, certain parts are more reliable than others. | http://docs.energistics.org/ETP/ETP_TOPICS/ETP-000-004-0-C-sv1100.html | 2021-11-27T11:22:53 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.energistics.org |
Describes one or more of your VPN customer gateways.
For more information, see Amazon Web Services Site-to-Site VPN in the Amazon Web Services Site-to-Site VPN User Guide .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
describe-customer-gateways [--customer-gateway-ids <value>] [--filters <value>] [--dry-run | --no-dry-run] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--customer-gateway-ids (list)
One or more customer gateway IDs.
Default: Describes all your customer gateways.
(string).
CertificateArn -> (string)The Amazon Resource Name (ARN) for the customer gateway certificate.
State -> (string)The current state of the customer gateway (pending | available | deleting | deleted ).
Type -> (string)The type of VPN connection the customer gateway supports (ipsec.1 ).
DeviceName -> (string)The name of the customer gateway device.
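A typical invocation might look like the following (the gateway ID is a placeholder):

# Describe a specific customer gateway
aws ec2 describe-customer-gateways --customer-gateway-ids cgw-0e11f167

# Describe all customer gateways in the current region
aws ec2 describe-customer-gateways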
TS-4900 Specifications
1 Power Specifications
The TS-4900 only accepts 5V input to the board.
2 Power Consumption.
3 Temperature Specifications
The i.MX6 CPUs we provide off the shelf are either a solo industrial, solo commercial, or quad core extended temperature. The TS-4900 is designed using industrial components that will support -40C to 85C operation, but the CPU is rated to a max junction temperature rather than an ambient temperature. We expect the solo to work to 80C ambient while idle with a heatsink and open air circulation. To reach higher temperatures with this or other variants of this CPU some custom passive or active cooling may be required.
For custom builds with different CPUs these are also exposed in /sys/:
# Passive cat /sys/devices/virtual/thermal/thermal_zone0/trip_point_0_temp # Critical cat /sys/devices/virtual/thermal/thermal_zone0/trip_point_1_temp
The current CPU die temp can be read with:
cat /sys/devices/virtual/thermal/thermal_zone0/temp
When the CPU heats up past the cooling temp on a first boot, it will take no action. Once it hits the passive temperature however the kernel will reduce clocks in an attempt to passively cool the CPU. This will show a kernel message:
[ 158.454693] System is too hot. GPU3D will work at 1/64 clock.
If it cools back down below the cooling temperature it will spin back up the clocks.
[ 394.082161] Hot alarm is canceled. GPU3D clock will return to 64/64
If it continues heating to the critical temperature it will overheat and reboot. When the system boots back up u-boot will block the boot until the temperature has been reduced to the Cooling Temp+5C. This will be shown on boot with:
U-Boot 2015.04-07857-g486fa69 (Jun 03 2016 - 12:04:30) CPU: Freescale i.MX6SOLO rev1.1 at 792 MHz CPU Temperature is 105 C, too hot to boot, waiting... CPU Temperature is 102 C, too hot to boot, waiting... CPU Temperature is 99 C, too hot to boot, waiting... CPU Temperature is 90 C, too hot to boot, waiting... CPU Temperature is 86 C, too hot to boot, waiting... CPU Temperature is 84 C, too hot to boot, waiting... CPU Temperature is 80 C, too hot to boot, waiting... CPU Temperature is 80 C, too hot to boot, waiting... CPU Temperature is 80 C, too hot to boot, waiting... CPU: Temperature 78 C Reset cause: WDOG Board: TS-4900
These temp tests show how the TS-4900 functions with/without the heatsink. Note that the listed adhesive heatsink is not recommended with the i.MX6, but the data is provided as a reference for a smaller heatsink.
4 IO Specifications
The GPIO external to the unit are all nominally 3.3 V, but will vary depending on if they are CPU/FPGA pins.
The CPU pins can be adjusted in software and will have initial values in the device tree. This allows for adjustment of the drive strength and pull strength of the I/O pins. See the device tree for your kernel for further details on a specific I/O.
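As a rough illustration of where those initial values live, an i.MX6 pinmux entry in a device tree looks something like the sketch below. The pad name and the 0x1b0b0 pad-control value (which encodes drive and pull settings) are only examples and will differ for your design:

/* illustrative pinctrl group only; consult your board's device tree */
pinctrl_example: examplegrp {
    fsl,pins = <
        MX6QDL_PAD_EIM_D20__GPIO3_IO20  0x1b0b0
    >;
};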
The FPGA I/O cannot be adjusted further in software.
Refer to the iCE40 Family Datasheet for more detail on the FPGA I/O. Refer to the CPU quad or solo datasheet for further details on the CPU I/O.
5 Rail Specifications
The TS-4900 generates all rails from the the 5V input. This table does not document every rail. This will only cover those that can provide power to an external header for use in an application.
5V will allow you to bypass our regulator allowing more current, but the absolute max supply can provide 5A to the board. | https://docs.embeddedarm.com/TS-4900_Specifications | 2021-11-27T11:25:12 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.embeddedarm.com |
Architecture and concepts¶
The overall architecture of Mopidy is organized around multiple frontends and backends. The frontends use the core API. The core actor makes multiple backends work as one. The backends connect to various music sources. Both the core actor and the backends use the audio actor to play audio and control audio volume.

Frontends¶
Frontends expose Mopidy to the external world. They can implement servers for protocols like MPD and MPRIS, and they can be used to update other services when something happens in Mopidy, like the Last.fm scrobbler frontend does. See Frontend API for more details.

Audio¶
The audio actor is a thin wrapper around the parts of the GStreamer library we use. In addition to playback, it’s responsible for volume control through both GStreamer’s own volume mixers, and mixers we’ve created ourselves. If you implement an advanced backend, you may need to implement your own playback provider using the Audio API. | https://docs.mopidy.com/en/release-0.19/api/concepts/ | 2021-11-27T12:02:53 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.mopidy.com |
INFRASTRUCTURE
The INFRASTRUCTURE tab features the pages that are described in the following sections.
DEVICES
The INFRASTRUCTURE > DEVICES page provides access to all of the devices in your environment, and is the default page displayed when you click the INFRASTRUCTURE tab.
NETWORKS
The INFRASTRUCTURE > NETWORKS page displays all of the IPv4 and IPv6 networks that have been modeled on your network.
PROCESSES
The INFRASTRUCTURE > PROCESSES page provides tools for monitoring processes on devices. For more information, see Process monitoring.
IP SERVICES
The INFRASTRUCTURE > IP SERVICES page provides tools for monitoring IP services on devices. For more information, see IP service monitoring.
WINDOWS SERVICES
The INFRASTRUCTURE > WINDOWS SERVICES page provides tools for monitoring Windows Services on devices. For more information, refer to the Microsoft Windows ZenPack documentation.
NETWORK MAP
The INFRASTRUCTURE > NETWORK MAP page displays a representation of your network's layer 3 topology. For more information, see Network map.
MANUFACTURERS
The INFRASTRUCTURE > MANUFACTURERS page displays information about the products, by manufacturer, that are present in your environment. | https://docs.zenoss.io/cz/ui/infrastructure.html | 2021-11-27T11:44:26 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.zenoss.io |
qxl_dev_nr¶
Description¶
Sets the number of display devices available through SPICE. This is only valid when qxl is set.
The default configuration enables a single display device:
qxl_dev_nr = 1
Note that due to a limitation in the current Autotest code (see client/virt/kvm_vm.py) this setting is only applied when the QEMU syntax is:
# qemu -qxl 2
and not applied when the syntax is:
# qemu -vga qxl | https://avocado-vt.readthedocs.io/en/stable/cartesian/CartesianConfigReference-KVM-qxl_dev_nr.html | 2021-11-27T10:37:21 | CC-MAIN-2021-49 | 1637964358180.42 | [] | avocado-vt.readthedocs.io |
Model a GitLab basic CI pipeline in DevOps

Starting with version 1.16, model a GitLab basic CI pipeline by mapping the pipeline to an app, and mapping DevOps pipeline steps to GitLab pipeline jobs.

Before you begin

Role required: sn_devops.admin

Procedure

1. Map your pipeline to an app.
   - Navigate to DevOps > Apps & Pipelines > Apps and open the application record to associate with the pipeline.
   - In the Pipelines related list, click Edit... to select a pipeline to associate with the app, or click New to create the pipeline.
   - For a new pipeline, fill in the Orchestration pipeline field using the group name, subgroup name (if applicable), and project name as specified in GitLab. For example, My Group/My SubGroup/My Project. If a project is not under a group, simply specify My Project.
   - Click Submit.
2. Open the pipeline record again and create DevOps steps to map to each GitLab pipeline job so an orchestration task can be created. Steps can be created in one of the following ways.
   - Starting with version 1.18, automatically create and map pipeline steps in DevOps by running your GitLab pipeline. Pipeline steps are automatically created, mapped, and associated when DevOps receives step notifications from your GitLab pipeline during the run.
   - Manually create and map each pipeline step to a GitLab pipeline job. In the Steps related list, click New to create a DevOps step for each GitLab pipeline job (Orchestration stage field).
   Note: The Orchestration stage field value of each step is case-sensitive and must match the original name of the corresponding GitLab pipeline job.
   - Name: Name of the pipeline step.
   - Pipeline: Pipeline in which the step is configured.
   - Type: Pipeline step type. One of: Build and Test, Test, Deploy, Deploy and Test, Manual, Prod Deploy.
   - Order: Order in which the steps are run. Note: The step order determines the order of the cards in the Pipeline UI. Starting with version 1.18, the order of the cards in the Pipeline UI is by task execution.
   - Orchestration stage: GitLab pipeline job name (case-sensitive). Note: For step association with GitLab CI pipeline jobs, the Orchestration stage field must be configured.
   - Business service: Configuration service that applies to the step.
3. Once orchestration tasks are created, associate each orchestration task in the Orchestration Tasks related list with a DevOps pipeline step.
4. (Optional) Select the Change control check box in a step to enable change acceleration and the corresponding configuration fields. Note: ServiceNow Change Management must be installed for change acceleration.
   - Change receipt: Starting with version 1.20, select to enable change receipt for the step so the pipeline doesn't pause when a change request is created. All pipeline data is included in the change, but approval is not required for the pipeline to proceed.
   - Change approval group: Approval group for the change request. The change approval group becomes the Assignment group in the DevOps change request. Note: Ensure that the selected group has members and a group manager so the approver field is not empty.
   - Change type: Change request type to create. One of: Normal (default), Standard, Emergency.
   - Template: List of templates to use to auto populate fields for Normal or Emergency change requests. Select a template or create a new one. Note: This field is shown only when Change type is Normal or Emergency.
   - Standard change template: List of standard change templates to use for Standard change requests. Note: This field is shown only when Change type is Standard and is required for the Standard change type.
   - Change controlled branches (Multibranch only): Comma-separated list of branches under change control. Wildcards are supported.
5. You can set up change control in GitLab for manual GitLab jobs.
6. For versions 1.17 and earlier, navigate to DevOps > Tools > Orchestration Tools and, in the GitLab tool record, copy the DevOps Webhook URL field value. The webhook URL contains the DevOps location for GitLab CI pipelines to send messages, including the sys_id for the tool: http://<devops.integration.user>:<integration user password>@<your-instance>.service-now.com/api/sn_devops/v1/devops/tool/{code | plan | artifact | orchestration | test}?toolId={sys_id of the GitLab tool record}

Example

Figure 1. DevOps pipeline

What to do next

Configure the GitLab pipeline for DevOps
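For reference, the GitLab job names that the Orchestration stage values map to might come from a pipeline definition like this minimal .gitlab-ci.yml sketch (stage names, job names, and scripts are only illustrative):

stages:
  - build
  - deploy

Build and Test:          # must match the DevOps step's Orchestration stage value (case-sensitive)
  stage: build
  script:
    - ./run-build-and-tests.sh

Deploy:
  stage: deploy
  script:
    - ./deploy.sh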
Natural Language Understanding

ServiceNow® Natural Language Understanding (NLU) provides an NLU Workbench and an NLU inference service that you can use to enable the system to learn and respond to human-expressed intent. By entering natural language examples into the system, you help it understand word meanings and contexts so it can infer user or system actions.

NLU terminology

In NLU parlance, these terms identify the key language components the system uses to classify, parse, and otherwise process natural language content.

- Intent: Something a user wants to do or what you want your application to handle, such as granting access.
- Utterance: A natural language example of a user intent. For example, a text string in an incident's short description, a chat entry, or an email subject line. Utterances are used to build and train intents and should therefore not include several or ambiguous meanings or intents.
- Entity: The object of, or context for, an action. For example: a laptop, a user role, or a priority level.
- System defined entity: These are predefined in an instance and have highly reusable meanings, such as date, time, and location.
- User defined entity: These are created in the system by users and can be built from words in the utterances they create.
- Common entity: A context commonly used and extracted via a pre-defined entity model, such as currency, organization, people, or quantity.
- Vocabulary: Vocabulary is used to define or overwrite word meanings. For example, you can assign the synonym "Microsoft" to the acronym "MS".

Workbench

With the nlu_admin role, you build your models in the NLU Workbench, where you create, train, test, and publish them iteratively. NLU algorithms use sample record data to identify intents and entities that are strong candidates for accurate predictions.
Breaking: #80149 - Remove $GLOBALS[‘TYPO3_CONF_VARS’][‘FE’][‘pageOverlayFields’]¶
See Issue #80149
Description¶
Impact¶
Since $GLOBALS['TYPO3_CONF_VARS']['FE']['pageOverlayFields'] was used as a filter for field names to be taken from pages_language_overlay and merged onto those fields in pages, all fields are overlaid per default.
Affected Installations¶
All installations having custom fields in table pages_language_overlay and custom settings in $GLOBALS['TYPO3_CONF_VARS']['FE']['pageOverlayFields'].
Unicly Litepaper
Version 1.0 - March 30th, 2021
Unicly - The protocol to combine, fractionalize, and trade NFTs.
Introduction
Unicly is a permissionless, community-governed protocol to combine, fractionalize, and trade NFTs. Built by NFT collectors and DeFi enthusiasts, the protocol incentivizes NFT liquidity and provides a seamless trading experience for NFT assets by bringing AMMs and yield farming into the world of NFTs.
Why fractionalize?
Buying NFTs is quite a laborious process. Fungible tokens may have thousands of buyers and sellers, but every NFT transaction depends on matching a single buyer and a single seller, which leads to low liquidity. In addition, many users are being priced out of some of the most desirable items, leading to more concentrated ownership and pent-up demand. Furthermore, it is difficult for an average investor to build a large, diverse portfolio of NFTs, which makes investing in them inherently riskier. As a result, the NFT space is experiencing accessibility issues.
Fractionalization allows more people to trade NFTs at lower price points, enabling more of us to have ownership of even the most coveted NFTs. It even allows casual investors to buy small amounts in NFT projects that they like as if they were liquid tokens.
Unicly will attract and incentivize all stakeholders in the wider blockchain ecosystem to participate in the NFT ecosystem. Just like how projects launch tokens on Uniswap, collectors and/or creators will be able to launch exciting NFT collections through Unicly, and benefit from better price discovery and more accessibility. Traders and casual investors will experience higher liquidity on a familiar AMM model. Meanwhile, yield farmers will be incentivized to earn rewards by providing liquidity.
Existing problems in NFT fractionalization
There have been a couple early efforts to fractionalize NFTs. The following are the main issues that could be identified:
Sharding single NFTs instead of collections. The market cap for a single NFT has a ceiling. However, by fractionalizing any collection of NFTs across multiple smart contracts, we can create fractionalized tokens based on NFT pools that hold significantly more value.
Un-fractionalization. Fractionalizing NFTs is just one part of the solution. Un-fractionalizing those NFTs and determining their rightful owners is challenging in its own right. An example of a prior solution is implementing a carry-on clause, where people can offer to buy-out an NFT. The issue with this model is that unless the shard holders have enough funds to counterbid, they are forced to sell to the bidder. Another example is random distribution of the NFTs to the shard holders. However, given that each NFT is unique, where even a difference in mint number can add or remove a digit to the valuation of an NFT, random distribution may not be ideal.
How Unicly works
The Unicly protocol improves upon the above issues through the following key attributes:
Enabling sharding of collections containing multiple NFTs (ERC-721 and/or ERC-1155s).
Treating every NFT within a sharded collection as a unique, irreplaceable item.
Allowing collectors to bid for specific NFTs rather than entire collections.
Treating all shards equally and never relying on randomization of reward distributions.
Sharing the proceeds from NFT disposals equally between all shard holders (or uToken holders).
Allowing shard holders to accept bids by voting with their shards as opposed to forcing them to counterbid.
Eliminating incentives for NFT collections to depreciate through selective inventory churning.
Rewarding contributors of the best NFT collections through whitelisting, which allows the pool to liquidity mine the UNIC governance token.
The following sections will delve deeper into how these attributes are implemented.
uTokens, Fractionalization & Un-Fractionalization
Anyone with NFTs can create their own uToken. At its core, a uToken is an ERC-20 token that represents a collection or bundle of NFTs. It allows users to deposit and lock any number of ERC-721 and/or ERC-1155 NFTs into the smart contract. For their uToken, users can customize the name, ticker symbol, total supply, description, and the percentage of the uTokens needed later in order to vote to unlock the NFTs.
Once the creator of the uToken adds liquidity for it, anyone has the ability to trade it, similar to how tokens can be distributed once liquidity is provided on Uniswap.
Step-by-step snippets on the uToken creation process can be found here.
Bidding
Once a uToken is issued, a standalone page for the uToken collection is created. It contains all the metadata (mentioned above) set by the issuer, and looks like the following:
NFT collectors and buyers can access the uToken collection’s page directly via its URL (likely shared by its uToken holders), which contains the smart contract address of the uToken, or find the collection through the Discover page, which lists all uToken collections.
After bidding on an NFT, users are not able to unbid for 3 days if they remain the highest bidder for that specific NFT. Afterwards, they can remove their bid as usual. If someone outbids the current highest bidder, the highest bidder may unbid and bid higher immediately.
The top bidder for each NFT at the time a collection reaches the unlock threshold wins those NFTs. Winners can claim their NFTs immediately.
If an NFT receives no bids at time of unlock, it will not be claimable and stay locked in the contract. Users must have a non-zero bid in order to be considered a winner.
uToken governance & Un-fractionalization
uToken holders have the ability to vote to unlock the collection. Once a certain percentage of the total uToken supply is voted in favor of unlocking, the NFTs are distributed to the top bidders, and the pooled ETH bids for the whole collection (at time of unlock) can be claimed proportionally by the collection's uToken holders.
Utility
uTokens are essentially governance tokens that give voting rights to its holders. uToken holders are incentivized to vote at an appropriate valuation for the NFT collection in order to maximize the amount of ETH that they can receive. They are also incentivized to promote the collection itself to bring more exposure to the NFTs and hopefully higher bids. Furthermore, uTokens also function like ETFs. uToken holders are guaranteed proportional value generated by the collection, since they are able to convert the uTokens into ETH (from the pool of bids that won each NFT in the collection) once the collection unlocks. The main difference is that a uToken’s value is backed by NFTs.
Flexibility
To give uToken creators more flexibility around their collections, the protocol allows them to deposit additional NFTs even after issuing the uTokens. This could be used by the uToken creator to push the value of the uToken up. This feature also opens doors to interesting governance models for individual uToken communities. For example, on top of standard models such as rewarding uToken holders with their own staking rewards system, uToken creators could reward their token holders by buying NFTs with proceeds and adding more assets to the collection, helping it appreciate in value.
Example Scenario
The following is an example scenario to walk through the protocol with some numbers.
Assume that Leia has created a uToken collection called uLEIA backed by 2 of Leia’s favorite NFTs. She bought the NFTs for 5 ETH each for a total of 10 ETH. Leia sets the total supply of uLEIA as 1,000 and sets the unlock threshold to 50%. She keeps 200 uLEIA to herself, and provides liquidity with 800 uLEIA and 8 ETH.
At this point, the initial price of uLEIA is 0.01 ETH. The market cap of uLEIA is 10 ETH. People can swap into uLEIA, but nobody but Leia will be able to sell beyond the initial price of 0.01 ETH.
Imagine Alice and Bob come along and acquire 200 uLEIA for 2 ETH (0.01 ETH/uLEIA) and 400 uLEIA for 6.67 ETH (0.0167 ETH/uLEIA) respectively, through the liquidity pool (the above are broad price estimates for simplicity’s sake).
The price of uLEIA after these swaps, according to the liquidity pool, would be 18.67 ETH / 200 uLEIA = 0.09335 ETH. The new market cap would be 0.09335 * 1000 = 93.35 ETH.
Assume that Carol comes across the uLEIA NFT collection and bids 45 ETH on each of the 2 NFTs. Given that Carol has the top bids, the collection is now valued (in terms of total bid amount) at 90 ETH.
Note: If Carol had bid a total of 100 ETH, the uLEIA-ETH pair would likely arbitrage up to make uLEIA’s market cap above 100 ETH, since uLEIA holders could vote to unlock the collection and earn a guaranteed 0.1 ETH per uLEIA (100 ETH divided amongst 1,000 total supply of uLEIA).
Let us go back to the original example scenario where Carol has bid a total of 90 ETH. If the collection gets unlocked at this point, each uLEIA can be converted into 0.09 ETH. Since the market price of uLEIA is 0.09335 ETH > 0.09 ETH on the AMM, one of Leia, Alice, or Bob may arbitrage the gap between total value bid vs. the uLEIA market cap by selling uLEIA on the AMM. However, there is a risk in selling, since someone could always outbid Carol and bid even higher on one of the NFTs, pushing the total valuation of the uLEIA collection up more. Due to price slippage, nobody will be able to capture excessive value from this arbitrage either.
Assume that Leia is happy with this price and votes to unlock the collection. At this point, the collection has 20% of the 50% of uLEIA needed to unlock the NFTs to the top bidder (Carol) and ETH to the uLEIA holders (Leia, Alice, and Bob). Alice sees that she needs to vote to unlock with just 300 uLEIA in order to reach the threshold to unlock the collection. By unlocking the collection, she is guaranteed to take the profit. Therefore, she arbitrages the pool price down to 0.09 ETH and votes with her remaining tokens to unlock the collection. The collection unlocks at 0.09 ETH / uLEIA, Carol claims the NFTs that she won, and the uLEIA token holders can redeem their portion of the ETH.
Leia, Alice, and Bob are all winners in this scenario. However, note that this is an example scenario with broad estimates on the numbers and several assumptions. Participants are not always guaranteed to win. With more participants, the game theory of uToken collections may look completely differently as well.
UnicSwap
UnicSwap is a fork of UniswapV2. Therefore, it uses ERC-20 LP (liquidity provider) tokens, which can be staked directly into UnicFarm for farming.
Just like Uniswap and Sushiswap, anybody will be able to add new pairs, provide liquidity, and swap between pairs.
Unicly’s fork of the classic AMM model will provide incentivized on-chain liquidity for NFTs (in the form of uTokens), while keeping the same swap & pool UI/UX that users are now very familiar with.
0.25% of all trading volume on UnicSwap is taken in fees and distributed to liquidity providers for the swap pair. An additional 0.05% of all trading volume on UnicSwap is collected in fees in order to automatically buy back the UNIC token regularly.
UnicFarm
LP token holders of whitelisted pools can stake their LP tokens on the Farm in order to earn UNIC. This is similar to Sushiswap's farming mechanism. In order to acquire these LP tokens, one must provide liquidity for whitelisted pools on UnicSwap.
Step-by-step snippets for liquidity mining / farming can be found here.
When a whitelisted uToken collection gets unlocked, the collection is automatically removed from the whitelist and the UnicFarm.
Whitelisted pools are determined by UNIC token holders through voting.
UNIC holders can convert UNIC into xUNIC to earn more UNIC. The yield comes from the UNIC bought back on UnicSwap (through the fees).
Decentralized Governance
Unicly will be fully governed by the community. There will be a voting portal on the platform where UNIC token holders will be able to vote for proposals. The protocol will mainly need the UNIC community to vote for new uToken pairs to whitelist.
2% of the total supply of UNIC will be needed to create a new proposal. 6.25% of the total supply of UNIC will be needed for a proposal to reach quorum. Of the 6.25%, the majority must be in favor of the proposal for it to pass.
All governance will occur transparently and fully onchain. Unicly’s smart contracts will be made permissionless and community-owned (through TimeLock and GovernorAlpha contracts) following initial deployments and launch.
Tokenomics & Fair Launch
Unicly will have a true fair launch. There will be no pre-mining and no pre-sale investors. 90% of UNIC tokens will only be minted via liquidity mining and 10% will progressively be allocated to the development team as they get minted over time. On day 1, 0xLeia will start with 1 token in order to vote in the first set of whitelisted pools that will enable liquidity mining. As soon as more UNIC is minted, the community will have the power to govern the platform themselves.
UNIC token distribution
Each month, the mint rate of UNIC tokens will decrease by 5%, starting at a monthly mint amount of 50,000 UNIC. Therefore, the supply will never reach 1M UNIC.
UNIC supply over time
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Update-KMSPrimaryRegion
-KeyId <String>
-PrimaryRegion <String>
-Select <String>
-PassThru <SwitchParameter>
-Force <SwitchParameter>
For example, suppose you have a primary key in us-east-1 and a replica key in eu-west-2. If you run UpdatePrimaryRegion with a PrimaryRegion value of eu-west-2, the primary key is now the key in eu-west-2, and the key in us-east-1 becomes a replica key.
-PrimaryRegion <String>
The Amazon Web Services Region of the new primary key. Enter the Region ID, such as us-east-1 or ap-southeast-2. There must be an existing replica key in this Region. When the operation completes, the multi-Region key in this Region will be the primary key.
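A minimal usage sketch (the key ID below is a placeholder for a multi-Region key ID):

# Promote the replica key in eu-west-2 to be the primary key
Update-KMSPrimaryRegion -KeyId "mrk-1234abcd12ab34cd56ef1234567890ab" -PrimaryRegion "eu-west-2"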
AWS Tools for PowerShell: 2.x.y.z | https://docs.aws.amazon.com/powershell/latest/reference/items/Update-KMSPrimaryRegion.html | 2021-11-27T11:21:15 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.aws.amazon.com |
Review changes made to risks
This topic applies to CaseWare Audit
Learn more about CaseWare Audit
The Risks tab displays a list of the risks identified in your engagement, where you can view, add, modify and delete risks.
If your documents have signoffs enabled, you can use review tools to review changes made to risks. To learn more, see Define signoff roles and schemes.
After a reviewer has signed off on your risks, the following review tools are available:
Added - Marks risks that have been added.
Modified - Marks risks that have been modified.
Deleted - Marks risks that have been deleted.
To review changes made to risks:
In the Risks tab, select the Review icon.
Turn on Review tools.
You can display different review content by selecting the appropriate checkboxes in the review tools dialog.
Note that a count displays next to each review tool. For example, if one risk was modified, the number 1 displays beside the Modified review tool.
To remove the Review tools notifications, select CLEAR NOTIFICATION.
| https://docs.caseware.com/2020/webapps/31/Audit/Explore/Audit/Review-changes-made-to-risks.htm?region=ca | 2021-11-27T12:03:08 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['/documentation_files/2020/webapps/31/Content/en/Resources//Images/CW_Audit_Logo_textonly.png',
None], dtype=object)
array(['/documentation_files/2020/webapps/31/Content/en/Resources//Images/risks-review-tools_1066x465.png',
'Review tools in the risk module.'], dtype=object) ] | docs.caseware.com |
Deprecated Features in W1
This article describes the features that have been moved, removed, or replaced in the W1 version of Business Central.
Deprecated features won't be available in future versions of Business Central, and they are deprecated for different kinds of reasons. For example, a feature may no longer be relevant, or something better may have become available. If you use a feature that is listed, either the feature itself or an extension of it, you should look for or develop an alternative.
The next sections give a brief description of the deprecated features, state what happened to the feature, and explain why. The following table gives a few examples of what we mean by moved, removed, or replaced.
This information will change with future releases, and might not include each deprecated feature.
Changes in 2022 release wave 1
.NET add-ins not using .NET Standard (Removal)
The following feature will be Removed with Business Central 2022 release wave 1.
Web Service Access Keys (Basic Auth) for Business Central Online
The following feature will be Removed with Business Central 2022 release wave 1.
Permissions defined as data
The following feature will be Removed with Business Central 2022 release wave 1.
Changes in 2021 release wave 2
Business Central app for Windows
The Business Central app that's available from the Microsoft Store is no longer supported with Business Central 2021 release wave 2.
Removal of the Business Central Server Administration tool (Warning)
The following feature will be Removed in a later release.
StartSession calls in upgrade/install context will fail
The following feature will be Removed with Business Central 2021 release wave 2.
Standard APIs, Beta version
The following feature will be Removed with Business Central 2021 release wave 2.
Automation APIs, Beta version
The following feature will be Removed with Business Central 2021 release wave 2.
Client secret authentication in integrations between Microsoft-hosted Business Central online and Microsoft Dataverse
The following feature will be Removed with Business Central 2021 release wave 2.
Legacy Outlook add-in for synchronizing data
The legacy Outlook add-in for synchronizing data, such as to-dos, contacts, and tasks, between Business Central and Outlook will be Removed with Business Central 2021 release wave 2.
Note
The feature is separate from and has no effect on the Business Central add-in for Outlook, which is described at Using Business Central as your Business Inbox in Outlook.
Changes in 2021 release wave 1
.NET add-ins not using .NET Standard (Warning)
In Business Central 2021 release wave 1, a warning shows if you include .NET add-ins that are compiled with .NET Framework and not with .NET Standard. The capability of using .NET add-ins compiled with .NET Framework will be removed in a later release.
Expose UI pages as SOAP endpoints (Warning)
In Business Central 2021 release wave 1, a warning shows if you expose UI pages as SOAP endpoints. The capability of exposing UI pages as SOAP endpoints will be removed in a later release.
OData V3
The following feature is Removed with Business Central 2021 release wave 1.
The Help Server component
The following component is Removed with Business Central 2021 release wave 1.
What does this mean?
We have simplified the story for how to deploy Help for a customer-specific solution of Business Central, and for deploying Help for an AppSource app. No matter what your solution is, deploy your solution-specific or customized Help to any website that you prefer. Out of the box, Business Central uses the Docs.microsoft.com site for the Learn more-links and contextual Help. Each customer and each partner can override this with their own Help. It's now the same for Business Central online and on-premises, so any investment on-premises carries forward if you migrate to Business Central online.
Deprecated Features in 2020 release wave 1
The following feature was marked as
obsolete:pending in 2020 release wave 1.
Best Price Calculations
When you have recorded special prices and line discounts for sales and purchases, Business Central ensures that your profit on item trade is always optimal by automatically calculating the best price on sales and purchase documents and on job and item journal lines.
Deprecated Features in 2019 release wave 2
The following sections describe the features that were deprecated in 2019 release wave 2.
The Bank Data Conversion Service
You can use the bank data conversion service from AMC to convert bank data from an XML file that you export from Business Central into a format that your bank can accept.
The Windows Client
You can use Business Central in the Windows client that is installed on your computer.
Reports 204-207
You can generate external-facing documents, such as sales invoices and order confirmations, that you send to customers as PDF files.
The reports in the 204-207 range are replaced by the following updated reports in the 1304 to 1307 range:
- 204, Sales-Quote -> 1304, Standard Sales-Quote
- 205, Order-Confirmation -> 1305, Standard Sales-Order Conf.
- 206, Sales-Invoice -> 1306, Standard Sales-Invoice
- 207, Credit-Memo -> 1307, Standard Sales-Credit Memo
Note
The No. of Copies field is removed from the new reports because of performance reasons and because selection of the quantity to print works differently on modern printers. To print a report more than once, users can list the same report multiple times on the Report Selection page for the document type in question.
User Personalizations and Profile Configurations
You can personalize pages and configure profiles by adding or removing fields, and Business Central will save your changes.
Excel COM Add-In
You can export data to an Excel workbook.
Printing Programmatically
You can print documents such as invoices automatically, without prompting the user or without the user choosing to do so.
Objects that have been marked as obsolete
Part of deprecating features is marking the objects that comprise them as "obsolete." Before we deprecate an object, we tag it as "obsolete:pending" to alert our partners of its deprecation. The object will have the tag for one year before we remove it from Business Central.
Breaking Changes
When we move, remove, or replace an object, breaking changes can occur in other apps or extensions that use the object. To help our partners identify and resolve breaking changes, we have created a Breaking Changes document that lists known issues and suggestions for what to do about them.
Features that are available only in the online version
Some features are available only under very specific circumstances, or not at all intended for use in on-premises versions of Business Central. For a list and descriptions of those features, see Features not implemented in on-premises deployments.
See Also
AlAppExtensions repository
Best Practices for Deprecation of Code in the Base App
Microsoft Timeline for Deprecating Code in Business Central | https://docs.microsoft.com/de-de/dynamics365/business-central/dev-itpro/upgrade/deprecated-features-w1 | 2021-11-27T13:27:12 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.microsoft.com |
Glossary¶
- backend
- A part of Mopidy providing music library, playlist storage and/or playback capability to the core. Mopidy have a backend for each music store or music service it supports. See Backend API for details.
- core
- The part of Mopidy that makes multiple frontends capable of using multiple backends. The core module is also the owner of the tracklist. To use the core module, see Core API.
- extension
- A Python package that can extend Mopidy with one or more backends, frontends, or GStreamer elements like mixers. See Extensions for a list of existing extensions and Extension development for how to make a new extension.
- frontend
- A part of Mopidy using the core API. Existing frontends include the MPD server, the MPRIS/D-Bus integration, the Last.fm scrobbler, and the HTTP server with JavaScript API. See Frontend API for details.
- mixer
- A GStreamer element that controls audio volume.
- tracklist
- Mopidy’s name for the play queue or current playlist. The name is inspired by the MPRIS specification. | https://docs.mopidy.com/en/release-0.19/glossary/ | 2021-11-27T11:31:30 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.mopidy.com |
Header
The header is white by default with left-aligned text logo.
Site identity (logo)
Follow the steps to configure your site identity information:
- Go to Appearance → Customize → Site Identity.
- Specify your "Site Title" and optionally the "Tagline".
- Optionally upload your main image logo (dark).
- Format:
- Supported formats: JPG, PNG, GIF and more (see get_allowed_mime_types()).
- Recommended format: SVG (can be enabled with plugins, see #24251).
- Size:
- The maximum height of desktop logo is 170px. There is no width restriction.
- The maximum height of mobile logo is 32px. There is no width restriction but it should be narrow.
- If not uploaded, your "Site Title" and "Tagline" will be displayed instead.
- Optionally upload "Mobile logo — dark" or the text logo will be displayed on mobile resolutions.
- Optionally upload "Logo — light" if you'll be using the dark desktop header or transparent header (supported only on certain page types).
- Optionally upload "Mobile logo — light" if you'll be using the dark mobile header.
Notes:
- The Theme Logo WordPress feature doesn't allow you to specify an external URL for your logo. Our three additional logo upload fields also don't allow this.
- The logo options are theme modifications. This means they are remembered for each theme separately. Therefore, if you switch your theme to a child theme all your logo fields will become blank for the newly activated child theme. You will have to set them again for the child theme.
- By default it's not possible to have an image logo and text "Site Title" and "Tagline" enabled together.
Menus
Follow the steps to create your menus:
- Create three new menus in Appearance → Menus.
- You can turn menu items into icons by specifying one of the CSS classes from the fonts page in the "CSS Classes" field.
- Attach your menus to the three locations.
- Locations:
- Header menu left
- Header menu right
- Header menu mobile
- The logo is centered if both locations (left and right) have menus.
- Lack of "Header menu left" will make the logo left-aligned.
- Lack of "Header menu right" will make the logo right-aligned.
Common issues
- Missing "CSS Classes" field.
You can enable them in "Screen Options" in the top right corner or using the "gear" icon if you're using Appearance → Customize → Menus.
Appearance → Customize → Menus → CSS Classes should be enabled.
Header options
Visit the body classes article to learn what header style modifiers are available to use in Administration → Theme Settings → Body classes.
Example configurations
| http://docs.ikonize.com/blk-black-label/theme-setup/header/ | 2021-11-27T12:13:13 | CC-MAIN-2021-49 | 1637964358180.42 | [array(['../../images/theme-setup/header-desktop-default.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-text-logo.png', None],
dtype=object)
array(['../../images/theme-setup/header-site-identity.png', None],
dtype=object)
array(['../../images/theme-setup/header-menus.png', None], dtype=object)
array(['../../images/theme-setup/header-desktop-white.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v2.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v3.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v4.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v5.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v6.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v7.png', None],
dtype=object)
array(['../../images/theme-setup/header-desktop-white-v8.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-text-logo.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-white-logo.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-white.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-black-text-logo.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-black-logo.png', None],
dtype=object)
array(['../../images/theme-setup/header-mobile-black.png', None],
dtype=object) ] | docs.ikonize.com |
To occasionally migrate data from one instance to another, you can export the XML data from one instance and import it to another.
This method ensures that all fields and values are transferred exactly. Exporting and importing data in XML files is commonly used for records created in a development instance that must be migrated with the update sets as part of a migration procedure.
Here is the list of typical files required:
Cross scope privileges <sys_scope_privilege>
Global Settings <x_inpgh_des_global_settings>
Designer is fully dependent on the following tables which needs to be migrated, too if configuration changes have been made:
Images <db_image>
Relationship Types <cmdb_rel_type>
Suggested Relationships <cmdb_rel_type_suggest>
Additional configurations done on or in relation to Designer need to be migrated, too. Those changes should be captured and migrated by UpdateSets:
- Field extensions (captured via UpdateSet)
- Dashboards
- Reports
- Workflows
- …
Migrating Diagrams between environments
There are two ways to migrate Designer Diagrams between ServiceNow environments:
- Migrating data from the diagrams table, this is recommended for migrating multiple or all diagrams and includes the diagram metadata
- Migrating the diagram content as XML data, this is recommended to move individual diagrams between systems fast. Does not include diagram metadata.
Migrating multiple or all diagrams of an instance
This approach is suited to migrate multiple diagrams or the entire diagram table content of an environment. Diagram metadata is included with the diagrams.
A – Type ‘x_inpgh_des_diagrams.list’ in the app navigator on the source environment to open the diagrams table (Hint: typing 'LIST' in capitals will open the table in a new browser tab)
B – Apply any desired filters or selections
C – Right-click on a column heading and select ‘Export > XML’
D – Download the file from the pop-up window when the export is ready
E – Type ‘x_inpgh_des_diagrams.list’ in the app navigator on the target environment to open the diagrams table
F – Right-click on a column heading and select ‘Import XML’
G – Select the previously exported and downloaded XML file from your filesystem and confirm with ‘Upload’
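As an alternative to the context-menu export in step C, ServiceNow list views can typically also be exported by opening the list URL with an XML parameter; treat the exact URL pattern as an assumption to verify on your instance:

https://<your-instance>.service-now.com/x_inpgh_des_diagrams_list.do?XML

This downloads the same unload XML that the 'Export > XML' context menu produces, which can be convenient for scripted backups.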
Migrating the diagram content XML
This approach is suited to migrate individual diagrams and transfers only the diagram content to the target environment.
No Metadata is transferred along with the diagram using this approach
This approach is a very fast way to transfer diagrams between environments that does not require admin privileges.
However, the diagram metadata, i.e. information about the diagram, e.g. its description, who created/updated it, the contributors and consumers and so on, is not exported along with the diagram.
A – Select ‘Export’ then ’XML’ on the DIAGRAM pane
B – Uncheck the ‘Compressed’ option, if you want to export it uncompressed, i.e. in human readable XML format
C – Click EXPORT
D – The XML file will download automatically once ready. Drag and drop it (compressed or uncompressed) on the canvas on the target environment (option 1)
OR
E – Navigate to ‘Edit Diagram’ on the DIAGRAM pane
F – Select ‘Add to Existing Diagram’ in the drop-down menu
G – Delete the content here and paste the clean (uncompressed) XML here and click OK (option 2) | http://docs.ins-pi.com/knowledge/migrating-designer-configuration | 2021-11-27T10:41:32 | CC-MAIN-2021-49 | 1637964358180.42 | [screenshot: 112fa4f-Designer_V5.1_export_diagram_XML.png] | docs.ins-pi.com |
20210628 - Upgrade of php fixed the page rendering issue.
| http://docs.slackware.com/start?tab_files=upload&do=media&tab_details=view&image=slackware%3Ainstall%3A37-lilo-utf-8.png&ns= | 2021-11-27T11:17:51 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.slackware.com |
ansible.builtin.apt – Manages apt-packages
Note
This module is part of ansible-core and included in all Ansible installations. In most cases, you can use the short module name apt even without specifying the collections: keyword. However, we recommend you use the FQCN for easy linking to the module documentation and to avoid conflicting with other collections that may have the same module name.
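For example, a task written with the fully qualified collection name (the package name is purely illustrative):

- name: Ensure nginx is present
  ansible.builtin.apt:
    name: nginx
    state: present
    update_cache: yes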
New in version 0.0.2 of ansible.builtin
Requirements
The below requirements are needed on the host that executes this module.
python-apt (python 2)
python3-apt (python 3)
aptitude (before 2.4)
Examples

- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed
  apt:
    name: zfsutils-linux
    state: latest
    fail_on_autoremove: yes

- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
  apt:
    name: openjdk-6-jdk
    state: latest
    install_recommends: no

- name: Update all packages to their latest version
  apt:
    name: "*"
    state: latest

- name: Upgrade the OS (apt-get dist-upgrade)
  apt:
    upgrade: dist

# Sometimes apt tasks fail because apt is locked by an autoupdate or by a race condition on a thread.
# To check for a lock file before executing, and keep trying until the lock file is released:
- name: Install packages only when the apt process is not locked
  apt:
    name: foo
    state: present
  register: apt_action
  retries: 100
  until: apt_action is success or ('Failed to lock apt for exclusive operation' not in apt_action.msg and '/var/lib/dpkg/lock' not in apt_action.msg)
Return Values
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_module.html | 2021-11-27T11:32:49 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.ansible.com |
Date: Fri, 17 Dec 1999 07:21:20 +0000
From: Mark Ovens <[email protected]>
To: Marc Wandschneider <[email protected]>
Cc: [email protected]
Subject: Re: 80x50 console mode?
Message-ID: <19991217072119.A322@marder-1>
In-Reply-To: <000001bf485b$85c2e770$230a0cd0@SHURIKEN>
References: <000001bf485b$85c2e770$230a0cd0@SHURIKEN>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On Thu, Dec 16, 1999 at 10:54:10PM -0800, Marc Wandschneider wrote:
>
> is there a way to set up console mode to do 80x50?  I'm currently actually
> using some funky looking swiss font, but would be happy to switch if
> required.
>
> kind of like the equivalent of the old DOS mode 80x50.
>
> can this be done?
>

# vidcontrol VGA_80x50

man vidcontrol for the full range of options. You can also put

allscreens_flags="VGA_80x50"

in /etc/rc.conf to set it for all console screens at boot-up. As someone pointed out yesterday, the "allscreens_flags" option only works for some vidcontrol options; you can't set fg and bg colours this way, for instance.

HTH

> Thanks!
>
> marc.
>
> To Unsubscribe: send mail to [email protected]
> with "unsubscribe freebsd-questions" in the body of the message

--
PERL has been described as "the duct tape of the Internet" and
"the Unix Swiss Army chainsaw" - Computer Shopper 12/99
________________________________________________________________
FreeBSD - The Power To Serve
My Webpage
mailto:[email protected]

To Unsubscribe: send mail to [email protected]
with "unsubscribe freebsd-questions" in the body of the message
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=3909822+0+/usr/local/www/mailindex/archive/1999/freebsd-questions/19991226.freebsd-questions | 2021-11-27T12:48:34 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.freebsd.org |
Attention: If you skipped one or more releases, please also check the release-notes of the skipped ones.
Newsletter
- Subscribe here:
Webinar
Features
Developers
System Requirements
Minimal
- Elasticsearch 6.8.5 🔥
Suggested
- Elasticsearch 7
- Base Docker Images
- livingdocs-server:
livingdocs/server-base:14.3
- livingdocs-editor:
livingdocs/editor-base:14.3
Highlights
Internal Document Links 🎉
When selecting text in the editor, the link tool in the editable toolbar allows you to link to documents instead of URLs only. The input field is a combined input and detects if you want to search, or if you entered a URL.
Attention: The delivery needs a strategy to redirect id route patterns in order for inline links to work in the delivery.
- If you search, you get back a list of internal documents
- If you paste a URL that matches one of your deliveries, the link is automatically upgraded to a document reference.
References:
- Internal Document Links PR with Screenshots and Explanations
- Internal Document Links Extended Search
- Internal Document Server PR
- Documentation
Custom Indexes / Queue Base Work 🎉
Queue Base Work
To support all kind of features which need (job-)queues (now/future), we did some base work. These are the most important changes:
- Get rid of bullmq
- Introduce Redis streams for messaging (more control/reliability/transparency)
Custom Indexes
Some customers want to have their customised Elasticsearch publication index to make customised search requests. The switch to Redis streams was necessary to make custom indexes possible. Why you could need a custom index and how to setup one, can be found here.
Migration Guide: If you already have implemented a custom index in your downstream and want to replace it with the Livingdocs custom index solution, please contact us for planning. The upgrade is not difficult, but every customer is different and therefore it needs individual planning.
References:
- Base Work - Queue Refactoring - Part I
- Base Work - Queue Refactoring - Part II
- Base Work - Redis Streams
- Visualize Redis Queue Infos - Editor
- Visualize Redis Queue Infos - Server
- Custom Elasticsearch Index
- Custom Elasticsearch Index - Documentation
- Indexing Cleanup
Cloudinary Storage Support 🎉
Besides Amazon S3, we introduced Cloudinary as a storage backend. Look into the Server PR for instructions.
References:
More Secure Authentication Workflow 🎉
We worked on the security for the user authentication in Livingdocs. Some improvements are:
- Increased security for the accessTokens
- if an accessToken is stolen, it can’t be used without a valid client cookie
- accessTokens can’t renew themselves anymore
- an accessToken is bound to a valid client and session
For more information, read here.
References:
Airship Integration for Push Notifications 🎉
We integrated Airship for push notifications.
References:
Experimental
Mobile - Inline Editing 🎉
With this release, we allow a user to add/edit components and their settings inline in the editor on a mobile device. This is an MVP, but we will gradually improve inline editing in the next few releases.
References:
Editable Teasers 🎉
Editable teasers are embedded editable livingdocs components. Technically editable teasers are doc-includes returning livingdocs components, which can be edited like any other component. For more information read the documentation and look into the example on the example-server.
Attention: Editable teasers do not work with the render pipeline v1 (which most of the customers are using at the moment). This should be fixed in an upcoming release.
References:
- Editable Teasers Editor Integration
- Base Work - Properties Panel Refactoring
- Base Work - Resolve Includes
- Example - Teaser Include on Example Server
- Documentation
Videos 🎉
We introduce videos in Livingdocs with these abilities:
- upload videos and set metadata in media library
- upload videos and set metadata in editor via drag + drop / upload button
- import videos via public API
- Add project configuration for mediaVideo MediaType
- Add new directive doc-video in a livingdocs design
In the upcoming releases we will bring in some improvements and make the video feature more stable.
References:
- Editor PR with Screenshots
- Editor PR with Improvements (drag+drop from filesystem)
- Server PR
- Documentation
Breaking Changes 🔥
Migrate the database 🔥
It’s a simple/fast migration with no expected data losses.
# run grunt migrate to update to the newest database scheme
# migration - 142-add-legacy-design-name.js
#   rename document_migrations.design_name to target_design_name
# migration - 143-drop-unused-postgres-extensions.js
#   drop unused uuid-ossp extension
# migration - 144-add-missing-primary-keys.js
#   add missing primary keys to tables to support replication better
# migration - 145-document-content-types.js
#   introduce document_content_types table to support migrations better in the future
# migration - 146-add-metadata-id-to-revisions.js
#   add document_revisions.metadata_id to support metadata to document/publication relations better in the future
# migration - 147-add-user-sessions.js
#   add user_sessions table to support new auth workflow
livingdocs-server migrate up
Drop Support for Elasticsearch < 6.8.5 🔥
🔥 The support for Elasticsearch versions < 6.8.5 has been dropped. Please update your Elasticsearch cluster to Elasticsearch >= 6.8.5.
Important! You have to update Elasticsearch to >=6.8.5 before installing the December release. How to do a rolling upgrade is documented here.
Elasticsearch Indexes 🔥
Server Configuration
- 🔥 moved server config search.articlePublicationIndexEnabled to elasticIndex.documentPublicationIndexEnabled
- 🔥 removed server config search.articlePublicationIndex. The publication index name is now auto generated: ${indexNamePrefix}-li-document-publications-index
CLI
- 🔥 removed livingdocs-server es-publication-delete-index, use livingdocs-server elasticsearch-delete-index --handle=li-publications -y instead
- 🔥 removed livingdocs-server es-publication-reindex, use livingdocs-server elasticsearch-index --handle=li-publications -y instead
Environment Config
We automatically create a publication index for the public API. Therefore you must define an indexNamePrefix for every environment. The definition of indexNamePrefix is free, but we suggest a pattern like ${your-company}-${environment}.
🔥 Define an indexNamePrefix for every environment:
elasticIndex: {
  // every index name will be prefixed to prevent name clashes
  // the index will be created with this pattern: `${indexNamePrefix}-${handle}-index`
  indexNamePrefix: 'your-company-local',
}
Publication Index for public API
🔥 Run a background index job for the publication index
To support future updates, we did an internal update where we define Elasticsearch aliases pointing to an index. With this change, you need to re-index the publication index used for the public API search endpoint.
After the deployment, please immediately run this cli task (the publication index will be empty after the deployment):
livingdocs-server elasticsearch-index --handle=li-publications -y
public API publications search
🔥 When using the public API search endpoint (/api/v1/publications/search), then you have to reindex all publications with the command livingdocs-server elasticsearch-index --handle=li-publications -y.
When you start the new server version, the publication index is empty. To use the publication index in production for the public API as fast as possible, you have 2 options:
- Start a sidekick container with the new server version and make a full background index with livingdocs-server elasticsearch-index --handle=li-publications -y. As soon as it's done, you can start your casual servers with the new server version
- If you want to deploy/start your new server version without a preparation step, you can index the most recent publications with the additional parameter --since. For example, livingdocs-server elasticsearch-index --handle=li-publications --since 1m -y indexes all publications published within the last month. As soon as you're done with this indexing step, you can run another background index without the --since argument to index all publications.
Authentication
With the improved authentication workflow, we have some additional breaking changes.
🔥 Third-party applications / e2e-tests may need some adaptations to correctly support cookies
🔥 Local development on Chrome now requires a SSL setup for authentication to work. Setting up a certificate locally and proxying requests using the editor environment config is advised.
// advised configs for local development
module.exports = {
  // https is not _required_ but there may be some complications such as cookies being filtered
  // when trying to overwrite secure cookies or other behavior that's dependant on the browser vendor
  https: require('../cert'),
  api: {
    // disabling the proxiedHost config (CORS-mode)
    // will only work in production
    // or a server with a valid SSL setup
    proxiedHost: ''
  }
}
🔥 all requests need to allow sending credentials or forward cookies if requests are made with user/project tokens. (API Tokens should still work exactly as before!)
🔥 (3rd-party) applications that use the /auth/local/login endpoint, need to support cookies. It should be as easy as forwarding the liClient cookie.
🔥 For security reasons CORS is now disabled by default. We encourage a more secure approach where we forward a path on the editor domain to the server instance. For Example: ’' should be forwarded to ‘’
- This guards against CSRF attacks
- Latency improvements as requests are halved (no more OPTIONS requests)
- Cookies are more secure due to the possibility of using the sameSite: ‘strict’ option.
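A minimal reverse-proxy sketch of that forwarding, assuming an nginx front end and hypothetical host names; adapt the path and upstream to your own setup (a traefik router achieves the same):

# nginx: forward /proxy/api on the editor domain to the Livingdocs server
location /proxy/api/ {
  proxy_pass https://server.livingdocs.example.com/;   # hypothetical upstream
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-Proto $scheme;
}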
API Changes
🔥 removed authApi.getLoginHistory
🔥 removed authApi.revokeAccessTokensOfUser
🔥 removed authApi.reissueAccessToken
🔥 removed authApi.revokeAccessToken
🔥 changed authApi.createAccessTokenForTests -> authApi._createAccessTokenForTests | now returns token instead of {token}
🔥 move authApi.authorizationMiddleware out of the authApi and do not expose it anymore
🔥 Removed authUtils. authenticationSuccess (The promise version is still the same)
Routes
🔥 /authenticate has moved to /auth/local/login
🔥 /users/me has moved to /auth/me
🔥 removed POST /authenticate/refresh
🔥 removed POST /token/revoke
🔥 removed POST /token/reissue -> access tokens are reissued on /auth/me now
🔥 removed POST /token/revoke-tokens
🔥 changed GET /users/:id does not return a login history anymore
SSO Logins 🔥
With the improved authentication workflow, we have some additional breaking changes for SSO Logins. If the callback URL for SSO does not match the editorUrl, we set the auth cookies for a wrong domain leading to issues when logging in.
Migrating an existing SSO login
With cookies being set on a new URL, the SSO Logins need to be re-configured. Do the following:
- Make sure your editor__public_host env variable is set to the editor URL (e.g.)
- Change all callback URLs for your SSO provider to the pattern https://{editorUrl}/proxy/auth/{provider}/callback. For the livingdocs service this looked e.g. as follows: auth__connections__github__config__callbackURL =. (NOTE: depending on your traefik setup it might also require proxy/api instead of /proxy).
- In the settings for your social providers, allow the new callback URLs (for FB for example we had to allow the redirect URL in our Facebook app)
Migration to Redis Streams 🔥
- 🔥 Existing messages from bull won't be processed anymore.
- 🔥 If you had pending index or imports, you’ll need to restart them. Our whole setup already supported retries everywhere except in the importer of the public api. It’s probably best if you re-trigger the imports after deployment.
- ❌ Removed bull dashboard and replaced it with an operations screen in the admin interface of the editor.
Slug Metadata in Slug Routing Pattern 🔥
If you activated the routing feature, you are using a route pattern containing :slug, and at the same time you have a metadata property with the handle slug, the behavior will change in such a way that the route pattern for :slug will be built out of the slug metadata property value and not the document title. In most cases this is what you want. If it is not, you can rename the handle of your existing slug metadata property to prevent clashes.
DocumentsApi - Removed functions 🔥
- 🔥 Removed DocumentEntity.find and DocumentEntity.findById methods, please use the official apis
- 🔥 Removed RevisionEntity.find and RevisionEntity.findById methods, please use the official apis
- 🔥 Changed parameters of documentApi.getLocks from (documentId) to ({projectId, documentId}), so we can save a few roundtrips to check whether a user is allowed to access the document.
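Illustrative call sites for the changed signature (the surrounding variables are assumed):

// before
const locks = await documentApi.getLocks(documentId)
// after
const locks = await documentApi.getLocks({projectId, documentId})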
S3 Storage Default 🔥
🔥 The S3 default changed from {ACL: 'public-read'} to undefined - it now uses the bucket ACL-defaults.
How to revert to the behaviour before the release 🔥
To restore the behaviour you can explicitly pass the {ACL: 'public-read'} in (images|files).storage.config.params.ACL:
storage: {
  strategy: 's3',
  config: {
    bucket: 'li-bucket-dev',
    region: 'eu-central-1',
    accessKeyId: 'secret',
    secretAccessKey: 'secret',
    params: {
      ACL: 'public-read' // <--------- add this to go back to the old behavior
    }
  }
}
DocumentRelationApi - Changed Params 🔥
🔥 Changed params on documentRelationApi.setDocumentEmbedReferences from (documentVersion) to ({documentVersion, trx})
NOTE: The new param trx is optional and only necessary if you want to call within a transactional context. If you do pass it, you are responsible to call trx.commit or trx.rollback when you're done.
Index Feature - Removed publicationTransformModule 🔥
- 🔥 Remove server config option search.indexer.publicationTransformModule
- 🔥 Remove parameter publication-transform-module in livingdocs-server es-publication-reindex task
Note: This change should have no consequences for customers.
Index Feature - Removed/Renamed Functions 🔥
- 🔥 Removed req.accessToken from all requests. Please migrate to req.verifiedToken.bearerToken
- 🔥 Removed searchManager.putDocument. Please migrate to indexingApi.addJob that executes the same method internally
Security - User access tokens require valid cookies
- 🔥 Cypress helpers need some adaptions to correctly support cookies.
Our core implementation can serve as reference for a cypress login helper:
- 🔥 All requests against the server using an user access token require a valid session and the cookies that belong to it.
You may need to allow credentials or forwarding cookies if you have been working with User tokens. (API Tokens should still work exactly as before!)
Example: an axios instance using the withCredentials flag
const axios = require('axios').create({..., withCredentials: true})
Editor CSS Class Renamings 🔥
🔥 If you’ve done CSS modifications based on the original upstream classes in the editor please look into this PR. We did a lot of small refactorings/renamings.
CSS Stylesheets vh is not supported anymore 🔥
Because of changes in the editor we need to replace the vh unit in the design assets. This is done automatically, but the css assets in the design should support CORS when the files are on a different domain than your livingdocs editor. Otherwise we can't read the CSS Rules and it can lead to unexpected behavior in the editor. If CORS can't be enabled on the css assets, the vh unit should be replaced with a unit which does not directly depend on the height of the viewport.
APIs 🎁
Public API
For all endpoints documentation, look into your editor’s public API doc - ‘'.
Added Endpoint for a MediaLibrary Import 🎁
🎁
References:
Added Endpoint for a MediaLibrary Metadata Update 🎁
🎁 PATCH /api/v1/mediaLibrary/:id
References:
- Documentation - ‘'
- Editor PR
- Server PR
Added Endpoints for Document Lists 🎁
- 🎁 GET /api/v1/document-lists - Search endpoint for document lists
- 🎁 GET /api/v1/document-lists/:id - Get a document list by :id
References:
- Documentation - ‘'
- Editor PR
- Server PR
Added Endpoint for a Video Import 🎁
Attention! Even when we’ve added the video endpoint already, sthe video feature is still experimental.
🎁
References:
- Documentation - ‘'
- Server PR
Server Events API
We added new events to the sever events API
- 🎁 mediaLibraryEntry.create
- 🎁 mediaLibraryEntry.update
- 🎁 mediaLibraryEntry.archive
References:
Internal Changes
Beta Endpoints for Publications 🎁
We added a new base endpoint /api/beta/. This allows us to expose things on the public API we are not yet sure enough about to introduce on the v1 endpoint that we can never break.
The first 2 introduced beta endpoints already exist and have the same format on v1, but extend the response with references. This might break in the future.
New endpoints:
- 🎁 GET /api/beta/documents/:documentId/latestPublication
- 🎁 GET /api/beta/documents/latestPublications
References:
Other Changes
Security
- Registration: Do not allow long usernames or // in a username livingdocs-server #3257 🎁
- Registration: Escape user input in email html livingdocs-server #3256 🎁
Design
- Document reference polish
- Part I livingdocs-editor #3897 🎁
- Part II livingdocs-editor #3950 🎁
- Use consistent button styles on metadata screen livingdocs-editor #3918 🎁
- Improve conflict mode livingdocs-editor #3927 🎁
- Overhauled Link-Buttons livingdocs-editor #3982 🎁
- Refactor: Remove Atomic Design livingdocs-editor #4012 🎁
- A huge amount of design fixes for release-2020-12 livingdocs-editor #4015 🎁
- Updated mail templates livingdocs-server #3203 🎁
Improvements
Editor
- Login: Show login buttons for all auth providers livingdocs-editor #3934 🎁
- Dashboard: Scroll to last article livingdocs-editor #4006 🎁
- Operation Screen
- Support design bumps from referenced to embedded designs via UI livingdocs-editor #3932
- Migrate server operations screen to websockets livingdocs-server #3192 🎁
- Improve import jobs log in editor livingdocs-server #3202 🎁
- Allow to register icons in downstream livingdocs-editor #3925 🎁
Server
- Media services improvements livingdocs-server #3243 🎁
- Proxy: Add support for proxying websockets livingdocs-editor #3905 🎁
- APIs: Serve Cache-Control header in authenticated requests livingdocs-server #3280 🎁
- Migrations: Support migrateAsync method on migration files livingdocs-server #3204 🎁
- DataSources: Support documentId in params livingdocs-editor #3896 🎁
- Postgres:
- Add missing primary keys to support logical replication livingdocs-server #3236 🎁
- Introduce a document_content_types table to keep the media types similar livingdocs-server #3238 🎁
- Example Server: Add Twitch include example livingdocs-server #3246 🎁
- Notifications: Pass server error messages to li-notifications livingdocs-editor #3929 🎁
Bugfixes
- Metadata: Fix metadata save trigger livingdocs-editor #3967 🪲
- MediaLibrary
- Fix drop behavior for galleries livingdocs-editor #3884 🪲
- Correctly handle drops in all browsers livingdocs-editor #3878 🪲
- Filter: Allow strings for dateRange query livingdocs-editor #3941 🪲
- Public API: Fix public API docs livingdocs-editor #3976 🪲
- Operation Screen
- Correctly indicate total users livingdocs-editor #4002 🪲
- Fix Add Member Screen for users that are already in a group livingdocs-editor #4004 🪲
- Display error message during user create on admin screen livingdocs-editor #4007 🪲
- Directives
- Don’t show UI elements in non-interactive iframe view livingdocs-editor #4008 🪲
- Set clean data from paramsSchema form instead of reactive vue objects to the include directive livingdocs-editor #4018 🪲
- Fix focus reset and error log in embedded teaser livingdocs-editor #4028 🪲
- Desknet: Fix Desk-Net Plugin for embedded designs livingdocs-server #3183 🪲
- Imatrics: Fix tag slugging livingdocs-server #3188 🪲
- Includes: Allow ui.label for paramsSchema entries livingdocs-server #3239 🪲
Patches
Livingdocs Server Patches
- v114.0.59: chore: rerun checks
- v114.0.58: fix(design): add new cli command design-set-active
- v114.0.57: fix: update integration test because of an outdated github API
- v114.0.56: fix(hugo): require auth on all print routes
- v114.0.55: fix(print-api): Force no-cache for print API requests
If the Livingdocs Editor is cached via CDN, print API requests like getLayouts will return a cached/outdated version. This will fix the issue.
- v114.0.54: chore(import-public-api): correctly validate publicationDate
- v114.0.53: fix(login): Users will be able to log in even if new device emails can not be sent
- v114.0.52: fix(print): fix crash on certificate errors
- v114.0.51: fix(queue): Fix a typo and apply the same pending check to the xcleanup script
- v114.0.50: fix(queue): Do not delete the consumer if we can’t transfer the pending messages
- v114.0.49: fix: Do not send out too many ‘new device login’ emails
- v114.0.48: fix(openid-connect): fix various coercion issues
- v114.0.47: fix(open-id-connect-sso): correctly resolve projectIds as strings
- v114.0.46: fix(print): fix filterText() type-check
Fixes uncaughtException: content.replace is not a function that could occur under certain circumstances.
- v114.0.45: fix(print): fix print host
- v114.0.44: fix(render-pipeline): log documentId for failed renderings
- v114.0.43: fix(list-update): finish trx if not passed in
- v114.0.42: fix(migrations): Enable error logs for document migrations
Customers need more information when a migration fails.
- v114.0.41: fix(indexing): the custom indexer passes ids instead of documentIds
- v114.0.40: fix(comments): Add maxThreadCount config property
- v114.0.39: fix(policies): Introduce a more strict schema and allow additional properties
- v114.0.38: fix(publish): Fix the indexing for document publish calls that are nested in a transaction
- v114.0.37: fix(indexing): Change how we create the redis instance in the indexing controller as it didn’t respect the sentinel config
- v114.0.36: fix(airship): enable push notifications for web channel as well
- v114.0.35: fix(includes): Add interaction blocker config
- v114.0.34: fix(websocket): Rewrite the url for websockets as we do it for requests if /proxy/api is present
- v114.0.33: fix: add data field ‘classifications’ for hugo_article
- v114.0.32: fix(print): Handle image components
- v114.0.31: fix(open-id-connect): correctly create users
- v114.0.30: fix: add new npm read token
- v114.0.29: fix: correct expiration date for cookies and accessTokens
Livingdocs Editor Patches
v57.33.63: chore: rerun checks
v57.33.62: fix(modals): allow printing articles with ctrl+p / cmd+p
v57.33.61: chore(ci): remove cypress from CI for dez release
v57.33.60: fix: update integration test because of an outdated github API
v57.33.59: fix(imatrics): fix styling issues leading to invisible suggestions
v57.33.58: fix(clipboard): render includes when dropping a component from the clipboard
v57.33.57: fix(viewport-viewport-units-buggyfill): improve regex to match only vh units
v57.33.56: fix: show an indicator incase the ES limit defaults to 10000 total documents
v57.33.55: fix(BW service changer): Markup updated to new standard
Updated BW service changer’s markup on par with recently set standard
v57.33.54: fix(lists): cancel spinner after error
v57.33.53: fix(teaser-preview): render includes in teaser preview
v57.33.52: fix(clipboard): fix clipboard paste for a container
v57.33.51: fix: alignment of component title when no description is available
v57.33.50: fix(comments): Show max thread count limit error
v57.33.49: chore(policies): Add additionalProperties: false to the policy schema to keep it in sync with the one on the server
v57.33.48: fix(conflict): Hide comments in conflict mode
v57.33.47: fix(split-pane): Minimize sidebar on conflict
v57.33.46: chore: Fix the livingdocs-integration.json for the release-2020-12 in bluewin
v57.33.45: fix(ci): Use our docker images instead of the official docker image
v57.33.44: fix(includes): Add interaction blocker config
v57.33.43: chore: adapt cypress tests
v57.33.42: fix: correctly navigate back from a custom dashboard
v57.33.41: fix(server): Fix support for redirecting based on x-forwarded-proto header
v57.33.40: fix(link tool): show linked URL when valid but not accessible via iframely link checker
v57.33.39: fix(media library): fix download when storage bucket is private
v57.33.38: fix(media library): make the context menu edit button work again
v57.33.37: fix: add new npm read token
v57.33.36: fix: make integration settings changes in the UI properly change the channelConfigDraft
Icon Legend
- Breaking changes: 🔥
- Feature: 🎁
- Bugfix: 🪲
- Chore: 🔧 | https://docs.livingdocs.io/operations/releases/old/release-2020-12/ | 2021-11-27T12:29:28 | CC-MAIN-2021-49 | 1637964358180.42 | [screenshot: user-images.githubusercontent.com/821875/95599481-…png] | docs.livingdocs.io |
Release 7.89.0
Release period: 2021-09-29 to 2021-10-06
This release includes the following issues:
- Detailed user information
- Logs failed replications due to seq. number
- Fix Access Denied error when importing meshServiceInstances
Ticket Details
Detailed user information
Audience: Partner
Description
There is a new screen that shows detailed information for a selected user, for example, the e-mail, the last login date and all the meshCustomers the user is assigned to. The screen can be reached by clicking on the name of a user within the user list in the administration area.
Logs failed replications due to seq. number
Audience: Operator
Description
If a replication fails early due to a mismatch in the replication sequence number this information is now properly propagated to the meshPanel for faster troubleshooting.
Fix Access Denied error when importing meshServiceInstances
Audience: Operator
Description
When importing meshServiceInstances, an inconsistent authentication session was generated, which resulted in an access denied error if the JSESSIONID that was returned in the first response is used for the next call. This is now solved. | https://docs.meshcloud.io/blog/2021/10/06/Release-0.html | 2021-11-27T12:20:37 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.meshcloud.io |
Config administrators can run the following command at an elevated command prompt:
net stop sppsvc && net start sppsvc
slmgr.vbs /parameter
Table 1:
slmgr.vbs TargetComputerName [username] [password] /parameter [options]
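For example, displaying detailed license information on a remote KMS host (computer name and credentials are placeholders; /dlv is a standard slmgr.vbs parameter):

slmgr.vbs KMSHost01 CONTOSO\administrator P@ssw0rd /dlv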
Configuring Windows Firewall for Remote Software License Manager Operations
Slmgr.vbs uses Windows Management Instrumentation (WMI), so administrators must configure the registry subkey HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System on KMS clients. Set this value to 0x01.
Question : If I decide not to implement a requirement and want to send it back to “Unplanned Requirements” table, how can I move the requirement from Planned Requirements table to Unplanned Requirements table?
Answer :
- Currently a move from Planned to Unplanned Requirement is not supported. In case you wish to remove a Requirement from view,
- you can create a Release (say named as Backlog) and assign all such Requirements to the same.
- Please refer to to know about the mechanism(s) to create Releases
- Use the filter functionality to eliminate all such requirements from view.
- For details on filters, please refer to
- For greater clarity on our view about Planned / Unplanned requirements, please refer to the following document | https://docs.optimizory.com/display/faq/Move+requirements+from+Planned+to+Unplanned+Requirements+table | 2021-11-27T10:53:57 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.optimizory.com |
Actions service
The action service sends one-way messages (notifications) when conditions match the criteria you specify. The action service includes resources for creating and managing rules, triggers, and destinations.
- Trigger resources
- Create a trigger (POST /v1/notification/triggers)
- Get one trigger (GET /v1/notification/triggers/{name})
- Get multiple triggers (GET /v1/notification/triggers[?query])
- Update a trigger (PUT /v1/notification/triggers/{name})
- Delete a trigger (DELETE /v1/notification/triggers/{name})
- Destination resources
- Create a destination (POST /v1/notification/destinations)
- Get one destination (GET /v1/notification/destinations/{name})
- Get multiple destinations (GET /v1/notification/destinations[?query])
- Update a destination (PUT /v1/notification/destinations/{name})
- Delete a destination (DELETE /v1/notification/destinations/{name})
- Test a destination (POST /v1/notification/destinations:test)
- Rule resources
- Create a rule (POST /v1/notification/rules)
- Get one rule (GET /v1/notification/rules/{name})
- Get multiple rules (GET /v1/notification/rules[?query])
- Update a rule (PUT /v1/notification/rules/{name})
- Delete a rule (DELETE /v1/notification/rules/{name}) | https://docs.zenoss.io/api/actions/actions.html | 2021-11-27T12:27:06 | CC-MAIN-2021-49 | 1637964358180.42 | [] | docs.zenoss.io |
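A hypothetical call to the trigger-creation resource listed above; host, credentials, and the body fields are placeholders, since the full payload schema is defined in the service reference:

curl -sk -u admin:password \
  -H "Content-Type: application/json" \
  -X POST "https://zenoss.example.com/v1/notification/triggers" \
  -d '{"name": "example-trigger"}'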
Overview of the Microsoft Edge channels
One of the benefits of the next version of Microsoft Edge is that Microsoft can provide new features on a regular basis. However, as the admin who deploys Microsoft Edge to the users in your organization, you might want to have more control over how often your users get these new features. Microsoft provides you four options, called channels, to control how often Microsoft Edge is updated with new features. Here's an overview of the four options.
For more information on support for each channel read: Microsoft Edge Lifecycle
Note
This article applies to Microsoft Edge version 77 or later.
Channel overview
Which update channel you decide to deploy to your users depends on several factors, such as how many line of business applications the user leverages and that you need to test any time they have an updated version of Microsoft Edge. To help you make this decision, review the following information about the four update channels that are available for Microsoft Edge.
Stable Channel
The Stable Channel is intended for broad deployment in your organization, and it is the channel that most users should be on. It is the most stable of the channels and is the a result of the stabilization of the feature set available in the prior Beta Channel release. New features ship about every 6 weeks. Security and quality updates ship as needed. A release from the Stable Channel is serviced until the next release from the channel is available.
Beta Channel
The Beta Channel is intended for production deployment in your organization to a representative sample set of users. It is a supported release, and each release from Beta is serviced until the next release from this channel is available. This is a great opportunity to validate that things work as expected in your environment, and if you encounter an issue have it remediated prior to the release going publishing to the Stable Channel. New features ship about every 6 weeks. Security and quality updates ship as needed.
Dev Channel
The Dev Channel is intended to help you plan and develop with the latest capabilities of Microsoft Edge, but with higher quality than the Canary Channel. This is your opportunity to get an early look at what is coming next and prepare for the next Beta release.
Canary Channel
The Canary Channel ships daily and is the most bleeding edge of all the channels. If you want access to the newest investments then they will appear here first. Because of the nature of this cadence problems will arise overtime, so you may want another channel installed side by side if you are leveraging the Canary releases. | https://docs.microsoft.com/en-us/deployedge/microsoft-edge-channels | 2021-06-13T02:49:50 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.microsoft.com |
Error Handling
There are several error types in the GraphQL API and you may come across different ones depending on the operations you are trying to perform.
The Saleor GraphQL API handles the following three types of errors.
#Query-level errors
This error occurs if, while performing some specified operation, you provide wrong or unrecognized input data. The GraphQL checks the syntax as you write and, if you are trying to perform an unknown operation, you will be notified by the editor you are using. If you proceed with sending the request, you will get a syntax error.
Below is an example of an error triggered by the wrong syntax. The following query tries to fetch the fullName field which doesn't exist on the User type:
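For instance, a query of this shape (assuming a me field that returns the current User):

query {
  me {
    fullName
  }
}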
Sending this query to the server would result in the following syntax error:
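A typical response for such a request looks like this (the exact message wording can vary between server versions):

{
  "errors": [
    {
      "message": "Cannot query field 'fullName' on type 'User'."
    }
  ]
}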
#Data-level errors
This type of error occurs when the user passes invalid data as the mutation input. For example, while creating a new user, you provide an email address that is already being used in another user's account. It is therefore not unique and, as a result, you will get a validation error.
Validation errors are part of the schema, which means that we need to include them in the query to get them explicitly. For example, in all mutations, they can be obtained through the errors field.
Below is an example of an error triggered by validation issues:
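For instance, a sketch using the accountRegister mutation (argument names and values are illustrative; newer API versions may require additional input fields):

mutation {
  accountRegister(input: { email: "customer@example.com", password: "secret" }) {
    errors {
      field
      message
      code
    }
  }
}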
Validation errors are returned in a dedicated error field inside mutation results:
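A possible result when the email is already taken (message text and error code are indicative only):

{
  "data": {
    "accountRegister": {
      "errors": [
        {
          "field": "email",
          "message": "User with this Email already exists.",
          "code": "UNIQUE"
        }
      ]
    }
  }
}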
caution
Although all error types contain a message field, the returned message is only meant for debugging and is not suitable for display to your customers. Please use the code field and provide your own meaningful error messages.
#Permission errors
This type of error occurs when you are trying to perform a specific operation but you are not authorized to do so; in other words, you have no sufficient permissions assigned.
Below is an example of an error triggered by insufficient authorization. The staffUsers query requires appropriate admin permissions:
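A minimal version of such a query (connection arguments are illustrative):

# requires MANAGE_STAFF-level permissions
query {
  staffUsers(first: 5) {
    edges {
      node {
        email
      }
    }
  }
}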
Executing it without proper permission would result in the following error: | https://docs.saleor.io/docs/developer/error-handling/ | 2021-06-13T01:30:24 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.saleor.io |
Trait diesel::expression::NonAggregate
Trait alias to represent an expression that isn’t aggregate by default.
This trait should never be implemented directly. It is replaced with a trait alias when the unstable feature is enabled.
This alias represents a type which is not aggregate if there is no group by clause. More specifically, it represents types which implement ValidGrouping<()> where IsAggregate is is_aggregate::No or is_aggregate::Yes.
While this trait is a useful stand-in for common cases, T: NonAggregate cannot always be used when T: ValidGrouping<(), IsAggregate = No> or T: ValidGrouping<(), IsAggregate = Never> could be. For that reason, unless you need to abstract over both columns and literals, you should prefer to use ValidGrouping<()> in your bounds instead.
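A minimal sketch of using the alias as a bound (illustrative only; it assumes diesel is available as a dependency):

use diesel::expression::NonAggregate;

// Accepts columns and literals alike, i.e. anything valid without a GROUP BY clause.
// Prefer a ValidGrouping<()> bound when you need the finer-grained cases described above.
fn assert_not_aggregate<T: NonAggregate>(_expr: T) {}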
Implementors
impl<T> NonAggregate for T
where
    T: ValidGrouping<()>,
T::IsAggregate: MixedAggregates<No, Output = No>, | https://docs.diesel.rs/master/diesel/expression/trait.NonAggregate.html | 2021-06-13T02:10:11 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.diesel.rs |
Monitor operations and activity of Azure Cognitive Search
This article is an overview of monitoring concepts and tools for Azure Cognitive Search. For holistic monitoring, you can use a combination of built-in functionality and add-on services like Azure Monitor.
Altogether, you can track the following:
- Service: health/availability and changes to service configuration.
- Storage: both used and available, with counts for each content type relative to the quota allowed for the service tier.
- Query activity: volume, latency, and throttled or dropped queries. Logged query requests require Azure Monitor.
- Indexing activity: requires diagnostic logging with Azure Monitor.
A search service does not support per-user authentication, so no identity information will be found in the logs.
Built-in monitoring
Built-in monitoring refers to activities that are logged by a search service. With the exception of diagnostics, no configuration is required for this level of monitoring.
Azure Cognitive Search maintains internal data on a rolling 30-day schedule for reporting on service health and query metrics, which you can find in the portal or through these REST APIs.
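For example, the Get Service Statistics REST call returns the same counters shown in the portal (the api-version value is illustrative; use one supported by your service):

GET https://[service name].search.windows.net/servicestats?api-version=2020-06-30
  Content-Type: application/json
  api-key: [admin key]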
The following screenshot helps you locate monitoring information in the portal. Data becomes available as soon as you start using the service. Portal pages are refreshed every few minutes.
Monitoring tab, on the main Overview page, shows query volume, latency, and whether the service is under pressure.
Activity log, in the left navigation pane, is connected to Azure Resource Manager. The activity log reports on actions undertaken by Resource Manager: service availability and status, changes to capacity (replicas and partitions), and API key-related activities.
Monitoring settings, further down, provides configurable alerts, metrics visualization, and diagnostic logs. Create these when you need them. Once data is collected and stored, you can query or visualize the information for insights.
Note
Because portal pages are refreshed every few minutes, the numbers reported are approximate, intended to give you a general sense of how well your system is servicing requests. Actual metrics, such as queries per second (QPS) may be higher or lower than the number shown on the page. If precision is a requirement, consider using APIs.
APIs useful for monitoring
You can use the following APIs to retrieve the same information found in the Monitoring and Usage tabs in the portal.
Activity logs and service health
The Activity log page in the portal collects information from Azure Resource Manager and reports on changes to service health. You can monitor the activity log for critical, error, and warning conditions related to service health.
Common entries include references to API keys - generic informational notifications like Get Admin Key and Get Query keys. These activities indicate requests that were made using the admin key (create or delete objects) or query key, but do not show the request itself. For information of this grain, you must configure diagnostic logging.
You can access the Activity log from the left-navigation pane, or from Notifications in the top window command bar, or from the Diagnose and solve problems page.
Monitor storage in the Usage tab
For visual monitoring in the portal, the Usage tab shows you resource availability relative to current limits imposed by the service tier. If you are finalizing decisions about which tier to use for production workloads, or whether to adjust the number of active replicas and partitions, these metrics can help you with those decisions by showing you how quickly resources are consumed and how well the current configuration handles the existing load.
The following illustration is for the free service, which is capped at 3 objects of each type and 50 MB of storage. A Basic or Standard service has higher limits, and if you increase the partition counts, maximum storage goes up proportionally.
Note
Alerts related to storage are not currently available; storage consumption is not aggregated or logged into the AzureMetrics table in Azure Monitor. To get storage alerts, you would need to build a custom solution that emits resource-related notifications, where your code checks for storage size and handles the response.
Add-on monitoring with Azure Monitor
Many services, including Azure Cognitive Search, integrate with Azure Monitor for additional alerts, metrics, and logging diagnostic data.
Enable diagnostic logging for a search service if you want control over data collection and storage. Logged events captured by Azure Monitor are stored in the AzureDiagnostics table and consists of operational data related to queries and indexing.
Azure Monitor provides several storage options, and your choice determines how you can consume the data:
- Choose Azure Blob Storage if you want to visualize log data in a Power BI report.
- Choose Log Analytics if you want to explore data through Kusto queries.
Azure Monitor has its own billing structure and the diagnostic logs referenced in this section have an associated cost. For more information, see Usage and estimated costs in Azure Monitor.
Monitor user access
Because search indexes are a component of a larger client application, there is no built-in methodology for controlling or monitoring per-user access to an index. Requests are assumed to come from a client application that present either an admin or query request. Admin read-write operations include creating, updating, deleting objects across the entire service. Read-only operations are queries against the documents collection, scoped to a single index.
As such, what you'll see in the activity logs are references to calls using admin keys or query keys. The appropriate key is included in requests originating from client code. The service is not equipped to handle identity tokens or impersonation.
When business requirements do exist for per-user authorization, the recommendation is integration with Azure Active Directory. You can use $filter and user identities to trim search results of documents that a user should not see.
There is no way to log this information separately from the query string that includes the $filter parameter. See Monitor queries for details on reporting query strings.
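A sketch of that pattern, assuming the index stores the allowed principals for each document in a group_ids collection field (field and group names are illustrative):

POST https://[service name].search.windows.net/indexes/securedfiles/docs/search?api-version=2020-06-30
{
  "filter": "group_ids/any(g: search.in(g, 'group_id1, group_id2'))"
}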
Next steps
Fluency with Azure Monitor is essential for oversight of any Azure service, including resources like Azure Cognitive Search. If you are not familiar with Azure Monitor, take the time to review articles related to resources. In addition to tutorials, the following article is a good place to start. | https://docs.microsoft.com/en-us/azure/search/search-monitor-usage?WT.mc_id=AZ-MVP-5003451 | 2021-06-13T03:36:48 | CC-MAIN-2021-25 | 1623487598213.5 | [screenshot: media/search-monitor-usage/usage-tab.png, "Usage status relative to tier limits"] | docs.microsoft.com |
[Nimble Builder Pro] Control which user can use Nimble Builder on a site
Nimble Builder can be used by default by all users with an administrator role. With Nimble Builder Pro, you can define a list of authorized or unauthorized administrator users.
Unauthorized users will not see any reference to Nimble Builder when editing a page, in the live customizer and in the WordPress admin screens.
You can also check an option to hide Nimble Builder free and pro in the list of plugins.
Go to settings > Nimble Builder Pro.
Then click on "Manage authorized Users" tab.
Enter the authorized or unauthorized user's email, one per line. | https://docs.presscustomizr.com/article/436-nimble-builder-pro-control-which-user-can-use-nimble-builder-on-a-site | 2021-06-13T03:13:49 | CC-MAIN-2021-25 | 1623487598213.5 | [screenshot: d33v4339jhl8k0.cloudfront.net …file-H7WrEBO1G3.jpg] | docs.presscustomizr.com |
#include <nav2d_karto/spa2d.h>
#include <stdio.h>
#include <Eigen/Cholesky>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <sys/time.h>
Go to the source code of this file.
Run the LM algorithm that computes a nonlinear SPA estimate. <window> is the number of nodes in the window, the last nodes added <niter> is the max number of iterations to perform; returns the number actually performed. <lambda> is the diagonal augmentation for LM. <useCSParse> = 0 for dense Cholesky, 1 for sparse Cholesky, 2 for sparse PCG
Definition at line 640 of file spa2d.cpp. | https://docs.ros.org/en/indigo/api/nav2d_karto/html/spa2d_8cpp.html | 2021-06-13T03:22:52 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.ros.org |
Threaded Comments
Jira default comments are pretty basic. Have you ever wished for more functionality?
Threaded Comments adds these useful features to make your comments easier and more informative:
Facebook-like thread conversation experience - make comments more friendly and readable
Reply button - directly respond to a comment
Like/dislike button - acknowledge with one-click
List of people who like/dislike - see who are on your side
Project-based configurations - it’s your call to select which project you want to enable the plugin | https://docs.votazz.co/threaded-comments/ | 2021-06-13T02:30:03 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.votazz.co |
Pass your actual test with our SAP C-ARCON-2105 training material at first attempt
Last Updated: Jun 09, 2021
No. of Questions: 82 Questions & Answers with Testing Engine
Latest Version: V12.35
Download Limit: Unlimited
We provide the most up to date and accurate C-ARCON-2105 questions and answers which are the best for clearing the actual test. Instantly download of the SAP SAP Certified Application Associate - SAP Ariba Contracts exam practice torrent is available for all of you. 100% pass is our guarantee of C-ARCON-2105CON-2105 actual test that can prove a great deal about your professional ability, we are here to introduce our SAP Certified Application Associate C-ARCON-2105 practice torrent to you. With our heartfelt sincerity, we want to help you get acquainted with our C-ARCON-2105 exam vce. The introduction is mentioned as follows.
Our C-ARCON-2105 latest vce team with information and questions based on real knowledge the exam required for candidates. All these useful materials ascribe to the hardworking of our professional experts. They not only are professional experts dedicated to this C-ARCON-2105 training material painstakingly but pooling ideals from various channels like examiners, former candidates and buyers. To make the C-ARCON-2105 actual questions more perfect, they wrote our C-ARCON-2105 prep training with perfect arrangement and scientific compilation of messages, so you do not need to plunge into other numerous materials to find the perfect one anymore. They will offer you the best help with our C-ARCON-2105 questions & answers.
We offer three versions of C-ARCON-2105 practice pdf for you and help you give scope to your initiative according to your taste and preference. Tens of thousands of candidates have fostered learning abilities by using our C-ARCON-2105 updated torrent. Let us get to know the three versions of we have developed three versions of C-ARCON-2105 training vce for your reference.
The PDF version has a large number of actual questions, and allows you to take notes when met with difficulties to notice the misunderstanding in the process of reviewing. The APP version of SAP Certified Application Associate C-ARCON-2105CON-2105 free pdf maybe too large to afford by themselves, which is superfluous worry in reality. Our C-ARCON-2105 exam training is of high quality and accuracy accompanied with desirable prices which is exactly affordable to everyone. And we offer some discounts at intervals, is not that amazing?
As online products, our C-ARCON-2105 : SAP Certified Application Associate - SAP Ariba Contracts useful training can be obtained immediately after you placing your order. It is convenient to get. Although you cannot touch them, but we offer free demos before you really choose our three versions of C-ARCON-2105 practice materials. Transcending over distance limitations, you do not need to wait for delivery or tiresome to buy in physical store but can begin your journey as soon as possible. We promise that once you have experience of our C-ARCON-2105 practice materials once, you will be thankful all lifetime long for the benefits it may bring in the future.so our SAP C-ARCON-2105 practice guide are not harmful to the detriment of your personal interests but full of benefits for you.
Liz
Naomi
Roxanne
Vera
Alvin
Benson
Exam4Docs is the world's largest certification preparation company with 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries.
Over 69850+ Satisfied Customers | https://www.exam4docs.com/sap-certified-application-associate-sap-ariba-contracts-accurate-pdf-13849.html | 2021-06-13T01:45:07 | CC-MAIN-2021-25 | 1623487598213.5 | [] | www.exam4docs.com |
New to Telerik Reporting? Download free 30-day trial
Accessibility.
Setting up accessibility features in WPF
To enable them, set the report viewer's EnableAccessibility property to true. The default value of this property is false. This option also affects the accessibility of the exported PDF documents, i.e. if enableAccessibility is set to true, the exported PDF will be created according to PDF/UA (ISO standard 14289-1) specification.
The accessibility routines access the report viewer parts through the theme template Telerik.ReportViewer.Wpf.xaml. In case you have modified the template, the accessibility features might be affected or disabled.

private void SetToolbarShortcutKey()
{
    // Note that the accessibility classes are accessible after the report viewer loads its template,
    // so this code should be called in or after the Loaded event
    var keyMap = this.reportViewer1.AccessibilityKeyMap;
    keyMap.Remove((int)System.Windows.Input.Key.M);
    keyMap[(int)System.Windows.Input.Key.N] = Telerik.ReportViewer.Common.Accessibility.ShortcutKeys.MENU_AREA_KEY;
    this.reportViewer1.AccessibilityKeyMap = keyMap;
}
Private Sub SetToolbarShortcutKey()
    'Note that the accessibility classes are accessible after the report viewer loads its template, so this code should be called in or after Loaded event
    Dim map As System.Collections.Generic.Dictionary(Of Integer, Telerik.ReportViewer.Common.Accessibility.ShortcutKeys) = Me.ReportViewer1.AccessibilityKeyMap
    map.Remove(CType(Input.Key.M, Integer))
    map(CType(Input.Key.T, Integer)) = Telerik.ReportViewer.Common.Accessibility.ShortcutKeys.MENU_AREA_KEY
    Me.ReportViewer1.AccessibilityKeyMap = map
End Sub
Since the accessibility uses the theme template, the modification of the accessibility key map must be done after the template is loaded. We recommend using the report viewer's Loaded event handler.
All the accessibility messages and labels support localization. You can modify them, following the procedure, described here.
Please note that the additional markup, added to the report content when the accessibility is enabled, might result in a small performance penalty, especially on machines with outdated hardware. For best experience we recommend to enable the accessibility features conditionally according to your user's needs.
Supported accessibility features in WPF report viewer
The WPF Windows Automation. | https://docs.telerik.com/reporting/wpf-report-viewer-accessibility | 2021-06-13T02:07:10 | CC-MAIN-2021-25 | 1623487598213.5 | [] | docs.telerik.com |