Dataset columns: content (string, 0-557k chars), url (string, 16-1.78k chars), timestamp (timestamp[ms]), dump (string, 9-15 chars), segment (string, 13-17 chars), image_urls (string, 2-55.5k chars), netloc (string, 7-77 chars).
This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OKD. An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes: OwnNamespace, SingleNamespace, MultiNamespace, and AllNamespaces. You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace

You can alternatively specify a namespace using a label selector with the spec.selector parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  selector:
    cool.io/prod: "true"

If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace

The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces of a global Operator group contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces. A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about which GVKs are provided by an Operator group is shown in the olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> values delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included. When an Operator group is created, three cluster roles are generated. Each contains a single aggregation rule with a cluster role selector set to match a label. The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict: Cluster roles for each API resource from a CRD. Cluster roles for each API resource from an API service. Additional roles and role bindings. If the CSV defines exactly one target namespace that contains *, then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All generated resources are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains *, then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. Copied CSVs are removed when the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV.
An Operator group is static if its spec.staticProvidedAPIs field is set to true. As a result, OLM does not modify the olm.providedAPIs annotation of the Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources. Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. If true and the CSV's provided APIs are not a subset of the Operator group's: Update the olm.providedAPIs annotation of the Operator group to the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Transition the CSV to a failed state with reason InterOperatorGroupOwnerConflict. OKD provides limited support for simultaneously installing different variations of an Operator on a cluster. Operators are control plane extensions. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multi-tenant environment also have to share Operators. The Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. Different major versions of an Operator often have incompatible custom resource definitions (CRDs), which makes it difficult to quickly verify that two versions can safely coexist. An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource fails with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup. CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection.
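The annotation format and the intersection rule described here can be illustrated with a short sketch. This is not from the OKD documentation; the namespace names and GVK strings are made up, and it only mirrors the definition of intersecting provided APIs given above.

# Hypothetical illustration of the intersection check described above.
# Annotation values follow the documented "<kind>.<version>.<group>" comma-delimited format.

def parse_provided_apis(annotation: str) -> set[str]:
    """Split an olm.providedAPIs annotation into a set of GVK strings."""
    return {gvk.strip() for gvk in annotation.split(",") if gvk.strip()}

def groups_intersect(ns_a: set[str], apis_a: str, ns_b: set[str], apis_b: str) -> bool:
    """Two Operator groups intersect if both their target namespace sets
    and their provided API sets have a non-empty intersection."""
    shared_namespaces = ns_a & ns_b
    shared_apis = parse_provided_apis(apis_a) & parse_provided_apis(apis_b)
    return bool(shared_namespaces) and bool(shared_apis)

# Made-up example data:
group_1_namespaces = {"team-a", "shared"}
group_1_apis = "EtcdCluster.v1beta2.etcd.database.coreos.com"
group_2_namespaces = {"shared", "team-b"}
group_2_apis = "EtcdCluster.v1beta2.etcd.database.coreos.com,Backup.v1.example.com"

print(groups_intersect(group_1_namespaces, group_1_apis,
                       group_2_namespaces, group_2_apis))  # True -> potential conflict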
https://docs.okd.io/4.10/operators/understanding/olm/olm-understanding-operatorgroups.html
2022-05-16T11:43:47
CC-MAIN-2022-21
1652662510117.12
[]
docs.okd.io
- Easy-to-configure data inputs for your Config, CloudTrail, CloudWatch, VPC Flow Logs, Billing, and S3 data. - A logical topology dashboard that displays your entire AWS infrastructure to help you optimize resources and detect problems. - CIM-compliant fields and tags so that you can integrate your AWS data with your other infrastructure and security data sources. If you are a Splunk software admin, get the app on Splunkbase and proceed through this manual for detailed installation and configuration instructions. If you are a Splunk software user, check out the user guide to get familiar with the dashboards. Access the release notes for a list of new features and known issues in the current release. This documentation applies to the following versions of Splunk® App for AWS: 4.0.0, 4.1.0
https://docs.splunk.com/Documentation/AWS/4.1.0/Installation/Abouttheapp
2022-05-16T12:59:26
CC-MAIN-2022-21
1652662510117.12
[]
docs.splunk.com
About this Ionic template Take your Ionic Framework app to the next level using this starter app template. Check all the features, designs and beautiful components that you can use in your app! PWA ready Ionic 6 is a step forward for Progressive Web Apps, that's why we are building this new Ionic template with 100% support for PWA. Learn more about PWA with Angular and Ionic. You will be able to use this Ionic 6 starter as an iOS app, an Android app, a web app or as a PWA! Too many options, right? For a web app to be considered a PWA, it needs to comply with 10 principles. You can see how this Ionic Framework app addresses each of them and learn more about the magic of PWAs. All in with Ionic 6 and Angular 13 This Ionic template includes lots of Ionic components coded the Angular way and features that you will love. Along this documentation we will explain how to use each of them. 100% Flexible and Customizable The template includes lots of pages, features and components but you are free to use just what you need and delete the code you don't. The code structure is super modularized so you will find it easy to modify the code to fit your needs. CSS variables for the win! Documentation This documentation was built with a lot of effort to help you get the most out of this Ionic template. If there is anything that you don't understand please drop us a line and we will try to explain it better. Main Features and Pages Go to the template page to see the main Features and Pages of this amazing Ionic template.
https://ionic-5-full-starter-app-docs.ionicthemes.com/about-this-template
2022-05-16T12:30:26
CC-MAIN-2022-21
1652662510117.12
[]
ionic-5-full-starter-app-docs.ionicthemes.com
Manage CloudCache Servers License editions: To understand the applicable license editions, see Plans & Pricing. Overview This page lists all the inSync CloudCache Servers with their configuration settings in a tabular format. Access path On the inSync Management Console click > CloudCache Servers. The Manage CloudCache Servers page appears. CloudCache Server List The following table lists the fields in the inSync CloudCache Server List table.
https://docs.druva.com/Endpoints/020_Introduction/020_About_inSync_Management_Console_User_Interface/Manage_CloudCache_Servers
2022-05-16T11:15:01
CC-MAIN-2022-21
1652662510117.12
[]
docs.druva.com
- Passing properties to GitLab Runner Operator - Operator properties - Cache properties - Configure a proxy environment - Customize config.toml with a configuration template - Configure a custom TLS cert - Configure the CPU and memory size of runner pods - Configure job concurrency per runner based on cluster resources - Troubleshooting Configuring GitLab Runner on OpenShift This document explains how to configure GitLab Runner on OpenShift. Passing properties to GitLab Runner Operator When creating a Runner, you can configure it by setting properties in its spec. For example, you can specify the GitLab URL it will be registered in, or the name of the secret that contains the registration token: apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret # Name of the secret containing the Runner token Read about all the available properties in Operator properties. Operator properties This is a list of the supported properties that can be passed to the Operator. Some properties are only available with more recent versions of the Operator. Cache properties S3 cache GCS cache Azure cache Configure a proxy environment To create a proxy environment: Edit the custom-env.yaml file. For example: apiVersion: v1 data: HTTP_PROXY: example.com kind: ConfigMap metadata: name: custom-env Update OpenShift to apply the changes. oc apply -f custom-env.yaml Update your gitlab-runner.yml file. apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret # Name of the secret containing the Runner token env: custom-env If the proxy can't reach the Kubernetes API, you might see an error in your CI/CD job: ERROR: Job failed (system failure): prepare environment: setting up credentials: Post net/ TLS handshake timeout. Check for more information To resolve this error, add the IP address of the Kubernetes API to NO_PROXY configuration in the custom-env.yaml file: apiVersion: v1 data: NO_PROXY: 172.21.0.1 HTTP_PROXY: example.com kind: ConfigMap metadata: name: custom-env You can verify the IP address of the Kubernetes API by running: oc get services --namespace default --field-selector='metadata.name=kubernetes' | grep -v NAME | awk '{print $3}' Customize config.toml with a configuration template You can customize the runner's config.toml file by using the configuration template. Create a custom config template file. For example, let's instruct our runner to mount an EmptyDir volume. Create the custom-config.toml file: [[runners]] [runners.kubernetes] [runners.kubernetes.volumes] [[runners.kubernetes.volumes.empty_dir]] name = "empty-dir" mount_path = "/path/to/empty_dir" medium = "Memory" Create a ConfigMap named custom-config-toml from our custom-config.toml file: oc create configmap custom-config-toml --from-file config.toml=custom-config.toml Set the config property of the Runner: apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret config: custom-config-toml Configure a custom TLS cert To set a custom TLS cert, create a secret with key tls.crt. In this example, the file is named custom-tls-ca-secret.yaml: apiVersion: v1 kind: Secret metadata: name: custom-tls-ca type: Opaque stringData: tls.crt: | -----BEGIN CERTIFICATE----- MIIEczCCA1ugAwIBAgIBADANBgkqhkiG9w0BAQQFAD..AkGA1UEBhMCR0Ix .....
7vQMfXdGsRrXNGRGnX+vWDZ3/zWI0joDtCkNnqEpVn..HoX -----END CERTIFICATE----- Create the secret: oc apply -f custom-tls-ca-secret.yaml Set the ca key in the runner.yaml to the same name as the name of our secret: apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret ca: custom-tls-ca Configure the CPU and memory size of runner pods To set CPU limits and memory limits in a custom config.toml file, follow the instructions in this topic. Configure job concurrency per runner based on cluster resources Set the concurrent property of the Runner resource: apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret concurrent: 2 Job concurrency is dictated by the requirements of the specific project. - Start by trying to determine the compute and memory resources required to execute a CI job. - Calculate how many times that job would be able to execute given the resources in the cluster (a back-of-the-envelope sketch follows at the end of this section). If you set too large a concurrency value, the Kubernetes executor will process the jobs as soon as it can. However, the Kubernetes cluster's scheduler capacity determines when the job is scheduled. Troubleshooting Root vs non-root The GitLab Runner Operator and the GitLab Runner pod run as non-root users. As a result, the build image used in the job would need to run as a non-root user to be able to complete successfully. This is to ensure that jobs can run successfully with the least permission. However, for this to work, the build image used for the CI jobs also needs to be built to run as non-root and should not write to a restricted filesystem. Keep in mind that most container filesystems on an OpenShift cluster will be read-only, except for mounted volumes, /var/tmp, /tmp and other volumes mounted on the root filesystem as tmpfs. Overriding the HOME environment variable If creating a custom build image or overriding env variables, ensure that the HOME environment variable is not set to /, which would be read-only, especially if your jobs need to write files to the home directory. You could create a directory under /home, for example /home/ci, and set ENV HOME=/home/ci in your Dockerfile. For the runner pods it's expected that HOME would be set to /home/gitlab-runner. If this variable is changed, the new location must have the proper permissions. These guidelines are also documented in the RedHat Container Platform Docs > Creating Images > Support arbitrary user ids. Watch out for SCC By default, when installed in a new OpenShift project, the GitLab Runner Operator will run as non-root. There are exceptions, when all the service accounts in a project are granted anyuid access, such as the default project. In that case, the user of the image will be root. This can be easily checked by running whoami inside any container shell, e.g. a job. Read more about SCC in RedHat Container Platform Docs > Managing security context constraints. Run As anyuid SCC Though discouraged, in the event that it is absolutely necessary for a CI job to run as the root user or to write to the root filesystem, you will need to set the anyuid SCC on the GitLab Runner service account, gitlab-runner-sa, which is used by the GitLab Runner container.
oc adm policy add-scc-to-user anyuid -z gitlab-runner-sa Using FIPS Compliant GitLab Runner To use a FIPS compliant GitLab Runner Helper, change the helper image as follows: apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: dev spec: gitlabUrl: token: gitlab-runner-secret helperImage: gitlab/gitlab-runner-helper:ubi-fips concurrent: 2 Register GitLab Runner by using a self-signed certificate When you use a self-signed certificate with your GitLab self-managed installation, you must create a secret that contains the CA certificate used to sign your private certificates. The name of the secret is then provided as the CA in the Runner spec section: KIND: Runner VERSION: apps.gitlab.com/v1beta2 FIELD: ca <string> DESCRIPTION: Name of tls secret containing the custom certificate authority (CA) certificates The secret can be created using the following command: oc create secret generic mySecret --from-file=tls.crt=myCert.pem -o yaml Register GitLab Runner with an external URL that points to an IP address If the runner cannot match the self-signed certificate with the hostname, you might get an error message. This can happen when the GitLab self-managed instance is configured to be accessed from an IP address instead of a hostname (where ###.##.##.## is the IP address of the GitLab server): ERROR: Registering runner... failed runner=A5abcdEF status=couldn't execute POST against Post x509: cannot validate certificate for ###.##.##.## because it doesn't contain any IP SANs PANIC: Failed to register the runner. You may be having network problems. To fix this issue: On the GitLab self-managed server, modify the openssl configuration to add the IP address to the subjectAltName parameter: # vim /etc/pki/tls/openssl.cnf [ v3_ca ] subjectAltName=IP:169.57.64.36 <---- Add this line. 169.57.64.36 is your GitLab server IP. Then re-generate a self-signed CA with the commands below: # cd /etc/gitlab/ssl # openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout /etc/gitlab/ssl/169.57.64.36.key -out /etc/gitlab/ssl/169.57.64.36.crt # openssl dhparam -out /etc/gitlab/ssl/dhparam.pem 4096 # gitlab-ctl restart Use this new certificate to generate a new secret.
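The two-step sizing guidance in the concurrency section above can be turned into a quick calculation. The sketch below is not from the GitLab documentation; the per-job and cluster figures are made-up placeholders, and the result is only a starting point for the concurrent property.

# Hypothetical helper for estimating the Runner's `concurrent` value from
# cluster capacity, following the two steps described above.

def estimate_concurrency(cluster_cpu_cores: float, cluster_memory_gib: float,
                         job_cpu_cores: float, job_memory_gib: float) -> int:
    """Return how many copies of a typical CI job fit in the cluster at once."""
    by_cpu = cluster_cpu_cores // job_cpu_cores
    by_memory = cluster_memory_gib // job_memory_gib
    return int(min(by_cpu, by_memory))

# Example figures (placeholders, not measured values):
# a 16-core / 64 GiB worker pool and a job that needs 2 cores and 4 GiB.
print(estimate_concurrency(16, 64, 2, 4))  # -> 8, a candidate for `concurrent: 8`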
https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html
2022-05-16T12:56:37
CC-MAIN-2022-21
1652662510117.12
[]
docs.gitlab.com
To Configure the Rate Limiting Policy When you click Apply to apply a policy, the policy configuration dialog appears. You set the number of requests, period of time for receiving the requests, and a time unit. For example, in the Limits section of the policy configuration dialog, you can set the following limits. In the case of the rate limiting policy, if the API receives 123 requests within 2000 ms, the API rejects further requests, and if the API receives 100 requests within 1000 ms, the API rejects further requests. In the case of the throttling policy, requests are queued instead of rejected.
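For illustration only, here is a minimal fixed-window sketch of the difference between the two policies; it is not MuleSoft code, and it reuses the 100-requests-per-1000-ms figure from the example above.

import time
from collections import deque

class FixedWindowLimiter:
    """Counts requests per window; rejects once the limit is hit (rate limiting)."""

    def __init__(self, max_requests: int = 100, window_ms: int = 1000):
        self.max_requests = max_requests
        self.window_ms = window_ms
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if (now - self.window_start) * 1000 >= self.window_ms:
            self.window_start, self.count = now, 0  # start a new window
        if self.count < self.max_requests:
            self.count += 1
            return True
        return False  # rate limiting: reject

limiter = FixedWindowLimiter()
queued = deque()  # a throttling policy would park excess requests here instead of rejecting
for request_id in range(105):
    if not limiter.allow():
        queued.append(request_id)  # throttling behaviour: queue instead of reject
print(len(queued))  # -> 5 requests over the limit in this window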
https://docs.mulesoft.com/api-manager/1.x/configure-rate-limiting-task
2022-05-16T11:34:18
CC-MAIN-2022-21
1652662510117.12
[]
docs.mulesoft.com
<flow name="flow1"> < doc: <ee:transform doc: <ee:message > <ee:set-payload ><![CDATA[%dw 2.0 output application/json --- Mule::lookup('flow2', {test:'hello '})]]></ee:set-payload> </ee:message> </ee:transform> </flow> <flow name="flow2" > <set-payload </flow> lookup lookup(String, Any, Number) This function enables you to execute a flow within a Mule app and retrieve the resulting payload. It works in Mule apps that are running on Mule Runtime version 4.1.4 and later. Similar to the Flow Reference component (recommended), the lookup function enables you to execute another flow within your app and to retrieve the resulting payload. It takes the flow’s name and an input payload as parameters. For example, lookup("anotherFlow", payload) executes a flow named anotherFlow. The function executes the specified flow using the current attributes, variables, and any error, but it only passes in the payload without any attributes or variables. Similarly, the called flow will only return its payload. Note that lookup function does not support calling subflows. Example This example shows XML for two flows. The lookup function in flow1 executes flow2 and passes the object {test:'hello '} as its payload to flow2. The Set Payload component ( <set-payload/>) in flow2 then concatenates the value of {test:'hello '} with the string world to output and log hello world.
https://docs.mulesoft.com/dataweave/2.2/dw-mule-functions-lookup
2022-05-16T12:17:00
CC-MAIN-2022-21
1652662510117.12
[]
docs.mulesoft.com
%dw 2.0 import * from dw::core::Types type ALiteralType = "Mariano" output application/json --- { a: isLiteralType(ALiteralType), b: isLiteralType(Boolean) } isLiteralType isLiteralType(Type): Boolean Returns true if the input is the Literal type. Introduced in DataWeave 2.3.0. Supported by Mule 4.3 and later.
https://docs.mulesoft.com/dataweave/2.3/dw-types-functions-isliteraltype
2022-05-16T12:31:53
CC-MAIN-2022-21
1652662510117.12
[]
docs.mulesoft.com
For information about regions in templates and forms, search for the article Section preview shows changes in real time for templated sections on Pega Community. - In the navigation panel, click . - From the list of section instances, open a section that is based on a design template (a templated section) and click Convert to full section editor to make all editing options available. - On the Settings tab of the Section form, select the Allow Section to be edited at runtime check box. - Before you can create a templated region, you must configure an empty dynamic layout control by completing the following steps: - On the Settings tab, select the Design template option and specify a template icon. - Click the Design tab and, if an empty dynamic layout does not currently exist, drag Dynamic layout from the Structural list onto the section. - In the header of the empty dynamic layout control, click the View properties icon. - On the Section properties panel, select the Use as template region check box, specify a name in the Region name field, and click Submit. - On the Design tab of the Section form, drag Layout group from the Structural list onto the empty dynamic layout control. - In the Choose layout-group format dialog box, select the format (for example, Tab or Accordion), and click OK. Users can change this format at run time in App Studio. - On the Design tab of the Section form, in the header of the layout group control, click the View properties icon. - On the Layout group properties panel, select the Use as template region check box, specify a name in the Region name field, and click Submit. - Confirm that the word Region is displayed in the header of both the dynamic layout control and the layout group control and click Save on the Section form.
https://docs.pega.com/user-experience/86/creating-templated-region-based-layout-group
2022-05-16T13:28:47
CC-MAIN-2022-21
1652662510117.12
[]
docs.pega.com
7 Customizing the Standard SDK This appendix presents customizations you can apply to the standard SDK. 7.1 Adding Individual Packages to the Standard SDK When you build a standard SDK using the bitbake -c populate_sdk command, a default set of packages is included in the resulting SDK. The TOOLCHAIN_HOST_TASK and TOOLCHAIN_TARGET_TASK variables control the set of packages added to the SDK. If you want to add individual packages to the toolchain that runs on the host, simply add those packages to the TOOLCHAIN_HOST_TASK variable. Similarly, if you want to add packages to the default set that is part of the toolchain that runs on the target, add the packages to the TOOLCHAIN_TARGET_TASK variable. 7.2 Adding API Documentation to the Standard SDK You can include API documentation as well as any other documentation provided by recipes with the standard SDK by adding "api-documentation" to the DISTRO_FEATURES variable: DISTRO_FEATURES:append = " api-documentation" Setting this variable as shown here causes the OpenEmbedded build system to build the documentation and then include it in the standard SDK.
https://docs.yoctoproject.org/sdk-manual/appendix-customizing-standard.html
2022-05-16T11:33:17
CC-MAIN-2022-21
1652662510117.12
[]
docs.yoctoproject.org
Common Functions¶ These functions are available from within a process: Calculate hillshading from elevation data¶ mp.hillshade( elevation, azimuth=315.0, altitude=45.0, z=1.0, scale=1.0 ) Returns an array with the same shape as the input array. elevation: input array azimuth: horizontal angle of light source (315: North-West) altitude: vertical angle of light source (90 would result in slope shading) z: vertical exaggeration scale: scale factor of pixel size units versus height units (insert 112000 when having elevation values in meters in a geodetic projection) Extract contour lines from elevation data¶ mp.contours( array, interval=100, pixelbuffer=0, field='elev' ) Returns contours as GeoJSON-like pairs of properties and geometry. elevation: input array interval: elevation value interval field: output field name containing elevation value Clip array by vector data¶ mp.clip( array, geometries, inverted=False, clip_buffer=0 ) array: source array geometries: geometries used to clip source array inverted: bool, invert clipping clip_buffer: int (in pixels), buffer geometries before applying clip
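A short usage sketch may help tie the three helpers together. It is not from the mapchete documentation; the input names "dem" and "mask" and the parameter values are assumptions, as is reading the inputs through mp.open inside a process's execute function.

# Hypothetical mapchete process file combining the three helpers documented above.
# Input names ("dem", "mask") and parameter values are illustrative only.

def execute(mp):
    with mp.open("dem") as dem_src:
        elevation = dem_src.read()

    # Shaded relief with a north-west light source and mild exaggeration.
    shaded = mp.hillshade(elevation, azimuth=315.0, altitude=45.0, z=1.5, scale=1.0)

    # 100 m contour lines as GeoJSON-like (properties, geometry) pairs.
    contour_lines = mp.contours(elevation, interval=100, field="elev")

    # Clip the hillshade to the mask geometries.
    with mp.open("mask") as mask_src:
        geometries = list(mask_src.read())
    return mp.clip(shaded, geometries, inverted=False)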
https://mapchete.readthedocs.io/en/stable/common_functions.html
2022-05-16T12:09:33
CC-MAIN-2022-21
1652662510117.12
[]
mapchete.readthedocs.io
This document describes the Sync Server Storage API, version 1.0. It has been replaced by Storage API v1.1. A Weave Basic Object is the generic wrapper around all items passed into and out of the Weave server. The Weave Basic Object has the following fields: Weave Basic Objects and all data passed into the Weave Server should be utf-8 encoded. Sample: { "id": "B1549145-55CB-4A6B-9526-70D370821BB5", "parentid": "88C3865F-05A6-4E5C-8867-0FAC9AE264FC", "modified": "2454725.98", "payload": "{\"encryption\":\" \"data\": \"a89sdmawo58aqlva.8vj2w9fmq2af8vamva98fgqamff...\"}" } Each WBO is assigned to a collection with other related WBOs. Collection names may only contain alphanumeric characters, period, underscore and hyphen. Collections supported at this time are: Additionally, the following collections are supported for internal Weave client use: Weave URLs follow, for the most part, REST semantics. Request and response bodies are all JSON-encoded. The URL for Weave Storage requests is structured as follows: name>/<api pathname>/<version>/<username>/<further instruction> Weave uses HTTP basic auth (over SSL, so as to maintain password security). If the auth username does not match the username in the path, the server will issue an Error Response. The Weave API has a set of Weave Response Codes to cover errors in the request or on the server side. The format of a successful response is defined in the appropriate request method section. Returns a hash of collections associated with the account, along with the last modified timestamp for each collection. Returns a hash of collections associated with the account, along with the total number of items for each collection. Returns a tuple containing the user’s current usage (in K) and quota. Returns a list of the WBO ids contained in a collection. This request has additional optional parameters: - ids: Returns the ids for objects in the collection that are in the provided comma-separated list. - predecessorid: Returns the ids for objects in the collection that are directly preceded by the id given. Usually only returns one result. - parentid: Returns the ids for objects in the collection that are the children of the parent id given. - older: Returns only ids for objects in the collection that have been last modified before the date given. - newer: Returns only ids for objects in the collection that have been last modified since the date given. - full: If defined, returns the full WBO, rather than just the id. - index_above: If defined, only returns items with a higher sortindex than the value specified. - index_below: If defined, only returns items with a lower sortindex than the value specified. - limit: Sets the maximum number of ids that will be returned. - offset: Skips the first n ids. For use with the limit parameter (required) to paginate through a result set. - sort:: sorts before getting - ‘oldest’ - Orders by modification date (oldest first) - ‘newest’ - Orders by modification date (newest first) - ‘index’ - Orders by the sortindex descending (highest weight first) Returns the WBO in the collection corresponding to the requested id Two alternate output formats are available for multiple record GET requests. They are triggered by the presence of the appropriate format in the Accept header (with application/whoisi taking precedence) Adds the WBO defined in the request body to the collection. If the WBO does not contain a payload, it will only update the provided metadata fields on an already defined object. 
The server will return the timestamp associated with the modification. Takes an array of WBOs in the request body and iterates over them, effectively doing a series of atomic PUTs with the same timestamp. Returns a hash of successful and unsuccessful saves, including guidance as to possible errors: {... "invalid parentid"], "{GXS58IDC}14": ["invalid parentid"], "{GXS58IDC}17": ["invalid parentid"], "{GXS58IDC}20": ["invalid parentid"]}} Deletes the collection and all contents. Additional request parameters may modify the selection of which items to delete: Deletes the WBO at the location given. All delete requests return the timestamp of the action. Deletes all records for the user. Will return a precondition error unless an X-Confirm-Delete header is included. All delete requests return the timestamp of the action. X-Weave-Backoff Indicates that the server is under heavy load or has suffered a failure and the client should not try again for the specified number of seconds (usually 1800). X-If-Unmodified-Since On any write transaction (PUT, POST, DELETE), this header may be added to the request, set to a timestamp. If the collection to be acted on has been modified since the timestamp given, the request will fail. X-Weave-Alert This header may be sent back from any transaction, and contains potential warning messages, information, or other alerts. The contents are intended to be human-readable. X-Weave-Timestamp This header will be sent back with all requests, indicating the current timestamp on the server. If the request was a PUT or POST, this will also be the modification date of any WBOs submitted or modified.
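As a rough illustration of the request style described above, the sketch below lists the ids in one collection with a few of the optional parameters. The host, username, password, collection name, and the storage path segment are placeholders or assumptions, since the full URL layout is only partially shown here.

import requests

# Placeholder values; substitute your Weave server, API path, and credentials.
BASE = "https://weave.example.com/1.0/jdoe"  # <server>/<api pathname>/<version>/<username>
AUTH = ("jdoe", "s3cret")                    # HTTP basic auth, sent over SSL

# List ids in an assumed "bookmarks" collection, newest first, at most 10,
# only items modified since the timestamp from the sample WBO above.
params = {"sort": "newest", "limit": 10, "newer": 2454725.98}
resp = requests.get(f"{BASE}/storage/bookmarks", params=params, auth=AUTH)
resp.raise_for_status()

print(resp.headers.get("X-Weave-Timestamp"))  # current server timestamp
print(resp.json())                            # list of WBO ids (add full=1 for whole WBOs)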
https://moz-services-docs.readthedocs.io/en/latest/storage/apis-1.0.html
2022-05-16T11:17:46
CC-MAIN-2022-21
1652662510117.12
[]
moz-services-docs.readthedocs.io
Tip: Learn Commands for Viewing and Managing Boot Configuration Data on Windows 7 BCD Editor (Bcdedit.exe) is a command-line utility that lets you view and manage the BCD store. To use BCD Editor: 1. Click Start, point to All Programs, and then click Accessories. 2. Right-click Command Prompt, and then click Run As Administrator. 3. Type bcdedit at the command prompt. Armed with the right set of commands, you can use bcdedit to: - Create, import, export, and identify the entire BCD store. - Create, delete, and copy individual entries in the BCD store. - Set or delete entry option values in the BCD store. - Control the boot sequence and the boot manager. - Configure and control Emergency Management Services (EMS). - Configure and control boot debugging as well as hypervisor debugging. The following reference summarizes commands you can use when you are working with the BCD store. Note that BCD Editor is an advanced command-line tool and you should attempt to modify the BCD store only if you are an experienced IT pro. As a safeguard, you should make a full backup of the computer prior to making any changes to the BCD store. Why? If you make a mistake, your computer might end up in a non-bootable state, and then you would need to initiate a recovery. Here's a list of commands you can use with BCD Editor and a description of what each does: /bootdebug Enables or disables boot debugging for a boot application. /bootems Enables or disables Emergency Management Services for a boot application. /bootsequence Sets the one-time boot sequence for the boot manager. /copy Makes copies of entries in the store. /create Creates new entries in the store. /createstore Creates a new (empty) boot configuration data store. /dbgsettings Sets the global debugger parameters. /debug Enables or disables kernel debugging for an operating system entry. /default Sets the default entry that the boot manager will use. /delete Deletes entries from the store. /deletevalue Deletes entry options from the store. /displayorder Sets the order in which the boot manager displays the multiboot menu. /ems Enables or disables Emergency Management Services for an operating system entry. /emssettings Sets the global Emergency Management Services parameters. /enum Lists entries in the store. /export Exports the contents of the system store to a file. This file can be used later to restore the state of the system store. /hypervisorsettings Sets the hypervisor parameters. /import Restores the state of the system store by using a backup file created with the /export command. /mirror Creates a mirror of entries in the store. /set Sets entry option values in the store. /sysstore Sets the system store device. This only affects EFI systems. /timeout Sets the boot manager timeout value. /toolsdisplayorder Sets the order in which the boot manager displays the tools menu. /v Sets output to verbose mode. From the Microsoft Press book Windows 7 Administrator's Pocket Consultant by William R. Stanek. Looking for More Tips? For more tips on Windows 7 and other Microsoft technologies, visit the TechNet Magazine Tips library.
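Since the article stresses taking a full backup before touching the BCD store, a small wrapper around the /export and /enum commands listed above is a reasonable first step. This sketch is not from the article; the backup path is a placeholder, and it must run from an elevated prompt on Windows.

import subprocess

def run_bcdedit(*args: str) -> str:
    """Run bcdedit with the given switches and return its output (requires admin rights)."""
    result = subprocess.run(["bcdedit", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Back up the system store first (placeholder path), then list current entries.
run_bcdedit("/export", r"C:\Backups\bcd-backup.bcd")
print(run_bcdedit("/enum"))

# If something goes wrong, the backup can be restored with:
#   bcdedit /import C:\Backups\bcd-backup.bcd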
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/ff404185(v=msdn.10)
2018-11-13T03:05:17
CC-MAIN-2018-47
1542039741192.34
[]
docs.microsoft.com
You enable encryption in your store by creating an encryption key. As soon as you create the key, credit card data on new orders will be encrypted. You create the key with the Encryption Key Wizard. Follow the steps in this section if you want to enable encryption in your store, and you do not have existing orders with credit card information. But see also Processing Existing Orders with the Encryption Key Wizard.
https://docs.miva.com/reference-guide/enabling-encryption-in-store
2018-11-13T02:13:14
CC-MAIN-2018-47
1542039741192.34
[]
docs.miva.com
Custom code The libs/ subdirectory of your repository provides a convenient place to put reusable code used throughout your bundles and hooks. A Python module called example.py placed in this directory will be available as repo.libs.example wherever you have access to a bundlewrap.repo.Repository object. In nodes.py and groups.py, you can do the same thing with just libs.example. Only single files, no subdirectories or packages, are supported at the moment.
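A minimal illustration of this mechanism, with made-up file and function names: a helper placed in libs/example.py is reachable as repo.libs.example from bundles and hooks, and as libs.example from nodes.py and groups.py.

# libs/example.py -- a hypothetical reusable helper
def admin_users(environment):
    """Return the admin accounts for a given environment."""
    common = ["alice", "bob"]
    return common + (["oncall"] if environment == "production" else [])

# nodes.py -- using the helper via the `libs` shortcut
nodes = {
    "web1": {
        "metadata": {
            "admins": libs.example.admin_users("production"),
        },
    },
}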
https://docs.bundlewrap.org/repo/libs/
2018-11-13T03:13:08
CC-MAIN-2018-47
1542039741192.34
[]
docs.bundlewrap.org
The Version Installer file will always create/update the ODBC DSN specified in the install. So the installation-specific database connection settings will need to be entered during the install. There are two ways to do this. 1. The user enters them in the Version Installer Setup the Local Database dialog or 2. An administrator writes a batch file or script file to pass the settings on the MSI command line. For example: Msiexec.exe /i MYAPP_v1.0.0_en-us.msi DBII=MYAPPDSN DBSV=myservername\sqlserver DBAS=mydatabaseinstance.
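The scripted variant described in option 2 might look like the following sketch; it is not from the LANSA documentation, it reuses the placeholder MSI name and DBII/DBSV/DBAS values from the example above, and the silent-install switch is an assumption.

import subprocess

# Pass the installation-specific database settings as MSI properties,
# mirroring the msiexec example above (values are placeholders).
properties = {
    "DBII": "MYAPPDSN",                 # ODBC DSN to create/update
    "DBSV": r"myservername\sqlserver",  # database server
    "DBAS": "mydatabaseinstance",       # database instance
}

cmd = ["msiexec.exe", "/i", "MYAPP_v1.0.0_en-us.msi", "/qn"]  # /qn = silent install (assumed switch)
cmd += [f"{key}={value}" for key, value in properties.items()]

subprocess.run(cmd, check=True)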
https://docs.lansa.com/14/en/lansa022/content/lansa/vldtool20_0020.htm
2018-11-13T02:20:30
CC-MAIN-2018-47
1542039741192.34
[]
docs.lansa.com
Perspective Tool Properties The perspective tool allows you to deform artwork by creating a rectangular bounding box around it and allowing you to manipulate any of the four corners. The selected artwork will be deformed to fit the shape you make by simulating a perspective effect. - In the Tools toolbar, select the Perspective tool. The tool's properties are displayed in the Tool Properties view. NOTE To learn how to use the Perspective tool, see About the Perspective Tool.
https://docs.toonboom.com/help/harmony-16/essentials/reference/tool-properties/perspective-tool-properties.html
2018-11-13T02:24:00
CC-MAIN-2018-47
1542039741192.34
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Character_Design/Perspective_Tool/HAR12_ESS_PerpectiveTool_Options.png', None], dtype=object) ]
docs.toonboom.com
6.4. Virtuoso Cluster Fault Tolerance Abstract This chapter discusses fault tolerance and how to configure it. The following aspects are covered: Setting up a fault tolerant logical cluster inside a physical cluster. Creating tables and indices for fault tolerance. Interpreting status and error messages and managing failures and recovery. Optimizing a schema for fault tolerance: For read-intensive workloads, the work can be profitably split among many copies of the same partition. RDF specifics relating to fault tolerance. Splitting a cluster so that one copy of the partitions does bulk load while another serves online queries.
http://docs.openlinksw.com/virtuoso/fault/
2018-11-13T02:35:59
CC-MAIN-2018-47
1542039741192.34
[]
docs.openlinksw.com
8.5.101.10 Workforce Management Server Release Notes What's New This release includes only resolved issues. Resolved Issues This release contains the following resolved issues: The Scheduled status of Calendar Exception items is now always resolved correctly. Previously, the resolved status of Calendar Exception items sometimes changed suddenly from Scheduled to Not Scheduled after other Calendar items were published to the schedule if the corresponding Exception schedule states overlapped Meals or Breaks in the schedule. (WFM-25118) Selected records are now correctly approved or declined in WFM Web for Supervisors Schedule > Changes Approval view. Previously, incorrect records were sometimes approved instead of the selected record. (WFM-25077) Upgrade Notes No special procedure is required to upgrade to release 8.5.101.10.
https://docs.genesys.com/Documentation/RN/latest/wm-svr85rn/wm-svr8510110
2018-11-13T03:12:59
CC-MAIN-2018-47
1542039741192.34
[]
docs.genesys.com
Centralized Logging Service in Skype for Business 2015 Summary: Learn about the service components and configuration settings for the Centralized Logging Service in Skype for Business Server 2015. The Centralized Logging Service can: Start or stop logging on one or more computers and pools with a single command from a central location. Search logs on one or more computers and pools. You can tailor the search to return all logs on all machines, or return more concise results. Configure logging sessions as follows: Define a Scenario, or use a default scenario. A scenario in Centralized Logging Service is made up of scope (global or site), a scenario name to identify the purpose of the scenario, and one or more providers. You can run the default scenario and one defined scenario at any given time on a computer. Use an existing provider or create a new provider. A provider defines what the logging session collects, what level of detail, what components to trace, and what flags are applied. Tip If you are familiar with OCSLogger, the term providers refers to the collection of components (for example, S4, SIPStack), a logging type (for example, WPP, EventLog, or IIS logfile), a tracing level (for example, All, verbose, debug), and flags (for example, TF_COMPONENT, TF_DIAG). These items are defined in the provider (a Windows PowerShell variable) and passed into the Centralized Logging Service command. Configure logs for specific computers and pools. Define the scope for the logging session from the options Site (to run logging captures on computers in that site only), or Global (to run logging captures on all computers in the deployment). The Centralized Logging Service is a powerful troubleshooting tool for problems large or small, from root cause analysis to performance problems. All examples are shown using the Skype for Business Server Management Shell. Help is provided for the command-line tool through the tool itself, but there is a limited set of functions that you can execute from the command line. By using Skype for Business Server Management Shell, you have access to a much larger and much more configurable set of features, so that should always be your first choice. Logging service components The Centralized Logging Service runs on all servers in your deployment, and is made up of the following agents and services: Centralized Logging Service Agent ClsAgent runs on every machine with Skype for Business Server deployed. It listens (on TCP ports 50001-50003) for commands from ClsController over WCF and sends responses back to the controller. It manages log sessions (start/stop/update), and searches logs. It also performs housekeeping operations like log archiving and purges. Centralized Logging Service Controller Cmdlets The Skype for Business Server Management Shell sends Start, Stop, Flush, and Search commands to the ClsAgent. When search commands are sent, the resulting logs are returned to the ClsControllerLib.dll and aggregated. The controller sends commands to the agent, receives the status of those commands and manages the search log file data as it is returned from all agents on any computer in the search scope, and aggregates the log data into a meaningful and ordered output set. The information in the following topics is focused on using the Skype for Business Server Management Shell. ClsController communications to ClsAgent You issue commands using the Windows Server command-line interface or using the Skype for Business Server Management Shell.
The commands are executed on the computer you are logged in to and sent to the ClsAgent locally or to the other computers and pools in your deployment. ClsAgent maintains an index file of all .CACHE files that it has on the local machine. ClsAgent allocates them so that they are evenly distributed across volumes defined by the option CacheFileLocalFolders, never consuming more than 80% of each volume (that is, the local cache location and the percentage is configurable using the Set-CsClsConfiguration cmdlet). ClsAgent is also responsible for aging old cached event trace log (.etl) files off the local machine. After two weeks (that is, the timeframe is configurable using the Set-CsClsConfiguration cmdlet) these files are copied to a file share and deleted from the local computer. For details, see Set-CsClsConfiguration. When a search request is received, the search criteria is used to select the set of cached .etl files to perform the search based on the values in the index maintained by the agent. Note Files that are moved to the file share from the local computer can be searched by ClsAgent. Once ClsAgent moves the files to the file share, the aging and removal of files is not maintained by ClsAgent. You should define an administrative task to monitor the size of the files in the file share and delete them or archive them. The resulting log files can be read and analyzed using a variety of tools, including Snooper.exe and any tool that can read a text file, such as Notepad.exe. Snooper.exe is part of the Skype for Business Server 2015 Debug Tools and is available as a Web download. Like OCSLogger, the Centralized Logging Service has several components to trace against, and provides options to select flags, such as TF_COMPONENT and TF_DIAG. Centralized Logging Service also retains the logging level options of OCSLogger. The most important advantage to using the Skype for Business Server Management Shell over the command-line ClsController is that you can configure and define new scenarios using selected providers that target the problem space, custom flags, and logging levels. The scenarios available to ClsController are limited to those that are defined for the executable. In previous versions, OCSLogger.exe was provided to enable administrators and support personnel to collect trace files from computers in the deployment. OCSLogger, for all of its strengths, had a shortcoming. You could only collect logs on one computer at a given time. You could log on to multiple computers by using separate copies of OCSLogger, but you ended up with multiple logs and no easy way to aggregate the results. When a user requests a log search, the ClsController determines which machines to send the request to (that is, based on the scenarios selected). It also determines whether the search needs to be sent to the file share where the saved .etl files are located. When the search results are returned to the ClsController, the controller merges the results into a single time-ordered result set that is presented to the user. Users can save the search results to their local machine for further analysis. When you start a logging session, you specify scenarios that are relative to the problem that you are trying to resolve. You can have two scenarios running at any time. One of these two scenarios should be the AlwaysOn scenario. As the name implies, it should always be running in your deployment, collecting information on all computers, pools, and components. 
Important By default, the AlwaysOn scenario is not running in your deployment. You must explicitly start the scenario. Once started, it will continue to run until explicitly stopped, and the running state will persist through reboots of the computers. For details on starting and stopping scenarios, see Start or stop CLS log capture in Skype for Business Server 2015. When a problem occurs, start a second scenario that relates to the problem reported. Reproduce the problem, and stop the logging for the second scenario. Begin your log searches relative to the problem reported. The aggregated collection of logs produces a log file that contains trace messages from all computers in your site or global scope of your deployment. If the search returns more data than you can feasibly analyze (typically known as a signal-to-noise ratio, where the noise is too high), you run another search with narrower parameters. At this point, you can begin to notice patterns that show up and can help you get a clearer focus on the problem. Ultimately, after you perform a couple of refined searches you can find data that is relevant to the problem and figure out the root cause. Tip When presented with a problem scenario in Skype for Business Server, start by asking yourself "What do I already know about the problem?" If you quantify the problem boundaries, you can eliminate a large part of the operational entities in Skype for Business Server. Consider an example scenario where you know that users are not getting current results when looking for a contact. There is no point in looking for problems in the media components, Enterprise Voice, conferencing, and a number of other components. What you may not know is where the problem actually is: on the client, or is this a server-side problem? Contacts are collected from Active Directory by the User Replicator and delivered to the client by way of the Address Book Server (ABServer). The ABServer gets its updates from the RTC database (where User Replicator wrote them) and collects them into address book files, by default - 1:30 AM. The Skype for Business Server clients retrieve the new address book on a randomized schedule. Because you know how the process works, you can reduce your search for the potential cause to an issue related to data being collected from Active Directory by the User Replicator, the ABServer not retrieving and creating the address book files, or the clients not downloading the address book file. Current configuration. To display the current Centralized Logging Service configuration Start the Skype for Business Server Management Shell: Click Start, click All Programs, click Skype for Business 2015, and then click Skype for Business Server Management Shell. Type the following at a command-line prompt: Get-CsClsConfiguration Tip You can narrow or expand the scope of the configuration settings that are returned by defining -Identityand a scope, such as "Site:Redmond" to return only the CsClsConfiguration for the site Redmond. If you want details about a given portion of the configuration, you can pipe the output into another Windows PowerShell cmdlet. For example, to get details about the scenarios defined in the configuration for site "Redmond", type: Get-CsClsConfiguration -Identity "site:Redmond" | Select-Object -ExpandProperty Scenarios The result from the cmdlet displays the current configuration of the Centralized Logging Service.
https://docs.microsoft.com/en-us/skypeforbusiness/management-tools/centralized-logging-service/centralized-logging-service
2018-11-13T03:02:25
CC-MAIN-2018-47
1542039741192.34
[array(['../../sfbserver/media/ops_cls_architecture.jpg', 'Relationship between CLSController and CLSAgent.'], dtype=object)]
docs.microsoft.com
Method annotation to make a method call synchronized for concurrency handling with some useful baked-in conventions. @Synchronized is a safer variant of the synchronized method modifier. The annotation can only be used on static and instance methods. It operates similarly to the synchronized keyword, but it locks on different objects. When used with an instance method, the synchronized keyword locks on this, but the annotation locks on a (by default automatically generated) field named $lock. If the field does not exist, it is created for you. If you annotate a static method, the annotation locks on a static field named $LOCK instead. If you want, you can create these locks yourself. The $lock and $LOCK fields will not be generated if you create them yourself. You can also choose to lock on another field, by specifying its name as parameter to the @Synchronized annotation. In this usage variant, the lock field will not be created automatically, and you must explicitly create it yourself. Rationale: Locking on this or your own class object can have unfortunate side-effects, as other code not under your control can lock on these objects as well, which can cause race conditions and other nasty threading-related bugs. Example usage:

class SynchronizedExample {
    private final myLock = new Object()

    @Synchronized
    static void greet() {
        println "world"
    }

    @Synchronized
    int answerToEverything() {
        return 42
    }

    @Synchronized("myLock")
    void foo() {
        println "bar"
    }
}

which becomes:

class SynchronizedExample {
    private static final $LOCK = new Object[0]
    private final $lock = new Object[0]
    private final myLock = new Object()

    static void greet() {
        synchronized($LOCK) {
            println "world"
        }
    }

    int answerToEverything() {
        synchronized($lock) {
            return 42
        }
    }

    void foo() {
        synchronized(myLock) {
            println "bar"
        }
    }
}

Credits: this annotation is inspired by the Project Lombok annotation of the same name. The functionality has been kept similar to ease the learning curve when swapping between these two tools. Details: If $lock and/or $LOCK are auto-generated, the fields are initialized with an empty Object[] array, and not just a new Object() as many snippets using this pattern tend to use. This is because a new Object is NOT serializable, but a 0-size array is. Therefore, using @Synchronized will not prevent your object from being serialized. More examples:

import groovy.transform.Synchronized

class Util {
    private counter = 0
    private def list = ['Groovy']
    private Object listLock = new Object[0]

    @Synchronized
    void workOnCounter() {
        assert 0 == counter
        counter++
        assert 1 == counter
        counter--
        assert 0 == counter
    }

    @Synchronized('listLock')
    void workOnList() {
        assert 'Groovy' == list[0]
        list << 'Grails'
        assert 2 == list.size()
        list = list - 'Grails'
        assert 'Groovy' == list[0]
    }
}

def util = new Util()
def tc1 = Thread.start {
    100.times {
        util.workOnCounter()
        sleep 20
        util.workOnList()
        sleep 10
    }
}
def tc2 = Thread.start {
    100.times {
        util.workOnCounter()
        sleep 10
        util.workOnList()
        sleep 15
    }
}
tc1.join()
tc2.join()
http://docs.groovy-lang.org/latest/html/gapi/groovy/transform/Synchronized.html
2016-12-02T20:00:54
CC-MAIN-2016-50
1480698540563.83
[]
docs.groovy-lang.org
class OEBondGlyphStitch : public OEBondGlyphBase By using the OEBondGlyphStitch class, bonds can be annotated by drawing perpendicular lines across them. See also The following methods are publicly inherited from OEBondGlyphBase: OEBondGlyphStitch(const OEDepict::OEPen &pen, unsigned int nrstitches=2u, double stitchLengthScale=1.0, unsigned int layer=OEDepict::OELayerPosition::Above) Creates an OEBondGlyphStitch object with the specified parameters. [Figure: Example of bond annotations] The pen used when drawing the perpendicular lines across the bond. See examples (A) and (B) in Figure: Example of bond annotations. The number of perpendicular lines drawn across the bond. This value has to be in the range of [1, 5]. See examples (A) and (C) in Figure: Example of bond annotations. The multiplier used to modify the length of the drawn perpendicular lines. See examples (A) and (D) in Figure: Example of bond annotations. Specifies whether the perpendicular lines are drawn above (OELayerPosition.Above) or below (OELayerPosition.Below) the molecule. See examples (B) and (E) in Figure: Example of bond annotations. OEBondGlyphBase *CreateCopy() const Deep copy constructor that returns a copy of the object. The memory for the returned OEBondGlyphStitch object is dynamically allocated and owned by the caller.
https://docs.eyesopen.com/toolkits/csharp/graphemetk/OEGraphemeClasses/OEBondGlyphStitch.html
2018-11-12T23:19:49
CC-MAIN-2018-47
1542039741151.56
[]
docs.eyesopen.com
This document covers the following key points for the month end closing process in SAP: - The end of each period is characterized by a series of activities which includes opening of a new accounting period and closing of the current period - The Period-End Closing process is divided into four phases: Pre-closing Check/Readiness, Data Collection Closing, Reconciliation and Adjustment and Final Close - The G/L, AP and AR clearing is automated. Manual processing will only be conducted for exception items that cannot be reconciled - The reconciliation and adjustment phase is used to reconcile account entries across financial siloes and adjust financial postings as necessary - The Month End Closing process is considered complete when the posting periods across FI, FM and CO have been locked
http://www.erp-docs.com/665/step-by-step-guide-to-month-end-closing-process-in-sap/
2018-11-12T22:10:08
CC-MAIN-2018-47
1542039741151.56
[]
www.erp-docs.com
Magento Open Source 2.3.x This is the 2.3 Beta release version of Magento documentation. Content in this version is subject to change. For additional versions, see Magento Documentation and Resources. Dispatches If Magento Shipping is enabled, the Dispatches grid lists all shipments that are ready to ship. For each scheduled pickup, you can create a Dispatch and printed manifest that includes each package that is to be included, per carrier.
https://docs.magento.com/m2/ce/user_guide/sales/dispatches.html
2018-11-12T23:19:36
CC-MAIN-2018-47
1542039741151.56
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.magento.com
Authorization Code¶ The first step is to request the authorization from the user. This call will return a redirect to the login page. Authorize Request: curl -i -X GET -d "client_id=pekfaf6jxk6suyXXXXXXXXXX" --data-urlencode "redirect_uri=" -d "response_type=code" "" HTTP/1.1 302 Found Content-Type: text/plain; charset=UTF-8 Date: Mon, 28 Apr 2014 14:03:51 GMT Location: Server: Apache-Coyote/1.1 Set-Cookie: JSESSIONID=2D3C7E5AA412DF98124B8AC7121FEF7D; Path=/; Secure; HttpOnly Content-Length: 0 Connection: keep-alive If you did this through a browser, the login screen would appear to the user: After logging in, the user will be redirected to the redirect_uri specified in the authorization request with the authorization code appended to it. For the request above, the redirect URI would look like: From this response, the server running at test.carvoyant.com would parse the authorization code and make a request to the token endpoint to retrieve an actual access token. Note that this request requires the client id and secret key for the development partner to be passed as the Basic Authentication credentials. Access Token Request: curl -i --user pekfaf6jxk6suyXXXXXXXXXX:XXXXXXXXXX -d "grant_type=authorization_code" -d "code=v369uars628mgkXXXXXXXXXX" --data-urlencode "redirect_uri=" "" The response will include a JSON body with the access token information. Access Token Response: HTTP/1.1 200 OK Cache-Control: no-store Content-Type: application/json;charset=UTF-8 Date: Mon, 28 Apr 2014 14:24:34 GMT Server: Mashery Proxy X-Mashery-Responder: prod-j-worker-us-east-1c-31.mashery.com Content-Length: 161 Connection: keep-alive { "token_type":"bearer", "mapi":"pekfaf6jxk6suyXXXXXXXXXX", "access_token":"dmnda67wbdnyayXXXXXXXXXX", "expires_in":86400, "refresh_token":"f2hqes6fpg37d2XXXXXXXXXX" } At this point, the development partner's system would store the access token and refresh token and use them for future requests.
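The same token exchange can also be scripted. Below is a minimal illustrative sketch in Python using the requests library; the token endpoint URL and redirect URI are placeholders (the real values are elided in the curl examples above), so treat them as assumptions rather than documented values.

import requests

# Placeholders -- substitute your own values; the real endpoint is omitted in the curl example above.
TOKEN_URL = "https://<carvoyant-token-endpoint>/token"
CLIENT_ID = "pekfaf6jxk6suyXXXXXXXXXX"
CLIENT_SECRET = "XXXXXXXXXX"
REDIRECT_URI = "https://example.com/callback"
AUTH_CODE = "v369uars628mgkXXXXXXXXXX"

# The client id and secret are sent as HTTP Basic Auth, matching the curl example.
resp = requests.post(
    TOKEN_URL,
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={
        "grant_type": "authorization_code",
        "code": AUTH_CODE,
        "redirect_uri": REDIRECT_URI,
    },
)
resp.raise_for_status()
tokens = resp.json()
print(tokens["access_token"], tokens["refresh_token"])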
https://carvoyant-api.readthedocs.io/en/latest/getting-started/oauth-example-authcode.html
2021-07-24T00:58:59
CC-MAIN-2021-31
1627046150067.87
[array(['../_images/auth-dialog.png', '../_images/auth-dialog.png'], dtype=object) ]
carvoyant-api.readthedocs.io
PayCore.io v1.5.16 (December 27, 2019)¶ By Dmytro Dziubenko, Chief Technology Officer Merry Christmas and Happy New Year, friends! Hoping your holidays—and all the days of 2020—are filled with joy! Cheery Season's greetings from PayCore.io! This year we have done much, and much more remains to be done. The platform is growing as a product: it has become a versatile payment hub. Our team continues developing the system and is preparing an extensive rebuild of PayCore.io. We recognize that the journey has just begun, but we are encouraged by the results of the work done so far. Highlights¶ - Creation of the Merchant Portal: internal logic update - Performance improvements List of Changes¶ Merchant Portal Tab¶ We sped up the process of linking the merchant application with the main dashboard. This required updating the internal logic of app creation, but there is no need to worry: nothing has changed radically from the outside. Thus, if you need to create a Commerce Account linked with an application, you can set up its start parameters via the Merchant Portal tab in the Administration section. Performance Improvements¶ This week we also made a handful of minor bug fixes too small for the human eye. The holiday season has arrived, so the next issue of the release notes is scheduled for Friday, January 10. Let's enjoy this wintertime!
https://docs.paycore.io/release-notes/archive/2019/v1.5.16/
2021-07-24T01:51:47
CC-MAIN-2021-31
1627046150067.87
[array(['../images/v1.5.16/dashboard.png', 'Dashboard'], dtype=object) array(['../images/v1.5.16/merchant-app.png', 'Merchant portal'], dtype=object) ]
docs.paycore.io
UiPath.OracleNetSuite.Activities.InitializeRecord The Initialize Record activity uses the NetSuite initialize operation to update a specific record (internalid). After initializing the record, the activity outputs the new record of the initialization in a Record object (Record) that you can use in a subsequent activity. The status of the request and success/failure information are returned in a ResponseStatus object. To use this activity, add the Initialize Record activity inside the NetSuite Application Scope activity. - Click the Configure button inside the Initialize Record activity. This allows you to set the input parameters. - Select the Record Type of the record you want the initialization to create. - Select the Reference Type of the record you want to initialize. - Enter the Reference Id for the internalid of the reference record that will be initialized. - Create and enter a Record variable that will contain the newly created record. - Create and enter a ResponseStatus variable for the Output property. In the Body of the Activity To enter your Initialize Record property values, you must use the Input Dialog by clicking the Configure button. - RecordType - The record type to create and initialize. - ReferenceType - The record type from which to initialize the record. - ReferenceId (string) - The Id of the record from which to initialize. Properties Common - DisplayName - The display name of the activity. This field supports only Strings or String variables. Misc - Private - If selected, the values of variables and arguments are no longer logged at Verbose level. Output - Record - The initialized record as returned by NetSuite. Enter a Record variable (UiPath.OracleNetSuite.Com.netsuite.webservices.Record). The Record object is not saved by default but is the output of the initialization transformation. Use the Insert Record activity to save this record with any additional fields that need to be set.
https://docs.uipath.com/activities/lang-zh_CN/docs/oracle-netsuite-initialize-record
2021-07-24T02:10:57
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/c3a6199-IntializeRecord_MSC.png', 'IntializeRecord_MSC.png'], dtype=object) array(['https://files.readme.io/c3a6199-IntializeRecord_MSC.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
All objects created in Data Service are referred to as Entities. To create a new entity: - In your Automation Cloud account, navigate to Data Service from the left-side navigation bar. - Click Create New Entity. The Create Entity panel opens. - Enter a Display Name for the entity and, optionally, a Description. The Display Name can be alpha-numeric, but it must start with a letter. It can have between 3 and 20 characters. The Name field can be modified. This is the name displayed when you import the entity in Studio. - Click Save. The new entity is displayed in your Entities list.
https://docs.uipath.com/data-service/lang-de/docs/creating-an-entity
2021-07-24T02:27:53
CC-MAIN-2021-31
1627046150067.87
[]
docs.uipath.com
How to contribute to Anaconda¶ Community engagement makes Anaconda Distribution, conda, and conda-build better. We value our open-source community and encourage all users to contribute to the Anaconda ecosystem. The best contributions start by helping and encouraging others, especially newcomers who are struggling with something you’ve overcome. See below for other ways you can contribute. Mailing lists¶ Join the mailing lists to help other users answer questions, debug issues, and suggest solutions. GitHub issues¶ If you want to get involved in contributing code for Anaconda, conda, or conda-build, we recommend collaborating with others, resolving bug issues, and submitting pull requests with those resolutions. Stack Overflow¶ Answer questions and suggest resolutions and workarounds on Stack Overflow. Documentation¶ Notice an error or gap in our documentation? We welcome pull requests for conda and conda-build documentation improvements and additions. If a documentation change is needed in Distribution, open a ticket on anaconda-issues. Example documentation contribution¶ Follow the directions below to submit a documentation PR using the GitHub interface. Start in the conda or conda-build documentation. Select “Edit on GitHub” on the page needing the edit. Edit the file in GitHub. Commit your changes. PR review process - PR is submitted. - Anaconda community members and/or staff review the PR, providing comments and revisions. - New contributors sign the CLA. - PR is merged. Conda-forge feedstocks¶ Contribute to conda-forge feedstocks where you can improve, update, and/or add new conda-build recipes to conda-forge. See our tutorials on how to build conda-build recipes. The recipes here are often used as the base of recipes used to build packages for defaults/repo.anaconda.com. Helping conda-forge increases the number and quality of packages available to install with conda, as well as helping Anaconda do the same for packages shipped in defaults. Tip A good way to find feedstocks to work on is to look at the staged recipes issues with the “Package request” label. Anaconda Enterprise¶ Ready to scale up your projects? Anaconda Enterprise is an enterprise-ready, secure, and scalable data science platform that empowers teams to govern data science assets, collaborate, and deploy their data science projects. Read more about Anaconda Enterprise to see if it’s the right option for you and your team. Social¶ The easiest way to contribute is to tell your friends about all of the things you can do with Anaconda. Be sure to mention that Anaconda provides package and environment management and over 7,500 open source packages—completely free. Check our social media to keep up with what’s happening at Anaconda and add to the conversation. Twitter | Facebook | LinkedIn | SlideShare
https://docs.continuum.io/anaconda/reference/contribute/
2021-07-24T01:50:43
CC-MAIN-2021-31
1627046150067.87
[]
docs.continuum.io
The following illustrates the scenario where a shared pager or other notification task is blacked out for normal business hours but on call for evenings and weekends. For the individual or group on-call this includes: An opposite model might work for personnel working regular shifts (and who are not 'on-call' for the weekend). For that scenario, those shift rotations would have notifications 'blacked out' for the weekend but active for their normal weekday shifts.
https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/42033.htm
2021-07-24T02:07:21
CC-MAIN-2021-31
1627046150067.87
[]
docs.ipswitch.com
Demonstration: Business Operation Configuration Now you will explore the component level configuration options for a Business Operation: On the configuration page for Demo.HL7.MsgRouter.Production, click the ORM_O01_FileOperation component. On the top of the settings page, you'll find the Informational Settings: AdapterClassname: EnsLib.File.OutboundAdapter — The name of the adapter that the operation uses. Notice that this field is read-only. Class Name: EnsLib.HL7.Operation.FileOperation — This is the name of the class that implements the component. Beneath this, you'll find the following configuration fields on the right panel under Basic Settings: Filename %H%M_ORM_O01_%f.txt — This specifies the name of the file that the operation outputs. %H and %M are time stamp specifiers. %f is a filename specifier that places the original input filename into the name of the new file. FilePath C:\Practice\Out — The directory that the operation writes to. If this directory does not exist, either create it now or change the configuration to use a directory that does exist. If you change the configuration, please remember to click Apply. Below the Basic Settings you'll find the Additional Settings: FailureTimeout: 15 — The number of seconds that the operation continues to attempt to write a file before returning an error and ceasing its attempts. Overwrite false — When true, the operation overwrites any file with the name specified in File Name. When false, the operation appends to the file. Pool Size: 1 — This is the number of instances of the operation currently instantiated and at work in the production. Note that every operation that uses an inbound adapter must have a pool size of at least one. HL7 File Operation components are instances of the EnsLib.HL7.Operation.FileOperation class. The File Operation components use the Outbound File Adapter to generate files. The class for this adapter is EnsLib.File.OutboundAdapter.
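To make the time-stamp and filename specifiers more concrete, here is a small illustrative Python sketch of how a pattern such as %H%M_ORM_O01_%f.txt could expand. This is only an approximation for explanatory purposes, not InterSystems code; the exact specifier semantics are those documented by InterSystems.

from datetime import datetime

def expand_filename(pattern: str, original_name: str, now: datetime) -> str:
    # Rough approximation: %H -> hour, %M -> minute, %f -> original input filename.
    return (pattern
            .replace("%H", now.strftime("%H"))
            .replace("%M", now.strftime("%M"))
            .replace("%f", original_name))

# A message read from ADT_Input.txt at 14:07 would be written to something like
# "1407_ORM_O01_ADT_Input.txt" under the settings shown above.
print(expand_filename("%H%M_ORM_O01_%f.txt", "ADT_Input.txt", datetime(2024, 1, 1, 14, 7)))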
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=THL7_Overview_ConfigureOperation
2021-07-24T02:14:03
CC-MAIN-2021-31
1627046150067.87
[]
docs.intersystems.com
Quickstart: Build a Java app to manage Azure Cosmos DB SQL API data APPLIES TO: SQL API In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. Prerequisites - An Azure account with an active subscription. Create one for free. Or try Azure Cosmos DB for free without an Azure subscription. You can also use the Azure Cosmos DB Emulator. - Java Development Kit (JDK) 8. Point your JAVA_HOME environment variable to the folder where the JDK is installed. - A Maven binary archive. On Ubuntu, run apt-get install maven to install Maven. - Git. On Ubuntu, run sudo apt-get install git to install Git. Introductory notes The structure of a Cosmos DB account. Irrespective of API or programming language, a Cosmos DB account contains zero or more databases, a database (DB) contains zero or more containers, and a container contains zero or more items, as shown in the diagram below: You may read more about databases, containers and items here. A few important properties are defined at the level of the container, among them provisioned throughput and partition key. The provisioned throughput is measured in Request Units (RUs) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning here. Create a database account Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB. From the Azure portal menu or the Home page, select Create a resource. On the New page, search for and select Azure Cosmos DB. On the Azure Cosmos DB page, select Create. In the Create Azure Cosmos DB Account page, enter the basic settings for the new Azure Cosmos DB account. - Backup policy - Select either periodic or continuous backup policy. - Encryption - Use either service-managed key or a customer-managed key. - Tags - Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. Select Review + create. Review the account settings, and then select Create. It takes a few minutes to create the account. Wait for the portal page to display Your deployment is complete. Select Go to resource to go to the Azure Cosmos DB account page. Now switch to working with code. Let's clone a SQL API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer. git clone Review the code This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to Run the app. Managing database resources using the synchronous (sync) API CosmosClient initialization.
The CosmosClient provides client-side logical representation for the Azure Cosmos database service. This client is used to configure and execute requests against the service.

client = new CosmosClientBuilder()
    .endpoint(AccountSettings.HOST)
    .key(AccountSettings.MASTER_KEY)
    // Setting the preferred location to Cosmos DB Account region
    // West US is just an example. User should set preferred location to the Cosmos DB region closest to the application
    .preferredRegions(Collections.singletonList("West US"))
    .consistencyLevel(ConsistencyLevel.EVENTUAL)
    .buildClient();

CosmosDatabase creation.

CosmosDatabaseResponse cosmosDatabaseResponse = client.createDatabaseIfNotExists(databaseName);
database = client.getDatabase(cosmosDatabaseResponse.getProperties().getId());

CosmosContainer creation.

CosmosContainerProperties containerProperties = new CosmosContainerProperties(containerName, "/lastName");
// Create container with 400 RU/s
CosmosContainerResponse cosmosContainerResponse = database.createContainerIfNotExists(containerProperties, ThroughputProperties.createManualThroughput(400));
container = database.getContainer(cosmosContainerResponse.getProperties().getId());

Item creation by using the createItem method.

// Create item using container that we created using sync client
// Use lastName as partitionKey for cosmos item
// Using appropriate partition key improves the performance of database operations
CosmosItemRequestOptions cosmosItemRequestOptions = new CosmosItemRequestOptions();
CosmosItemResponse<Family> item = container.createItem(family, new PartitionKey(family.getLastName()), cosmosItemRequestOptions);

Point reads are performed using the readItem method.

try {
    CosmosItemResponse<Family> item = container.readItem(family.getId(), new PartitionKey(family.getLastName()), Family.class);
    double requestCharge = item.getRequestCharge();
    Duration requestLatency = item.getDuration();
    logger.info("Item successfully read with id {} with a charge of {} and within duration {}", item.getItem().getId(), requestCharge, requestLatency);
} catch (CosmosException e) {
    logger.error("Read Item failed with", e);
}

SQL queries over JSON are performed using the queryItems method.

// Set some common query options
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
//queryOptions.setEnableCrossPartitionQuery(true); //No longer necessary in SDK v4
// Set query metrics enabled to get metrics around query executions
queryOptions.setQueryMetricsEnabled(true);
CosmosPagedIterable<Family> familiesPagedIterable = container.queryItems(
    "SELECT * FROM Family WHERE Family.lastName IN ('Andersen', 'Wakefield', 'Johnson')", queryOptions, Family.class);
familiesPagedIterable.iterableByPage(10).forEach(cosmosItemPropertiesFeedResponse -> {
    logger.info("Got a page of query result with {} items(s) and request charge of {}",
        cosmosItemPropertiesFeedResponse.getResults().size(), cosmosItemPropertiesFeedResponse.getRequestCharge());
    logger.info("Item Ids {}", cosmosItemPropertiesFeedResponse
        .getResults()
        .stream()
        .map(Family::getId)
        .collect(Collectors.toList()));
});

Run the app Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. In the git terminal window, cd to the sample code folder. cd azure-cosmos-java-getting-started In the git terminal window, use the following command to install the required Java packages.
mvn package In the git terminal window, use the following command to start the Java application (replace SYNCASYNCMODE with sync or async depending on which sample code you would like to run, replace YOUR_COSMOS_DB_HOSTNAME with the quoted URI value from the portal, and replace YOUR_COSMOS_DB_MASTER_KEY with the quoted primary key from the portal) mvn exec:java@SYNCASYNCMODE -DACCOUNT_HOST=YOUR_COSMOS_DB_HOSTNAME -DACCOUNT_KEY=YOUR_COSMOS_DB_MASTER_KEY The terminal window displays a notification that the FamilyDB database was created. The app creates a database with name AzureSampleFamilyDB The app creates a container with name FamilyContainer The app will perform point reads using object IDs and partition key value (which is lastName in our sample). The app will query items to retrieve all families with last name in ('Andersen', 'Wakefield', 'Johnson') The app doesn't delete the created resources. Switch back to the portal to clean up the resources from your account so that you don't incur charges. In this quickstart, you've learned how to create an Azure Cosmos DB SQL API account, create a document database and container using the Data Explorer, and run a Java app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
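If you prefer Python to Java, roughly the same resource setup can be sketched with the azure-cosmos Python SDK. This is an illustrative equivalent, not part of the Java sample; the endpoint and key are read from placeholder environment variables that you must supply yourself.

import os
from azure.cosmos import CosmosClient, PartitionKey

# Placeholders -- supply your own account URI and primary key.
ENDPOINT = os.environ["ACCOUNT_HOST"]
KEY = os.environ["ACCOUNT_KEY"]

client = CosmosClient(ENDPOINT, credential=KEY)

# Create the database and container if they do not already exist (400 RU/s, /lastName partition key).
database = client.create_database_if_not_exists(id="AzureSampleFamilyDB")
container = database.create_container_if_not_exists(
    id="FamilyContainer",
    partition_key=PartitionKey(path="/lastName"),
    offer_throughput=400,
)

# Create an item, then read it back with a point read.
family = {"id": "Andersen.1", "lastName": "Andersen", "district": "WA5"}
container.create_item(body=family)
item = container.read_item(item="Andersen.1", partition_key="Andersen")
print(item["lastName"])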
https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-java
2021-07-24T03:06:20
CC-MAIN-2021-31
1627046150067.87
[array(['media/account-databases-containers-items/cosmos-entities.png', 'Azure Cosmos account entities'], dtype=object) ]
docs.microsoft.com
Using Studio you can easily manipulate specific files of an entity record. These activities allow you to execute operations such as upload, download, or delete specified files of a specified field of an entity record. The following example contains a main XAML demonstrating how to upload a file to Data Service, after downloading it from Olympic Games related Wikipedia pages. Prerequisite Step 1: Create Entities in the Data Service Following the steps detailed in the Creating an Entity page, we created an entity for storing information about the Olympic Games. The Olympics entity has the following fields: Name (text) and Logo (file). Note: System built-in fields are automatically added to your entity in order to provide details about its creation. Prerequisite Step 2: Import the Entities in Studio Import the entity inside your workflow in Studio, as explained in the Importing Entities page. The following workflow scrapes information and images about a certain Olympics edition, maps the found data to arguments, and passes them along to the main workflow. Step 3: Build the Workflow Query Related Records We'll use the Query Entity Records activity to pull up a list of records from the Olympics entity in Data Service by setting up the name variable. This goes through all records from the Name field and outputs them in the existingRecords variable to be used in the following If activity. If Adding the If activity allows us to take one of two different courses of action, depending on whether the count for the existingRecords variable is 0 or not: 1. If a record does not exist, create it and assign values to it If no records are found, meaning that the existingRecords=0 condition is met, create a new Sequence, use the Multiple Assign activity to assign values to the olympics and olympics.Name variables and the Create Entity Record activity to add the collected information to the Data Service. You can use the information available in the Using Entities in Projects page, Create Data Records section example. 2. If a record already exists, assign values to it If a record is found, meaning that the existingRecords=0 condition is not met, use the Assign or Multiple Assign activity to assign the value to the existingRecords variable. Upload File to Record Field We use the Upload File to Record Field activity and upload the created file to the Logo field from the Olympics entity record. Delete the local file Remove the file that was downloaded locally from Wikipedia to our machine after uploading it to Data Service by using the Delete activity. Display the Value of the Variable in the Output Panel Add the Write Line activity to display the value of the string variable (for this example, olympics.Name) in the Output panel. Once finished, your project should look like this:
https://docs.uipath.com/data-service/lang-de/docs/uploading-file-fields-in-entity-records
2021-07-24T01:24:48
CC-MAIN-2021-31
1627046150067.87
[array(['https://files.readme.io/2ded0c8-Screenshot_2.png', 'Screenshot_2.png'], dtype=object) array(['https://files.readme.io/2ded0c8-Screenshot_2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/18d019f-72829aa-Main.jpg', '72829aa-Main.jpg'], dtype=object) array(['https://files.readme.io/18d019f-72829aa-Main.jpg', 'Click to close...'], dtype=object) array(['https://files.readme.io/8934c78-Screenshot_4.png', 'Screenshot_4.png'], dtype=object) array(['https://files.readme.io/8934c78-Screenshot_4.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/91305ca-Main.jpg', 'Main.jpg'], dtype=object) array(['https://files.readme.io/91305ca-Main.jpg', 'Click to close...'], dtype=object) array(['https://files.readme.io/4544a9a-Screenshot_1.png', 'Screenshot_1.png'], dtype=object) array(['https://files.readme.io/4544a9a-Screenshot_1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/63bbcf9-Screenshot_2.png', 'Screenshot_2.png'], dtype=object) array(['https://files.readme.io/63bbcf9-Screenshot_2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/c5232ff-Screenshot_3.png', 'Screenshot_3.png'], dtype=object) array(['https://files.readme.io/c5232ff-Screenshot_3.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/426eb5e-Screenshot_1.png', 'Screenshot_1.png'], dtype=object) array(['https://files.readme.io/426eb5e-Screenshot_1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/1a06cbd-Screenshot_2.png', 'Screenshot_2.png'], dtype=object) array(['https://files.readme.io/1a06cbd-Screenshot_2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/8c2ecaf-DS-1.png', 'DS-1.png'], dtype=object) array(['https://files.readme.io/8c2ecaf-DS-1.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
Frequently Asked Questions about the Service¶ How soon after observation is data available?¶ (Where can I find how your soil moisture retrieval works?) The retrieval relies on physical quantities that can be directly measured. There are less frequent measurements around the equator, where the Earth is 'widest' around its longitude belt, and more frequent measurements at high latitudes (in absolute values). Related to frozen conditions: when the soil is frozen, the signal that the satellite observes is completely different from unfrozen soil and cannot be transformed into soil moisture. This also applies for heavy thunderstorms when the tops of the clouds are frozen. More information can be found in the documentation describing our data flags (How to retrieve the data flags?). Are there any other conditions for which no valid measurements can be derived?¶ This can happen when there are radio sources transmitting on the same frequency as the satellite is measuring. As a result, the measurement can be disturbed, causing the soil to look drier than it actually is. This disturbance is called Radio Frequency Interference (RFI). We have a filter in place that checks for these disturbances to filter them out in an early stage of the processing chain. However, when areas become too obscured by RFI, the soil moisture retrieval will eventually fail. More information can be found in the documentation describing our data flags (How to retrieve the data flags?). Can I download the data using (S)FTP?¶ Not by default. The standard VanderSat service is based on our API (see API User Guide) and our Viewer (see VanderSat Viewer). Bespoke solutions can be made, if required. Do you have example scripts for downloading your data?¶ Yes, we do. Check the API reference documentation for an example Python script. Do you provide data for free?¶ Our high resolution data is a commercial product that comes at a cost and has been developed in-house using our own resources. However, VanderSat does provide passive microwave data at a coarse resolution for the CCI Soil moisture (financed by ESA:) and the C3S soil moisture (financed by EU Copernicus:) that may be obtained for free.
https://docs.vandersat.com/data_products/soil_moisture/faq_service.html
2021-07-24T02:06:44
CC-MAIN-2021-31
1627046150067.87
[]
docs.vandersat.com
Account¶ The Account object represents a unique account within the Carvoyant system. Properties Supported Verbs - GET - DELETE GET¶ Returns one or more accounts. Query Paths - /account/ - /account/{account-id} Query Parameters Call Options Sample JSON Response: { "account": { "id": 3, "firstName": "Speed", "lastName": "Racer", "username": "speedracer", "dateCreated": "20121130T144013+0000", "email": "[email protected]", "zipcode": "33635", "phone": "8135551212", "timeZone": "America/New_York", "preferredContact": "PHONE" }, "totalRecords": null, "actions": [] } POST¶ Creates or updates an account. Note that the client credentials authentication mechanism must be used for account creation. User account access tokens are not authorized to create new accounts. In the response to the creation of a new account, an OAuth2 authorization code will be provided. The calling system can use that authorization code to retrieve an access token for the new account without requiring the user to explicitly grant access (creating the account assumes access has been granted). Query Paths - /account/ - /account/{account-id} Query Parameters Sample JSON Response: { "account": { "id": 87, "firstName": "Speed", "lastName": "Racer", "username": null, "dateCreated": "20140505T173906+0000", "email": "[email protected]", "zipcode": "33635", "phone": null, "timeZone": null, "preferredContact": "EMAIL", "accessToken": { "code": "2f2w4ae6mmbvrdk94feen2gy" } }, "totalRecords": null, "actions": [] } DELETE¶ Deletes the specified account. Warning This operation is permanent! All data and configuration for the account, including all of its vehicles, will be deleted and cannot be restored. Please ensure that the Carvoyant account owner confirms this operation before making the API call. Query Paths - /account/{account-id} Query Parameters Sample JSON Response: { "result": "OK", "totalRecords": 1, "actions": [] }
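As an illustration of calling the GET operation from code, here is a short Python sketch using the requests library. The base URL is a placeholder (the concrete host and path are not shown on this page), and the bearer token is assumed to be an OAuth2 access token obtained as described in the authorization flow.

import requests

# Placeholder base URL -- consult the API reference for the real host and path.
BASE_URL = "https://<carvoyant-api-host>/account"
ACCESS_TOKEN = "dmnda67wbdnyayXXXXXXXXXX"

# Retrieve a single account by id; use BASE_URL alone to list the accounts you can access.
resp = requests.get(
    f"{BASE_URL}/3",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
account = resp.json()["account"]
print(account["firstName"], account["lastName"], account["email"])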
https://carvoyant-api.readthedocs.io/en/latest/api-reference/resources/account.html
2021-07-24T01:21:45
CC-MAIN-2021-31
1627046150067.87
[]
carvoyant-api.readthedocs.io
PayCore.io v1.27 (December 22, 2020)¶ By Dmytro Dziubenko, Chief Technology Officer Season's Greetings from the PayCore.io team! This page includes only the new features and product improvements released in the recent two weeks. But all year long, we have worked hard to ensure our products are continually improving to make handling your payment processes more efficient and convenient. We have done much, but much more is needed to grow and widen our horizons. Highlights¶ - Increased the Metadata limit to 20 attributes - Batch Payouts Validation Update: the number of Base64 characters must not exceed 100,000 in the whole file - Card Gate Update: sanitised Cardholder DTO - And other minor fixes and performance improvements In the Details¶ Metadata Update¶ The Metadata object is heavily used to transfer additional transaction data, so, fulfilling our clients' requests, we increased its attribute limit to 20. Batch Payouts Update¶ As a follow-up to the previous batch payouts' update, we have revised the validation approach because adding new fields made carrying transaction data more complicated. We check each payout row separately according to provider settings and validate that the number of Base64 characters does not exceed 100,000 in the whole batch file. Card Gateway Update¶ Emoji in the cardholder name field? No longer an issue! We sanitised the Cardholder DTO, so you can see in the Invoice and database fields all the 💕 and 😉 that the user put in, while the provider receives Latin characters in the payment and no error is caused. Fixes and Improvements¶ We have fixed a bunch of problems that occurred if we could not set the Original ID from the providers to get the transaction's accurate status, and have also made some other bug fixes and system improvements too small to mention. Stay in touch and have a magical Holiday season, friends!
https://docs.paycore.io/release-notes/archive/2020/v1.27/
2021-07-24T00:35:10
CC-MAIN-2021-31
1627046150067.87
[array(['/release-notes/archive/2020/images/v1.27/paycore-greetings-card.png', 'PayCore'], dtype=object) ]
docs.paycore.io
Managing the header menus in the Hueman theme The header is an important section of any site or blog because a good header can not only catch the attention of the casual visitor, but also prove to be useful in accessing various pages of the site. The Hueman WordPress theme has lots of features that can be leveraged to make an attractive and useful header. In this documentation page we'll see how to create and arrange header menus in the Hueman theme effectively. This documentation page will go through the following topics : - What are the available predefined menu locations in the Hueman theme? - How to assign a menu to a location? - How to enable (disable) sticky menu? - How to set a mobile menu? - What are the different mobile menu styles? - How to set a specific menu for a page or any context? What are the available predefined menu locations in the Hueman theme? The Hueman theme supports three menu locations: - Topbar: located at the very top of your page : - Header: located in the header just below your title or logo : - Footer: located in the footer just above the credits : N.B. Throughout this document we'll focus on the Topbar and Header menus, but the same principles can be applied to the Footer menu too How to assign a menu to a location? Creation of Menus Firstly you have to create a menu. For details on the creation of menus you can check the WordPress Codex Documentation On Menus. Alternatively, from the customizer click on Menus for creating, deleting or managing menus. Assign Menus to Locations - From the customizer click on Menus : - Open the Menu Locations panel : - From the dropdown list against each menu location (Topbar, Header, Footer) choose the menu (among the ones you created earlier) that you want to be assigned to the specific location - Your site preview will refresh and you'll see your selected menu shown in the location you chose - Then click on the Save and Publish button to make your change persistent Use case: Assign the menu "My topbar menu" to the Topbar location. You can also assign a menu location directly when editing/creating a menu. - In the Customizer, open the Menus panel - Create, or expand an already created menu - Click on one of the checkboxes against the available locations under Display Location : How to enable (disable) sticky menu? From the Customizer navigate through Header Design -> Header Menus : mobile settings, scroll behavior That's the place where you can choose the sticky menu behavior. By default the menu will stick to the top of your page on both desktop and mobile devices - both the Desktop devices : make the top menu stick to the top on scroll (red arrow) AND Mobile devices : make the mobile menu stick to the top on scroll (blue arrow) options are checked. Choose the combination that best suits your needs. How to set a mobile menu?
When visiting your website using a smartphone or a tablet (with a width less than 720px), the menus are revealed when clicking on the hamburger button(s) : To choose which header menu you want to be displayed as the mobile menu, follow these steps: - From the Customizer navigate through Header Design -> Header Menus : mobile settings, scroll behavior - Expand the dropdown list against the option Select the menu(s) to use for mobile devices : - Select the header menu location(s) you want to be displayed as mobile menu(s) : - Header Menu: the menu assigned to the Header location will be displayed as a mobile menu in one bar on top - Topbar Menu: the menu assigned to the Topbar location will be displayed as a mobile menu in one bar on top (default) - Topbar and header menus, logo centered: both the menus assigned to, respectively, the Topbar and Header menu locations will be displayed as two separate mobile menus with the title/logo centered - Save and Publish What are the different mobile menu styles? The Hueman theme allows you to choose between two different mobile menu styles: - One bar on top (default) : - Logo centered, two menus : How to set a specific menu only for a page? You'll find a detailed use case describing how to achieve this on this page.
https://docs.presscustomizr.com/article/282-managing-the-header-menus-in-the-hueman-theme
2021-07-24T02:29:58
CC-MAIN-2021-31
1627046150067.87
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58f9c0010428634b4a327d50/file-puJwlhOR6N.png', 'Hueman header menus'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58f9ce530428634b4a327d9e/file-wlK6aY9L1n.png', 'Customizer from Dashboard'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58f9e1e22c7d3a057f886fd0/file-HUqQUZdDjH.png', 'Use case: assign menu to Topbar'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58f9e4230428634b4a327e15/file-5mAQWSDXkF.png', 'Unset Topbar default page menu'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58fa049d0428634b4a327eb7/file-wWqbyCUcTb.png', 'Sticky Menu'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55e7171d90336027d77078f6/images/58fa088e0428634b4a327ed0/file-lNKqE4s0IK.png', 'Hamburger header menu'], dtype=object) ]
docs.presscustomizr.com
Overview The PCAP integration is a pseudo-device that allows loading packet capture files into the ThreatSTOP reporting system. Unlike regular devices, this feature doesn't provide a mechanism to apply a policy to the device - only the log analysis and reporting. With this feature, you can take a snapshot of your network traffic and generate reports to assess if connections are being made from or to IP addresses listed in the ThreatSTOP Threat Intelligence database. The PCAP data must be uploaded in ASCII format; binary PCAP files will not be processed. Detection of traffic direction The PCAP format doesn't provide an indication of whether the packets were sent from your network to the Internet (outbound), or from the Internet to your network (inbound). By default, the ThreatSTOP Log parser will mark the packets as sent inbound. However, if the packet capture is done inside the network and detects RFC 1918 IPs (private IP addresses), the parser will mark packets originating from private IPs to public IPs as outbound. Setup - Create a device entry in the ThreatSTOP Admin Portal with the following type: - Type: IP Device - Manufacturer: Packet Capture - Model: PCAP - Provide the following settings: - Nickname: name the entry; this will be used to identify logs originating from this device in reports. - IP Type: Select 'Static' to identify the device using its public IP address or select 'Dynamic' to use a DNS name pointing to the IP address if it's dynamic - IP Address: the static, public IP address of the device - Domain name: a DNS Fully-Qualified Domain Name (A Record) that is kept up to date with the dynamic IP of the device - Policy: while a policy will not be applied to the device, the parsing of the log file performed upon upload will use the IP address IOCs contained in the policy when looking for IOCs in the log. - Note: an optional note about the device. Uploading logs Log files - The system will detect any PCAP files that contain the timestamp, source IP address, source port, destination IP address and destination port in the standard format. Additional fields will be ignored. tcpdump command: $ sudo tcpdump -tt -n - -tt provides the timestamps as seconds since Jan 1, 1970 at 00:00:00 UTC - -n shows IP addresses and ports in numeric format Sample output: 1517931955.781702 IP 10.0.2.2.53231 > 192.1.2.3.80: Flags [.], ack 184924, win 65535, length 0 - The maximum size of the PCAP file (in ASCII) that can be uploaded is 15 MB. Manual upload To upload logs for the device created above, follow these steps: - Login to the ThreatSTOP Admin Portal - Browse to the Logs menu, and then the User Log Submission tab - Identify the device by its nickname or IP address in the list of devices - Choose a file to upload and click upload Files are processed within 15 minutes of upload and their data becomes available in the IP Defense reports. Automated upload (Linux) You can also automate the upload of logs from a Linux system. You will need root access.
- Allow tcpdump to execute post-rotate scripts (Ubuntu only) $ sudo apt install apparmor-utils $ sudo aa-complain /usr/sbin/tcpdump - Create a shell script named tsupload.sh in a directory of your choice

#!/bin/bash
tmpfile=tcpdump-ascii.$$
tcpdump -n -tt -r $1 >> $tmpfile
/usr/bin/curl -F "upfile=@$tmpfile" -F "upfile_size=`/usr/bin/stat -c %s $tmpfile`" -F "md5_client=`/usr/bin/md5sum $tmpfile|/usr/bin/cut -d' ' -f 1`"
rm $tmpfile

- Run tcpdump sudo tcpdump -tt -n -G 600 -w 'tspcap.%s' -Z `whoami` -z /path/to/tsupload.sh - The command will rotate a new PCAP file every 600 seconds and upload the ASCII version via HTTPS. - On networks with a high volume of traffic, the ASCII files might exceed the 15 MB limit. You can rotate files more often or add a filter to the tcpdump command. For example, to filter TCP connection packets only, use this filter: "tcp[tcpflags] & (tcp-syn|tcp-ack) != 0"
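For environments where Python is preferred over the shell snippet above, the same convert-and-upload step could be sketched as follows. The upload URL is a placeholder (it is elided in the curl command above); the form field names simply mirror the ones used there.

import hashlib
import subprocess
import sys

import requests

UPLOAD_URL = "https://<threatstop-upload-endpoint>"  # placeholder -- see the curl example above

def upload_pcap(pcap_path: str) -> None:
    # Convert the binary capture to the required ASCII form with tcpdump.
    ascii_dump = subprocess.run(
        ["tcpdump", "-n", "-tt", "-r", pcap_path],
        capture_output=True, text=True, check=True,
    ).stdout.encode()

    files = {"upfile": ("tcpdump-ascii.txt", ascii_dump)}
    data = {
        "upfile_size": str(len(ascii_dump)),
        "md5_client": hashlib.md5(ascii_dump).hexdigest(),
    }
    requests.post(UPLOAD_URL, files=files, data=data).raise_for_status()

if __name__ == "__main__":
    upload_pcap(sys.argv[1])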
https://docs.threatstop.com/pcap.html
2021-07-24T01:49:46
CC-MAIN-2021-31
1627046150067.87
[]
docs.threatstop.com
If any of the added methods already exist in the class, the behavior depends on the parameter, as documented below. The decorator returns the same class that it is called on; no new class is created.

@dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False)
class C:
   ...

The parameters to dataclass() are: init: If true (the default), a __init__() method will be generated. If the class already defines __init__(), this parameter is ignored. repr: If true (the default), a __repr__() method will be generated. If the class already defines __repr__(), this parameter is ignored. eq: If true (the default), an __eq__() method will be generated, comparing the class as if it were a tuple of its fields, in order. order: If true (the default is False), __lt__(), __le__(), __gt__(), and __ge__() methods will be generated. unsafe_hash: If False (the default), a __hash__() method is generated according to how eq and frozen are set. frozen: If true (the default is False), assigning to fields will generate an exception. fields may optionally specify a default value; for more fine-grained control, the default field value can be replaced with a call to the provided field() function, whose parameters include default and default_factory (it is an error to specify both). The other parameters to field() are: init: If true (the default), this field is included as a parameter to the generated __init__() method. repr: If true (the default), this field is included in the string returned by the generated __repr__() method. compare: If true (the default), this field is included in the generated equality and comparison methods ( __eq__(), __gt__(), et al.). hash: This can be a bool or None. If true, this field is included in the generated __hash__() method. If None (the default), use the value of compare: this would normally be the expected behavior. A field should be considered in the hash if it's used for comparisons. Setting this value to anything other than None is discouraged. One possible reason to set hash=False but compare=True would be if a field is expensive to compute a hash value for, that field is needed for equality testing, and there are other fields that contribute to the type's hash value. Even if a field is excluded from the hash, it will still be used for comparisons. metadata: This can be a mapping or None. None is treated as an empty dict. This value is wrapped in MappingProxyType() to make it read-only, and exposed on the Field object. It is not used at all by Data Classes, and is provided as a third-party extension mechanism. Multiple third-parties can each have their own key, to use as a namespace in the metadata. If the default value of a field is specified by a call to field(), then the class attribute for this field will be replaced by the specified default value. If no default is provided, then the class attribute will be deleted. The intent is that after the dataclass() decorator runs, the class attributes will all contain the default values for the fields, just as if the default value itself were specified. For example, after:

@dataclass
class C:
    x: int
    y: int = field(repr=False)
    z: int = field(repr=False, default=10)
    t: int = 20

the class attribute C.z will be 10, the class attribute C.t will be 20, and the class attributes C.x and C.y will not be set. __post_init__(): When defined on the class, it will be called by the generated __init__(), normally as self.__post_init__(). If no __init__() method is generated, then __post_init__() will not automatically be called. Among other uses, this allows for initializing field values that depend on one or more other fields. For example:

@dataclass
class C:
    a: float
    b: float
    c: float = field(init=False)

    def __post_init__(self):
        self.c = self.a + self.b

The __init__() method generated by dataclass() does not call base class __init__() methods. If the base class has an __init__() method that has to be called, it is common to call this method in a __post_init__() method:

@dataclass
class Rectangle:
    height: float
    width: float

@dataclass
class Square(Rectangle):
    side: float

    def __post_init__(self):
        super().__init__(self.side, self.side)

Note, however, that in general the dataclass-generated __init__() methods don't need to be called, since the derived dataclass will take care of initializing all fields of any base class that is a dataclass itself. See the section below on init-only variables for ways to pass parameters to __post_init__(). Also see the warning about how replace() handles init=False fields.
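As a small, self-contained illustration of the field() options and the fields() introspection function described above (the class and metadata keys here are arbitrary examples, not from the original text):

from dataclasses import dataclass, field, fields

@dataclass
class Measurement:
    value: float = field(metadata={"unit": "mm"})   # metadata is free-form, namespaced by convention
    label: str = field(default="", compare=False)   # excluded from the generated comparison methods

# fields() returns the Field objects, including the read-only metadata mapping.
for f in fields(Measurement):
    print(f.name, dict(f.metadata))
# value {'unit': 'mm'}
# label {}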
Class variables¶ One of two places where dataclass() actually inspects the type of a field is to determine if a field is a class variable as defined in PEP 526. It does this by checking if the type of the field is typing.ClassVar. If a field is a ClassVar, it is excluded from consideration as a field and is ignored by the dataclass mechanisms. Such ClassVar pseudo-fields are not returned by the module-level fields() function. Init-only variables¶ The other place where dataclass() inspects a type annotation is to determine if a field is an init-only variable. It does this by seeing if the type of a field is of type dataclasses.InitVar. If a field is an InitVar, it is considered a pseudo-field called an init-only field. As it is not a true field, it is not returned by the module-level fields() function. Init-only fields are added as parameters to the generated __init__() method, and are passed to the optional __post_init__() method. They are not otherwise used by dataclasses. For example, suppose a field will be initialized from a database, if a value is not provided when creating the class:

@dataclass
class C:
    i: int
    j: int = None
    database: InitVar[DatabaseType] = None

    def __post_init__(self, database):
        if self.j is None and database is not None:
            self.j = database.lookup('j')

c = C(10, database=my_database)

In this case, fields() will return Field objects for i and j, but not for database. Frozen instances¶ It is not possible to create truly immutable Python objects. However, by passing frozen=True to the dataclass() decorator you can emulate immutability; assigning to fields of a frozen instance raises a FrozenInstanceError. Inheritance¶ When the dataclass() decorator processes a class, it looks through all of the class's base classes and, for each dataclass that it finds, adds the fields from that base class to an ordered mapping of fields. After all of the base class fields are added, it adds its own fields to the ordered mapping. All of the generated methods will use this combined, calculated ordered mapping of fields. Because the fields are in insertion order, derived classes override base classes. An example:

@dataclass
class Base:
    x: Any = 15.0
    y: int = 0

@dataclass
class C(Base):
    z: int = 10
    x: int = 15

The final list of fields is, in order, x, y, z. The final type of x is int, as specified in class C. The generated __init__() method for C will look like:

def __init__(self, x: int = 15, y: int = 0, z: int = 10):

Default factory functions¶ If a field() specifies a default_factory, it is called with zero arguments when a default value for the field is needed. For example, to create a new instance of a list, use:

mylist: list = field(default_factory=list)

If a field is excluded from __init__() (using init=False) and the field also specifies default_factory, then the default factory function will always be called from the generated __init__() function. This happens because there is no other way to give the field an initial value. Mutable default values¶ Python stores default member variable values in class attributes; if this code was valid:

@dataclass
class D:
    x: List = []
    def add(self, element):
        self.x += element

it would generate code similar to:

class D:
    x = []
    def __init__(self, x=x):
        self.x = x
    def add(self, element):
        self.x += element

assert D().x is D().x

This has the same issue as the original example using class C. That is, two instances of class D that do not specify a value for x when creating an instance would share the same copy of x. Because dataclasses just use normal Python class creation, they also share this behavior, so dataclass() raises a ValueError if it detects a mutable default of type list, dict, or set; use a default_factory instead. FrozenInstanceError, raised when assigning to a field of a frozen instance, is a subclass of AttributeError.
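To round off the discussion of mutable defaults, this is the pattern the module steers you toward: a default_factory, so that each instance gets its own fresh list (a short example in the spirit of the ones above):

from dataclasses import dataclass, field
from typing import List

@dataclass
class D:
    # Each instance gets its own list; a bare `x: List = []` raises ValueError at class creation.
    x: List[int] = field(default_factory=list)

    def add(self, element: int) -> None:
        self.x.append(element)

a, b = D(), D()
a.add(1)
assert a.x == [1] and b.x == []   # the two instances no longer share state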
https://docs.python.org/3/library/dataclasses.html?highlight=dataclasses
2021-07-24T02:23:28
CC-MAIN-2021-31
1627046150067.87
[]
docs.python.org
WooCommerce 4.1 introduces a new Marketing menu item to the WordPress Dashboard. The hub currently consists of four core components: - Installed Marketing Extensions - Recommended Extensions - WooCommerce Knowledge Base - Coupons Management Installed extensions ↑ Back to top Once a recognized marketing extension is installed this component consolidates them and provides you with a link to the next step needed to complete the setup of your extension: - Activate – If the extension is installed but not activated you will be presented with a link to Activate the extension. Clicking “Activate” will activate the extension. - Finish setup – If the extension is both installed and activated but there is a still a setup step remaining (perhaps you need connect to an external service) you will be presented with a link to Finish setup. Clicking “Finish Setup” will direct you to the relevant admin page where you can complete the setup process. Once setup for an extension is complete the component provides you a with a centralized place to access links relevant to your installed marketing extensions. These links include: - Docs – Linking through to the extension documentation (whether WooCommerce Docs or elsewhere). - Get Support – Linking through to the most appropriate place to receive support (whether WooCommerce Support or elsewhere). - Settings – Linking through to the extensions settings admin page (if available). - Dashboard – Linking through to the to extension dashboard admin page (if available). Recognized extensions ↑ Back to top Initially the Installed Extensions component will recognize a small selection of extensions. We will review and assess the number and scope of integrations included based on merchant usage and feedback. Recognized extensions include: - AutomateWoo - Mailchimp for WooCommerce - Facebook for WooCommerce - Google Ads for WooCommerce - HubSpot for WooCommerce - Amazon & eBay Integration for WooCommerce Recommended extensions ↑ Back to top This component displays a limited set of recommendations for official WooCommerce extensions that can be helpful to marketing your store. These recommendations are contextual. If you already have the extension installed it will not be recommended to you. Disabling Recommended Extensions ↑ Back to top This component respects the existing Marketplace Suggestions “Show Suggestions” option. The option is located at WooCommerce > Settings > Advanced > WooCommerce.com under the Marketplace Suggestions heading. To disable this component simply uncheck the “Show Suggestions” option and click Save Settings. WooCommerce Knowledge Base ↑ Back to top The final component of the hub is a curated listing of marketing related content from the WooCommerce.com blog focused on helping you to discover new suggestions and ideas to support your marketing efforts. Clicking on an article will open the content within a new tab within your browser. Coupons Management ↑ Back to top The Coupons section is where you can view and add coupons to offer discounts and track campaigns. Coupons can be applied by customers on the cart/checkout pages. More information at: Coupon Management.
https://docs.woocommerce.com/document/marketing-hub/?utm_medium=referral&utm_source=wordpress.org&utm_campaign=wp_org_repo_listing
2021-07-24T01:14:50
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.woocommerce.com/wp-content/uploads/2020/08/MarketingHub.png', 'Marketing Hub full screen'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2020/04/installed.png?w=950', 'Installed Extensions component'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2020/04/recommended.png?w=950', 'Recommended Extensions'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2020/04/suggestions.png?w=950', 'Marketplace Suggestions Option Location'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2020/04/knowledge.png?w=950', 'Knowledge base component'], dtype=object) ]
docs.woocommerce.com
Overview ↑ Back to top Google Analytics is a free web analytics tool that tracks visitors and pageviews on your site. WooCommerce Google Analytics Pro integrates with your Google Analytics account to track eCommerce events in your store, including advanced event tracking such as purchases, product reviews, coupon usage, full order refunds, and more. Uses enhanced eCommerce tracking to provide valuable metrics on your store’s performance. Configure the plugin by going to WooCommerce > Settings > Integrations. You will see ‘Google Analytics Pro’ listed as an option. Click this to see the settings. Extension Settings ↑ Back to top - Enable GA Tracking – Enable / check this to enable Google Analytics tracking. Disable this to disable tracking completely. - Google Analytics Property – Click to authenticate the plugin with the Google Analytics Property for your site. - Google Analytics Tracking ID – (If manually entering a tracking ID — not recommended) Input your Google Analytics property’s tracking ID to track events under the correct property. - Track Administrators? – Enable this to track actions by administrators and shop administrators. Leave disabled to exclude events from these users. - Tracking Options – Determine if you want to enable Display Advertising, Use Enhanced Link Attribution, Anonymize IP address (may be required by your country), and / or Track User IDs. We recommend anonymizing IP addresses and tracking user IDs, as tracking user IDs counts logged in users as one user (even across multiple devices), making your user count more accurate. Please note that if you intend to track User IDs, there are some additional steps reuired to enable User ID in your Google Analytics property. These steps are outlined in a Google Analytics help document here. - Enable Google Optimize – Click to integrate Google Optimize with Google Analytics. - Track Product impressions on – Determine when product impressions should be tracked: on single product pages, and / or archive pages (ie the shop). If you encounter “No HTTP response detected” errors, chances are you’re tracking more impressions than Google Analytics allows; please remove “Archive pages” from this setting if so. - Logging – Log nothing (default), or turn on to add debugging information to the PHP error log. Be careful when enabling this on a busy site, as it can quickly flood the PHP error log. For best performance, leave logging disabled unless you experience issues with the plugin. Customize Event Names ↑ Back to top Every event name sent to Google Analytics can be changed by editing the text field associated with each event. This is useful if you have an existing implementation and want to keep your event names consistent. To disable tracking of a particular event, clear the text field associated the event. - Signed In – Triggered when a customer signs in. The sign in can occur anywhere (wp-login.php, my account page, sign in widget, etc) and it will be tracked. Users are identified as soon as they login and are tracked as that identity until they logout (if you enable user ID tracking). - Signed Out – Triggered when a customer signs out. Identities are cleared upon logout, so if multiple people use the same computers but use different logins, this will help to track them all accurately. - Viewed Signup – Triggered when a visitor views the sign up (my account / registration) page. Hooks into the WordPress register form and will track anywhere it is used. - Signed Up – Triggered when a visitor registers an account. 
Hooks into the WordPress registration system and will track registrations anywhere the WordPress register form is used. - Viewed Homepage – Triggered when a visitor views the homepage. - Viewed Product – Triggered when a visitor views a single product. The 'Product Name' is tracked as an event label. - Clicked Product – Triggered when a visitor clicks on a product in a listing, such as the shop or search results. The 'Product Name' is tracked as an event label. - Added to Cart – Triggered when a visitor adds an item to their cart. Labels: Product Name, Quantity, Category, and Attributes (if product is a variation). - Removed from Cart – Triggered when a visitor removes an item from their cart. - Started Payment – Triggered when a customer views the payment page, if you are using a payment gateway that uses a "Pay" page, such as Chase Paymentech or Authorize.Net SIM. - Completed Purchase – Triggered when a customer's purchase has been submitted or paid for. This will track for gateways that immediately complete payment, such as credit cards, or those that place the order for a later payment, such as a "Check Payment" order. Labels: Order ID, Order Total, Shipping Total, Total Quantity, Payment Method. - Wrote Review – Triggered when a visitor writes a review of a product. This is tracked before spam processing, so if you get a lot of spam, you can consider disabling this event. Labels: Product Name. - Commented – Triggered when a visitor writes a comment on a blog post. This is tracked before spam processing, so if you get a lot of spam, you can consider disabling this event. Labels:. Labels: Country. - Tracked Order – Triggered when a customer submits the 'Order Tracking' form. - Cancelled Order – Triggered when a customer cancels a pending order. - Order Refunded – Triggered when an order has been fully refunded. - Reordered – Triggered when a customer reorders a previous order. The Checkout Behavior Analysis will automatically be populated, so long as at least 1 event in the funnel is being tracked for your site. Neither these events nor their order can be configured, as they need to be static steps sent to Google for every checkout. You can track all 4 events to gain insight into how much of the checkout process customers complete, or delete an event name to stop tracking it. These steps can be re-named in your Google Analytics account if desired by going to Admin > eCommerce settings: You can then add custom names for the steps you have enabled on your site. Note that this does not influence what events are tracked. The tracked events are static, but you can choose any name you want for steps 1 – 4 (or less if you have disabled some events). To learn more about what the Checkout Behavior Analytics tracks, please read more about tracked funnels. Authenticating with GA ↑ Back to top There are two ways to connect the plugin to your Google Analytics account: by authenticating or by manually entering your tracking ID. We strongly recommend authenticating rather than entering it manually. This is more reliable and will ensure that any future feature additions will be supported without further action from you. Authenticating To authenticate the plugin with your Google Analytics account, click the "Authenticate" button in the plugin settings. This will begin the authentication process. - Google will ask you to allow the plugin the permissions listed. Click "Allow". - This will populate the profile dropdown with your Google Analytics properties. - Select the correct property from the list and save your settings. Manual Entry To manually enter your Tracking ID, log into your Google Analytics account.
- Go to “Admin” at the top and select “Property Settings” for the correct property. - Under the property settings, copy the Tracking ID listed. - Paste this Tracking ID into the plugin settings. Your GA Account ↑ Back to top Once you’ve connected your Google Analytics account and set up event names, you can save the plugin settings, and eCommerce tracking data will begin to show up in your Google Analytics account within 24 hours. You can read more about enhanced eCommerce tracking here. Once you’ve saved the plugin events settings, the plugin will add all of the events you’ve configured as events in your GA dashboard. You can also view event categories (such as “Product”) for a more detailed view. Want to learn more? Check out: How to Use Enhanced eCommerce in GA. Tracking Funnels ↑ Back to top The plugin sends all events for the Shopping Behavior Analysis and Checkout Behavior Analysis reports, so you’ll see these funnels within your GA account. Shopping Behavior Analysis The Shopping Behavior analysis report is automatically generated from the events the plugin sends to Google Analytics. This shows general insight into store browsing experience and where customers enter or leave your purchasing funnel. This is useful to see cart and checkout abandonment for the store. There are no settings required for the Shopping Behavior funnel; it will automatically be populated based on pageview, add to cart, checkout, and transaction events. The Checkout Behavior Analysis is populated with the steps the plugin has listed under its settings. This is a static funnel that is intended to be a “zoomed-in” view of the “Sessions with Check-Out” part of the Shopping Behavior funnel. The Checkout Behavior analysis gives you a more fine-grained look at where customers leave the checkout process. While it’s most useful for sites with multi-step checkouts, this plugin will use events within this report that are relevant to any WooCommerce store, regardless of whether selling digital goods, shippable items, or other checkout processes. Please note that if you use express or off-site payment methods, especially if the customer leaves the site from the cart page, then this report may be skewed or may not have full data for your site. There are up to 4 steps tracked, and 3 of these steps will have “Checkout options” associated with them. Google Analytics allows one checkout option per event to allow you further insight into this funnel, and the ability to create segments based on these options. *only if shipping is enabled for the store / this checkout Tracking Subscriptions ↑ Back to top If you have installed and activated the WooCommerce Subscriptions plugin,. Other Information ↑ Back to top Basic Site Tracking ↑ Back to top Google Analytics Pro includes basic site tracking, such as pageviews and customer sessions, so no other Google Analytics plugin is needed to get complete analytics for your store. As such, if you’ve already used the free WooCommerce Google Analytics plugin for basic or eCommerce tracking, Google Analytics Pro will deactivate this plugin upon installation. Upgrading to v1.3.0 ↑ Back to top When upgrading to version 1.3.0 of the plugin, you’ll notice that the global javascript function has been renamed from the legacy __gaTracker to ga. This will not affect most sites, and you can adjust the plugin settings to use ga accordingly. 
However, if you have custom javascript implemented for your site, you may want to ensure that these customizations are updated to use the ga global function before changing this within the plugin settings. You should also be aware that the completed payment event is no longer necessary. Instead, completed purchase tracks purchases for all orders, both off-site and on-site gateways. If you’ve created custom analytics goals, these should be updated accordingly. Privacy & cookies Cookie usage ↑ Back to top Google Analytics, so you should ensure that your Google Analytics account is configured accordingly. Moz has a good overview of GDPR compliance with Google Analytics here. Frequently Asked Questions ↑ Back to top Q: Does this plugin support Shopping Behavior Analysis and Checkout Behavior Analysis reports? A: Absolutely! Check out our details on tracking funnels above for more info. Q: Will this plugin let me get my conversion rate? A: Both the free plugin and this pro version will give Google Analytics the data it needs to calculate your site’s conversion rate. The Pro version includes more accurate conversion rate tracking since it doesn’t require purchases to end up on the “thank you” page in order to be tracked, so orders via any payment method are tracked as conversions. Google has more details on enhanced eCommerce reporting here. Q: Does this support WooCommerce Bookings? A: Yes, but please be aware of the workflows in Bookings and this plugin. Orders are tracked for completion, not the booking itself. Therefore, to ensure a booking shows as a completed purchase, please adjust the order status for a booking order, which will also change the booking, rather than changing the booking status directly. This is most relevant when requiring approval or confirmation for bookings, as changing a booking doesn’t update the related order. Q: Does this plugin track order currency? A: Yep! Both the free and pro Google Analytics plugins for WooCommerce, so if a customer went to PayPal to complete a purchase and is directed back, this is tracked as the referrer. To avoid this, please follow this guide to set paypal.com as a referrer exclusion for your account. Q: When is a purchase considered “completed”? A: To ensure all purchases are tracked, regardless of whether the payment takes place offsite, on-site, the gateway calls “payment completed” on the order, etc., Google Analytics Pro has to rely on the order status to indicate the financial status. As such, any order that’s marked as “processing” or “completed” will have the “completed purchase” event tracked in Google Analytics. Q: I’m using Google Tag Manager and it looks like all my reporting is doubled! What should I do? A: You’ll need to remove the Google Analytics portion of the code from your Tag Manager implementation. Q: Can I use Google Tag Manager (GTM) with Google Analytics Pro? A: You can certainly use Google Analytics Pro with Google Tag Manager. However, Google Analytics Pro will track all of the regular page impressions as well as the eCommerce related impressions (“viewed product”, “started checkout” etc.) automatically. In order to avoid duplicated tracking, it will be necessary to remove the other implementation of Google Analytics (by either removing that tag manager script, or removing the Google Analytics tag from your Tag Manager implementation). Q: I prefer to keep my Google Analytics implementation and pageview tracking in Google Tag Manager. Can I still do that with Google Analytics Pro? 
A: With a little bit of custom code, you can remove the Google Analyics tracking code and remove the pageview tracking from Google Analytics Pro. Note: since customization is not covered in our support policy, we cannot further modify these or implement them directly for you. Q: Can I connect Google Analytics Pro to new GA4 properties? A: Not currently. We are evaluating the GA4 update to determine if and how our plugin can be updated to support it. In the meantime, for new projects, Google allows for the creation of earlier Universal Analytics properties alongside new GA4 properties. You can follow this guide from Google to set up and link a UA property under the new system. We’ve tested this setup and our plugin continues to track all the data when the UA property is selected in our settings. So while we aren’t able to take advantage of the new features in GA4 yet, the existing functionality of our plugin hasn’t been diminished by the update. Troubleshooting ↑ Back to top Known Issues ↑ Back to top - Admin / Shop Manager users are still tracked as “Visited Site” when visiting the website to login. There is no way to prevent this, since we don’t know they’re admin users until they login. - Only full refunds for orders are tracked, partial refunds are not supported because Google does not accept the product identifier when sending a partial refund. - If approving / confirming bookings via WooCommerce Bookings, please see this FAQ to ensure purchases are tracked — order statuses are what track purchases, not booking statuses. - Checkout Behavior Analysis reports will not use a custom set of steps. While you can name these steps whatever you’d like in your GA account, the report will consist of the pre-defined steps the plugin sends, which can be viewed on the settings page. - If you have orders that are pending or on hold that move to a “paid” status a day later, please note that there’s no way to tie this back to the original session for conversion tracking. GA resets these each day, so orders paid the next day or later will not have the conversion tied to the original customer session. - If a customer is a little click-happy and double / triple clicks an AJAX add to cart link, the “added to cart” event may be recorded multiple times. Multiple items will be added to the cart, so you may also see cart quantity changes reflected. Other Issues ↑ Back to top Not seeing tracking data in Google Analytics? It can take up to 24 hours for data to populate, so please keep this in mind. If it’s been longer than 24 hours, please follow these steps to make sure everything is setup correctly before posting a support request: - Check that your tracking ID is correct or that you are authenticated with Google Analytics. - Double-check that your tracking ID is correct 😉 - View the source of your homepage to make sure the Google Analytics javascript code exists. If you’ve added this javascript manually, it needs to be removed, as it will override the data the plugin sends to Google Analytics. - If you see “No HTTP Response detected” errors, please disable product impression tracking on archive pages. - If Google Analytics still isn’t working, please enable logging and submit a help request detailing what happened while following these steps and describing the issue so we can help you quickly! Add the log as an attachment is also helpful. 
For Developers ↑ Back to top Tracking Custom Events You can track custom events by using: wc_google_analytics_pro()->get_integration()->custom_event( $event_name, $properties ); Event name can be set as a string, while properties are passed as an array with property name => value. You can modify this sample snippet according to your needs with the help of a developer: if ( ! function_exists( 'my_custom_event_function' ) ) { function my_custom_event_function() { // check if Google Analytics Pro is active if ( ! function_exists( 'wc_google_analytics_pro' ) ) { return; } wc_google_analytics_pro()->get_integration()->custom_event( 'Event name', array( 'Property name' => 'value' ) ); } add_action( 'hook_to_trigger_event_on', 'my_custom_event_function' ); } The custom_event() can also be called without properties by omitting the second parameter. Questions & Support ↑ Back to top Have a question before you buy? Please fill out this pre-sales form. Already purchased and need some assistance? Get in touch with support via the help desk.
https://docs.woocommerce.com/document/woocommerce-google-analytics-pro/
2021-07-24T02:11:24
CC-MAIN-2021-31
1627046150067.87
[array(['https://docs.woocommerce.com/wp-content/uploads/2020/08/woocommerce-google-analytics-pro-settings-01.png?w=550', None], dtype=object) ]
docs.woocommerce.com
Managing Data Feeds When you are ready to begin running your data feeds on an ongoing basis, you will have some additional options that you can use: - All of the mapped data feeds will be listed under the Run option and can be turned off or on by selecting the slide bar next to the data feed name. - If adjustments need to be made to the data extraction parameters (that is, where the data files are saved), you can right-click the Data Adaptor and select Configure Extractor(s), which will then take you back to the data source screen. - Specific date ranges can be selected to import data from past weeks instead of new data and vice versa. - To import data between a specific date range, select Between and then select the date range that you wish to query. - To import all data after a specific date, select Between and then select a start date, but leave end date blank. - To import all data before a specific date, select Between and then select an end date, but leave the start date blank. This page was last edited on August 16, 2018, at 15:15.
https://docs.genesys.com/Documentation/DEC/latest/Adm/DataFd
2021-07-24T02:16:14
CC-MAIN-2021-31
1627046150067.87
[]
docs.genesys.com
Code of Conduct
Our Pledge
We as members, contributors, and leaders pledge to make participation in the cheqd community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, country of origin, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our Standards
Examples of unacceptable behaviour include: the use of sexualized language or imagery, and sexual attention or advances of any kind; use of inappropriate or non-inclusive language or other behaviour deemed unprofessional or unwelcome in the community.
Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behaviour and will take appropriate and fair corrective action in response to any behaviour that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, messages, and other contributions that are not aligned with this Code of Conduct.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behaviour may be reported to the community leaders responsible for enforcement at .
Enforcement Guidelines
1. Correction
Community Impact: Use of inappropriate or non-inclusive language or other behaviour deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behaviour was inappropriate. A public apology may be requested.
2. Warning
Community Impact: A violation through a single incident or series of actions. Any Community Impact assessment should take into account: 1. The severity and/or number of incidents/actions 2. Non-compliance with previous private warnings from community leaders (if applicable)
Consequence: A warning with consequences for continued behaviour.
https://docs.cheqd.io/node/code_of_conduct
2022-09-25T07:37:43
CC-MAIN-2022-40
1664030334515.14
[]
docs.cheqd.io
This page was generated from methods/ProtoSelect.ipynb. ProtoSelect Overview Bien and Tibshirani (2012) proposed ProtoSelect, which is a prototype selection method with the goal of constructing not only a condensed view of a dataset but also an interpretable model (applicable to classification only). Prototypes can be defined as instances that are representative of the entire training data distribution. Formally, consider a dataset of training points \(\mathcal{X} = \{x_1, ..., x_n \} \subset \mathbf{R}^p\) and their corresponding labels \(\mathcal{Y} = \{y_1, ..., y_n\}\), where \(y_i \in \{1, 2, ..., L\}\). ProtoSelect finds sets \(\mathcal{P}_{l} \subseteq \mathcal{X}\) for each class \(l\) such that the set union of \(\mathcal{P}_1, \mathcal{P}_2, ..., \mathcal{P}_L\) would provided a distilled view of the training dataset \((\mathcal{X}, \mathcal{Y})\). Given the sets of prototypes, one can construct a simple interpretable classifier given by: Note that the classifier defined in the equation above would be equivalent to 1-KNN if each set \(\mathcal{P}_l\) would consist only of instances belonging to class \(l\). ProtoSelect method ProtoSelect is designed such that each prototype would satisfy a set of desired properties. For a set \(\mathcal{P}_l \subseteq \mathcal{X}\), the neighborhood of a point \(x_i \in \mathcal{P}_l\) is given by the points contained in an \(\epsilon\)-ball centered in \(x_i\), denoted as \(B(x_i, \epsilon)\). Thus, given a radius \(\epsilon\) for a point \(x_i\), we say that another point \(x_j\) is covered by \(x_i\) if \(x_j\) is contained in the \(\epsilon\)-ball centered on \(x_i\). A visualization of the prototypes sets for various \(\epsilon\) radius values are depicted in the following figure: Bien and Tibshirani, PROTOTYPE SELECTION FOR INTERPRETABLE CLASSIFICATION, 2012 A desirable prototype set for a class \(l\) would satisfy the following properties: cover as many training points as possible of the class \(l\). covers as few training points as possible of classes different from \(l\). is sparse (i.e., contains as few prototypes instances as possible). Formally, let us first define \(\alpha_{j}^{(l)} \in \{0, 1\}\) to indicate whether we select \(x_j\) to be in \(\mathcal{P}_l\). Then we can write the three properties as an integer program as follows: For each training point \(x_i\), we introduce the slack variables \(\xi_i\) and \(\nu_i\). Before explaining the two constraints, note that \(\sum_{j: x_i \in B(x_j, \epsilon)} \alpha_{j}^{(l)}\) counts the number of balls \(B(x_j, \epsilon)\) with \(x_j \in \mathcal{P}_l\) that cover the point \(x_i\). The constraint (a) tries to encourage that for each training point \((x_i, y_i)\), \(x_i\) is covered in at least one \(\epsilon\)-ball of a prototype for the class \(y_i\). On the other hand, the constraint (b) tries to encourage that \(x_i\) will not belong to any \(\epsilon\)-ball centered in a prototype for the other classes \(l \ne y_i\). Because the integer program defined above cannot be solved in polynomial time, the authors propose two alternative solution. The first one consists of a relaxation of the objective and a transformation of the integer program into a linear program, for which post-processing is required to ensure feasibility of the solution. We refer the reader to the paper for more details. The second one, recommended and implemented in Alibi, follows a greedy approach. 
Given the current choice of prototype subsets \((\mathcal{P}_1, ..., \mathcal{P}_L)\), in the next iteration we update it to \((\mathcal{P}_1, ..., \mathcal{P}_{l} \cup \{x_j\}, ..., \mathcal{P}_L)\), where \(x_j\) is selected such that it maximizes the objective \(\Delta Obj(x_j, l) = \Delta \xi(x_j,l) - \Delta\nu(x_j, l) - \lambda\). Note that \(\Delta \xi(x_j, l)\) counts the number of new instances (i.e. not already covered by the existing prototypes) belonging to class \(l\) that \(x_j\) covers in the \(\epsilon\)-ball. On the other hand, \(\Delta \nu(x_j, l)\) counts how many instances belonging to a different class than \(l\) the \(x_j\) element covers. Finally, \(\lambda\) is the penalty/cost of adding a new prototype, encouraging sparsity (a lower number of prototypes). Intuitively, a good prototype for a class \(l\) will cover as many new instances belonging to class \(l\) as possible (i.e. maximize \(\Delta \xi(x_j, l)\)) while avoiding covering elements outside the class \(l\) (i.e. minimize \(\Delta \nu(x_j, l)\)). The prototype selection algorithm stops when all \(\Delta Obj(x_j, l)\) are lower than 0.
Usage
from alibi.prototypes import ProtoSelect
from alibi.utils.kernel import EuclideanDistance

summariser = ProtoSelect(kernel_distance=EuclideanDistance(), eps=eps, preprocess_fn=preprocess_fn)
kernel_distance: Kernel distance to be used. Expected to support computation in batches. Given an input \(x\) of size \(N_x \times f_1 \times f_2 \times \dotso\) and an input \(y\) of size \(N_y \times f_1 \times f_2 \times \dotso\), the kernel distance should return a kernel matrix of size \(N_x \times N_y\).
eps: Epsilon ball size.
lambda_penalty: Penalty for each prototype. Encourages a lower number of prototypes to be selected. Corresponds to \(\lambda\) in the paper’s notation. If not specified, the default value is set to 1 / N, where N is the size of the dataset to choose the prototype instances from, passed to the fit method.
batch_size: Batch size to be used for kernel matrix computation.
preprocess_fn: Preprocessing function used for kernel matrix computation. The preprocessing function takes the input as a list or a numpy array and transforms it into a numpy array which is then fed to the kernel_distance function. The use of preprocess_fn allows the method to be applied to any data modality.
verbose: Whether to display a progression bar while computing prototype points.
Following the initialization, we need to fit the summariser.
summariser = summariser.fit(X=X_train, y=y_train)
X: Dataset to be summarised.
y: Labels of the dataset X. The labels are expected to be represented as integers [0, 1, ..., L-1], where L is the number of classes in the dataset X.
In a more general case, we can specify an optional dataset \(Z\) to choose the prototypes from (see the documentation of the fit method). In this scenario, the dataset to be summarised is still \(X\), but it is summarised by prototypes belonging to the dataset \(Z\). Furthermore, note that we only need to specify the labels for the \(X\) set through \(y\), but not for \(Z\). In case the labels \(y\) are missing, the method implicitly assumes that all the instances belong to the same class. This means that the second term in the objective, \(\Delta\nu(x_j, l)\), will be 0. Thus, the algorithm will try to find prototypes that cover as many data instances as possible, with minimum overlap between their corresponding \(\epsilon\)-balls.
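The greedy update described above can be made concrete with a few lines of NumPy. This is only an illustrative re-implementation of the selection rule, not Alibi's actual code; the names cover, covered and lambda_penalty are ours.

import numpy as np

def delta_obj(j, l, cover, y, covered, lambda_penalty):
    """Greedy gain of adding candidate x_j as a prototype for class l.

    cover[j, i] is True if x_i lies in the epsilon-ball B(x_j, eps).
    covered[i] is True if x_i is already covered by a chosen prototype
    of its own class. y holds the class labels of the training points.
    """
    in_ball = cover[j]
    # new, not-yet-covered points of class l that x_j would cover
    delta_xi = np.sum(in_ball & (y == l) & ~covered)
    # points of other classes that x_j would (wrongly) cover
    delta_nu = np.sum(in_ball & (y != l))
    return delta_xi - delta_nu - lambda_penalty

# toy example: 5 points on a line, eps = 1.5
X = np.array([0.0, 1.0, 2.0, 5.0, 6.0])
y = np.array([0, 0, 0, 1, 1])
eps = 1.5
cover = np.abs(X[:, None] - X[None, :]) <= eps   # cover[j, i]
covered = np.zeros(len(X), dtype=bool)

gains = [(delta_obj(j, l, cover, y, covered, 1.0 / len(X)), j, l)
         for j in range(len(X)) for l in (0, 1)]
print(max(gains))   # best (gain, candidate index, class) to add first

A full greedy loop would repeatedly add the best (candidate, class) pair, mark the newly covered points, and stop once every remaining gain drops below zero. As noted above, with a single (unlabeled) class the delta_nu term is always zero.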
Finally, we can obtain a summary by requesting the maximum number of prototypes to be returned: summary = summariser.summarise(num_prototypes=num_prototypes) num_prototypes: Maximum number of prototypes to be selected. As we previously mentioned, the algorithm stops when the objective is less than 0, for all the remaining instances in the set of potential prototypes. This means that the algorithm can return a lower number of prototypes than the one requested. Another important observation is that the summary returns the prototypes with their corresponding labels although no labels were provided for \(Z\). This is possible since each prototype \(z\) will belong to a prototype set \(\mathcal{P}_l\), and thus we can assign a label \(l\) to \(z\). Following the summarisation step, one can train an interpretable 1-KNN classifier on the returned prototypes even for an unlabeled dataset \(Z\). Warning If the optional argument \(Z\) is not provided, it is automatically set to \(X\). Although the labels of the data instances belonging to \(Z\) are available in this case, the dataset \(Z\) is still viewed as an unlabeled dataset. This means that a prototype \(z_i \in Z\) belonging to the class \(l\) according to the labels \(y\), can be a prototype for a class \(k \ne l\). Hyperparameter selection Alibi exposes a cross-validation hyperparameter selection method for the radius \(\epsilon\) when the Euclidean distance is used. The method returns the \(\epsilon\) radius value that achieves the best accuracy score on a 1-KNN classification task. from alibi.prototypes.protoselect import cv_protoselect_euclidean cv = cv_protoselect_euclidean(trainset=(X_train, y_train), valset=(X_val, y_val), num_prototypes=num_prototypes, quantiles=(0., 0.4), preprocess_fn=preprocess_fn) The method API is flexible and allows for various arguments to be passed such as a predefined \(\epsilon\)-grid, the number of equidistant bins, keyword arguments to the KFold split when the validation set is not provided, etc. We refer the reader to the documentation page for a full parameter description. The best \(\epsilon\)-radius can be access through cv['best_eps']. The object also contains other metadata gathered throughout the hyperparameter search. Data modalities The method can be applied to any data modality by passing the preprocess_fn: Callable[[Union[list, np.ndarray]], np.ndarray] expected to return a numpy array feature representation compatible with the kernel provided. Prototypes visualization for image modality As proposed by Bien and Tibshirani (2012), one can visualize and understand the importance of a prototype in a 2D image scatter plot. To obtain the image size of each prototype, we fit a 1-KNN classifier on the prototypes using the feature representation provided by preprocess_fn and the Euclidean distance metric, which is consistent with our choice of kernel dissimilarity. The size of each prototype is proportional to the logarithm of the number of assigned training instances correctly classified according to the 1-KNN classifier. Thus, the larger the image, the more important the prototype is. Prototypes of a subsampled ImageNet dataset containing 10 classes using a ResNet50 pretrained feature extractor. 
import umap
from alibi.prototypes import visualize_image_prototypes

# define 2D reducer
reducer = umap.UMAP(random_state=26)
reducer = reducer.fit(preprocess_fn(X_train))

# display prototypes in 2D
visualize_image_prototypes(summary=summary, trainset=(X_train, y_train), reducer=reducer.transform, preprocess_fn=preprocess_fn)
summary: An Explanation object produced by a call to the summarise method.
trainset: Tuple, (X_train, y_train), consisting of the training data instances with the corresponding labels.
reducer: 2D reducer. Reduces the input feature representation to 2D. Note that the reducer operates directly on the input instances if preprocess_fn=None. If the preprocess_fn is specified, the reducer will be called on the feature representation obtained after calling preprocess_fn on the input instances.
preprocess_fn: Preprocessor function.
Here we used a UMAP 2D reducer, but any other dimensionality reduction method will do. The visualize_image_prototypes method exposes other arguments to control how the images will be displayed. We refer the reader to the documentation page for further details.
Examples
Tabular and image datasets
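To tie the pieces on this page together, the following is a compact end-to-end sketch: summarise a small labelled dataset and train the interpretable 1-KNN classifier on the returned prototypes. The eps value is an arbitrary placeholder (in practice it would come from cv_protoselect_euclidean above), and the keys data['prototypes'] and data['prototype_labels'] used to read the Explanation object are assumptions to verify against the API reference.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

from alibi.prototypes import ProtoSelect
from alibi.utils.kernel import EuclideanDistance

# small labelled dataset; simple scaling stands in for a preprocess_fn
X, y = load_digits(return_X_y=True)
X = (X / 16.0).astype(np.float32)

# summarise the dataset with at most 20 prototypes (eps is a placeholder)
summariser = ProtoSelect(kernel_distance=EuclideanDistance(), eps=2.0)
summariser = summariser.fit(X=X, y=y)
summary = summariser.summarise(num_prototypes=20)

# assumed keys -- check them against the Explanation returned by your alibi version
prototypes = summary.data['prototypes']
prototype_labels = summary.data['prototype_labels']

# interpretable 1-KNN classifier built only on the selected prototypes
clf = KNeighborsClassifier(n_neighbors=1).fit(prototypes, prototype_labels)
print('accuracy with %d prototypes: %.3f' % (len(prototypes), clf.score(X, y)))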
https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html
2022-09-25T09:11:27
CC-MAIN-2022-40
1664030334515.14
[array(['../_images/protoselect_overview.png', 'ProtoSelect'], dtype=object) array(['../_images/protoselect_imagenet.png', 'ProtoSelect ImageNet'], dtype=object) ]
docs.seldon.io
Security Handling in Hardware and Software Overview BG22 devices feature a Cryptographic Accelerator peripheral that helps meet the speed and energy demands of secure applications while providing additional protection against common attacks. You will take a look at unencrypted messages and secure BLE connections with Network Analyzer. Then, the lab also includes hands on experience working with the security features by creating a Bluetooth beacon application that uses the PSA Crypto API to encrypt custom advertisement data. Getting Started Ensure that you have the correct hardware and software prepared to successfully complete the lab. Prerequisites For this lab you.2.0 or later - Bluetooth SDK 3.2.0 or later - EFR Connect Mobile App - Accept Location Access. “While using the App” will work. This is required for the Traffic Browser. Download the Workshop Source Files.0 or newer. - Ensure that the Secure Firmware Version is the latest one. Read and Update if not displayed. - Ensure your device has a bootloader installed. You can install a bootloader by flashing a demo example, such as the SoC-empty demo in the Example projects & Demo’s tab or by creating a bootloader project, such as the BGAPI UART DFU Bootloader, generating the code, and then building and flashing the bootloader image. For a more detailed explanation, see How to Use Bootloaders and OTA in Your Project. CRYPTOACC Peripheral Efficient hardware-based cryptography helps to meet the speed and energy demands of secure applications while providing additional protection against common attacks. There are a number of benefits to accelerating cryptographic operations in hardware: - Faster execution - More energy efficient - Offloads the main core - Saves code space - More resistant to differential power analysis (DPA) attacks The EFR32xG22 devices incorporate the CRYPTOACC peripheral for cryptographic hardware acceleration. The CRYPTO module provides an efficient acceleration of common cryptographic operations and allows these to be used efficiently with low CPU overhead. The CRYPTO module includes hardware accelerators for the Advanced Encryption Standard (AES), Secure Hash Algorithm SHA-1 and SHA-2 (SHA-224 and SHA-256), and modular multiplication used in ECC (Elliptic Curve Cryptography) and GCM (Galois Counter Mode). The CRYPTO module can autonomously execute and iterate a sequence of instructions to aid software and speed up complex cryptographic functions like ECC, GCM, and CCM (Counter with CBC-MAC). mbedTLS and PSA Crypto In addition to the CRYPTO module, Silicon Labs includes mbedTLS as part of the Gecko Platform SDK. mbed TLS is open source software licensed by ARM Limited. It provides an SSL library that makes it easy to use cryptography and SSL/TLS in applications. The PSA Crypto API is a low-level cryptographic API optimized for MCU and Wireless SoC. It provides APIs related to Random Number Generation (RNG), cryptographic algorithm usage, and key handling (symmetric and asymmetric). The PSA Crypto API provides developers with an easy-to-use and easy-to-learn interface to crypto primitives. It is designed for usability and flexibility and is based on the idea of a key store. The PSA Cryptographic API is an important PSA component that provides a portable interface to cryptographic operations on a wide range of hardware. Silicon Labs PSA Crypto implementations support RNG, symmetric and asymmetric keys, message digests, MAC, unauthenticated ciphers, AEAD, KDF, ECDSA, and ECDH. 
The AN1311: Integrating Crypto Functionality Using PSA Crypto Compared to Mbed TLS application note describes how to integrate crypto functionality into applications using PSA Crypto compared to Mbed TLS. This standard mbedTLS or PSA API will handle the interface with the CRYPTOACC peripheral. There are several code examples available within Simplicity Studio 5 that provide a starting point for custom applications and illustrate the use of the cryptographic libraries. Lab Run the Thunderboard Demo First, lets run the Bluetooth - SoC Thunderboard demo. - Open Simplicity Studio v5. If the Thunderboard BG22.2.0) is used. - Click on Example projects & Demo’s tab to browse the example projects. - Select “Bluetooth” from the Technology Type filters. Additionally, you can type "Thunderboard" in Keywords to narrow results. Demo Capture BLE Traffic Using Network Analyzer Simplicity Studio® 5 (SSv5)'s Network Analyzer tool captures a trace of wireless network activity. It uses the device’s PTI (Packet Trace Interface) that collects all received and transmitted packets and some meta data directly from the sequencer without affecting normal operation. Any PTI-enabled Silicon Labs platform can record the radio activity regardless of the application firmware that is being used. To start capturing with Network Analyzer: Right click on the device in the Debug Adapters view and press Connect: connect Right click on the device again and press Start capture. capture The Network Analyzer view is automatically opened. You can see radio packets that are listed in the Events tab. By clicking on them, you can see the details and the hex dump of the packet. After starting the capture, you will first see advertisement and scan request packets as the Thunderboard application is a server device. This lab does not use the Transactions tab. opened Connect to the Thunderboard Using a Smartphone - Start the EFR Connect app on your phone. - Start the Bluetooth Browser. - Connect to your device advertising as "Thunderboard". - Tap on the Device Information service and read out the Model Number String Characteristic. read - Observe this read operation in Network Analyzer. For this, open the Network Analyzer tab in Simplicity Studio. In Network Analyzer, after the connect indication, you can see how connect parameters are negotiated and how the smartphone discovers the GATT database of the Thunderboard application. Empty packets from the client are necessary to keep the connection alive. However, most of the time they can be filtered out for a more transparent trace. - To filter out connected empty packets from the trace, you can use the following filter: event.summary != "BLE LL - Empty PDU" con_ind Lets see what happened when you read out a characteristic of the device. In this log, you can see which attribute was read and what was the actual value. It's exactly the same with an external sniffer device, so everyone in the radio range has access to this data. unencrypted - Stop capturing. For this, right click on the device in the Debug Adapters window. After it, close the trace called Live. We don't need to save it now. stop_capture Fortunately, BLE supports secure connections. Bluetooth version 4.2 significantly increased the security of the protocol by using the public key-based key exchange. The Bluetooth specification defines security features to protect the user’s data and identity. Now let's secure our connection against passive eavesdroppers. In The Bluetooth LE protocol, it is achievable by pairing. 
GATT Permissions In the GATT database, different permissions can be granted for characteristics and descriptors. This exercise demonstrates what happens if the client tries to read data with encrypted permission and how the permission can be granted via pairing processes. A characteristic which requires encryption cannot be accessed without pairing because reading this characteristic requires an encrypted link. Reading this characteristic for the first time will trigger the pairing process. The pairing process depends on the IO capabilities of both client and server devices. Note that in case of "Just Works" pairing, it is not possible to confirm the identity of the connecting devices. Devices will pair with encryption but without authentication. To protect against passive eavesdropping, LE Secure Connection uses ECDH public key cryptography, which provides a very high degree of strength against passive eavesdropping attacks as it allows the key exchange over unsecured channels. The Bluetooth low energy technology provides multiple options for pairing, depending on the security requirements of the application. Now we are using Just works. Let's create a project that we can modify: - Go to the Launcherview in Simplicity Studio v5. For this, click on the Launcherbutton in the top right corner. - Click on Create New Projectin the upper right hand corner. A "New Project Wizard" window should appear. - For this exercise, the Bluetooth - SoC Thunderboard EFR32BG22 will be used. Under the Technology Typefilter window, select the Bluetoothfilter option. On the right under the resources list, scroll and select Bluetooth - SoC Thunderboard EFR32BG22 (BRD4184A). - Click Nextto move on. - Rename the project. For this lab, name the project sec_workshop_thunderboard. - Select Copy contents to copy the project files into your project. This makes version control easier to manage and future updates to the Simplicity Studio libraries will not impact the copied files in this project. 'Link the SDK and copy project sources' is selected by default. This creates the new project in the Simplicity Studio workspace. It will link any library files from the SDK and copy any source files directly into the project folder. - Click Finish to generate the project. create Modify the GATT database configuration. In the Project Configurator, the default tab is opened after creating the project or you can open it by double clicking the sec_workshop_thunderboard.slcp. Go to the Software Components tab and browse to Advanced Configurators > Bluetooth GATT Configurator. You will see the button to open on the top right corner when you select the component. Click it. Once in the GATT Configurator, you can set the read property of the Device Information -> Serial Number String characteristic to encrypted with the radio button. gatt Save the configuration (Ctrl+S), which will apply the changes to the GATT database files gatt_db.c and gatt_db.h. Build the project by clicking on the hammer icon, which will generate the application image to be flashed to your device. Flash the application image to your device by going into the project explorer tab. In your project root folder, in a binaries folder, click on the drop down arrow and right click on "sec_workshop_thunderboard.hex" > flash to device. flash program - Start capture using Network Analyzer as described in exercise 1. - Connect to the device with a smartphone again and read out the Device Information -> Serial Number String characteristic. Read other characteristics as well. 
If you are using an iOS device, accept the pairing request. pairing - Observe the packets in Network Analyzer. Because the connection is encrypted, you can no longer understand the Bluetooth traffic. Network Analyzer detects it and gives the following error message: missing_keys encrypted You can only see encrypted BLE packets which is good for the application but during development and debugging, it is important to decrypt these packets in the log. Network Analyzer is capable of decrpyting all the BLE communications if it knows the security keys. You can add the used security key in Simplicity Studio here: Window -> Preferences -> Network Analyzer -> Decoding -> Security Keys You can also set debug mode in firmware. In this mode, the secure connections pairing uses known debug keys, so that the encrypted packet can be opened by Bluetooth protocol analyzer. For this reason, pairings made in debug mode are unsecure. Using Debug Mode This exercise uses debug mode to decrypt the encrypted Bluetooth traffic with Network Analyzer. This example will also initiate pairing in a different way. - Open the app.c file in the created sec_workshop_thunderboard project. - Place the sl_bt_sm_set_debug_mode() command in the beginning of the sl_bt_evt_system_boot_id event. debug - Place the sl_bt_sm_increase_security() command in the sl_bt_evt_connection_opened_id event. In this way, pairing is initiated immediately from the server side after connection. increase - Build and flash the modified application to the Thunderboard. - Start capturing with Network Analyzer. - Connect to the device with a smartphone. As you can see in the Network Analyzer trace, the connection was encrypted but you can get the decrypted messages because of the debug mode. decrypted Bluetooth LE technology uses AES-CCM cryptography for encryption and the encryption is performed by the Bluetooth low energy controller. CCM operating mode is capable of protecting data integrity and confidentiality. Encrypted Custom Advertisement In this exercise you are going to make a beacon application starting from the Bluetooth - SoC Empty example. The PSA Crypto API will be used to encrypt custom advertisement data. The device will advertise a 16 byte long cipher encrypted with AES-ECB mode. Create Example Project - Go to the Launcher perspective in Simplicity Studio by clicking on the “Launcher” icon on the top right corner. “Launcher” and “Simplicity IDE” options should be visible. - Select the J-link device. - Click on Example projects & Demo’s tab to browse the example projects. - Select “Bluetooth” from the Technology Type filters. Additionally, you can type “Empty” in Keywords to narrow results. - Select Bluetooth - SoC Empty from the Bluetooth examples and click Create. (Do not select the "demo"). create - Rename the project to sec_workshop_advertisement and press Finish. Modify the Project First, add the log functionality to the application. The Project Configurator tab is opened after creating the project or you can open it by double clicking the .slcp file. Go to the Software Components tab and install the following Software Components: - Services -> IO Stream -> IO Stream: USART - Application -> Utility -> Log After this, copy and overwrite the default app.c in your project with the app.c source file provided by this lab. Code Explanation This section reviews parts of the code that are important to understand this lab exercise. 
Here is the architectural overview of the security part: log Custom Advertisement The device needs to advertise a custom manufacturer-specific data. The Advertising Manufacturer Specific Data software example serves as the basis for this feature. For more information about advertising, see the Bluetooth Advertising Data Basics article document which explains the basics of BLE advertising packet formatting. Advertising data consists of one or more Advertising Data (AD) elements. This example uses the following elements: - Flags: mandatory for every advertisement - Device name - Custom: manufacturer-specific data typedef struct{ uint8_t flags_len; // Length of the Flags field uint8_t flags_type; // Type of the Flags field uint8_t flags; // Flags uint8_t name_len; // Length of the Name field uint8_t name_type; // Type of the Name field uint8_t name[ADVERTISING_NAME_SIZE]; // Name uint8_t manuf_len; // Length of the Manufacturer ID field uint8_t manuf_type; // Type of the Manufacturer ID field uint8_t company_LO; // Manufacturer ID lower byte uint8_t company_HI; // Manufacturer ID higher byte uint8_t adv_data[CIPHER_MSG_SIZE]; // User data field }custom_adv_t; The advertisement payload is composed. Call the sl_bt_advertiser_set_data() command: encrypted_adv_data.flags_len = 0x02; encrypted_adv_data.flags_type = 0x01; encrypted_adv_data.flags = 0x04 | 0x02; encrypted_adv_data.name_len = ADVERTISING_NAME_SIZE+1; encrypted_adv_data.name_type = 0x09; memcpy(encrypted_adv_data.name, device_name, ADVERTISING_NAME_SIZE); encrypted_adv_data.manuf_len = 0x13; encrypted_adv_data.manuf_type = 0xFF; encrypted_adv_data.company_LO = 0xFF; encrypted_adv_data.company_HI = 0x02; memcpy(encrypted_adv_data.adv_data, cipher_msg, CIPHER_MSG_SIZE); sc = sl_bt_advertiser_set_data(advertising_set_handle, 0, sizeof(encrypted_adv_data), (uint8_t *)(&encrypted_adv_data)); Then, start advertising with the advertiser_user_data configuration: sc = sl_bt_advertiser_start( advertising_set_handle, advertiser_user_data, advertiser_non_connectable); AES-ECB Encryption In PSA Crypto, applications must call psa_crypto_init() to initialize the library before using any other function. However, in this case, the Bluetooth stack has already initialized it so you don’t need to. In the application, the aes_ecb_encrypt_message() function is responsible for encrypting the given plain message. In this example, the default sample data and AES key: // Plain message: 000102030405060708090a0b0c0d0e0f static uint8_t plain_msg[PLAIN_MSG}; // AES key: 91f71618e4e8cbf4979f4e613fcbfb50 static uint8_t aes_key[AES_KEY_SIZE] = {0x91, 0xf7, 0x16, 0x18, 0xe4, 0xe8, 0xcb, 0xf4, 0x97, 0x9f, 0x4e, 0x61, 0x3f, 0xcb, 0xfb, 0x50}; As it is a BLE application, the PSA Crypto Software Component is already installed in the project. You just need to import the crypto.h file. #include "psa/crypto.h" The psa_key_attributes_t object specifies the attributes for the new key during the creation process. The application must set the key type and size, key algorithm policy, and the appropriate key usage flags in the attributes for the key to be used in any cryptographic operations. If the key creation succeeds, the PSA Crypto will return an identifier for the newly created key. 
// Setup key attributes psa_key_attributes_t key_attr = psa_key_attributes_init(); // The state object for multi-part cipher operations psa_cipher_operation_t cipher_op = psa_cipher_operation_init(); // Import the key psa_set_key_type(&key_attr, PSA_KEY_TYPE_AES); psa_set_key_bits(&key_attr, 128); psa_set_key_usage_flags(&key_attr, PSA_KEY_USAGE_ENCRYPT); psa_set_key_algorithm(&key_attr, PSA_ALG_ECB_NO_PADDING); status = psa_import_key(&key_attr, key, key_len, &key_id); Now you can encrypt sample data with the created key using the psa_cipher_encrypt_setup(), psa_cipher_update() and psa_cipher_finish() functions. In the application, the aes_ecb_encrypt_message() function encryptes the sample data and then the encrypted cipher is copied to the custom advertisement structure. Build and flash the application and obtain the results with the smartphone. - Open the EFR Connect app. - Open the Bluetooth Browser. - Find the device advertising as "Lab 3" and click on it. - Check the Manufacturer Specific Data section cipher log To verify the result of the encryption process, use an online tool. Simplicity Studio 5 includes the PSA Crypto platform examples for key handling, symmetric and asymmetric cryptographic operations. See the corresponding readme.html file for details about each PSA Crypto platform example.
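As an alternative to an online tool, the expected cipher can be reproduced on the host with a few lines of Python. This is not part of the embedded lab project; it is a sketch that assumes the third-party cryptography package and reuses the sample plain message and AES key listed above (PSA_ALG_ECB_NO_PADDING on a single 16-byte block is plain AES-128 ECB).

# pip install cryptography
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# 16-byte plain message and AES-128 key from the lab (see above)
plain_msg = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
aes_key = bytes.fromhex("91f71618e4e8cbf4979f4e613fcbfb50")

# AES-ECB with no padding on a single block
encryptor = Cipher(algorithms.AES(aes_key), modes.ECB()).encryptor()
cipher_msg = encryptor.update(plain_msg) + encryptor.finalize()

# compare this against the manufacturer-specific data shown in EFR Connect
print(cipher_msg.hex())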
https://docs.silabs.com/bluetooth/3.3/lab-manuals/security-handling-in-hardware-and-software
2022-09-25T07:16:50
CC-MAIN-2022-40
1664030334515.14
[]
docs.silabs.com
Developer’s Guide to Memex Explorer
Setting up Memex Explorer
Application Setup
To set up a developer’s environment, clone the repository, then run the app_setup.sh script:
$ git clone
$ cd memex-explorer/source
$ ./app_setup.sh
You can then start the application from this directory:
$ source activate memex
$ supervisord
Memex Explorer will now be running locally at.
The Database Model
The current entity relation diagram:
Updating the Database
As of version 0.4.0, Memex Explorer tracks all database migrations. This means that you will be able to upgrade your database and preserve its data without any issues. If you are using version 0.3.0 or earlier and you are unable to update your database without server errors, the best course of action is to delete the existing file at source/db.sqlite3 and start over with a fresh database.
Enabling Non-Default Services
Nutch Visualizations
Nutch visualizations are not enabled by default. They require RabbitMQ, and the method for installing RabbitMQ varies depending on the operating system: it can be installed via Homebrew on Mac and apt-get on Debian systems. For more information on how to install RabbitMQ, read this page. Note: You may also need to change the command below to sudo rabbitmq-server, depending on how RabbitMQ is installed on your system and the permissions of the current user. RabbitMQ and Bokeh-Server are necessary for creating the Nutch visualizations. The Nutch streaming visualization works by creating and subscribing to a queue of AMQP messages (hosted by RabbitMQ) dispatched from Nutch as it runs the crawl. A background task reads the messages and updates the plot (hosted by Bokeh server); a generic sketch of such a consumer is shown at the end of this page. To enable Bokeh visualizations for Nutch, change autostart=false to autostart=true for both of these directives in source/supervisord.conf, and then kill and restart supervisor.
[program:rabbitmq]
command=rabbitmq-server
priority=1
-autostart=false
+autostart=true
[program:bokeh-server]
command=bokeh-server --backend memory --port 5006
priority=1
-autostart=false
+autostart=true
Domain Discovery Tool (DDT)
Domain Discovery Tool can be installed as a conda package. Simply run conda install ddt to download the package for DDT. As with the Nutch visualizations, to enable DDT, change the corresponding directive in source/supervisord.conf so that autostart is set to true:
[program:ddt]
command=ddt
priority=5
-autostart=false
+autostart=true
Temporal Anomaly Detection (TAD)
TAD does not currently have a conda package. Like the Nutch visualizations, it also has a RabbitMQ dependency. For instructions on installing TAD, visit the github repository. Like DDT and the Nutch visualizations, you also have to change the supervisor directive so that autostart is set to true:
[program:tad]
command=tad
priority=5
-autostart=false
+autostart=true
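The streaming pipeline described above is, at its core, an AMQP consumer: Nutch publishes crawl events to RabbitMQ and a background task consumes them to update the Bokeh plot. The following is only a generic sketch of such a consumer using the pika client; the queue name is hypothetical and this is not Memex Explorer's actual task code.

import pika

# hypothetical queue name -- Memex Explorer configures its own
QUEUE = "nutch_events"

def on_message(channel, method, properties, body):
    # in Memex Explorer a handler like this would update the Bokeh plot
    print("crawl update:", body.decode())
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE, durable=False)
channel.basic_consume(queue=QUEUE, on_message_callback=on_message)
print("waiting for Nutch crawl messages; CTRL+C to exit")
channel.start_consuming()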
https://memex-explorer.readthedocs.io/en/stable/dev_guide.html
2022-09-25T08:33:09
CC-MAIN-2022-40
1664030334515.14
[array(['_images/DbVisualizer.png', '_images/DbVisualizer.png'], dtype=object) ]
memex-explorer.readthedocs.io
Tips to optimize a campaign
If you already know what type of offer you would like to advertise, the next thing is to have various creatives for your offer. This will be very helpful later, when we test which creatives work and which don't. You can use image banks or search engines to find images for your creatives. Now, we need to use a postback URL. This will allow us to see which clicks produced conversions. We need to use the {CLICK_ID} macro and enter the $ value of the conversion in the pixel like this: {CLICK_ID}&price=5 (a small script for firing this postback is sketched at the end of this article). The Postback URL can be accessed on the campaign page. Now you need to upload your creatives to our system. We support several creatives within one campaign, which is very convenient because it can save you a lot of time. After that, we need to go to the Reports page and see how our campaign performs. Since we are sending the information about conversions to the reports using the Postback URL, we can optimize the campaign using this information. For example, we can make a breakdown by domains and see the ROI for each of the domains. To do this, we need to select Domains in the dimensions of the report. Once you have found the domains with the lowest ROI, you can block them in the domains blacklist. Another useful metric is the ROI breakdown by creatives. Since we are testing various creatives, it is useful to see the CTR and ROI for each of the creatives we uploaded. We can then stop the creatives with low CTR and ROI. Sometimes as many as 100 creatives can be required to find the best ones. Hopefully, after blocking low-performing domains and creatives and testing different offers and landing pages, you will be able to achieve a positive ROI using our network.
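For illustration, firing the postback from your own server when a conversion happens could look like the sketch below. The base URL and the click_id parameter name are placeholders (the real Postback URL, including its parameter names, comes from your campaign page); only the {CLICK_ID} substitution and the price value follow the snippet above.

import requests

# placeholder base URL -- use the Postback URL shown on your campaign page
POSTBACK_URL = "https://example-tracker.invalid/postback"

def report_conversion(click_id: str, price: float) -> int:
    """Fire the postback so the click is credited with a conversion."""
    response = requests.get(
        POSTBACK_URL,
        params={"click_id": click_id, "price": price},
        timeout=10,
    )
    return response.status_code

# e.g. called from your order-confirmation handler
print(report_conversion(click_id="abc123", price=5.0))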
https://docs.admachine.co/article/62-tips-to-optimize-a-campaign
2022-09-25T07:47:42
CC-MAIN-2022-40
1664030334515.14
[]
docs.admachine.co
SERVER CRISIS NEEDS IMMEDIATE DONATIONS
Homepage link: 01 Nov 2003: Indymedia has just been notified that it will be losing its free hosting for the main IMC server, which hosts the majority of local indymedia.org sites. The IMC-Tech working group needs your immediate donations in order to purchase new servers and locate new hosting spaces by the 01 Dec 2003 deadline. Donations of hardware or hosting are also very welcome! Please email [email protected]. Donate via Paypal | More info...
Details page: The hosting for the main Indymedia web server (stallman) has been donated pro bono for over 2 years. However, the hosting provider has informed us that they can no longer offer this substantial donation, and we have until 01 Dec 2003 to find a solution. We are asking anyone who can to make a donation of $5. With a positive response from the large number of Indymedia contributors and sympathizers, we hope to raise the funds necessary. If you have more resources, we encourage you to donate whatever you can: $25, $50, $100, $500.
Background: The IMC-Tech working group has been working for years to reduce the number of indymedia.org sites hosted on any one server, but coordinating the various local IMCs and the necessary tech folks has proven difficult. Techs and other IMC folks are working hard both to secure new hosting arrangements and to spread out the location of various sites so that this crisis can be prevented in the future. In addition, this will lower the amount of bandwidth and other resources required for any one server.
Your donations are needed for the following:
Server hardware:
- Pre-built rackmount or tower servers
- Server components for custom-built servers
- PC Weasels (please email [email protected] for info first)
- Additional hard drives (expensive server-class SCSI drives)
- RAM memory (expensive server-class ECC/Registered memory)
Server hosting:
- Funds for lump-sum donations or other costs associated with hosting
https://docs.indymedia.org/Sysadmin/ServerCoordinationDonationsCall?cover=print;
2022-09-25T08:11:59
CC-MAIN-2022-40
1664030334515.14
[]
docs.indymedia.org
General¶ About MoinMoin¶ MoinMoin is a wiki engine written in Python. It is Free and Open Source Software under GNU GPL v2+. For details please read the License. Project homepage: Using MoinMoin, wiki users can easily create and maintain web content from their browser. You can use it: - as an easily-maintained web site - as a knowledge base - for taking notes - for creating documentation You can use it for: - your company / organisation, your work group - your school, college, or university - your projects and interests - just yourself You can run it on: - a public web server - an intranet server - your desktop or laptop - Linux, Mac OS X, Windows, and other OSes What makes MoinMoin special?¶ Moin tries to be a great wiki engine, which encompasses: powerful, extendable and easy-to-use. We don’t try to be everything, but we don’t try to be minimalistic either. There are lots of wiki engines out there, making it hard to pick one. However, choosing wisely is important because you may have to live with your choice for a long time because switching wiki engines is not easy. We won’t list all of moin’s features, because comparing feature lists is just not enough. Some features are best left unimplemented, even if they sound great at first. In moin, you will find most important features like in most major wiki engines. But still, you and your wiki users might feel quite a different overall experience just because of a bunch of small, superficial differences. Of course the quality of some features’ implementations can vary greatly. Thus, you have to try it and play with it, not just look at feature comparisons. MoinMoin has been around since about 2000. It has rapidly grown and evolved through moin 1.9.x. Its developers have increased their experience with Python and wiki technology over the years. With moin 2.0, there has been a rather revolutionary cleanup / rewrite of how moin works based on that experience. This promises to make it easier, cleaner, more consistent, more powerful, more flexible and more modular. Moin is written in Python, an easy to read, high-level, object-oriented, dynamic, well-designed and platform-independent programming language. Moin is Free Software (that implies that it is Open Source) and, because we use Python, you may even like to read and modify moin’s code. Who is using MoinMoin?¶ This shows some of the better-known users of MoinMoin: Web Sites¶ - KernelNewbies, Xen, LinuxWireless, GCC - Debian, Ubuntu, CentOS - Apache, Gnome, Wine, OpenOffice, Squid, Exim, Dovecot - Python, ScyPy, TurboGears - Mercurial, Darcs - FSFE, FFII, c-base, MusicBrainz - linuxwiki.de, jurawiki.de, ooowiki.de and … moinmo.in :D For links and more sites, please see: You may also add missing moin-based sites there.
https://moin-20.readthedocs.io/en/latest/intro/general.html
2022-09-25T07:28:21
CC-MAIN-2022-40
1664030334515.14
[]
moin-20.readthedocs.io
Django’s serialization framework provides a mechanism for “translating” Django models into other formats. Usually these other formats will be text-based and used for sending Django data over a wire, but it’s possible for a serializer to handle any format (text-based or not). See also If you just want to get some data from your tables into a serialized form, you could use the dumpdata management command. At the highest level, you can serialize data like this: from django.core import serializers data = serializers.serialize("xml", SomeModel.objects.all()) The arguments to the serialize function are the format to serialize the data to (see Serialization formats) and a QuerySet to serialize. (Actually, the second argument can be any iterator that yields Django model instances, but it’ll almost always be a QuerySet). You can also use a serializer object directly: XMLSerializer = serializers.get_serializer("xml") xml_serializer = XMLSerializer() xml_serializer.serialize(queryset) data = xml_serializer.getvalue() This is useful if you want to serialize data directly to a file-like object (which includes an HttpResponse): with open("file.xml", "w") as out: xml_serializer.serialize(SomeModel.objects.all(), stream=out) Note Calling get_serializer() with an unknown format will raise a django.core.serializers.SerializerDoesNotExist exception. If you only want a subset of fields to be serialized, you can specify a fields argument to the serializer: from django.core import serializers data = serializers.serialize('xml', SomeModel.objects.all(), fields=('name','size')) In this example, only the name and size attributes of each model will be serialized. The primary key is always serialized as the pk element in the resulting output; it never appears in the fields part. Note Depending on your model, you may find that it is not possible to deserialize a model that only serializes a subset of its fields. If a serialized object doesn’t specify all the fields that are required by a model, the deserializer will not be able to save deserialized instances. If you have a model that is defined using an abstract base class, you don’t have to do anything special to serialize that model. Call the serializer on the object (or objects) that you want to serialize, and the output will be a complete representation of the serialized object. However, if you have a model that uses multi-table inheritance, you also need to serialize all of the base classes for the model. This is because only the fields that are locally defined on the model will be serialized. For example, consider the following models: class Place(models.Model): name = models.CharField(max_length=50) class Restaurant(Place): serves_hot_dogs = models.BooleanField(default=False) If you only serialize the Restaurant model: data = serializers.serialize('xml', Restaurant.objects.all()) the fields on the serialized output will only contain the serves_hot_dogs attribute. The name attribute of the base class will be ignored. In order to fully serialize your Restaurant instances, you will need to serialize the Place models as well: all_objects = [*Restaurant.objects.all(), *Place.objects.all()] data = serializers.serialize('xml', all_objects) Deserializing data is very similar to serializing it: for obj in serializers.deserialize("xml", data): do_something_with(obj) As you can see, the deserialize function takes the same format argument as serialize, a string or stream of data, and returns an iterator. However, here it gets slightly complicated. 
The objects returned by the deserialize iterator aren’t regular Django objects. Instead, they are special DeserializedObject instances that wrap a created – but unsaved – object and any associated relationship data. Calling DeserializedObject.save() saves the object to the database. Note If the pk attribute in the serialized data doesn’t exist or is null, a new instance will be saved to the database. This ensures that deserializing is a non-destructive operation even if the data in your serialized representation doesn’t match what’s currently in the database. Usually, working with these DeserializedObject instances looks something like: for deserialized_object in serializers.deserialize("xml", data): if object_should_be_saved(deserialized_object): deserialized_object.save() In other words, the usual use is to examine the deserialized objects to make sure that they are “appropriate” for saving before doing so. If you trust your data source you can instead save the object directly and move on. The Django object itself can be inspected as deserialized_object.object. If fields in the serialized data do not exist on a model, a DeserializationError will be raised unless the ignorenonexistent argument is passed in as True: serializers.deserialize("xml", data, ignorenonexistent=True) Django supports a number of serialization formats, some of which require you to install third-party Python modules: The basic XML serialization format looks like this: <?xml version="1.0" encoding="utf-8"?> <django-objects version="1.0"> <object pk="123" model="sessions.session"> <field type="DateTimeField" name="expire_date">2013-01-16T08:16:59.844560+00:00</field> <!-- ... --> </object> </django-objects> The whole collection of objects that is either serialized or deserialized is represented by a <django-objects>-tag which contains multiple <object>-elements. Each such object has two attributes: “pk” and “model”, the latter being represented by the name of the app (“sessions”) and the lowercase name of the model (“session”) separated by a dot. Each field of the object is serialized as a <field>-element sporting the fields “type” and “name”. The text content of the element represents the value that should be stored. Foreign keys and other relational fields are treated a little bit differently: <object pk="27" model="auth.permission"> <!-- ... --> <field to="contenttypes.contenttype" name="content_type" rel="ManyToOneRel">9</field> <!-- ... --> </object> In this example we specify that the auth.Permission object with the PK 27 has a foreign key to the contenttypes.ContentType instance with the PK 9. ManyToMany-relations are exported for the model that binds them. For instance, the auth.User model has such a relation to the auth.Permission model: <object pk="1" model="auth.user"> <!-- ... --> <field to="auth.permission" name="user_permissions" rel="ManyToManyRel"> <object pk="46"></object> <object pk="47"></object> </field> </object> This example links the given user with the permission models with PKs 46 and 47. Control characters If the content to be serialized contains control characters that are not accepted in the XML 1.0 standard, the serialization will fail with a ValueError exception. Read also the W3C’s explanation of HTML, XHTML, XML and Control Codes. Staying with the same example data as before, it would be serialized as JSON in the following way: [ { "pk": "4b678b301dfd8a4e0dad910de3ae245b", "model": "sessions.session", "fields": { "expire_date": "2013-01-16T08:16:59.844Z", ... 
} } ] The formatting here is a bit simpler than with XML. The whole collection is just represented as an array and the objects are represented by JSON objects with three properties: “pk”, “model” and “fields”. “fields” is again an object containing each field’s name and value as property and property-value respectively. Foreign keys simply carry the PK of the linked object as the property value, and ManyToMany-relations are serialized for the model that defines them as a list of PKs. All data is now dumped with Unicode. If you need the previous behavior, pass ensure_ascii=True to the serializers.serialize() function. YAML serialization looks quite similar to JSON. The object list is serialized as a sequence of mappings with the keys “pk”, “model” and “fields”. Each field is again a mapping with the key being the name of the field and the value the value: - fields: {expire_date: !!timestamp '2013-01-16 08:16:59.844560+00:00'} model: sessions.session pk: 4b678b301dfd8a4e0dad910de3ae245b Referential fields are again represented by the PK or a sequence of PKs. All data is now dumped with Unicode. If you need the previous behavior, pass allow_unicode=False to the serializers.serialize() function. The default serialization strategy for foreign keys and many-to-many relations is to serialize the value of the primary key(s) of the objects in the relation. This strategy works well for most objects, but it can cause difficulty in some circumstances. Consider the case of a list of objects that have a foreign key referencing ContentType. If you’re going to serialize an object that refers to a content type, then you need to have a way to refer to that content type to begin with. Since ContentType objects are automatically created by Django during the database synchronization process, the primary key of a given content type isn’t easy to predict; it will depend on how and when migrate was executed. This is true for all models which automatically generate objects, notably including Permission, Group, and User. Warning You should never include automatically generated objects in a fixture or other serialized data. By chance, the primary keys in the fixture may match those in the database and loading the fixture will have no effect. In the more likely case that they don’t match, loading the fixture will fail with an IntegrityError. There is also the matter of convenience. An integer id isn’t always the most convenient way to refer to an object; sometimes, a more natural reference would be helpful. It is for these reasons that Django provides natural keys. A natural key is a tuple of values that can be used to uniquely identify an object instance without using the primary key value. Consider the following two models: from django.db import models class Person(models.Model): first_name = models.CharField(max_length=100) last_name = models.CharField(max_length=100) birthdate = models.DateField() class Meta: unique_together = [['first_name', 'last_name']] class Book(models.Model): name = models.CharField(max_length=100) author = models.ForeignKey(Person, on_delete=models.CASCADE) Ordinarily, serialized data for Book would use an integer to refer to the author. For example, in JSON, a Book might be serialized as: ... { "pk": 1, "model": "store.book", "fields": { "name": "Mostly Harmless", "author": 42 } } ... This isn’t a particularly natural way to refer to an author. It requires that you know the primary key value for the author; it also requires that this primary key value is stable and predictable. However, if we add natural key handling to Person, the fixture becomes much more humane. To add natural key handling, you define a default Manager for Person with a get_by_natural_key() method. 
In the case of a Person, a good natural key might be the pair of first and last name: from django.db import models class PersonManager(models.Manager): def get_by_natural_key(self, first_name, last_name): return self.get(first_name=first_name, last_name=last_name) class Person(models.Model): first_name = models.CharField(max_length=100) last_name = models.CharField(max_length=100) birthdate = models.DateField() objects = PersonManager() class Meta: unique_together = [['first_name', 'last_name']] Now books can use that natural key to refer to Person objects: ... { "pk": 1, "model": "store.book", "fields": { "name": "Mostly Harmless", "author": ["Douglas", "Adams"] } } ... When you try to load this serialized data, Django will use the get_by_natural_key() method to resolve ["Douglas", "Adams"] into the primary key of an actual Person object. Note Whatever fields you use for a natural key must be able to uniquely identify an object. This will usually mean that your model will have a uniqueness clause (either unique=True on a single field, or unique_together over multiple fields) for the field or fields in your natural key. However, uniqueness doesn’t need to be enforced at the database level. If you are certain that a set of fields will be effectively unique, you can still use those fields as a natural key. Deserialization of objects with no primary key will always check whether the model’s manager has a get_by_natural_key() method and if so, use it to populate the deserialized object’s primary key. To have Django emit a natural key when serializing an object, you also add a natural_key() method to the model itself: def natural_key(self): return (self.first_name, self.last_name) That method should always return a natural key tuple – in this example, (first name, last name). Then, when you call serializers.serialize(), you provide use_natural_foreign_keys=True or use_natural_primary_keys=True arguments: >>> serializers.serialize('json', [book1, book2], indent=2, ... use_natural_foreign_keys=True, use_natural_primary_keys=True) When use_natural_foreign_keys=True is specified, Django will use the natural_key() method to serialize any foreign key reference to objects of the type that defines the method. When use_natural_primary_keys=True is specified, Django will not provide the primary key in the serialized data of this object since it can be calculated during deserialization: ... { "model": "store.person", "fields": { "first_name": "Douglas", "last_name": "Adams", "birthdate": "1952-03-11", } } ... This can be useful when you need to load serialized data into an existing database and you cannot guarantee that the serialized primary key value is not already in use, and do not need to ensure that deserialized objects retain the same primary keys. If you are using dumpdata to generate serialized data, use the dumpdata --natural-foreign and dumpdata --natural-primary command line flags to generate natural keys. Note You don’t need to define both natural_key() and get_by_natural_key(). If you don’t want Django to output natural keys during serialization, but you want to retain the ability to load natural keys, then you can opt to not implement the natural_key() method. Conversely, if (for some strange reason) you want Django to output natural keys during serialization, but not be able to load those key values, just don’t define the get_by_natural_key() method. The dumpdata --natural-foreign option will serialize any model with a natural_key() method before serializing standard primary key objects. However, this may not always be enough. 
If your natural key refers to another object (by using a foreign key or natural key to another object as part of a natural key), then you need to be able to ensure that the objects on which a natural key depends occur in the serialized data before the natural key requires them. To control this ordering, you can define dependencies on your natural_key() methods. You do this by setting a dependencies attribute on the natural_key() method itself. For example, let’s add a natural key to the Book model from the example above: class Book(models.Model): name = models.CharField(max_length=100) author = models.ForeignKey(Person, on_delete=models.CASCADE) def natural_key(self): return (self.name,) + self.author.natural_key() The natural key for a Book is a combination of its name and its author. This means that Person must be serialized before Book. To define this dependency, we add one extra line: def natural_key(self): return (self.name,) + self.author.natural_key() natural_key.dependencies = ['example_app.person'] This definition ensures that all Person objects are serialized before any Book objects. In turn, any object referencing Book will be serialized after both Person and Book have been serialized.
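To tie the pieces of this section together, here is a minimal sketch of a natural-key round trip. The app label store and the module path store.models follow the JSON examples above but are otherwise assumptions; adapt them to your project.

```python
from django.core import serializers
from store.models import Book, Person  # hypothetical module path, matching the "store" app label used above

# Person objects must precede Book objects; Book.natural_key's dependencies
# declaration (or dumpdata --natural-foreign) guarantees this ordering.
objects = [*Person.objects.all(), *Book.objects.all()]

data = serializers.serialize(
    "json",
    objects,
    indent=2,
    use_natural_foreign_keys=True,   # authors appear as ["Douglas", "Adams"] instead of a PK
    use_natural_primary_keys=True,   # Person rows are written without a pk
)

# Round trip: get_by_natural_key() resolves the tuples back to concrete rows.
for deserialized in serializers.deserialize("json", data):
    deserialized.save()
```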
https://django.readthedocs.io/en/3.2.x/topics/serialization.html
2022-09-25T09:18:59
CC-MAIN-2022-40
1664030334515.14
[]
django.readthedocs.io
The Adaptr Music SDK For Android. The SDK centers around two classes: AdaptrPlayer and Adaptr. AdaptrPlayer presents a fully functional music player that can play and stream music from the Adaptr servers. Adaptr is used to retrieve and manage a singleton AdaptrPlayer instance that is tied to an Android Foreground Service and Notification so that music can continue while your app is backgrounded and music can be controlled from the lock screen and media controls. Because virtually all functionality in the AdaptrPlayer takes place asynchronously, clients may register implementations of the various listener interfaces to receive notification of player events. The Adaptr class must be initialized with a token and secret before playback becomes available. The AdaptrPlayer uses those credentials to contact the Adaptr server, confirm that the client is in a region licensed for music playback, and retrieve the list of stations that the user may pull music from. The Adaptr takes care of managing lock screen integration, notifications, audio ducking, and playback controls. There are AdaptrPlayer methods, such as setArtwork, that allow clients to customize the layout and imagery used for the lock screen and notifications. To minimally get music playback started in your application, do the following: Download the appropriate .aar file to your libs folder and add it to your build.gradle file. Place the following call in your Application.onCreate() method to begin communication with the service: Adaptr.initialize(getApplicationContext(), "adaptr", "adaptr"); Your token and secret values are provided to you when you sign up with adaptr, and will give your app access to your custom stations. Until you have them, you can use the "demo" string above, or one of the following strings for your token and secret value: In your Activity that exposes player functionality, you should default to not making music visible or available to the user, and then register a listener with the Adaptr class. Update the UI to expose music controls when music does become available. For instance, add the following code to your Android Activity.onCreate() method: Adaptr.getPlayerInstance(new AvailabilityListener() { public void onPlayerAvailable(AdaptrPlayer player) { // Save reference to instance myPlayer = player; // .. enable player control buttons here // Optionally queue up audio data to reduce lag when the user hits play: // player.prepareStations(); // optionally start playing music in a station immediately with: // player.play(player.getStationList().get(0)); // .. or do nothing, and call 'player.play()' when the user // wants to start music // Add Listeners // player.addPlayListener(MainActivity.this); // player.addStateListener(MainActivity.this); // player.addLikeStatusChangeListener(MainActivity.this); // player.prepareToPlay(MainActivity.this); } public void onPlayerUnavailable(Exception e) { // .. hide player buttons if they aren't already // hidden, because music isn't available to this user } }); If clients desire, they can create an AdaptrPlayer instance directly using one of its public constructors. This prevents the SDK from creating and managing an Android Service and creating and updating Notifications. It is not recommended to use this direct constructor if your app is planning on playing music while backgrounded. The STALLED and the new WAITING_FOR_ITEM states should cause some kind of 'spinner' or 'waiting' notification to be shown to the user. 
While in the STALLED state, the player will have metadata about the current song to display to the user. While in the WAITING_FOR_MUSIC state, there will be no current song and, therefore, no song metadata to display. The following listeners are defined in the SDK: The player can crossfade between songs and when changing stations. This can be controlled on the client by providing a non-zero value to AdaptrPlayer.setSecondsOfCrossfade(). Alternatively, the crossfade duration can be assigned values provided by the Adaptr servers. Any values sent from the servers override any crossfade timing set on the client. If you are using ProGuard, no additional rules are needed, as the library contains ProGuard settings that are auto-merged into client apps' ProGuard rules.
https://docs.adaptr.com/android/latest/index.html
2021-06-13T01:43:44
CC-MAIN-2021-25
1623487598213.5
[]
docs.adaptr.com
Validating IAM policies A policy is a JSON document that uses the IAM policy grammar. When you attach a policy to an IAM entity, such as a user, group, or role, it grants permissions to that entity. When you create or edit IAM access control policies using the AWS Management Console, AWS automatically examines them to ensure that they comply with the IAM policy grammar. If AWS determines that a policy is not in compliance with the grammar, it prompts you to fix the policy. IAM Access Analyzer provides additional policy checks with recommendations to help you further refine the policy. To learn more about IAM Access Analyzer policy checks and actionable recommendations, see IAM Access Analyzer policy validation. To view a list of warnings, errors, and suggestions that are returned by IAM Access Analyzer, see IAM Access Analyzer policy check reference. Validation scope AWS checks JSON policy syntax and grammar. It also verifies that your ARNs are formatted properly and action names and condition keys are correct. Accessing policy validation Policies are validated automatically when you create a JSON policy or edit an existing policy in the AWS Management Console. If the policy syntax is not valid, you receive a notification and must fix the problem before you can continue. The findings from the IAM Access Analyzer policy validation are automatically returned in the AWS Management Console if you have permissions for access-analyzer:ValidatePolicy. You can also validate policies using the AWS API or AWS CLI. Existing policies You might have existing policies that are not valid because they were created or last saved before the latest updates to the policy engine. As a best practice, we recommend that you open your existing policies and review the policy validation results that are generated. You cannot edit and save existing policies without fixing any policy syntax errors.
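Outside the console, the same IAM Access Analyzer checks can be requested through the API. The sketch below uses the AWS SDK for Python (boto3), which is an assumption here, together with a throwaway example policy; it simply prints whatever findings come back.

```python
import json
import boto3  # assumes boto3 is installed and credentials allow access-analyzer:ValidatePolicy

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"}
    ],
}

client = boto3.client("accessanalyzer")
response = client.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)

# Findings mirror the console's errors, warnings, and suggestions.
for finding in response.get("findings", []):
    print(finding["findingType"], finding["issueCode"], "-", finding["findingDetails"])
```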
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_policy-validator.html
2021-06-13T02:57:18
CC-MAIN-2021-25
1623487598213.5
[]
docs.aws.amazon.com
MQL4 Reference Object Functions ObjectGetValueByTime The function returns the price value for the specified time value of the specified object. double ObjectGetValueByTime( long chart_id, // chart ID string object_name, // object name datetime time, // time int line_id=0 // line ID ); Parameters chart_id [in] Chart identifier. object_name [in] Name of the object. time [in] Time value. line_id=0 [in] Line identifier. Return Value The price value for the specified time value of the specified object. An object can have several values in one price coordinate, therefore it is necessary to specify the line number. This function applies only to the following objects: See also Object Types
https://docs.mql4.com/objects/objectgetvaluebytime
2021-06-13T03:26:07
CC-MAIN-2021-25
1623487598213.5
[]
docs.mql4.com
Configure a master template An easy way to create a master template is to copy the existing ESS sample site and customize it to suit your own needs. You can also configure a master template from scratch. Before you begin: Role required: content_admin or admin. About this task: Follow these steps to create a single master page and generate all important components within the site. Procedure Design a layout. Regardless of the interface, a site can be distilled into a few simple layouts. Create a theme. The theme defines the structure of the layouts in CSS and the base styles, such as fonts and colors. For more information, see Design themes. Build the common blocks. Pages are composed of content blocks. Most content blocks are reused on multiple pages. For the master template, build basic blocks such as a header, side navigation, and some basic content for the main content area of the page. More detailed content can be added later, but define content for reuse on many of the site pages here. For more information, see Content blocks. Build a site entry page. Use the common content blocks you just created to design the first page that users see when they enter the site. For more information, see Create a content page. Build a detail page. Design the detail pages to determine how pages such as knowledge articles, catalog items, and search results are displayed. You can build a detail page by copying the site entry page and adding additional content blocks. For more information, see Copy a page. Assign the pages created to the new site. Create the site and apply the layout and theme to the site defaults. Then, navigate to the All Pages list and specify the master template site in the Site column for each of the master template pages. For more information, see Create a site.
https://docs.servicenow.com/bundle/orlando-servicenow-platform/page/administer/content-management/task/t_BuildAMasterTemplateStepByStep.html
2021-06-13T01:45:35
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
Domain separation and Delegated Development Domain separation is unsupported in the Delegated Development feature. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can control several aspects of this separation, including which users can see and access data. Support level: No support. The domain field may exist on data tables, but no logic exists to manage data. This level is not considered domain-separated.
https://docs.servicenow.com/bundle/quebec-application-development/page/build/applications/concept/domain-separation-delegated-development.html
2021-06-13T02:35:31
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
Overview Secrets are essential for the operation of production applications. Although it may be convenient, it is a bad security practice to embed secrets such as passwords, API tokens in source code or yaml files. Unintended exposure of secrets is one of the top risks that should be properly addressed. Kubernetes Secrets¶ Kubernetes provides an object called Secret that can be used to store application sensitive data. Kubernetes Secrets can be injected into a Pod either as an environment variable or mounted as a file. Storing sensitive data in a Kubernetes Secret does not automatically make it secure. By default, all data in Kubernetes Secrets is stored as a plaintext encoded with base64. Secrets are stored in the cluster's etcd database. Depending on how the cluster was provisioned, the etcd database may be encrypted. Here is an example of a Kubernetes Secret YAML with a sensitive "username" and "password" encoded in base64 format. apiVersion: v1 kind: Secret metadata: name: test-secret data: username: bXktYXBw password: Mzk1MjgkdmRnN0pi Challenges¶ Multi Cluster Deployments¶ It is operationally challenging, cumbersome and insecure to manually provision and manage secrets on every cluster esp. with a fleet of Kubernetes clusters. No Dangling Secrets¶ It is a poor security practice to leave Secrets orphaned on Kubernetes clusters long after the workload has been removed from the cluster. Dynamic Retrieval of Secrets¶ Instead of statically provisioning secrets on a cluster and risk exposure, the workload pods should dynamically retrieve secrets from a central secrets management system based on the cluster's identity. Operational Complexity¶ It is operationally cumbersome and challenging to retrofit applications to securely communicate with Vault. Solution¶ We have developed a deep integration with HashiCorp's market leading Secrets Management platform called Vault. This addresses two core problems: It eliminates the operational complexity associated with establishing trust between Kubernetes clusters and a central Vault server. It makes it extremely easy for developers to dynamically retrieve secrets from Vault by using simple annotations.
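To underline that base64 is an encoding rather than encryption, the values from the example Secret above can be read back by anyone with access to the object. A small Python sketch:

```python
import base64

# Data taken from the example Secret above.
secret_data = {"username": "bXktYXBw", "password": "Mzk1MjgkdmRnN0pi"}

for key, encoded in secret_data.items():
    print(key, "=", base64.b64decode(encoded).decode("utf-8"))

# Prints:
# username = my-app
# password = 39528$vdg7Jb
```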
https://docs.stage.rafay.dev/integrations/secrets/overview/
2021-06-13T01:37:41
CC-MAIN-2021-25
1623487598213.5
[]
docs.stage.rafay.dev
Is the write capacity of your single-instance MySQL database nearing its maximum? Is the CPU utilization of your present configuration at an all-time high? Has your system’s performance slowed to a crawl such that users are complaining? Have you recently lost data due to unexpected hardware or network failures? Or is business booming and you anticipate encountering these issues in the near future? Those warning signs can indicate that you are outgrowing your current configuration. There are numerous options to consider when evaluating your future system requirements. For example, expanding to a larger server may provide a temporary solution but eventually that path becomes no longer practicable nor economical. The information below outlines some common upgrade alternatives, recaps potential obstacles of each, and provides justification as to why Xpand may be a viable alternative. Implementing either a Master/Slave or Master/Master replication topology to expand and distribute your system is one strategy many consider. In a Master/Slave configuration, the master typically performs writes, while the slaves service reads. As necessary, a slave can be promoted to a master. With Master/Master topologies, updates to multiple masters are synchronized and failover is built-in, as long as a single master can handle the workload. Both strategies allow reads and writes to be distributed amongst multiple servers to improve throughput or reduce logistical issues, and are certainly viable options for some implementations. But one needs to be aware of the following caveats: Writes to masters don’t scale beyond the capacity of the server they occupy. That means that once the server used for a master reaches its capacity, you’ll need yet another solution. In a Master/Slave topology, reads from slaves can add read capacity, but slave lags can render stale results. Stale data is sometimes acceptable, but if your application relies on consistency, stale data can wreak havoc. For example, online purchases that need to access inventory levels need a consistent, accurate view of the data to correctly fulfill orders. The goal of implementing a Master/Master topology is typically to scale write capacity. Adding slaves to additionally scale read transactions can introduce multiple update lags - one for the master, then one for each slave. The resulting stale data may be problematic. Master/Master topologies introduce challenges to coordinate conflicting concurrent updates. For example, what should happen if multiple users update the same row of data on multiple iterations of a master with conflicting information? Bulk updates performed in databases that employ Master/Master replication can readily introduce inconsistencies between the masters due to synchronization lags. Each new master or slave added to the system adds overhead for synchronization and coordination. Another approach to address growth is to distribute data across several computers by use of sharding. With horizontal sharding, whole tables are divided into subsets (i.e. U.S. customers versus foreign customers) and spread across multiple servers. With vertical sharding, tables are split by columns (i.e. frequently accessed columns versus infrequently used columns) and placed on different servers. There are several sharding and database traffic managers available to help manage a sharded environment or you can build your own. 
But they all have the same potential pitfalls: Joining tables across multiple shards is often difficult, does not perform well, and frequently requires forwarding large amounts of data. Specialized coding is often required to ensure that applications know the location of the data. ACID compliance, transactionality, and referential integrity will need to be developed and maintained at the application level. Offloading that standard RDBMS functionality to an application layer can be expensive, risky, and can degrade performance. Operational tasks such as backups, schema changes, adding indexes to improve performance, and upgrading applications become unwieldy. Recovering all shards to a single point-in-time (backup recovery) is nearly impossible. There are numerous distributed solutions available (both SQL-based and NoSQL) that allow two or more servers to be combined or “clustered”. Some of these distributed systems can be considered “shared nothing” as the servers do not share memory or disk storage. In a shared nothing environment, each server is self-sufficient and operates independently. Shared nothing architectures have been proven to be the best for scaling-out capacity and performance. Performance is enhanced because processing can be parallelized to multiple servers and there is no single potential bottleneck. One of the biggest challenges of distributed systems, however, is coordinating how the various processes, workloads, and data get distributed and balanced between multiple servers. Some of the tradeoffs to be considered with a distributed solution include: NoSQL solutions often scale nicely but relax support of transactions, ACID compliance, and RDBMS referential integrity to do so. Some NoSQL products now support SQL language semantics, but that does not mean they support typical RDBMS guarantees such as ACID compliance, real-time transaction processing, and referential integrity. Some distributed solutions are add-ons to a single instance database that emulate the functionality of a distributed system. Such solutions frequently do not scale for writes as robustly as they do for reads. Others may require expensive refactoring and customized changes to your application. Xpand eliminates the need to consider any of the options noted above! Our clustered solution offers easy, flexible scaling for growth. A feature of Xpand to Flex Up and Flex Down a cluster allows your database to adjust simply by adding or removing node(s). Xpand’s shared nothing architecture accommodates true distributed processing without sharding. Distributed query execution promotes parallel processing by sending queries to the data versus bringing the data to each query. All such distribution is completely transparent to the system’s users and accessible via a simple MySQL compatible SQL interface. No application customization is necessary to access this distributed data via distributed queries. The documentation relating to Distributed Database Architecture explains the numerous elements of Xpand that work together to make this possible. Additionally, Xpand is fault tolerant. The database includes an inherent replication strategy that maintains redundantly updated copies of all data. In the event of an unexpected node failure, Xpand will automatically recover and continue operations with the remaining nodes. A Xpand cluster can lose a node without losing any data. To learn more, read about Xpand's Consistency, Fault Tolerance, and Availability. 
Xpand provides these capabilities while maintaining conventional RDBMS properties such as ensuring real-time referential integrity, accommodating multi-table joins and ad hoc queries, and guaranteeing ACID Compliance. Xpand was designed to be a scalable drop-in replacement for MySQL. Finally, Xpand provides fast, complete, consistent backup and restore features along with online schema changes that are made without service interruption. Refer to the chart below to see how Xpand compares to the alternatives discussed. To obtain a free trial of Xpand, please contact Xpand Sales.
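Because Xpand presents a MySQL compatible SQL interface, existing MySQL client code should work largely unchanged. The sketch below uses the PyMySQL driver as an assumption; the host, credentials, and table are placeholders.

```python
import pymysql  # assumption: any MySQL-compatible driver can connect to an Xpand cluster

conn = pymysql.connect(
    host="xpand-node.example.com",  # placeholder: any node (or a load balancer) in the cluster
    user="app",
    password="secret",
    database="appdb",
)
try:
    with conn.cursor() as cur:
        # Ordinary SQL; the distributed query execution described above is transparent to the client.
        cur.execute("SELECT COUNT(*) FROM orders WHERE status = %s", ("open",))
        print(cur.fetchone())
finally:
    conn.close()
```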
https://docs.clustrix.com/display/CLXDOC/Why+Xpand
2021-06-13T02:21:02
CC-MAIN-2021-25
1623487598213.5
[]
docs.clustrix.com
SchedulerStorage.CreateReminder(Appointment) Method Creates a reminder for the specified appointment. Namespace: DevExpress.Xpf.Scheduler Assembly: DevExpress.Xpf.Scheduler.v19.2.dll Declaration public Reminder CreateReminder( Appointment appointment ) Public Function CreateReminder( appointment As Appointment ) As Reminder The CreateReminder method creates a reminder and associates it with the specified appointment. By default, the ReminderBase.TimeBeforeStart value is 15 minutes. See Also
https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduler.SchedulerStorage.CreateReminder(DevExpress.XtraScheduler.Appointment)?v=19.2
2021-06-13T01:59:40
CC-MAIN-2021-25
1623487598213.5
[]
docs.devexpress.com
Team Development Team Development supports parallel development on multiple, non-production ServiceNow instances. Team Development provides the following features: Branching operations, including pushing and pulling record versions between instances. The ability to compare a development instance to other development instances. A central dashboard for all Team Development activities.
Team Development overview: Team Development allows developers to work on separate development instances while sharing code and resolving collisions throughout the development process.
Team Development setup: To enable parallel development on multiple non-production instances, administrators can set up the Team Development instance hierarchy and grant access rights for developers.
Code reviews: Team Development administrators can require that pushes undergo code review before accepting pushes.
Code review notifications: You must enable email notifications on the instance requiring code review for that instance to send code review notifications.
Code review workflow: The Team Development Code Review workflow manages how changes are pushed to the parent.
Exclusion policies: You can exclude certain files from change tracking by creating an exclusion policy.
Instance hierarchies: Team Development allows you to set up a distributed version control system between multiple ServiceNow instances where each instance acts as a source repository, or branch.
Pulls and pushes: Developers synchronize their instances to the parent instance by pulling and pushing versions of customized records and resolving collisions between versions on the parent instance and the development instance.
Team Development process: The basic Team Development process sets up the instance hierarchy, grants developer access rights, manages the movement of development changes from development instances to test instances, and promotes applications to the production instance.
Team Development roles: To use Team Development, developers must have admin access to their development instance.
Versions: Version records track changes to a customized record over time so that administrators can compare or revert to specific versions later.
Versions and local changes: Version records track changes to a customizable record over time so that you can compare or revert to a specific version later.
https://docs.servicenow.com/bundle/paris-application-development/page/build/team-development/concept/c_TeamDevelopment.html
2021-06-13T02:08:27
CC-MAIN-2021-25
1623487598213.5
[]
docs.servicenow.com
7.1.3 # #, d = default, h = higher, H = highest) is: RTSS(H) > DMAS(H) > CSS(H) > RTSS(h) > RTSS(d) > CSS(h) > CSS(d) > DMAS(h) > DMAS: + "latest": search a subset of the latest events + "random": search a random sampling of events + "diverse": search a diverse sampling of events + "rare": search a rare sampling of events based on clustering * Default: latest dataset.display.sample_ratio = > * A comma-separated list of tag fields that the data model requires for its search result sets. * This is a search performance setting. Apply it only to data models that use a significant number of tag field attributes in their definitions. Data models without tag fields cannot use this setting. This setting does not recognize tags used in constraint searches. * Only the tag fields identified.3 # # Last modified on 31 August, 2018 This documentation applies to the following versions of Splunk® Enterprise: 7.1.3
https://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/Datamodelsconf
2021-06-13T03:12:54
CC-MAIN-2021-25
1623487598213.5
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
unreal.BuoyancyData¶ - class unreal. BuoyancyData(pontoons=[])¶ Bases: unreal.StructBase Buoyancy Data C++ Source: Plugin: Water Module: Water File: BuoyancyComponent.h Editor Properties: (see get_editor_property/set_editor_property) angular_drag_coefficient(float): [Read-Write] Angular Drag Coefficient apply_drag_forces_in_water(bool): [Read-Write] Apply Drag Forces in Water buoyancy_coefficient(float): [Read-Write] Increases buoyant force applied on each pontoon. buoyancy_damp(float): [Read-Write] Damping factor to scale damping based on Z velocity. buoyancy_damp2(float): [Read-Write] Second Order Damping factor to scale damping based on Z velocity. buoyancy_ramp_max(float): [Read-Write] Maximum value that buoyancy can ramp to (at or beyond max velocity). buoyancy_ramp_max_velocity(float): [Read-Write] Maximum velocity until which the buoyancy can ramp up. buoyancy_ramp_min_velocity(float): [Read-Write] Minimum velocity to start applying a ramp to buoyancy. drag_coefficient(float): [Read-Write] Drag Coefficient drag_coefficient2(float): [Read-Write] Drag Coefficient 2 max_buoyant_force(float): [Read-Write] Maximum buoyant force in the Up direction. max_drag_speed(float): [Read-Write] Max Drag Speed max_water_force(float): [Read-Write] Maximum push force that can be applied by rivers. pontoons(Array(SphericalPontoon)): [Read-Write] Pontoons water_shore_push_factor(float): [Read-Write] Coefficient for nudging objects to shore (for perf reasons). water_velocity_strength(float): [Read-Write] Coefficient for applying push force in rivers. - property pontoons¶ [Read-Only] Pontoons - Type (Array(SphericalPontoon))
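As a rough sketch of how this struct is typically handled from the editor's Python environment (assuming the Water plugin is enabled so that unreal.BuoyancyData is available):

```python
import unreal

# Construct the struct and adjust a few of the editor properties listed above.
buoyancy = unreal.BuoyancyData(pontoons=[])
buoyancy.set_editor_property("buoyancy_coefficient", 1.5)
buoyancy.set_editor_property("apply_drag_forces_in_water", True)

# The pontoons property is read-only on the wrapper, but it can be
# supplied via the constructor and inspected afterwards.
print(buoyancy.get_editor_property("pontoons"))
```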
https://docs.unrealengine.com/4.26/en-US/PythonAPI/class/BuoyancyData.html
2021-06-13T03:19:44
CC-MAIN-2021-25
1623487598213.5
[]
docs.unrealengine.com
Custom dashboards allow you to create and arrange advanced reports based on all of your customer data and activity objects in Vitally. Dashboards consist of widgets, built using a simple 3-step widget wizard. Creating a new dashboard is simple! From the navigation bar, expand the caret next to Dashboards. Then, click Add Dashboard, and finally select Create a New Dashboard. Next, you'll be prompted to name your dashboard and set permissions. You'll have two choices to control access to your new Dashboard: You can publish your view to your team's collection of Dashboards so that it can be found by all of your team members. You can elect not to share this Dashboard, and it will only be visible to you or anyone that you share the link with. Once you create your dashboard, the next step is to create your first widget. The widget wizard is broken up into a sequence of 3 steps where you will be able to select the type of data you want to visualize, how it's visualized, how the data is calculated, and how the results are filtered. First, you'll want to give your new widget a name. Next, select the data object that you'll want to visualize in your widget. Finally, select the visualization that you want to apply. Widgets can range from the simple (e.g. "show me the current Number of renewing customers over the next 6 months") to the complex (e.g. "show me a Stacked Bar grouped by the renewal date and subdivided by the account's health"). Depending on which type of visualization you have selected, you'll be presented with a list of options to configure your new widget. Here, you'll select the Account/Task/Project/etc columns that power your widget. For example, to build a widget visualizing the health of renewing ARR over the coming months, you'll want to 1) select the Stacked Bar option in step 1 and 2) specify the below options in Step 2: That gives you a widget that looks like this! Finally, add any relevant filters in order to narrow down your results to only the Accounts, Users, or Objects that you want to quantify.
https://docs.vitally.io/reporting-and-segmentation/custom-dashboards-beta
2021-06-13T01:40:21
CC-MAIN-2021-25
1623487598213.5
[]
docs.vitally.io
Crate evmap. A lock-free, eventually consistent, concurrent multi-value map, built around a single writer handle and multiple cheap read handles. Since the implementation uses regular HashMaps under the hood, table resizing is fully supported. It does, however, also mean that the memory usage of this implementation is approximately twice that of a regular HashMap, and more if writes rarely refresh after writing.
https://docs.rs/evmap/0.6.5/evmap/index.html
2021-06-13T02:39:38
CC-MAIN-2021-25
1623487598213.5
[]
docs.rs
An Act to repeal 196.025 (4), 196.025 (5), 196.192 (2) (am) and 201.10 (1); to renumber 16.95 (12), 182.0175 (1) (bt), 182.0175 (1) (bv) and 182.0175 (3) (b); to renumber and amend 16.955, 182.0175 (2) (am) 3., 182.0175 (3) (a) (title), 182.0175 (3) (a) and 196.192 (2) (bm); to amend 26.03 (1v) (b), 101.80 (1g), 182.0175 (2) (am) (title), 182.0175 (2) (am) 7., 182.0175 (2) (bm) (title), 182.0175 (2m) (b) (intro.), 182.0175 (4), 182.0175 (5), 196.192 (2) (c), 196.192 (3m), 196.193 (3), 196.49 (5g) (ag), 196.49 (5g) (ar) 2m. b., 196.49 (5g) (ar) 2m. c., 196.491 (4) (c) 1m. (intro.), 196.491 (4) (c) 1m. a., 196.491 (4) (c) 1m. b., 196.595 (1) (c), 201.10 (2), 348.17 (3) and 348.17 (4); to repeal and recreate 182.0175 (3) (title); and to create 59.693 (11), 182.0175 (1) (ab), 182.0175 (1) (ac), 182.0175 (1) (ag), 182.0175 (1) (bq), 182.0175 (1) (br), 182.0175 (1) (bw), 182.0175 (1) (by), 182.0175 (1) (bz), 182.0175 (1m) (d) 8. to 12., 182.0175 (2) (as) (title), 182.0175 (2) (as) 3., 182.0175 (3) (bg), (br) and (c), 182.0175 (3) (d) 2., 182.0175 (3) (e), 182.0175 (3) (f), 182.0175 (3) (g), 196.025 (7), 196.026 and 196.192 (2) (bm) 1. and 2. of the statutes; Relating to: one-call system enforcement and other requirements, Public Service Commission authority regarding state energy policy, settlements between parties in Public Service Commission dockets, various public utility regulatory requirements, the regulation of utility facilities under a county construction site erosion control and storm water management zoning ordinance, granting rule-making authority, and providing a penalty. (FE)
https://docs.legis.wisconsin.gov/2017/proposals/ab532
2021-06-13T03:22:53
CC-MAIN-2021-25
1623487598213.5
[]
docs.legis.wisconsin.gov
If you already use a JWT or OpenID-based authentication mechanism (e.g. using Auth0, AWS Cognito, or Firebase), you can use your existing User Tokens with Xkit (or generate new tokens just for use with Xkit). To do so, you'll need a few pieces of information about your tokens: - The contents of the iss(issuer) claim - The contents of the aud(audience) claim (optional) - The claim that uniquely identifies your user (usually the sub[subject] claim) - A claim with a friendlier name for your user (e.g. an email) (optional) - The claim that identifies the group your user belongs to (optional) - The JSON Web Key Set (JWKS) URL used to sign the JWT. To set up your tokens for use with Xkit, go to the Settings page, and scroll down to the "User Tokens" section. Click on "Add Custom Issuer" under the Custom Issuers section, and provide the information requested. Click "Save" and your custom issuer will be active, allowing users to log into Xkit with your tokens. The User ID Claim In order to use your token to provision and authenticate your user, we need to know which one of the claims on the token is that user's unique identifier. For many tokens this is the sub (subject) claim, but some tokens contain custom claims like user_id. It's important to note that the contents of this claim will serve as the user's external_id elsewhere in the Xkit service. So if for any reason it is in a different format, that's the format you'll need to use when you are communicating with Xkit about that user. The Group ID Claim If you are using the User Groups feature to share connections between your users, we need to know which group this user is a part of. This will likely be a custom claim on your token. The contents of this field will be used in the Get Group Connection endpoint to retrieve tokens for this group. Other ways to provide keys Currently the only way to provide keys that serve as signers for your Custom Issuer is as a JWKS URL as defined in RFC 7517. If you have keys in another format that you would like to use, please Contact Support. Xkit-issued Tokens Note that Xkit will still issue its own tokens to users after they are authenticated, so there is no guarantee that the token in use by an Xkit library will be a token issued by your Custom Issuer. Updated 5 months ago
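Before configuring a Custom Issuer, it can help to decode one of your own tokens locally and confirm it carries the claims listed above. The sketch below uses the PyJWT library, which is an assumption, and placeholder token, issuer, audience, and JWKS values:

```python
import jwt
from jwt import PyJWKClient  # PyJWT >= 2.x

token = "<one of your user tokens>"                                  # placeholder
jwks_url = "https://your-issuer.example.com/.well-known/jwks.json"   # placeholder JWKS URL

signing_key = PyJWKClient(jwks_url).get_signing_key_from_jwt(token)
claims = jwt.decode(
    token,
    signing_key.key,
    algorithms=["RS256"],
    audience="your-audience",                    # should match the aud claim you give Xkit (optional)
    issuer="https://your-issuer.example.com",    # should match the iss claim you give Xkit
)

# The claim you choose as the User ID Claim (often "sub") becomes the
# user's external_id in Xkit; a friendlier claim such as email is optional.
print(claims["sub"], claims.get("email"))
```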
https://docs.xkit.co/docs/custom-token-issuer
2021-06-13T03:23:32
CC-MAIN-2021-25
1623487598213.5
[]
docs.xkit.co
Welcome to CALDERA’s documentation!¶ CALDERA™ is a cyber security framework designed to easily run autonomous breach-and-simulation exercises. It can also be used to run manual red-team engagements or automated incident response. CALDERA is built on the MITRE ATT&CK™ framework and is an active research project at MITRE. The framework consists of two components: 1. The core system. This is the framework code, including an asynchronous command-and-control (C2) server with a REST API and a web interface. 2. Plugins. These are separate repositories that hang off of the core framework, providing additional functionality. Examples include agents, GUI interfaces, collections of TTPs and more. Visit Installing CALDERA for installation information. For getting familiar with the project, visit Getting started, which documents step-by-step guides for the most common use cases of CALDERA, and Basic usage, which documents how to use some of the basic components in core CALDERA. Visit Learning the terminology for in depth definitions of the terms used throughout the project. For information about CALDERA plugins, visit Plugin Library and How to Build Plugins if you are interested in building your own. Usage Guides - Installing CALDERA - Getting started - Learning the terminology - Basic Usage - Server Configuration - Plugin library - How CALDERA makes decisions - Objectives - Operation Results - Initial Access Attacks - Windows Lateral Movement Guide - Dynamically-Compiled Payloads - Exfiltration - Peer-to-Peer Proxy Functionality for 54ndc47 Agents - Uninstall CALDERA - Troubleshooting - Resources The following section contains documentation from installed plugins. The following section contains information intended to help developers understand the inner workings of the CALDERA adversary emulation tool, CALDERA plugins, or new tools that interface with the CALDERA server. Developer Information - The REST API - How to Build Plugins - How to Build Planners - How to Build Agents Core System API
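As a first taste of the C2 server's REST API (covered under "The REST API" below), a request along the following lines is a common starting point. Note that the endpoint path, header name, and key shown here are assumptions based on typical CALDERA deployments and should be checked against your own server configuration:

```python
import requests  # assumption: the requests library is installed

SERVER = "http://localhost:8888"   # assumed default web interface address
HEADERS = {"KEY": "ADMIN123"}      # assumed API key; in practice, taken from the server's configuration

# Ask the server for its known agents.
resp = requests.post(f"{SERVER}/api/rest", json={"index": "agents"}, headers=HEADERS)
resp.raise_for_status()
for agent in resp.json():
    print(agent.get("paw"), agent.get("platform"))
```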
https://caldera.readthedocs.io/en/3.1.0/
2021-06-13T02:40:25
CC-MAIN-2021-25
1623487598213.5
[]
caldera.readthedocs.io
Greenbone Security Manager with Greenbone OS 20.08 – User Manual¶ This is the manual for the Greenbone Security Manager with Greenbone OS (GOS) version 20.08. Due to the numerous functional and other differences between GOS 20.08). Greenbone Security Manager – Overview - 3.1 Physical Appliances - 3.2 Virtual Appliances - 4 Guideline for Using the Greenbone Security Manager - 5 Setting up the Greenbone Security Manager - 5.1 GSM 5400/6500 - 5.2 GSM 400/450/600/650 - 5.3 GSM 150 - 5.4 GSM 35 - 5.5 GSM CENO/DECA/TERA/PETA/EXA - 5.6 GSM 25V - 5.7 GSM ONE/MAVEN - 6 Upgrading from GOS 6 to GOS 20.08 - 6.1 Upgrading the Greenbone Security Manager - 6.2 Updating the Feed After Upgrading to GOS 20.08 - 6.3 Upgrading the Flash Partition to the Latest Version - 6.4 Reloading the Web Interface After an Upgrade - 6.5 Changes of Default Behavior - 7 Managing the Greenbone Operating System - 7.1 General Information - 7.2 Setup Menu - 7.2.1 Managing Users - 7.2.1.1 Changing the System Administrator Password - 7.2.1.2 Managing Web Users - 7.2.1.3 Creating a Web Administrator - 7.2.1.4 Enabling a Guest User - 7.2.1.5 Creating a Super Administrator - 7.2.1.6 Deleting a User Account - 7.2.1.7 Changing a User Password - 7.2.1.8 Changing the Password Policy - 7.2.1.9 Configuring the Settings for Data Objects - 7.2.2 Configuring the Network Settings - 7.2.2.1 Switching an Interface to Another Namespace - 7.2.2.2 Configuring Network Interfaces - 7.2.2.3 Configuring the DNS Server - 7.2.2.4 Configuring the Global Gateway - 7.2.2.5 Setting the Host Name and the Domain Name - 7.2.2.6 Restricting the Management Access - 7.2.2.7 Displaying the MAC and IP Addresses and the Network Routes - 7.2.3 Configuring Services - 7.2.4 Configuring Periodic Backups - 7.2.5 Configuring the Feed Synchronization - 7.2.6 Activating or Deactivating the Boreas Alive Scanner - 7.2.7 Configuring the GSM as an Airgap Master/Sensor - 7.2.8 Configuring the Time Synchronization - 7.2.9 Selecting the Keyboard Layout - 7.2.10 Configuring Automatic E-Mails - 7.2.11 Configuring the Collection of Logs - 7.2.12 Setting the Maintenance Time - 7.3 Maintenance Menu - 7.3.1 Performing a Self-Check - 7.3.2 Performing and Restoring a Backup - 7.3.3 Copying Data and Settings to Another GSM with Beaming - 7.3.4 Performing a GOS Upgrade - 7.3.5 Performing a GOS Upgrade on Sensors - 7.3.6 Performing a Feed Update - 7.3.7 Performing a Feed Update on Sensors - 7.3.8 Upgrading the Flash Partition - 7.3.9 Shutting down and Rebooting the Appliance - 7.4 Advanced Menu - 7.4.1 Displaying Log Files of the GSM - 7.4.2 Performing Advanced Administrative Work - 7.4.3 Displaying the Greenbone Security Feed (GSF) Subscription Key - 7.4.4 Displaying the Copyright File - 7.5 Displaying Information about the Appliance - 8 Getting to Know the Web Interface - 8.1 Logging into the Web Interface - 8.2 List Pages and Details Pages - 8.3 Dashboards and Dashboard Displays - 8.4 Filtering the Page Content - 8.5 Using Tags - 8.6 Using the Trashcan - 8.7 Displaying the Feed Status - 8.8 Changing the User Settings - 8.9 Opening the User Manual - 8.10 Logging Out of the Web Interface - 9 Managing the Web Interface Access - 9.1 Users - 9.2 Roles - 9.3 Groups - 9.4 Permissions - 9.4.1 Creating and Managing Permissions - 9.4.2 Granting Super Permissions - 9.4.3 Granting Read Access to Other Users - 9.5 Using a Central User Management - 10 Scanning a System - 10.1 Using the Task Wizard for a First Scan - 10.2 Configuring a Simple Scan Manually - 10.3 Configuring an Authenticated Scan Using 
Local Security Checks - 10.3.1 Advantages and Disadvantages of Authenticated Scans - 10.3.2 Using Credentials - 10.3.3 Requirements on Target Systems with Microsoft Windows - 10.3.4 Requirements on Target Systems with ESXi - 10.3.5 Requirements on Target Systems with Linux/Unix - 10.3.6 Requirements on Target Systems with Cisco OS - 10.3.7 Requirements on Target Systems with Huawei VRP - 10.3.8 Requirements on Target Systems with EulerOS - 10.3.9 Requirements on Target Systems with GaussDB - 10.4 Configuring a Prognosis Scan - 10.5 Using Container Tasks - 10.6 Managing Targets - 10.7 Creating and Managing Port Lists - 10.8 Managing Tasks - 10.9 Configuring and Managing Scan Configurations - 10.9.1 Default Scan Configurations - 10.9.2 Creating a Scan Configuration - 10.9.3 Importing a Scan Configuration - 10.9.4 Editing the Scanner Preferences - 10.9.5 Editing the VT Preferences - 10.9.6 Managing Scan Configurations - 10.10 Performing a Scheduled Scan - 10.11 Creating and Managing Scanners - 10.12 Using Alerts - 10.13 Obstacles While Scanning - 11 Reports and Vulnerability Management - 11.1 Configuring and Managing Report Formats - 11.2 Using and Managing Reports - 11.3 Displaying all Existing Results - 11.4 Displaying all Existing Vulnerabilities - 11.5 Trend of Vulnerabilities - 11.6 Using Tickets - 11.7 Using Notes - 11.8 Using Overrides and False Positives - 11.9 Using Business Process Maps - 12 Performing Compliance Scans and Special Scans - 12.1 Configuring and Managing Policies - 12.2 Configuring and Managing Audits - 12.3 Using and Managing Policy Reports - 12.4 Generic Policy Scans - 12.4.1 Checking File Content - 12.4.2 Checking Registry Content - 12.4.3 Checking File Checksums - 12.4.4 Performing CPE-Based Checks - 12.5 Checking Standard Policies - 12.6 Running a TLS-Map Scan - 13 Managing Assets - 14 Managing SecInfo - 15 Using the Greenbone Management Protocol - 15.1 Changes to GMP - 15.2 Activating GMP - 15.3 Using gvm-tools - 15.4 Status Codes - 16 Using a Master-Sensor Setup - 17 Managing the Performance - 17.1 Monitoring the Appliance Performance - 17.2 Optimizing the Scan Performance - 17.3 Scan Queueing - 18 Connecting the Greenbone Security Manager to Other Systems - 18.1 Using an OSP Scanner - 18.2 Using Verinice - 18.3 Using Nagios - 18.4 Using the Cisco Firepower Management Center - 18.5 Using Alemba vFire - 18.6 Using Splunk - 18.6.1 Setting up the Greenbone-Splunk App - 18.6.2 Configuring a Splunk Alert - 18.6.3 Using the Greenbone-Splunk App - 19 Architecture - 20 Frequently Asked Questions - 20.1 Why is the Scanning Process so Slow? - 20.2 Why Is a Service/Product Not Detected? - 20.3 Why Is a Vulnerability Not Detected? - 20.4 Why Is It Not Possible to Edit Scan Configurations/Port Lists/Compliance Policies/Report Formats? - 20.5 Why Is It Not Possible to Delete Scan Configurations/Port Lists/Compliance Policies/Report Formats? - 20.6 Why Are Less Scan Configurations/Port Lists/Compliance Policies/Report Formats Visible Than With Previous GOS Versions? - 20.7 Why Does a VNC Dialog Appear on the Scanned Target System? - 20.8 Why Does the Scan Trigger Alarms on Other Security Tools? - 20.9 How Can a Factory Reset of the GSM Be Performed? - 20.10 Why Does Neither Feed Update nor GOS Upgrade Work After a Factory Reset? - 20.11 How Can an Older, Newer or Unsupported Backup Be Restored? - 20.12 What Can Be Done if the GOS Administration Menu Is not Displayed Correctly in PuTTY? - 20.13 How Can the GMP Status Be Checked Without Using Credentials? 
- 21 Glossary - 21.1 Alert - 21.2 Asset - 21.3 CERT-Bund Advisory - 21.4 Compliance Audit - 21.5 Compliance Policy - 21.6 CPE - 21.7 CVE - 21.8 CVSS - 21.9 DFN-CERT Advisory - 21.10 Filter - 21.11 Group - 21.12 Host - 21.13 Note - 21.14 Vulnerability Test (VT) - 21.15 OVAL Definition - 21.16 Override - 21.17 Permission - 21.18 Port List - 21.19 Quality of Detection (QoD) - 21.20 Remediation Ticket - 21.21 Report - 21.22 Report Format - 21.23 Result - 21.24 Role - 21.25 Scan - 21.26 Scanner - 21.27 Scan Configuration - 21.28 Schedule - 21.29 Severity - 21.30 Solution Type - 21.31 Tag - 21.32 Target - 21.33 Task - 21.34 TLS Certificate
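Section 15.3 covers gvm-tools; as a rough illustration of what a GMP session looks like from Python, the sketch below uses the python-gvm library that gvm-tools builds on. The hostname, port, and credentials are placeholders, and GMP must be activated on the appliance first (see Activating GMP).

```python
from gvm.connections import TLSConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

connection = TLSConnection(hostname="gsm.example.com", port=9390)  # placeholder appliance address
with Gmp(connection, transform=EtreeTransform()) as gmp:
    gmp.authenticate("webadmin", "password")  # a web user with sufficient permissions
    print(gmp.get_version().find("version").text)
    # List the names of the tasks visible to this user.
    for task in gmp.get_tasks().findall("task"):
        print(task.find("name").text)
```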
https://docs.greenbone.net/GSM-Manual/gos-20.08/en/
2021-06-13T01:27:36
CC-MAIN-2021-25
1623487598213.5
[]
docs.greenbone.net
kPow supports SSL termination at the instance without a reverse proxy. Once configured all content is served via HTTPS, meaning you must update any configured integrations including Prometheus scrapers, SSO providers (e.g. OPENID_LANDING_URI and callback-urls within the provider) as https://. kPow is powered by Jetty which uses Java KeyStores (JKS) to manage certificates. Refer to the Jetty documentation for instructions on using the JDK keytool or OpenSSL to create and import certificates (e.g. a .pem file) into a KeyStore. Set the following environment variable and start kPow with SSL connections. ENABLE_HTTPS=true Once set kPow will serve HTTPS traffic on the configured UI PORT HTTPS_KEYSTORE= The location of your KeyStore, e.g. /var/certs/keystore.jks HTTPS_KEYSTORE_TYPE= The type of KeyStore (eg, PKCS12). HTTPS_KEYSTORE_PASSWORD= The password of the KeyStore. HTTPS_TRUSTSTORE= - (optional) The location of your Truststore e.g. /var/certs/truststore.jks HTTPS_TRUSTSTORE_TYPE= (optional) The type of TrustStore. HTTPS_TRUSTSTORE_PASSWORD= (optional) The password of the TrustStore.
https://docs.kpow.io/features/https-connections
2021-06-13T02:00:38
CC-MAIN-2021-25
1623487598213.5
[]
docs.kpow.io
Pass your actual test with our Huawei H35-950 training material at the first attempt. Last Updated: Jun 08, 2021 No. of Questions: 0 Questions & Answers with Testing Engine Latest Version: V12.05 Download Limit: Unlimited We are already working hard to make the H35-950 exam material available to our valued customers. If you are interested in the H35-950 exam material, provide us your email and we will notify you. We provide the most up-to-date and accurate H35-950 questions and answers, which are the best for clearing the actual test. Instant download of the Huawei H35-950 exam practice torrent is available for all of you. 100% pass is our guarantee: the H35-950 HCS-HUAWEI CLOUD Stack Service Management V1.0 accurate questions, with the best reputation in the market, can help you ward off all unnecessary and useless materials and spend your limited time practicing the most helpful questions. To get to know more about the features of the Huawei-certification HCS-HUAWEI CLOUD Stack Service Management V1.0 practice torrent, follow the passages below. For candidates saddled with the burden of the exam, our HCS-HUAWEI CLOUD Stack Service Management V1.0 pdf vce serves as the requisite preparation. Our H35-950 Huawei-certification latest torrent is unlike others. With an effective HCS-HUAWEI CLOUD Stack Service Management V1.0 practice pdf like ours you can strike a balance between life and study, and you can reap an immediate harvest by using our HCS-HUAWEI CLOUD Stack Service Management V1.0 updated vce. With a passing rate of up to 98-100 percent, our Huawei study guide has helped our customers realize their dreams as much as possible. If you do not obtain the certificate after using the HCS-HUAWEI CLOUD Stack Service Management V1.0 prep training, you can get a full refund without any reasons or switch to other versions freely. We aim to write the most perfect HCS-HUAWEI CLOUD Stack Service Management V1.0 H35-950 practice questions and are staunch defenders of your interests. What is more, we have optimized our staff and employees to choose the outstanding ones to offer help. It is a win-win situation for you and our company when you pass the HCS-HUAWEI CLOUD Stack Service Management V1.0 practice exam successfully. So we never stop the pace of offering the best services and H35-950 free questions. That is exactly the aim of our company in these years. Over 69850+ Satisfied Customers Len Murphy Reg Toby Ada Candance Exam4Docs is the world's largest certification preparation company with a 99.6% Pass Rate History from 69850+ Satisfied Customers in 148 Countries.
https://www.exam4docs.com/hcs-huawei-cloud-stack-service-management-%C2%A0v1.0-accurate-questions-11030.html
2021-06-13T02:12:48
CC-MAIN-2021-25
1623487598213.5
[]
www.exam4docs.com
Download You can download the SlicingDice JDBC driver by clicking on this link. Connection Instructions To connect with the JDBC driver you can use any SQL client or any language that supports JDBC. To do so you only need to add the driver to the classpath and provide the following connection string: jdbc:slicingdice:APIKeys=DATABASE_KEY_OR_CUSTOM_DATABASE_KEY; Connection String Options The JDBC driver has many connection string options that you can use to customize your connection; they are listed in the JDBC Driver documentation. Use EnforceTableSchema to auto-create columns The JDBC driver allows the user to auto-create missing columns using the INSERT statement. To do so, add the following connection string option when loading the driver: Other="EnforceTableSchema=false". Usage Instructions SlicingDice supports SQL-92 insertions and queries, so you can use pure SQL to make queries and insertions with our driver. To learn more about SQL compliance, check the driver documentation.
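As a rough illustration of using the connection string from Python through a JDBC bridge, here is a sketch with the jaydebeapi package; the driver class name and jar path are assumptions, not values from the SlicingDice documentation:

import jaydebeapi

# Both the driver class and the jar path below are placeholders; check the
# driver documentation for the real values.
conn = jaydebeapi.connect(
    "com.slicingdice.jdbc.Driver",  # assumed driver class name
    "jdbc:slicingdice:APIKeys=DATABASE_KEY_OR_CUSTOM_DATABASE_KEY;",
    jars="/path/to/slicingdice-jdbc-driver.jar",
)
try:
    cursor = conn.cursor()
    cursor.execute("SELECT 1")   # plain SQL-92 queries are supported
    print(cursor.fetchall())
finally:
    conn.close()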
https://docs.slicingdice.com/v1/docs/jdbc-driver
2018-08-14T08:34:51
CC-MAIN-2018-34
1534221208750.9
[]
docs.slicingdice.com
SITEURL Base URL of your website. Not defined by default, so it is best to specify your SITEURL; if you do not, feeds will not be generated with properly-formed URLs. You should include your domain, with no trailing slash at the end. Example: SITEURL = ''
STATIC_PATHS = ['images']
CATEGORY_URL = 'category/{slug}.html' The URL to use for a category.
CATEGORY_SAVE_AS = 'category/{slug}.html' The location to save a category.
TAG_URL = 'tag/{slug}.html' The URL to use for a tag.
TAG_SAVE_AS = 'tag/{slug}.html' The location to save the tag page.
AUTHOR_URL = 'author/{slug}.html' The URL to use for an author.
AUTHOR_SAVE_AS = 'author/{slug}.html' The location to save an author.
YEAR_ARCHIVE_SAVE_AS = '' The location to save per-year archives of your posts.
MONTH_ARCHIVE_SAVE_AS = '' The location to save per-month archives of your posts.
DAY_ARCHIVE_SAVE_AS = '' The location to save per-day archives of your posts.
SLUG_SUBSTITUTIONS = () Substitutions to make prior to stripping out non-alphanumerics when generating slugs. Specified as a list of 3-tuples of (from, to, skip) which are applied in order. skip is a boolean indicating whether or not to skip replacement of non-alphanumeric characters. Useful for backward compatibility with existing URLs.
AUTHOR_SUBSTITUTIONS = () Substitutions for authors. SLUG_SUBSTITUTIONS is not taken into account here!
CATEGORY_SUBSTITUTIONS = () Added to SLUG_SUBSTITUTIONS for categories.
TAG_SUBSTITUTIONS = () Added to SLUG_SUBSTITUTIONS for tags.
Note If you do not want one or more of the default pages to be created (e.g., you are the only author on your site and thus do not need an Authors page), set the corresponding *_SAVE_AS setting to '' to prevent the relevant page from being generated.
Note Substitutions are applied in order, with the side effect that keeping non-alphanumeric characters applies to the whole string when a replacement is made. For example, if you have the following setting: SLUG_SUBSTITUTIONS = (('C++', 'cpp'), ('keep dot', 'keep.dot', True)) the string Keep Dot will be converted to keep.dot; however, C++ will keep dot will be converted to cpp will keep.dot instead of cpp-will-keep.dot! If you want to keep non-alphanumeric characters only for tags or categories but not other slugs, then configure TAG_SUBSTITUTIONS and CATEGORY_SUBSTITUTIONS respectively!
Direct templates work a bit differently than noted above. Only the _SAVE_AS settings are available, but they can be set for any direct template.
ARCHIVES_SAVE_AS = 'archives.html' The location to save the article archives page.
YEAR_ARCHIVE_SAVE_AS = '' The location to save per-year archives of your posts.
MONTH_ARCHIVE_SAVE_AS = '' The location to save per-month archives of your posts.
DAY_ARCHIVE_SAVE_AS = '' The location to save per-day archives of your posts.
Time and Date¶
TIMEZONE¶ The timezone used in the date information, to generate Atom and RSS feeds.
DIRECT_TEMPLATES = ['index', 'categories', 'authors', 'archives'] List of templates that are used directly to render content. Typically direct templates are used to generate index pages for collections of content (e.g., tags and category index pages). If the tag and category collections are not needed, set DIRECT_TEMPLATES = ['index', 'archives']
PAGINATED_DIRECT_TEMPLATES = ['index'] Provides the direct templates that should be paginated.
EXTRA_TEMPLATES_PATHS = [] A list of paths you want Jinja2 to search for templates. Can be used to separate templates from the theme. Example: projects, resume, profile ... These templates need to use the DIRECT_TEMPLATES setting.
Metadata¶
DEFAULT_METADATA = {} The default metadata you want to use for all articles and pages.
FILENAME_METADATA = '(...)' The regex pattern used to extract metadata (such as the date) from the filename.
FEED_ATOM = None, i.e. no Atom feed Relative URL to output the Atom feed.
FEED_RSS = None, i.e. no RSS Relative URL to output the RSS feed.
FEED_ALL_ATOM = 'feeds/all.atom.xml' Relative URL to output the all-posts Atom feed: this feed will contain all posts regardless of their language.
FEED_ALL_RSS = None, i.e. no all-posts RSS Relative URL to output the all-posts RSS feed: this feed will contain all posts regardless of their language.
CATEGORY_FEED_RSS = None, i.e. no RSS Where to put the category RSS feeds.
TAG_FEED_ATOM = None, i.e. no tag feed Relative URL to output the tag Atom feed. It should be defined using a "%s" match in the tag name.
TAG_FEED_RSS = None, i.e. no RSS tag feed Relative URL to output the tag RSS feed.
DEFAULT_LANG = 'en' The default language to use.
CSS_FILE = 'main.css' Specify the CSS file you want to load.
It is possible to filter out messages by a template. Check out the source code to obtain a template. For example: [...] See Site generation for more
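A short pelicanconf.py sketch pulling together several of the settings described above; the domain and paths are placeholders:

# pelicanconf.py (illustrative values only)
SITEURL = 'https://example.com'      # your domain, no trailing slash
STATIC_PATHS = ['images']

CATEGORY_URL = 'category/{slug}.html'
CATEGORY_SAVE_AS = 'category/{slug}.html'
TAG_URL = 'tag/{slug}.html'
TAG_SAVE_AS = 'tag/{slug}.html'

# Only an index and archives page; no per-tag or per-category index pages.
DIRECT_TEMPLATES = ['index', 'archives']
AUTHOR_SAVE_AS = ''                  # single-author site: skip the Authors page

# Map C++ to cpp, and keep the dot when "keep dot" appears in a slug.
SLUG_SUBSTITUTIONS = (('C++', 'cpp'), ('keep dot', 'keep.dot', True))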
http://docs.getpelican.com/en/3.7.1/settings.html
2018-08-14T08:56:46
CC-MAIN-2018-34
1534221208750.9
[]
docs.getpelican.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Contains the output of DescribeImages. Namespace: Amazon.EC2.Model Assembly: AWSSDK.EC2.dll Version: 3.x.y.z The DescribeImagesResponse type exposes the following members. This example describes the specified AMI. var response = client.DescribeImages(new DescribeImagesRequest { ImageIds = new List<string> { "ami-5731123e" } }); List<Image> images = response.Images;
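For comparison only (this is not part of the .NET SDK reference), the same DescribeImages call in Python with boto3 looks roughly like this; the region and AMI ID are placeholders:

import boto3

# Placeholder region and image ID; DescribeImages maps to describe_images in boto3.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.describe_images(ImageIds=["ami-5731123e"])

for image in response["Images"]:
    print(image["ImageId"], image.get("Name"))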
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EC2/TDescribeImagesResponse.html
2018-08-14T09:02:15
CC-MAIN-2018-34
1534221208750.9
[]
docs.aws.amazon.com
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to DescribeClusterAsync. Namespace: Amazon.ElasticMapReduce Assembly: AWSSDK.ElasticMapReduce.dll Version: 3.x.y.z Container for the necessary parameters to execute the DescribeCluster service method. .NET Framework: Supported in: 4.5, 4.0, 3.5 Portable Class Library: Supported in: Windows Store Apps Supported in: Windows Phone 8.1 Supported in: Xamarin Android Supported in: Xamarin iOS (Unified) Supported in: Xamarin.Forms
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EMR/MEMRDescribeClusterDescribeClusterRequest.html
2018-08-14T09:02:20
CC-MAIN-2018-34
1534221208750.9
[]
docs.aws.amazon.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. Container for the parameters to the RejectVpcEndpointConnections operation. Rejects one or more VPC endpoint connection requests to your VPC endpoint service. Namespace: Amazon.EC2.Model Assembly: AWSSDK.EC2.dll Version: 3.x.y.z The RejectVpcEndpointConnectionsRequest type exposes the following members.
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/EC2/TRejectVpcEndpointConnectionsRequest.html
2018-08-14T09:02:54
CC-MAIN-2018-34
1534221208750.9
[]
docs.aws.amazon.com
Application Compatibility in the .NET Framework Introduction Compatibility is a very important goal of each .NET release. Compatibility ensures that each version is additive, so previous versions will still work. On the other hand, changes to previous functionality (to improve performance, address security issues, or fix bugs) can cause compatibility problems in existing code or existing applications that run under a later version. The .NET Framework recognizes retargeting changes and runtime changes. Retargeting changes affect applications that target a specific version of the .NET Framework but are running on a later version. Runtime changes affect all applications running on a particular version. Each app targets a specific version of the .NET Framework, which can be specified by: - Defining a target framework in Visual Studio. - Specifying the target framework in a project file. - Applying a TargetFrameworkAttribute to the source code. When running on a newer version than what was targeted, the .NET Framework will use quirked behavior to mimic the older targeted version. In other words, the app will run on the newer version of the Framework, but act as if it's running on the older version. Many of the compatibility issues between versions of the .NET Framework are mitigated through this quirking model. The version of the .NET Framework that an application targets is determined by the target version of the entry assembly for the application domain that the code is running in. All additional assemblies loaded in that application domain target that .NET Framework version. For example, in the case of an executable, the framework the executable targets is the compatibility mode all assemblies in that AppDomain will run under. Runtime changes Runtime issues are those that arise when a new runtime is placed on a machine and the same binaries are run, but different behavior is seen. If a binary was compiled for .NET Framework 4.0 it will run in .NET Framework 4.0 compatibility mode on 4.5 or later versions. Many of the changes that affect 4.5 will not affect a binary compiled for 4.0. This is specific to the AppDomain and depends on the settings of the entry assembly. Retargeting changes Retargeting issues are those that arise when an assembly that was targeting 4.0 is now set to target 4.5. Now the assembly opts into the new features as well as potential compatibility issues to old features. Again, this is dictated by the entry assembly, so the console app that uses the assembly, or the website that references the assembly. .NET Compatibility Diagnostics The .NET Compatibility Diagnostics are Roslyn-powered analyzers that help identify application compatibility issues between versions of the .NET Framework. This list contains all of the analyzers available, although only a subset will apply to any specific migration. The analyzers will determine which issues are applicable for the planned migration and will only surface those. Each issue includes the following information: The description of what has changed from a previous version. How the change affects customers and whether any workarounds are available to preserve compatibility across versions. An assessment of how important the change is. Application compatibility issues are categorized as follows: Version indicates when the change first appears in the framework. Some of the changes are introduced in a particular version and reverted in a later version; that is indicated as well. The type of change: The affected APIs, if any.
The IDs of the available diagnostics. Usage To begin, select the type of compatibility change below:
https://docs.microsoft.com/en-us/dotnet/framework/migration-guide/application-compatibility
2018-08-14T09:27:11
CC-MAIN-2018-34
1534221208750.9
[]
docs.microsoft.com
your theme's existing JS files. Method #1: Add JavaScript Using wp_footer This is the fastest and easiest way to get your custom code set up. This is also the method we use throughout the Popup Maker documentation. Simply input your custom code in the wrapper below (denoted: "// Your custom code goes here."), then add the entire code block to your functions.php, or even better, add it to a child theme or a custom site plugin, so you don't lose your changes when the theme is updated. Method #2: Add JavaScript Using a Plugin A less quick but still easy solution: this plugin offers great capability and flexibility for adding your custom JavaScript code. Using Simple Custom CSS and JS with Popup Maker To make sure the CSS or JS you use for Popup Maker works as intended, add your code to the plugin normally, but make sure your Options Pane is set up like this:
https://docs.wppopupmaker.com/article/84-getting-started-with-custom-js
2018-08-14T09:12:30
CC-MAIN-2018-34
1534221208750.9
[]
docs.wppopupmaker.com
Yes, you can manually change the Affiliation/Role of any registered user. Go to Data Studio, and in the search bar (top right-hand side), enter the name of the person whose Role you want to change. Select the check box beside the name. You'll see a few options appear at the top. Hover over the one that says Update Info, and click Edit Affiliation. You can change the affiliation of this person from the pop-up that appears.
https://docs.almabase.com/en/articles/1235202-can-i-change-someone-s-affiliation-from-student-to-alumni
2020-08-03T14:29:51
CC-MAIN-2020-34
1596439735812.88
[]
docs.almabase.com
base classes — bpy_struct, Sequence Sequence strip to group other strips as a single sequence strip Representation of alpha information in the RGBA pixels Animation end offset (trim end) Animation start offset (trim start)
https://docs.blender.org/api/blender_python_api_2_67_1/bpy.types.MetaSequence.html
2020-08-03T15:19:06
CC-MAIN-2020-34
1596439735812.88
[]
docs.blender.org
Azure Key Vault Developer's Guide Key Vault allows you to securely access sensitive information from within your applications: - Keys and secrets are protected without having to write the code yourself and you are easily able to use them from your applications. - You are able to have your customers own and manage their own keys so you can concentrate on providing the core software features. In this way, your applications will not own the responsibility or potential liability for your customers' tenant keys and secrets. - Your application can use keys for signing and encryption yet keeps the key management external from your application, allowing your solution to be suitable as a geographically distributed app. - Manage Key Vault certificates. For more information, see Certificates For more general information on Azure Key Vault, see What is Key Vault). Public Previews Periodically, we release a public preview of a new Key Vault feature. Try out these and let us know what you think via [email protected], our feedback email address. Creating and Managing Key Vaults Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed identities for Azure resources makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code. For more information on managed identities for Azure resources, see the managed identities overview. For more information on working with Azure AD, see Integrating applications with Azure Active Directory. Before working with keys, secrets or certificates in your key vault, you'll create and manage your key vault through CLI, PowerShell, Resource Manager Templates or REST, as described in the following articles: - Create and manage Key Vaults with CLI - Create and manage Key Vaults with PowerShell - Create and manage Key Vaults with the Azure portal - Create and manage Key Vaults with REST Set and retrieve secrets - Set and retrieve a secret with CLI - Set and retrieve a secret with PowerShell - Set and retrieve a secret with the Azure portal - Secrets operations with REST - Set and retrieve a secret with Python - Set and retrieve a secret with Java - Set and retrieve a secret with Node.js - Set and retrieve a secret with .NET (v4 SDK) - Create a key vault and add a secret via an Azure Resource Manager template Set and retrieve keys - Set and retrieve a key with CLI - Set and retrieve a key with PowerShell - Set and retrieve a key with the Azure portal - Keys operations with REST - Set and retrieve a key with Python Set and retrieve certificates - Set and retrieve a certificate with CLI - Set and retrieve a certificate with PowerShell - Set and retrieve a certificate with the Azure portal - Keys operations with REST - Set and retrieve a certificate with Python Coding with Key Vault The Key Vault management system for programmers consists of several interfaces. This section contains links to all of the languages as well as some code examples. Supported programming and scripting languages REST All of your Key Vault resources are accessible through the REST interface; vaults, keys, secrets, etc. Key Vault REST API Reference. .NET .NET API reference for Key Vault. 
Java Node.js In Node.js, the Key Vault management API and the Key Vault object API are separate. The following overview article gives you access to both. Azure Key Vault modules for Node.js Python Azure Key Vault libraries for Python Azure CLI Azure PowerShell Azure PowerShell for Key Vault Code examples For complete examples using Key Vault with your applications, see: - Azure Key Vault code samples - Code Samples for Azure Key Vault. How-tos The following articles and scenarios provide task-specific guidance for working with Azure Key Vault: - Change key vault tenant ID after subscription move - When you move your Azure subscription from tenant A to tenant B, your existing key vaults are inaccessible by the principals (users and applications) in tenant B. Fix this using this guide. - Accessing Key Vault behind firewall - To access a key vault your key vault client application needs to be able to access multiple end-points for various functionalities. - How to Generate and Transfer HSM-Protected Keys for Azure Key Vault - This will help you plan for, generate and then transfer your own HSM-protected keys to use with Azure Key Vault. - How to pass secure values (such as passwords) during deployment - When you need to pass a secure value (like a password) as a parameter during deployment, you can store that value as a secret in an Azure Key Vault and reference the value in other Resource Manager templates. - How to use Key Vault for extensible key management with SQL Server - The SQL Server Connector for Azure Key Vault enables SQL Server and SQL-in-a-VM to leverage the Azure Key Vault service as an Extensible Key Management (EKM) provider to protect its encryption keys for applications link; Transparent Data Encryption, Backup Encryption, and Column Level Encryption. - How to deploy Certificates to VMs from Key Vault - A cloud application running in a VM on Azure needs a certificate. How do you get this certificate into this VM today? - Deploying Azure Web App Certificate through Key Vault provides step-by-step instructions for deploying certificates stored in Key Vault as part of App Service Certificate offering. - Grant permission to many applications to access a key vault Key Vault access control policy supports up to 1024 entries. However you can create an Azure Active Directory security group. Add all the associated service principals to this security group and then grant access to this security group to Key Vault. - For more task-specific guidance on integrating and using Key Vaults with Azure, see Ryan Jones' Azure Resource Manager template examples for Key Vault. - How to use Key Vault soft-delete with CLI guides you through the use and lifecycle of a key vault and various key vault objects with soft-delete enabled. - How to use Key Vault soft-delete with PowerShell guides you through the use and lifecycle of a key vault and various key vault objects with soft-delete enabled. Integrated with Key Vault These articles are about other scenarios and services that use or integrate with Key Vault. -. - Azure Data Lake Store provides option for encryption of data that is stored in the account. For key management, Data Lake Store provides two modes for managing your master encryption keys (MEKs), which are required for decrypting any data that is stored in the Data Lake Store. You can either let Data Lake Store manage the MEKs for you, or choose to retain ownership of the MEKs using your Azure Key Vault account. 
You specify the mode of key management while creating a Data Lake Store account. - Azure Information Protection allows you to manage your own tenant key. For example, instead of Microsoft managing your tenant key (the default), you can manage your own tenant key to comply with specific regulations that apply to your organization. Managing your own tenant key is also referred to as bring your own key, or BYOK. Key Vault overviews and concepts - Key Vault soft-delete behavior describes a feature that allows recovery of deleted objects, whether the deletion was accidental or intentional. - Key Vault client throttling orients you to the basic concepts of throttling and offers an approach for your app. - Key Vault security worlds describes the relationships between regions and security areas. Social Supporting Libraries - Microsoft Azure Key Vault Core Library provides IKey and IKeyResolver interfaces for locating keys from identifiers and performing operations with keys. - Microsoft Azure Key Vault Extensions provides extended capabilities for Azure Key Vault.
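The guide above links to the Python quickstart for secrets; a condensed sketch with the azure-keyvault-secrets and azure-identity packages, where the vault URL and secret values are placeholders:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; DefaultAzureCredential picks up a managed identity,
# environment credentials, or your local Azure CLI login.
vault_url = "https://my-example-vault.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

client.set_secret("db-password", "s3cr3t-value")   # store a secret
retrieved = client.get_secret("db-password")       # read it back
print(retrieved.name, retrieved.value)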
https://docs.microsoft.com/en-us/azure/key-vault/general/developers-guide
2020-08-03T15:08:41
CC-MAIN-2020-34
1596439735812.88
[]
docs.microsoft.com
Deleting a Read Replica, for Redis (Cluster Mode Disabled) Replication Groups Information in the following topic applies to Redis (cluster mode disabled) replication groups only. As read traffic on your Redis replication group changes, you might want to add or remove read replicas. Removing a node from a Redis (cluster mode disabled) replication group is the same as just deleting a cluster, though there are restrictions: You cannot remove the primary from a replication group. If you want to delete the primary, do the following: Promote a read replica to primary. For more information on promoting a read replica to primary, see Promoting a Read Replica to Primary, for Redis (cluster mode disabled) Replication Groups. Delete the old primary. For a restriction on this method, see the next point. If Multi-AZ is enabled on a replication group, you can't remove the last read replica from the replication group. In this case, do the following: Modify the replication group by disabling Multi-AZ. For more information, see Modifying a Replication Group. Delete the read replica. You can remove a read replica from a Redis (cluster mode disabled) replication group using the ElastiCache console, the AWS CLI for ElastiCache, or the ElastiCache API. For directions on deleting a cluster from a Redis replication group, see the following:
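For the ElastiCache API route mentioned above, a hedged boto3 sketch (the region and node ID are placeholders): in a Redis (cluster mode disabled) replication group, each read replica is its own cache cluster, so removing the replica amounts to deleting that node's cluster.

import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # placeholder region

# Delete one read replica node from the replication group (placeholder node ID).
elasticache.delete_cache_cluster(CacheClusterId="my-replica-002")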
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.RemoveReadReplica.html
2020-08-03T15:27:43
CC-MAIN-2020-34
1596439735812.88
[]
docs.aws.amazon.com
Introduction¶ The UV Editor is used to map 2D assets like images/textures onto 3D objects and edit what are called UVs. UVs Explained¶ U is the left-right direction and V is the up-down direction. This image is thus in two dimensions (2D). We use U and V to refer to these "texture-space coordinates" instead of the normal X and Y, which are always used (along with Z) to refer to the three-dimensional space (3D). In the UV Editor you decide exactly how to map the faces of your object (in this case, a box) to a flat image. You have complete freedom in how to do this. (Continuing our previous example, imagine that, having initially laid the box flat on the tabletop, you now cut it into smaller pieces, somehow stretch and/or shrink those pieces, and then arrange them in some way upon a photograph that is also lying on that tabletop.) Example¶ Note On more complex models (as seen in the sphere above) an issue pops up where the faces cannot be cut; instead they are stretched in order to make them flat. This makes it easier to create UV maps, but sometimes adds distortion to the final mapped texture. Advantages of UVs¶ While procedural textures are useful – they never repeat themselves and always "fit" 3D objects – they are not sufficient for more complex or natural objects. For instance, the skin on a human head will never look quite right when procedurally generated. Wrinkles on a human head, or scratches on a car, do not occur in random places; once an image is attached, such details can be painted in Krita or your favorite painting application. Interface¶ Header¶ The header contains several menus and options for working with UVs. - Sync Selection Keeps UV and mesh part selections in sync. - Selection Modes Vertex Edge Face Island (Only available if Sync Selection is turned off) - Sticky Selection Mode When Sync Selection is disabled, these options control how UVs are selected. - View Tools for controlling how the content is displayed in the editor. See Navigating. - Select Tools for selecting UVs. - Image This contains options for the Image Editor. - UVs Contains tools for Unwrapping Meshes and Editing UVs. - Pivot Similar to working with pivot points in the 3D Viewport. - UV Snapping Similar to Snapping in the 3D Viewport. - Proportional Editing See Proportional Editing. - Active UV Texture Map Selector Select which UV texture to use.
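As a scripting counterpart to the interactive workflow described above (this is an illustration, not part of this manual page), a minimal Blender Python sketch that unwraps the active mesh object so its UVs appear in the UV Editor:

import bpy

# Assumes the active object is a mesh (e.g. the default cube).
obj = bpy.context.active_object
assert obj is not None and obj.type == 'MESH', "select a mesh object first"

bpy.ops.object.mode_set(mode='EDIT')        # unwrapping happens in Edit Mode
bpy.ops.mesh.select_all(action='SELECT')    # unwrap every face
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')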
https://docs.blender.org/manual/ru/latest/editors/uv/introduction.html
2020-08-03T15:54:34
CC-MAIN-2020-34
1596439735812.88
[]
docs.blender.org
Notifications in Nova¶ Similarly. Unversioned notifications¶. Versioned notifications¶: the payload version defined by the nova_object.versionfield of the payload will be increased if and only if the syntax or the semantics of the nova_object.datafield of the payload is changed. a minor version bump indicates a backward compatible change which means that only new fields are added to the payload so a well written consumer can still consume the new payload without any change. a major version bump indicates a backward incompatible change of the payload which can mean removed fields, type change, etc in the payload. there is an additional field ‘nova_object.name’ for every payload besides ‘nova_object.data’ and ‘nova_object.version’. This field contains the name of the nova internal representation of the payload type. Client code should not depend on this name. There is a Nova configuration parameter notifications.notification_format that can be used to specify which notifications are emitted by Nova. The versioned notifications are emitted to a different topic than the legacy notifications. By default they are emitted to ‘versioned_notifications’ but it is configurable in the nova.conf with the notifications.versioned_notifications_topics config option. A presentation from the Train summit goes over the background and usage of versioned notifications, and provides a demo. How to add a new versioned notification¶. The nova.notifications.objects.base module¶ - class nova.notifications.objects.base. EventType(object, action, phase=None) Bases: nova.notifications.objects.base.NotificationObject to_notification_event_type_field() Serialize the object to the wire format. - class nova.notifications.objects.base. NotificationBase(**kwargs) Bases: nova.notifications.objects.base.NotificationObject Base class for versioned notifications. Every subclass shall define a ‘payload’ field. emit(context) Send the notification. - class nova.notifications.objects.base. NotificationObject(**kwargs) Bases: nova.objects.base.NovaObject Base class for every notification related versioned object. - class nova.notifications.objects.base. NotificationPayloadBase Bases: nova.notifications.objects.base.NotificationObject Base class for the payload of versioned notifications. populate_schema(set_none=True, **kwargs) Populate the object based on the SCHEMA and the source objects - Parameters kwargs – A dict contains the source object at the key defined in the SCHEMA - class nova.notifications.objects.base. NotificationPublisher(host, source) Bases: nova.notifications.objects.base.NotificationObject nova.notifications.objects.base. notification_sample(sample) Class decorator to attach the notification sample information to the notification object for documentation generation purposes. - Parameters sample – the path of the sample json file relative to the doc/notification_samples/ directory in the nova repository root. source. What should be in the notification payload¶ This is just a guideline. You should always consider the actual use case that requires the notification. Always include the identifier (e.g. uuid) of the entity that can be used to query the whole entity over the REST API so that the consumer can get more information about the entity. You should consider including those fields that are related to the event you are sending the notification about. For example if a change of a field of the entity triggers an update notification then you should include the field to the payload. 
An update notification should contain information about what part of the entity is changed. Either by filling the nova_object.changes part of the payload (note that it is not supported by the notification framework currently) or sending both the old state and the new state of the entity in the payload. You should never include a nova internal object in the payload. Create a new object and use the SCHEMA field to map the internal object to the notification payload. This way the evolution of the internal object model can be decoupled from the evolution of the notification payload. Important This does not mean that every field from internal objects should be mirrored in the notification payload objects. Think about what is actually needed by a consumer before adding it to a payload. When in doubt, if no one is requesting specific information in notifications, then leave it out until someone asks for it. The delete notification should contain the same information as the create or update notifications. This makes it possible for the consumer to listen only to the delete notifications but still filter on some fields of the entity (e.g. project_id). What should NOT be in the notification payload¶ Generally anything that contains sensitive information about the internals of the nova deployment, for example fields that contain access credentials to a cell database or message queue (see bug 1823104). Existing versioned notifications¶ Note Versioned notifications are added in each release, so the samples represented below may not necessarily be in an older version of nova. Ensure you are looking at the correct version of the documentation for the release you are using.
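The notes above state that versioned notifications are emitted to the 'versioned_notifications' topic. As an illustration only (not taken from the Nova documentation), here is a minimal consumer sketch using oslo.messaging that prints the payload name and version of each incoming notification; the transport URL is a placeholder:

import oslo_messaging
from oslo_config import cfg

class NotificationEndpoint(object):
    # Called for notifications emitted at INFO priority.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type,
              payload.get("nova_object.name"),
              payload.get("nova_object.version"))

# Placeholder transport URL; point it at the message bus Nova publishes to.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://guest:guest@localhost:5672/")
targets = [oslo_messaging.Target(topic="versioned_notifications")]

listener = oslo_messaging.get_notification_listener(
    transport, targets, [NotificationEndpoint()], executor="threading")
listener.start()
listener.wait()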
https://docs.openstack.org/nova/train/reference/notifications.html
2020-08-03T15:39:32
CC-MAIN-2020-34
1596439735812.88
[]
docs.openstack.org
A mixed reality platform developed by Microsoft, built around the API of Windows 10. Holograms are represented by GameObjects in Unity (a GameObject is the fundamental object in Unity scenes, which can represent characters, props, scenery, cameras, waypoints, and more; a GameObject's functionality is defined by the Components attached to it). More information on holograms is available on the Microsoft developer website. Windows Mixed Reality (WMR) immersive headsets feature an opaque display to block out the physical world and surround you in a 360 degree virtual environment. Many Virtual Reality devices (a system that immerses users in an artificial 3D world of realistic images and sounds, using a headset and motion tracking), such as the HTC Vive or Oculus Rift (Oculus is a VR platform for making applications for Rift and mobile VR devices), use sensors built into headsets and cameras.
https://docs.unity3d.com/2018.2/Documentation/Manual/wmr_sdk_overview.html
2020-08-03T14:33:02
CC-MAIN-2020-34
1596439735812.88
[]
docs.unity3d.com
The standard shader presents you with a list of material parameters. These parameters vary slightly depending on whether you have chosen to work in the Metallic workflow mode or the Specular workflow mode. Most of the parameters are the same across both modes, and this page covers all the parameters for both modes. These parameters can be used together to recreate the look of almost any real-world surface.
https://docs.unity3d.com/ru/2017.2/Manual/StandardShaderMaterialParameters.html
2020-08-03T15:45:22
CC-MAIN-2020-34
1596439735812.88
[]
docs.unity3d.com
Traffic Usage Traffic Usage APIs enable SwiftFederation partners or customers to get 5-minute-interval volume usage, bandwidth usage and request usage for billing purposes. Please note, these APIs can only return usage data for logs that are at least 48 hours older than the current time. Traffic Usage APIs include: - Get Bandwidth Usage - Get Volume Usage - Get Request Number Usage
https://cdn-docs.swiftfederation.com/TrafficUsage/
2020-08-03T15:17:51
CC-MAIN-2020-34
1596439735812.88
[]
cdn-docs.swiftfederation.com
Centrifuge custom index Posted in Upload your private data by Rosario Brancaccio Thu Jan 02 2020 13:55:18 GMT+0000 (Coordinated Universal Time)·1·Viewed 276 times Hi Rosario, Here's the suggested solution from our Bioinformatics Team: The database from which the sequences can be downloaded in the Reference Index Creation workflow is fixed to RefSeq. However, you can use the Centrifuge Download tool to download sequences from the GenBank database, as you already did locally, and then use the Centrifuge Build tool to build the index. The problem you are facing when using this tool is caused by providing the output of the Centrifuge Build tool, which you ran locally, directly to the Centrifuge tool on the cloud. The Centrifuge tool on the CGC takes the centrifuge index in tar.gz and tar format. So, there are two options: you can archive the index files locally (e.g. tar -cf basename-string-value.tar basename-string-value.1.cf basename-string-value.2.cf basename-string-value.3.cf) and upload basename-string-value.tar to the CGC, and then run the Centrifuge Classifier tool with that file. Best, Marko
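If you prefer to package the index from Python rather than with the tar command shown above, a small tarfile sketch (the file names follow the basename-string-value example in the reply):

import tarfile

basename = "basename-string-value"
index_files = [f"{basename}.{i}.cf" for i in (1, 2, 3)]

# Create basename-string-value.tar containing the three Centrifuge index files.
with tarfile.open(f"{basename}.tar", "w") as tar:
    for path in index_files:
        tar.add(path)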
https://docs.cancergenomicscloud.org/discuss/5e0df646be1e1c001e4ae0db
2020-08-03T15:18:19
CC-MAIN-2020-34
1596439735812.88
[]
docs.cancergenomicscloud.org
Proxy tab It is also important for a company to check the users' navigation behaviour. Proxy servers have a very important role in a network, acting as an intermediary for requests from clients seeking resources from other servers. The widgets in this tab draw information from the proxy.all.access table.. Proxy traffic evolution You can check the proxy traffic history through a heat calendar that shows the daily amount of proxy traffic over the last 12 months. The line chart next to it shows proxy traffic over the last 24 hours. User behavior Select the Expand section option to see the following widgets: Users by accessed hosts The Voronoi diagram is a breakdown of users grouped by hosts over the period specified. Most accessed remote hosts This table below shows a count of users with the hosts they accessed, along with the corresponding source IP address. Enter a user or host in the search box at the top of the table to filter the contents of the list. You can also use this table to filter the contents of the Users by accessed hosts Voronoi diagram, the Most active users pie chart and the Distribution of users, hosts and IPs graph in this section. Select a user or host name and those widgets will be filtered out using the selected value. To remove the filter, select the x icon in the blue bar that appears in the filtered widgets. Users by number of connections Check the proportion of users over the period specified. Users, hosts and IPs This graph is a representation of the different relationships between users, IP addresses and accessed hosts. Most rejected users and categories These tables display a count of the most denied IP addresses, with their corresponding users and categories. Top active users Check the count of IP addresses and users with the highest number of connections over the selected period. Users by OS, family and device This Voronoi diagram shows users grouped by their corresponding operating systems, families and devices. Navigation behavior Select the Expand section option to see the following widgets: Distribution of accesses This diagram groups the different accesses by top-level domain, sub-domain and host. URLs with the highest entropy This table lists the URLs with the highest Shannon entropy over the selected period. You can also use this table to filter the contents of the Traffic by domains graph in this section. Select a user or host name and the widget will be filtered out using the selected value. To remove the filter, select the x icon in the blue bar that appears in the filtered widgets. Accessed top-level and sub-domains This pie chart shows the breakdown of most accessed first-level domains over the selected period. See the relationships between accessed top-level domains and sub-domains over the selected period. Most denied categories These pie charts show the most denied categories and web categories over the selected period. Top machines with TCP errors This table shows a count of the machines with the highest number of errors, including the corresponding types and categories. Accesses by request status, category and status code This Voronoi diagram is a breakdown of accesses grouped by request status, category and status code. Result codes This bar chart shows the frequency of cache result codes over the selected period.
https://docs.devo.com/confluence/ndt/applications/security-insights/navigating-security-insights/proxy-tab
2020-08-03T14:23:13
CC-MAIN-2020-34
1596439735812.88
[]
docs.devo.com
. Reference¶ - class...
https://docs.h5py.org/en/2.7.1/high/attr.html
2020-08-03T15:21:23
CC-MAIN-2020-34
1596439735812.88
[]
docs.h5py.org
Create a segment dimension To create a segment dimension, you begin by making a selection within a workspace and then adding the segment to a visualization. To create a segment dimension - Add a segment visualization to the workspace. For example: - Add visualizations to your workspace that you want to use to define your segment, then make the desired selections to define your segment. - In the segment visualization, right-click the label of the segment after which you want the new segment to be added and click Add Segment. To create a new first segment, right-click the Segments label and click Add Segment. A new segment (named New Segment) appears in the visualization. The Other segment represents all of the data not included in your defined segments: it is effectively the difference between your dataset data and your segment data. - Right-click the newly created segment and click Rename Segment. - Type a descriptive name for your new segment in the name field. If a metric value, such as a particular visitor in Site, meets the criteria of multiple segments, the metric value is included in only the first listed segment that it matches. To save the segment dimension - Right-click the Segments label and click Save Dimension. The Save Dimension As window appears. The default save location is the User\<profile name>\Dimensions folder. - In the File name field, type a descriptive name for the segments that you are saving as a dimension and click Save. You can access the segment dimension whenever you are working with a visualization. You also can export data associated with the elements in your saved dimension using the segment export feature. For more information about the segment export feature and instructions to configure it for your needs, see Configuring Segments for Export.
https://docs.adobe.com/content/help/en/data-workbench/using/client/analysis-visualizations/segments/c-create-seg-dim.html
2020-08-03T16:06:49
CC-MAIN-2020-34
1596439735812.88
[]
docs.adobe.com
PaaS provisioning BMC Cloud Lifecycle Management lets you provision a software stack on a computing platform. This functionality is called platform-as-a-service (PaaS) provisioning. BMC Cloud Lifecycle Management can provision the following types of public cloud PaaS platforms: - Databases (for example, Amazon or Microsoft Azure SQL Server) - Application Containers (for example, Cloud Foundry or Heroku) - Custom Containers Using BMC Database Automation, BMC Cloud Lifecycle Management can provision the following types of databases as platforms (although there are some limitations on the operating systems available for each supported version): - Oracle RAC - Oracle Restart - Oracle Standalone Note For assistance on adding a new custom PaaS provider of type database, application container, or custom container, contact BMC Professional Services. The following topics describe the process to set up and use PaaS provisioning: The following video (6:09) gives an overview of PaaS provisioning in BMC Cloud Lifecycle Management.
https://docs.bmc.com/docs/cloudlifecyclemanagement/45/administering-the-product/paas-provisioning
2020-08-03T14:16:23
CC-MAIN-2020-34
1596439735812.88
[]
docs.bmc.com
The Unlock Document node allows you to select a document that has previously been locked and alter the settings so that it becomes accessible. Only a System Administrator, a member of the Document Administrators group, a Knowledge Center Administrator, or the person who created the lock can break it. The Unlock Document smart service is available as an expression function that can be executed inside a saveInto on an Interface Component or as part of a Web API. a!unlockDocument(document, onSuccess, onError) onSuccess: the saves or response to execute after the document is successfully unlocked; supports a!save() or a!httpResponse(). onError: the saves or response to execute if the smart service is unable to unlock the document; supports a!save() or a!httpResponse().
https://docs.appian.com/suite/help/20.2/Unlock_Document_Smart_Service.html
2020-08-03T15:46:24
CC-MAIN-2020-34
1596439735812.88
[]
docs.appian.com
placement-manage¶ Description¶ placement-manage is used to perform administrative tasks with the placement service. It is designed for use by operators and deployers. Options¶ The standard pattern for executing a placement-manage command is: placement-manage [-h] [--config-dir DIR] [--config-file PATH] <category> <command> [<args>] Run without arguments to see a list of available command categories: placement-manage You can also run with a category argument such as db to see a list of all commands in that category: placement-manage db Configuration options (for example the [placement_database]/connection URL) are by default found in a file at /etc/placement/placement.conf. The config-dir and config-file arguments may be used to select a different file. The following sections describe the available categories and arguments for placement-manage. Placement Database¶ placement-manage db version Print the current database version. placement-manage db sync Upgrade the database schema to the most recent version. The local database connection is determined by [placement_database]/connectionin the configuration file used by placement-manage. If the connectionoption is not set, the command will fail. The defined database must already exist. placement-manage db stamp <version> Stamp the revision table with the given revision; don’t run any migrations. This can be used when the database already exists and you want to bring it under alembic control. placement-manage db online_data_migrations [--max-count] Perform data migration to update all live data. --max-countcontrols the maximum number of objects to migrate in a given call. If not specified, migration will occur in batches of 50 until fully complete. Returns exit code called after upgrading database schema and placement services on all controller nodes.. For example: $ placement-manage db online_data_migrations Running batches of 50 until complete 2 rows matched query create_incomplete_consumers, 2 migrated +---------------------------------------------+-------------+-----------+ | Migration | Total Found | Completed | +---------------------------------------------+-------------+-----------+ | set_root_provider_ids | 0 | 0 | | create_incomplete_consumers | 2 | 2 | +---------------------------------------------+-------------+-----------+ In the above example, the create_incomplete_consumersmigration found two candidate records which required a data migration. Since --max-countdefaults to 50 and only two records were migrated with no more candidates remaining, the command completed successfully with exit code 0.
https://docs.openstack.org/placement/ussuri/cli/placement-manage.html
2020-08-03T15:49:01
CC-MAIN-2020-34
1596439735812.88
[]
docs.openstack.org
Proxy Configuration NOTE - InsightOps and Proxy Support InsightOps does not currently support the agent-based proxy configuration procedure detailed here. As an alternative, InsightOps does support a logging.json proxy definition with additional key-value pairs. See the InsightOps - Configure the Insight Agent to Send Logs page for instructions. Insight Agent versions 2.3 and later are proxy-aware and comply with proxy routing definitions for the purpose of communicating with the Insight platform at and its various subdomains. The agent follows the highest priority proxy definition found, whether configured at the operating system level or in the file structure of the agent, according to an obedience hierarchy. Requirements The Insight Agent can only communicate through proxies that meet the following protocol requirements and authentication schemes. Supported HTTPS Proxies Your proxy must support the Request For Comments (RFC) 2817 standard, which specifies the HTTP CONNECT verb. Authentication Schemes Currently, only BASIC authentication is supported. Agent Behavior With Proxies If an HTTPS proxy is detected, the Insight Agent adds it as an additional communication route. Existing routes to any deployed Collectors or standard direct communication routes to the Insight platform are not removed in the process. Although the agent automatically determines the most efficient route through a calculated efficiency metric, proxy routes are always used if they can reach the Insight platform at the time of transmission. NOTE Higher priority proxy definitions will override lower priority definitions. Priority 1: proxy.config File You can automatically configure the proxy.config file during a command line installation of the Insight Agent or manually after the agent has been installed. TIP The command line argument method works with both the legacy certificate package installer and the preferred token-based installer. To configure the proxy.config file at install time with the command line: - Download the Insight Agent installer for the operating system of your choice. - Open a command line interface or terminal. - Run one of the following installation commands according to your endpoint operating system and include the proxy argument as shown (10.1.2.3:8443 appears here as an example IP address and port combination): - Linux - sudo ./agent_installer.sh install_start --https-proxy=10.1.2.3:8443 - Mac - sudo ./agent_installer.sh install_start --https-proxy=10.1.2.3:8443 - Windows - msiexec /i agentInstaller-x86_64.msi HTTPSPROXY=10.1.2.3:8443 Alternatively, you can configure proxy awareness on individual agents that you have already installed by manually creating the proxy.config file in the agent installation directory. To manually create and configure the proxy.config file: - Browse to the following folder in your agent installation directory according to your operating system: - Linux - /opt/rapid7/ir_agent/components/bootstrap/common/ - Windows - C:\Program Files\Rapid7\Insight Agent\components\bootstrap\common\ - Mac - /opt/rapid7/ir_agent/components/bootstrap/common/ - Create a new file named proxy.config. .config is the necessary file extension. - Open proxy.config in a text editor and define your proxy address according to this example syntax: { "https": "10.1.2.3:8443" } This JSON string stipulates that the agent must tunnel all HTTPS traffic to the address and port that you specify.
If you want to specify your proxy address with basic authentication, you can do so by modifying the proxy declaration string to include a username and password. Substitute <username> and <password> with the appropriate values according to this example: { "https": "<username>:<password>@10.1.2.3:8443" } Priority 2: Environment Variable While lower in priority than the manual proxy.config file configuration method, the agent regards the HTTPS_PROXY environment variable as the highest priority system-level proxy definition. Additionally, the agent also obeys any destinations excluded by the NO_PROXY environment variable. NOTE - This method is intended for Linux assets Configuring the HTTPS_PROXY environment variable is not necessary for Windows or Mac assets, nor do we recommend it given its constraints and procedural complexity. If you need to configure proxy rules at the system level for these operating systems, follow the instructions detailed in the Priority 3: Operating System Configuration section of this article. However, the HTTPS_PROXY environment variable is the only system-level proxy definition source available on Linux assets. If you need to configure proxy rules at the system level for a Linux host, you must set the HTTPS_PROXY environment variable according to the instructions detailed in this section. Linux HTTPS_PROXY Environment Variable Procedure The HTTPS_PROXY environment variable must be configured in the Insight Agent service file in order to save properly and persist as the host is powered on and off. The agent service file is located in /etc/systemd/system/ir_agent.service. To set the HTTPS_PROXY environment variable using the agent service file on a Linux host: - Open a terminal on the Linux host. - Navigate to your ir_agent.service file and open it with a text editor, such as vi: vi /etc/systemd/system/ir_agent.service - The output displays a series of categorical tags with variables under each tag. Under the [Service] tag, add the following line as shown and specify your desired IP address and port (10.1.2.3:8443 appears here as an example IP address and port combination): Environment="HTTPS_PROXY=10.1.2.3:8443" - Save and close your changes. - To finish, reload systemctl and restart the agent service: systemctl daemon-reload && service ir_agent restart
Run the following command in a command prompt to set a WinHTTP proxy definition. Substitute <ip_address_or_domain> and <port> with the necessary values as shown: netsh winhttp set proxy <ip_address_or_domain>:<port> If you need to specify exclusions, append the previous command with an exclusion list according to the following syntax. Substitute instances of <exclusion> with the necessary values and delimit multiple exclusions with a ; as shown: netsh winhttp set proxy <ip_address_or_domain>:<port> "<exclusion1>;<exclusion2>" TIP You can also mass-deploy WinHTTP proxy definition settings with a Group Policy. See the following Microsoft TechNet blog post for more information and instructions: Given all these system-level proxy configuration options, the agent also follows a Windows-specific obedience hierarchy: Mac Configure proxy definitions in the "Proxies" tab: - Click System Preferences. - In the "System Preferences" window, click Network. - In the "Network" window, click Advanced. - Click the Proxies tab. - Select the Secure Web Proxy (HTTPS) option. - Specify an address in the "Secure Web Proxy Server" field. - Specify any exceptions as necessary.
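If you script agent deployments, the proxy.config file from Priority 1 can be written with a few lines of Python; the proxy address and the Linux install path below follow the examples on this page, and the credentials are placeholders:

import json
from pathlib import Path

# Example proxy value from this page; drop the "user:pass@" prefix if you do
# not need BASIC authentication.
proxy = {"https": "svc-user:s3cr3t@10.1.2.3:8443"}

config_path = Path("/opt/rapid7/ir_agent/components/bootstrap/common/proxy.config")
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(proxy))
print("wrote", config_path)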
https://docs.rapid7.com/insight-agent/proxy-configuration/
2020-08-03T15:18:39
CC-MAIN-2020-34
1596439735812.88
[array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46/insightagent/images/windows_proxy_config.png', None], dtype=object) array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46/insightagent/images/macos_proxy_config.png', None], dtype=object) ]
docs.rapid7.com
GET /thread/messageUrl This API is used to get the message URL of a thread. The message URL is the URL to download the file which contains the thread message. Target users: anyone who has READ permission to the project. The resulting URL will be signed with Content-Type = "text/plain; charset=utf-8"; therefore, this header must be included with the GET on the URL. Resource URL
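A small Python sketch of the download step, fetching the thread message from the pre-signed URL returned by this API; the URL value is a placeholder, and the Content-Type header is set as required above:

import requests

# Placeholder: use the pre-signed URL returned by GET /thread/messageUrl.
signed_url = "https://example.org/presigned-message-url"

resp = requests.get(
    signed_url,
    headers={"Content-Type": "text/plain; charset=utf-8"},  # must match the signed header
)
resp.raise_for_status()
print(resp.text)  # the thread message body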
https://docs.synapse.org/rest/GET/thread/messageUrl.html
2020-08-03T14:12:12
CC-MAIN-2020-34
1596439735812.88
[]
docs.synapse.org
Default variable details¶ Some of debops.ferm default variables have more extensive configuration than simple strings or lists, here you can find documentation and examples for them. ferm__rules¶ The ferm__*_rules variables are YAML lists which define what firewall rules are configured on a host. The rules are combined together in the ferm__combined_rules variable which defines the order of the rule variables and therefore how they will affect each other. Each entry in the ferm__*_rules lists is a YAML dictionary. The entry needs to have the name parameter that specifies the rule name, otherwise it will be skipped. The result is stored as ferm__parsed_rules variable. This order allows modification of the default rules as well as rules defined by other Ansible roles using Ansible inventory variables. The rules are stored in the /etc/ferm/rules.d/ directory and the filename format is: /etc/ferm/rules.d/<weight>_rule_<name>.conf The rule "weight" is determined by a given rule type which can be overridden if needed, see the type, weight and weight_class parameters for more details. Each rule defined in a dictionary uses specific parameters. The parameters described here are general ones, mostly usable on the main "level" and are related to management of rule files. The parameters related to specific ferm rules are described in Firewall Rule Definitions documentation. name Name of the firewall rule to configure. An example rule definition: ferm__rules: - name: 'accept_all_connections' type: 'accept' accept_any: True rules Either a string or a YAML text block that contains raw ferm configuration options, or a list of YAML dictionaries which specify firewall rules. If this parameter is not specified, role will try and generate rules automatically based on other parameters specified on the "first level" of a given rule definition. Most of the other parameters can be specified on the "second level" rules and will apply to a given rule in the list. Example custom rule definition that restarts nginx after firewall is modified: ferm__rules: - name: 'restart_nginx': type: 'post-hook' rules: '@hook post "type nginx > /dev/null && systemctl restart nginx || true";' Example list of rule definitions which will open access to different service ports; rules will be present in the same file: ferm__rules: - name: 'allow_http_https' rules: - dport: 'http' accept_any: True - dport: 'https' accept_any: True rule_state Optional. Specify the state of the firewall rule file, or one of the rules included in that file. Supported states: present: default. The rule file will be created if it doesn't exist, a rule will be present in the file. absent: The rule file will be removed, a rule in the file will not be generated. ignore: the role will not change the current state of the configuration file. This value does not have an effect on the rules inside the file. comment - Optional. Add a comment in the rule configuration file, either as a string or as a YAML text block. template - Optional. Name of the template to use to generate the firewall rule file. Currently only one template is available, ruleso this option is not useful yet. type Optional. Specify the rule type as a name, for example acceptor reject. Different rule types can use different rule parameters, the rule type also affects the "weight" used to order the configuration files. Weight of the different rules is specified in the ferm__default_weight_mapvariable and can be overridden using the ferm__weight_mapvariable. 
A list of known rule types can be found in the Firewall Rule Definitions documentation. weight_class - Optional. Override the rule type with another type, to change the sort order of the configuration files. This parameter does not affect the ferm configuration template, only the resulting filename. weight - Optional. Additional positive or negative number (for example 2 or -2) which will be added to the rule weight, affecting the file sorting order. ferm_input_list¶ This is a set of legacy debops.ferm variables, kept so that older roles remain usable alongside the new variables. You should use the ferm__*_rules variables in new configuration instead; the legacy variables will be removed at some point. List of ferm INPUT rules that should be present or absent in the firewall rule set. The same format is also used for ferm_input_group_list, ferm_input_host_list and ferm_input_dependent_list. Each rule is defined as a YAML dict with the following keys: type - Name of the template file to use, required. Format: <type>.conf.j2 dport - List of destination ports to manage, required. name - Optional. Custom name used in the generated rule filename weight - Optional. Helps with file sorting in the rule directory filename - Optional. Custom filename instead of a generated one rule_state - Optional. State of the rule. Defaults to present. Possible values: present or absent Depending on the chosen type, many additional variables are supported. Please check the template files located in the templates/etc/ferm/ferm.d directory.
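The legacy key list above does not include a complete entry, so here is a minimal sketch of one. The dport_accept template name is an assumption for illustration; check the templates actually shipped in templates/etc/ferm/ferm.d on your version of the role before relying on it.

ferm_input_list:

  # Accept incoming SSH connections; the generated filename uses the custom name.
  - type: 'dport_accept'          # assumed template: dport_accept.conf.j2
    dport: [ 'ssh' ]
    name: 'allow_ssh'
    weight: '30'
    rule_state: 'present'

The same dictionary layout applies to ferm_input_group_list, ferm_input_host_list and ferm_input_dependent_list.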
https://docs.debops.org/en/master/ansible/roles/ferm/defaults-detailed.html
2020-08-03T14:24:22
CC-MAIN-2020-34
1596439735812.88
[]
docs.debops.org
Optimize Windows 10 update delivery with Configuration Manager Applies to: Configuration Manager (current branch). Configuration Manager provides several technologies that reduce download size and network load to optimize update delivery. This article explains these technologies, compares them, and provides recommendations to help you make decisions on which one to use. Windows 10 provides several types of updates. For more information, see Update types in Windows Update for Business. This article focuses on Windows 10 quality updates with Configuration Manager. Express update delivery Windows 10 quality update downloads can be large. Every package contains all previously released fixes to ensure consistency and simplicity. Microsoft has been able to reduce the size of Windows 10 update content that each client downloads with a feature called express. Express is used today by millions of devices that pull updates directly from the Windows Update service and significantly reduces the download size. This benefit is also available to customers whose clients don't directly download from the Windows Update service. Configuration Manager added support for express installation files of Windows 10 quality updates in version 1702. However, for the best experience it's recommended that you use Configuration Manager version 1802 or later. For the best performance in download speeds, it's also recommended that you use Windows 10, version 1703 or later. Note The express version content is considerably larger than the full-file version. An express installation file contains all of the possible variations for each file it's meant to update. As a result, the required amount of disk space increases for updates in the update package source and on distribution points when you enable express support in Configuration Manager. Even though the disk space requirement on the distribution points increases, the content size that clients download from these distribution points decreases. Clients only download the bits they require (deltas) but not the whole update. Peer-to-peer content distribution Even though clients download only the parts of the content that they require, you can further expedite Windows updates in your environment by using peer-to-peer content distribution. Leveraging peers as a download source for quality updates can be beneficial for environments where local distribution points aren't present in remote offices. This behavior prevents the need for all clients to download content from a remote distribution point across a slow WAN link. Using peers can also be beneficial when clients fall back to the Windows Update service. Only one peer is needed to download update content from the cloud before making it available to other devices. Configuration Manager supports many peer-to-peer technologies, including the following: - Windows Delivery Optimization - Configuration Manager peer cache - Windows BranchCache The next sections provide further information on these technologies. Windows Delivery Optimization Delivery Optimization is the main download technology and peer-to-peer distribution method built into Windows 10. Windows 10 clients can get content from other devices on their local network that download the same updates. Using the Windows options available for Delivery Optimization, you can configure clients into groups. This grouping allows your organization to identify devices that are possibly the best candidates to fulfill peer-to-peer requests. 
Delivery Optimization significantly reduces the overall bandwidth that's used to keep devices up-to-date while speeding up the download time. Note Delivery Optimization is a cloud-managed solution. Internet access to the Delivery Optimization cloud service is a requirement to utilize its peer-to-peer functionality. For information about the needed internet endpoints, see Frequently asked questions for Delivery Optimization. For the best results, you may need to set the Delivery Optimization download mode to Group (2) and define Group IDs. In group mode, peering can cross internal subnets between devices that belong to the same group including devices in remote offices. Use the Group ID option to create your own custom group independently of domains and AD DS sites. Group download mode is the recommended option for most organizations looking to achieve the best bandwidth optimization with Delivery Optimization. Manually configuring these Group IDs is challenging when clients roam across different networks. Configuration Manager version 1802 added a new feature to simplify management of this process by integrating boundary groups with Delivery Optimization. When a client wakes up, it talks to its management point to get policies, and provides its network and boundary group information. Configuration Manager creates a unique ID for every boundary group. The site uses the client's location information to automatically configure the client's Delivery Optimization Group ID with the Configuration Manager boundary ID. When the client roams to another boundary group, it talks to its management point, and is automatically reconfigured with a new boundary group ID. With this integration, Delivery Optimization can utilize the Configuration Manager boundary group information to find a peer from which to download updates. Delivery Optimization starting in version 1910 Starting with Configuration Manager version 1910, you can use Delivery Optimization for the distribution of all Windows update content for clients running Windows 10 version 1709 or later, not just express installation files. To use Delivery Optimization for all Windows update installation files, enable the following software updates client settings: - Allow clients to download delta content when available set to Yes. - Port that clients use to receive requests for delta content set to 8005 (default) or a custom port number. Important - Delivery Optimization must be enabled (default) and not bypassed. For more information, see Windows Delivery Optimization reference. - Verify your Delivery Optimization client settings when changing your software updates client settings for delta content. - Delivery Optimization can't be used for Office 365 client updates if Office COM is enabled. Office COM is used by Configuration Manager to manage updates for Office 365 clients. You can deregister Office COM to allow the use of Delivery Optimization for Office 365 updates. When Office COM is disabled, software updates for Office 365 are managed by the default Office Automatic Updates 2.0 scheduled task. This means that Configuration Manager doesn't dictate or monitor the installation process for Office 365 updates. Configuration Manager will continue to collect information from hardware inventory to populate Office 365 Client Management Dashboard in the console. For information about how to deregister Office COM, see Enable Office 365 clients to receive updates from the Office CDN instead of Configuration Manager. 
- When using a CMG for content storage, the content for third-party updates won't download to clients if the Download delta content when available client setting is enabled. Configuration Manager peer cache Peer cache is a feature of Configuration Manager that enables clients to share content with other clients directly from their local Configuration Manager cache. Peer cache doesn't replace the use of other peer caching solutions like Windows BranchCache. It works together with them to provide more options for extending traditional content deployment solutions such as distribution points. Peer cache doesn't rely upon BranchCache. If you don't enable or use BranchCache, peer cache still works. Note Clients can only download content from peer cache clients that are in their current boundary group. Windows BranchCache BranchCache is a bandwidth optimization technology in Windows. Each client has a cache, and acts as an alternate source for content. Devices on the same network can request this content. Configuration Manager can use BranchCache to allow peers to source content from each other versus always having to contact a distribution point. Using BranchCache, files are cached on each individual client, and other clients can retrieve them as needed. This approach distributes the cache rather than having a single point of retrieval. This behavior saves a significant amount of bandwidth, while reducing the time for clients to receive the requested content. Selecting the right peer caching technology Selecting the right peer caching technology for express installation files depends upon your environment and requirements. Even though Configuration Manager supports all of the above peer-to-peer technologies, you should use those that make the most sense for your environment. For most customers, assuming clients can meet the internet requirements for Delivery Optimization, the Windows 10 built-in peer caching with Delivery Optimization should be sufficient. If your clients can't meet these internet requirements, consider using the Configuration Manager peer cache feature. If you're currently using BranchCache with Configuration Manager and it meets all your needs, then express files with BranchCache may be the right option for you. Peer cache comparison chart Conclusion Microsoft recommends that you optimize Windows 10 quality update delivery using Configuration Manager with express installation files and a peer caching technology, as needed. This approach should alleviate the challenges associated with Windows 10 devices downloading large content for installing quality updates. Keeping Windows 10 devices current by deploying quality updates each month is also recommended. This practice reduces the delta of quality update content needed by devices each month. Reducing this content delta results in smaller downloads from distribution points or peer sources. Due to the nature of express installation files, their content size is considerably larger than traditional self-contained files. This size results in longer update download times from the Windows Update service to the Configuration Manager site server. The amount of disk space required for both the site server and distribution points also increases. The total time required to download and distribute quality updates could be longer. However, the device-side benefits should be noticeable during the download and installation of quality updates by the Windows 10 devices. For more information, see Using Express Installation Files. 
If the server-side tradeoffs of larger-size updates are blockers for the adoption of express support, but the device-side benefits are critical to your business and environment, Microsoft recommends that you use Windows Update for Business with Configuration Manager. Windows Update for Business provides all of the benefits of express without the need to download, store, and distribute express installation files throughout your environment. Clients download content directly from the Windows Update service and thus can still use Delivery Optimization. Frequently asked questions How do Windows express downloads work with Configuration Manager? The Windows update agent (WUA) requests express content first. If it fails to install the express update, it can fall back to the full-file update. The Configuration Manager client tells WUA to download the update content. When WUA initiates an express download, it first downloads a stub (for example, Windows10.0-KB1234567-<platform>-express.cab), which is part of the express package. WUA passes this stub to the Windows update installer, component-based servicing (CBS). CBS uses the stub to do a local inventory, comparing the deltas of the file on the device with what is needed to get to the latest version of the file being offered. CBS then asks WUA to download the required ranges from one or more express .psf files. If Delivery Optimization is enabled and peers are discovered to have the needed ranges, the client will download from peers independently of the ConfigMgr client. If Delivery Optimization is disabled or no peers have the needed ranges, the ConfigMgr client will download these ranges from a local distribution point (or a peer or Microsoft Update). The ranges are passed to the Windows Update Agent, which makes them available to CBS to apply. Why are the express files (.psf) so large when stored on Configuration Manager peer sources in the ccmcache folder? The express files (.psf) are sparse files. To determine the actual space being used on disk by the file, check the Size on disk property of the file. The Size on disk property should be considerably smaller than the Size value. Does Configuration Manager support express installation files with Windows 10 feature updates? No, Configuration Manager currently only supports express installation files with Windows 10 quality updates. How much disk space is needed per quality update on the site server and distribution points? It depends. For each quality update, both the full-file and express version of the update are stored on servers. Windows 10 quality updates are cumulative, so the size of these files increases each month. Plan for a minimum of 5 GB per update per language. Do Configuration Manager clients still benefit from express installation files when falling back to the Windows Update service? Yes. If you use the following software update deployment option, then clients still use express updates and Delivery Optimization when they fall back to the cloud service: If software updates are not available on distribution point in current, neighbor or site groups, download content from Microsoft Updates Why is express file content not downloaded for existing updates after I enable express file support? The changes only take effect for any new updates synchronized and deployed after enabling support. Is there any way to see how much content is downloaded from peers using Delivery Optimization? 
Windows 10, version 1703 (and later) includes two new PowerShell cmdlets, Get-DeliveryOptimizationPerfSnap and Get-DeliveryOptimizationStatus. These cmdlets provide more insight into Delivery Optimization and cache usage. For more information, see Delivery Optimization for Windows 10 updates. How do clients communicate with Delivery Optimization over the network? For more information about the network ports, proxy requirements, and hostnames for firewalls, see FAQs for Delivery Optimization. Log files Use the following log files to monitor delta downloads: - WUAHandler.log - DeltaDownload.log
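To answer the last question hands-on, the two cmdlets named above can be queried directly in an elevated PowerShell session. This is only a sketch: the BytesFromPeers and BytesFromHttp property names are assumptions about the Get-DeliveryOptimizationStatus output on your Windows 10 build, so confirm them with Get-Member first.

# Per-file Delivery Optimization statistics for recent downloads.
$status = Get-DeliveryOptimizationStatus

# Sum how much content came from peers versus plain HTTP sources.
# (Property names assumed; verify with: $status | Get-Member)
$fromPeers = ($status | Measure-Object -Property BytesFromPeers -Sum).Sum
$fromHttp  = ($status | Measure-Object -Property BytesFromHttp -Sum).Sum

"Bytes from peers: {0:N0}" -f $fromPeers
"Bytes from HTTP sources: {0:N0}" -f $fromHttp

# Point-in-time snapshot of current download activity and cache efficiency.
Get-DeliveryOptimizationPerfSnap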
https://docs.microsoft.com/en-GB/mem/configmgr/sum/deploy-use/optimize-windows-10-update-delivery
2020-08-03T15:06:35
CC-MAIN-2020-34
1596439735812.88
[]
docs.microsoft.com
transition Experimental. This type is experimental and subject to change at any time. Do not depend on it. Methods: transition. transition(implementation, inputs, outputs) Experimental. This type is experimental and subject to change at any time. Do not depend on it. Creates a configuration transition to be applied across a dependency edge.
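Because the reference entry above only gives the signature, the following Starlark sketch shows the usual shape of a transition declaration and how it is attached to a rule attribute. It is illustrative only: the //command_line_option:compilation_mode setting and the whitelisting requirements for user-defined transitions varied between Bazel releases, so check the documentation for the exact version you are running.

# Transition that forces targets behind the "dep" attribute to build in opt mode.
def _opt_transition_impl(settings, attr):
    # Return a dict mapping each declared output to its new value.
    return {"//command_line_option:compilation_mode": "opt"}

opt_transition = transition(
    implementation = _opt_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:compilation_mode"],
)

def _my_rule_impl(ctx):
    return []

# Attaching the transition to an attribute applies it across that dependency edge.
my_rule = rule(
    implementation = _my_rule_impl,
    attrs = {
        "dep": attr.label(cfg = opt_transition),
        # Bazel releases of this era also required a whitelist attribute
        # (e.g. _whitelist_function_transition) for user-defined transitions;
        # omitted here for brevity.
    },
)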
https://docs.bazel.build/versions/0.22.0/skylark/lib/transition.html
2020-08-03T15:15:34
CC-MAIN-2020-34
1596439735812.88
[]
docs.bazel.build
To allow you to write, design, and adjust the settings for your campaign better, the Create Campaign process has been broken down into 3 different pages. Let’s take a look at the different building blocks of sending a campaign to your subscribers: 1. Campaign Details On this page, there are 4 elements: 1. Campaign Type: Choose whether you want to send a regular campaign (no expiry date) or a 'Flash Sale Campaign' (set an expiry date) Here's how it looks when you select 'Flash Sale Campaign'. You can use the date picker on the right to choose when you want the campaign to stop sending. 2. Sending Options: You can choose Send Now (send the campaign as soon as you finish creating it) or Schedule (set a date in the future when you want the campaign to go live) When you select 'Schedule', you will be shown a date picker to select when you want the campaign to start sending. 3. Segments: By default, your campaign is sent to 'All Subscribers'. (If you are on the Enterprise plan, you can choose which segment you want to send the campaign to) 4. Smart Delivery: Enable Smart Delivery if you want the notifications to be sent to your subscribers during their active hours. 2. Create Notification On this page, you can create the campaign, add copy, add a hero image, and see how the notification would look on different devices. Let's look at the different building blocks of this page: 1. Title: Add a title to your notification to let shoppers know what it's about. 2. Message: Add an optional message to give more information to your subscribers. 3. Emoji Keyboard: Select one (or more) emojis from the emoji keyboard next to the ‘Title’ and ‘Message’ fields. 4. Primary Link: Add the URL you want your subscribers to land on when they click on your notification. 5. Desktop Hero Image: Add a hero image to your notification so that subscribers on their desktop (Windows 10, 8, and older) are shown an appealing image within the notification. Ensure that your hero image is optimized to 728x360 pixels. 6. Mobile Hero Image: Add a hero image to your notification so that subscribers on their mobile (Android) are shown an appealing image within the notification. Ensure that your hero image is optimized to 728x240 pixels. 7. Add button: Add up to 2 buttons to your campaign to make it more interactive and give subscribers more options. Under each button, you will be shown 2 fields: - Title: This is the button text. - Primary Link: This is the URL you want your subscribers to land on when they click on the button. 8. Logo: You can temporarily change the logo for the campaign you are creating by changing the logo on the editor. 9. Get Test Notification: Send a test notification to your device before sending your campaign to ensure that you’re happy with how the notification looks. 3. Campaign Summary On this screen, you can see an overview of the campaign: the type of campaign, the segment it is being sent to, when it will start sending, and a preview of the notification text. Once you are happy with the campaign, you can hit 'Send'. Note: Campaigns cannot be stopped once you send them. So ensure that you have filled out all the fields and use the ‘Get Test Notification’ feature to check it out on your device before you send it to your subscribers.
https://docs.pushowl.com/en/articles/2320279-create-campaign-walkthrough
2020-08-03T15:23:19
CC-MAIN-2020-34
1596439735812.88
[array(['https://downloads.intercomcdn.com/i/o/202645404/88a9a3cb18a2fec40dabe946/Frame+1+%282%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/202650887/eaa34c0470c43ca2b4c2840f/Screenshot+2020-04-22+at+9.55.46+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/202651032/6deba7600765f9d8a34dccb6/Screenshot+2020-04-22+at+9.57.03+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/202650238/247fbc520c2a8cdda66e8128/Frame+3.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/202649748/5fc596f94408755f392fe1d5/ezgif.com-video-to-gif.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/202650417/dd9f820810d0533b32cdb77f/Screenshot-2020-04-13-at-2.57.31-PM-2048x1166.png', None], dtype=object) ]
docs.pushowl.com
Memory The Memory storage backend uses in-memory tables to store all data. This data is never persisted to disk or to any other storage mechanism. The Memory storage engine is best used for testing Riak clusters or for storing small amounts of transient state in production systems. Internally, the Memory backend uses Erlang ETS tables to manage data. More information can be found in the official Erlang documentation. Enabling the Memory Backend To enable the memory backend, edit your configuration files for each Riak node and specify the Memory backend as shown in the following example: storage_backend = memory %% Or, in the app.config-based system: {riak_kv, [ ..., {storage_backend, riak_kv_memory_backend}, ... ]} Note: If you replace the existing specified backend by removing it or commenting it out as shown in the above example, data belonging to the previously specified backend will still be preserved on the filesystem but will no longer be accessible through Riak unless the backend is enabled again. If you require multiple backends in your configuration, please consult the Multi backend documentation. Configuring the Memory Backend The Memory backend enables you to configure two fundamental aspects of object storage: maximum memory usage per vnode and object expiry. Max Memory This setting specifies the maximum amount of memory consumed by the Memory backend. It’s important to note that this setting acts on a per-vnode basis, not on a per-node or per-cluster basis. This should be taken into account when planning for memory usage with the Memory backend, as the total memory used will be max memory times the number of vnodes in the cluster. The following are all possible settings: memory_backend.max_memory_per_vnode = 500KB memory_backend.max_memory_per_vnode = 10MB memory_backend.max_memory_per_vnode = 2GB %% In the app.config-based system, the equivalent setting is max_memory, %% which must be expressed in megabytes: {riak_kv, [ %% storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_memory_backend}, {memory_backend, [ ..., {max_memory, 4096}, %% 4GB in megabytes ... ]} ]} To determine an optimal max memory setting, we recommend consulting the documentation on LevelDB cache size. TTL The time-to-live (TTL) parameter specifies the amount of time an object remains in memory before it expires. The minimum time is one second. In the newer, riak.conf-based configuration system, you can specify ttl in seconds, minutes, hours, days, etc. The following are all possible settings: memory_backend.ttl = 1s memory_backend.ttl = 10m memory_backend.ttl = 3h %% In the app.config-based system, the ttl setting must be expressed in %% seconds: {memory_backend, [ %% other settings {ttl, 86400}, %% Set to 1 day %% other settings ]} Dynamically Changing ttl There is currently no way to dynamically change the ttl setting for a bucket or bucket type. The current workaround would be to define multiple Memory backends using the Multi backend, each with different ttl values. For more information, consult the documentation on the Multi backend.
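The max memory and TTL settings above are shown in isolation; a riak.conf fragment that enables the backend and combines the two might look like the sketch below. The values are illustrative assumptions only, so size them against your own ring size, vnode count per node, and available RAM.

## Use the in-memory backend on this node.
storage_backend = memory

## Cap each vnode's in-memory table at 512 MB. With, say, 8 vnodes
## hosted on this node, that allows roughly 4 GB for the backend overall.
memory_backend.max_memory_per_vnode = 512MB

## Expire objects one hour after they are written.
memory_backend.ttl = 1h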
https://docs.riak.com/riak/kv/2.2.0/setup/planning/backend/memory.1.html
2020-08-03T15:27:12
CC-MAIN-2020-34
1596439735812.88
[]
docs.riak.com
Use this guide to configure the Data tab in the Web Admin for each SecureAuth IdP realm. This includes directory integration and user profile field mapping. 1. An on-premises directory must be established and ready to integrate with SecureAuth IdP 2. A Service Account must be created for SecureAuth IdP with read privileges to access the data store, and write privileges (optional) to update user information 3. Create a New Realm for the target resource for which the configuration settings will apply, or open an existing realm for which configurations have already been started 4. Configure the Overview tab in the Web Admin before configuring the Data tab 1. In the Membership Connection Settings section, select the directory with which SecureAuth IdP will integrate for 2-Factor Authentication and assertion from the Data Store dropdown 2. - Google Apps – due to Google's recent API changes, SecureAuth IdP no longer supports Google Apps data store integration - WebAdmin (for SecureAuth0 Admin Realm only) 3. Select True from the Same As Above dropdown if the profile fields used for authentication (telephone number, email address, knowledge-based questions) are all contained in the data store selected in step 1 Select False if a different data store will be used to contain the profile fields, and select the data store type from the Default Profile Provider dropdown No configuration is required in this section if True is selected from the Same As Above dropdown (step 3) 4. If False is selected from the Same As Above dropdown (step 3), select the data store type from the Data Store dropdown that will serve as the Default Profile Provider from which user profile information will be pulled (e.g. Directory Server) 5. Follow the distinct configuration steps for the specific data store in addition to the configuration steps on this page - Active Directory (sAMAccountName) - Active Directory (UPN) - Lightweight Directory Services (AD-LDS) - Lotus Domino - Novell eDirectory - Sun ONE - Tivoli Directory - Open LDAP - Other LDAP - SQL Server - ODBC - ASPNETDB - Web Service (Multi-Datastore) - Google Apps – due to Google's recent API changes, SecureAuth IdP no longer supports Google Apps data store integration This section is only required for LDAP data stores For SQL Server directories, refer to the SQL User Data Store Tables and Stored Procedures Configuration Guide to create the profile mapping For ASPNETDB directories, refer to the ASPNETDB Configuration Guide to configure the data store to work with SecureAuth IdP 6. Map the SecureAuth IdP Property to the appropriate data store Field. For example, Groups is located in the memberOf data store Field 7. Change the Source from Default Provider if another directory is enabled in the Profile Connection Settings section and contains the Property 8. 
Check Writeable for a Property that will be changed in the data store by SecureAuth IdP. For example, user account information (telephone number) or authentication mechanisms (knowledge-based questions, fingerprints). The Data Format section states how the information is stored in the directory (not available for all Profile Properties): - Plain Text: Stored as regular text, readable (default) - Standard Encryption: Stored and encrypted using RSA encryption - Advanced Encryption: Stored and encrypted using AES encryption - Standard Hash: Stored and hashed using SHA 256 - Plain Binary: Stored as a binary representation of the data (uses a .NET library to make it binary – may not be readable by all applications) - JSON: Stored in a universal format, readable by all applications (similar to Plain Text) The Fields listed are only examples, as each data store is organized differently and may have different values for each Property 9. Click Add Property if a required Property is not listed 10. Enter the property name and click Add 11. The new Property will appear at the bottom of the list and can then be mapped to the appropriate data store Field. Refer to LDAP Attributes / SecureAuth IdP Profile Properties Data Mapping for more information 12. Add any additional identities or user information that is not stored in the on-premises data store but will be used in assertion (optional). Click Save once the configurations have been completed and before leaving the Data page to avoid losing changes
https://docs.secureauth.com/display/80/Data+Tab+Configuration
2020-08-03T15:57:08
CC-MAIN-2020-34
1596439735812.88
[]
docs.secureauth.com