The Navigation Areas define how difficult it is to walk across a specific area; the lower cost areas will be preferred during path finding. In addition, each NavMesh Agent has an Area Mask which can be used to specify on which areas the agent can move.
In the above example the area types are used for two common use cases:
The area type can be assigned to every object that is included in the NavMesh baking, in addition, each Off-Mesh Link has a property to specify the area type.
In a nutshell, the cost allows you to control which areas the pathfinder favors when finding a path. For example, if you set the cost of an area to 3.0, traveling across that area is considered to be three times longer than alternative routes.
To fully understand how the cost works, let’s take a look at how the pathfinder works.
Unity uses A* to calculate the shortest path on the NavMesh. A* works on a graph of connected nodes. The algorithm starts from the nearest node to the path start and visits the connected nodes until the destination is reached.
Since the Unity navigation data represents the walkable surface as connected polygons, the first thing the pathfinder needs to do is to place a point on each polygon, which is the location of the node. The shortest path is then calculated between these nodes.
The yellow dots and lines in the above picture show how the nodes and links are placed on the NavMesh, and in which order they are traversed during the A* search.
The cost to move between two nodes depends on the distance to travel and the cost associated with the area type of the polygon under the link, that is, distance * cost. In practice this means that if the cost of an area is 2.0, the distance across such a polygon will appear to be twice as long. The A* algorithm requires that all costs be larger than 1.0.
The effect of the costs on the resulting path can be hard to tune, especially for longer paths. The best way to approach costs is to treat them as hints. For example, if you want the agents not to use Off-Mesh Links too often, you could increase their cost. But it can be challenging to tune a behavior where the agents prefer to walk on sidewalks.
Another thing you may notice on some levels is that the pathfinder does not always choose the very shortest path. The reason for this is the node placement. The effect can be noticeable in scenarios where big open areas are next to tiny obstacles, which results in a navigation mesh with very big and very small polygons. In such cases the nodes on the big polygons may get placed anywhere in the big polygon, and from the pathfinder's point of view it looks like a detour.
The cost per area type can be set globally in the Areas tab, or you can override them per agent using a script.
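As a hedged sketch of such a per-agent override from a script (the "Water" area name and the cost value are assumptions; use the area types your project actually defines):

using UnityEngine;
using UnityEngine.AI;

public class AreaCostOverride : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        // Assumed: a custom area named "Water" exists in the Areas tab.
        int waterArea = NavMesh.GetAreaFromName("Water");
        // For this agent only, crossing water counts as three times the distance.
        agent.SetAreaCost(waterArea, 3.0f);
    }
}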
The area types are specified in the Navigation Window’s Areas tab. There are 29 custom types, and 3 built-in types: Walkable, Not Walkable, and Jump.
If several objects of different area types are overlapping, the resulting navmesh area type will generally be the one with the highest index. There is one exception, however: Not Walkable always takes precedence, which can be helpful if you need to block out an area.
Each agent has an Area Mask which describes which areas it can use when navigating. The area mask can be set in the agent properties, or the bitmask can be manipulated using a script at runtime.
The area mask is useful when you want only certain types of characters to be able to walk through an area. For example, in a zombie evasion game, you could mark the area under each door with a Door area type, and uncheck the Door area from the zombie character's Area Mask.
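A minimal sketch of manipulating that bitmask from a script, using the Door example above (the area name is the only assumption):

using UnityEngine;
using UnityEngine.AI;

public class ZombieAreaMask : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        // Assumed: a custom area named "Door" exists, as in the zombie example.
        int doorArea = NavMesh.GetAreaFromName("Door");
        // Clear the Door bit so this agent never plans paths through doors.
        agent.areaMask &= ~(1 << doorArea);
    }
}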
Enable Rich Snippets
Search queries are traditionally divided into three categories:
- Navigational. The user intent is to quickly find the link to a particular discussion, knowledge article, case, or another such document.
- Informational. Queries through which the user is looking to learn something, which could be a definition of a term or an instructional article.
- Transactional. Shopping is on the top of the user's mind.
With Enable Rich Snippets, user experience can be improved immensely during informational search queries. Enable Rich Snippets extracts the first <ol> (ordered) or <ul> (unordered) list from the most relevant result in response to most search queries that start with "what" or "how to" and displays the list right on the search results page (Figure 1).
NOTE: A related feature to handle informational queries not beginning with "how to" is explained in Knowledge Graph.
Figure 1. In response to the search query—how to boost a document for specific keywords—Enable Rich Snippets extracts the first steps from Boost Documents for Specific Keywords and presents them on the search results page, enabling users to quickly get an overview of the procedure.
When Enable Rich Snippets Works
Once turned on, the feature will display a rich snippet each time all the following conditions are fulfilled:
- Search query is in the form "how to {{something}}".
- At least one matching document corresponding to the query is found.
- The document has at least one ordered (<ol>) or unordered (<ul>) list.
Best Practices
Here are some best practices for getting the most relevant rich snippets on the search results page.
- Convey essential information in the first 3-4 bullet points of a list. Bullet point 5 and beyond are not part of rich snippets.
- Keep each bullet point to a maximum of 80 characters; characters are truncated starting from number 81.
- Avoid using snippets (MadCap Flare); the lists inside snippets are ignored.
- Cover the main points in the first list if there is more than one list on a page. Enable Rich Snippets extracts points from the top list and overlooks the others.
Turn On Enable Rich Snippets on a Search Client
Turning on Enable Rich Snippets is easy and it goes a long way in improving user experience.
- Go to Search Tuning from main navigation and select a search client where rich snippets will work.
- Toggle Enable Rich Snippets to the right to turn it on.
To turn off rich snippets, select a search client and toggle Enable Rich Snippets to the left.
Last updated: Friday, September 25, 2020
Working with playbooks¶
Playbooks record and execute Ansible’s configuration, deployment, and orchestration functions. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
If Ansible modules are the tools in your workshop, playbooks are your instruction manuals, and your inventory of hosts is your raw material.
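As a hedged illustration of that idea, a minimal playbook could look like the following sketch (the webservers inventory group and the nginx package are placeholder assumptions):

---
- name: Ensure web servers are configured
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Start and enable the service
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true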
- Templating (Jinja2)
- Advanced playbooks features
- Playbook Example: Continuous Delivery and Rolling Upgrades
Enabled Behaviours are Updated, disabled Behaviours are not.
This is shown as the small checkbox in the inspector of the behaviour.
using UnityEngine;
using System.Collections;
using UnityEngine.UI; // Required when using UI elements.

public class Example : MonoBehaviour
{
    public Image pauseMenu;

    public void Start()
    {
        // Enables the pause menu UI.
        pauseMenu.enabled = true;
    }
}
Administering Container-to-Container Networking
This topic describes how to configure the Container-to-Container Networking feature. For an overview of how Container-to-Container Networking works, see the Understanding Container-to-Container Networking topic.
Enable Container-to-Container Networking
This section presents two procedures for enabling Container-to-Container Networking. See the table below and choose the procedure that corresponds with your use case.
Enable on an IaaS
If your Cloud Foundry (CF) deployment runs on an IaaS, follow these steps to enable Container-to-Container Networking.
- Ensure that you have a database ready. To complete this procedure, you need a v5.7 or later MySQL database, or a PostgreSQL database. You can use a database within your Cloud Controller database instance or create a different instance, such as Amazon RDS.
- Target your BOSH Director using the BOSH CLI:
$ bosh target BOSH-DIRECTOR-IP
- If you have not already, upload a stemcell that uses Linux kernel 4.4, such as stemcell 3263.2 or later:
$ bosh upload stemcell URL-OF-STEMCELL
- Open the CF properties stub that you created when deploying Cloud Foundry and do the following:
- Under properties > uaa > scim > users > name: admin > groups, add a new group named network.admin.
scim:
  users:
  - name: admin
    password:
    groups:
    - scim.write
    - scim.read
    ...
    - routing.router_groups.write
    - network.admin
- Under properties > uaa > clients > cf, to the line beginning scope: add network.admin.
clients:
  cf:
    scope: cloud_controller.read, [...] routing.router_groups.read,network.admin
- Under clients, add a network-policy client as follows and replace REPLACE-WITH-UAA-CLIENT-SECRET with a secure password of your choosing.
clients:
  cf:
    scope: cloud_controller.read, [...] routing.router_groups.read,network.admin
  network-policy:
    authorities: uaa.resource,cloud_controller.admin_read_only
    authorized-grant-types: client_credentials,refresh_token
    secret: REPLACE-WITH-UAA-CLIENT-SECRET
- Create a Container-to-Container Networking stub file stubs/cf-networking/stub.yml and copy in the template below:
--- cf_networking_overrides: releases: - name: cf-networking version: latest driver_templates: - name: garden-cni release: cf-networking - name: cni-flannel release: cf-networking - name: netmon release: cf-networking - name: vxlan-policy-agent release: cf-networking properties: cf_networking: vxlan_policy_agent: policy_server_url: ca_cert: | -----BEGIN CERTIFICATE----- REPLACE-WITH-CA-CERTIFICATE -----END CERTIFICATE----- client_cert: | -----BEGIN CERTIFICATE----- REPLACE-WITH-CLIENT-CERTIFICATE -----END CERTIFICATE----- client_key: | -----BEGIN RSA PRIVATE KEY----- REPLACE-WITH-CLIENT-KEY -----END RSA PRIVATE KEY----- garden_external_networker: cni_plugin_dir: /var/vcap/packages/flannel/bin cni_config_dir: /var/vcap/jobs/cni-flannel/config/cni plugin: etcd_endpoints: - (( config_from_cf.etcd.advertise_urls_dns_suffix )) etcd_client_cert: (( config_from_cf.etcd.client_cert )) etcd_client_key: (( config_from_cf.etcd.client_key )) etcd_ca_cert: (( config_from_cf.etcd.ca_cert )) policy_server: database: type: REPLACE-WITH-DB-TYPE username: REPLACE-WITH-USERNAME password: REPLACE-WITH-PASSWORD host: REPLACE-WITH-DB-HOSTNAME port: REPLACE-WITH-DB-PORT name: REPLACE-WITH-DB-NAME skip_ssl_validation: true uaa_client_secret: REPLACE-WITH-UAA-CLIENT-SECRET uaa_url: (( "." config_from_cf.system_domain )) ca_cert: | -----BEGIN CERTIFICATE----- REPLACE-WITH-CA-CERTIFICATE -----END CERTIFICATE----- server_cert: | -----BEGIN CERTIFICATE----- REPLACE-WITH-SERVER-CERT -----END CERTIFICATE----- server_key: | -----BEGIN RSA PRIVATE KEY----- REPLACE-WITH-SERVER-KEY -----END RSA PRIVATE KEY----- garden_properties: network_plugin: /var/vcap/packages/runc-cni/bin/garden-external-networker network_plugin_extra_args: - --configFile=/var/vcap/jobs/garden-cni/config/adapter.json jobs: - name: policy-server instances: 1 persistent_disk: 256 templates: - name: policy-server release: cf-networking - name: route_registrar release: cf - name: consul_agent release: cf - name: metron_agent release: cf resource_pool: database_z1 networks: - name: diego1 properties: nats: machines: (( config_from_cf.nats.machines )) user: (( config_from_cf.nats.user )) password: (( config_from_cf.nats.password )) port: (( config_from_cf.nats.port )) metron_agent: zone: z1 route_registrar: routes: - name: policy-server port: 4002 registration_interval: 20s uris: - (( "api." config_from_cf.system_domain "/networking" )) consul: agent: services: policy-server: name: policy-server check: interval: 5s script: /bin/true config_from_cf: (( merge ))
- Edit the stub file using the table below as a guide:
Note: For a test environment, you can use a script such as the one in the Container-to-Container Networking Release repository to generate the certificates and keys required for the stub. For a production deployment, use certificates signed by a Certificate Authority (CA).
- Create a file that contains the following bash script. Name the file generate_diego.sh.
set -e -x -u
environment_path=STUBS-DIRECTORY
output_path=MANIFEST-DIRECTORY
diego_release_path=LOCAL-DIEGO-REPO
pushd cf-release
./scripts/generate_deployment_manifest aws \
  ${environment_path}/stubs/director-uuid.yml \
  ${diego_release_path}/examples/aws/stubs/cf/diego.yml \
  ${environment_path}/stubs/cf/properties.yml \
  ${environment_path}/stubs/cf/instance-count-overrides.yml \
  ${environment_path}/stubs/cf/stub.yml \
  > ${output_path}/cf.yml
popd
pushd diego-release
./scripts/generate-deployment-manifest \
  -g \
  -c ${output_path}/cf.yml \
  -i ${environment_path}/stubs/diego/iaas-settings.yml \
  -p ${environment_path}/stubs/diego/property-overrides.yml \
  -n ${environment_path}/stubs/diego/instance-count-overrides.yml \
  -N ${environment_path}/stubs/cf-networking/stub.yml \
  -v ${environment_path}/stubs/diego/release-versions.yml \
  > ${output_path}/diego.yml
popd
Replace the variables as follows:
STUBS-DIRECTORY: The directory containing your stubs for CF, Diego, and Container-to-Container Networking.
MANIFEST-DIRECTORY: The directory where you want the manifest created.
LOCAL-DIEGO-REPO: The directory of the local copy of the diego-release repository.
- Enter the following commands to make the script executable and run the script.
$ chmod u+x generate_diego.sh $ ./generate_diego.sh
- Enter the following command to target your BOSH director:
$ bosh target BOSH-DIRECTOR-IP
For example,
$ bosh target 192.0.2.1
- Enter the following command to set your CF deployment to the manifest you generated.
$ bosh deployment ${output_path}/cf.yml
- Enter the following command to deploy CF.
$ bosh deploy
- Enter the following command to set your Diego deployment to the manifest you generated.
$ bosh deployment ${output_path}/diego.yml
- Enter the following command to deploy Diego.
$ bosh deploy
- (Optional) Try the Cats and Dogs example in the Container-to-Container Networking Release repository. In this tutorial, you deploy two apps and create a Container-to-Container Networking policy that allows them to communicate directly with each other.
Enable on BOSH-Lite
If your CF deployment runs on BOSH-Lite, follow these steps to enable Container-to-Container Networking.
- Ensure your BOSH-Lite version is 9000.131.0 or later. If you need to upgrade, follow the instructions for Upgrading the BOSH-Lite VM.
- Navigate to your bosh-lite directory, for example,
$ cd ~/workspace/bosh-lite
- To enable bridge-netfilter on the VM running BOSH-Lite, run the following command:
$ vagrant ssh -c 'sudo modprobe br_netfilter'
Container-to-Container Networking on BOSH-Lite requires this Linux kernel feature to enforce network policy.
- Upload the latest BOSH-Lite stemcell:
$ bosh upload stemcell
- To clone the required CF release repos to your workspace, enter the following commands:
$ git clone
$ git clone
$ git clone
- To enable Container-to-Container Networking on BOSH-Lite, navigate to the cf-networking-release directory and run the deploy script:
$ cd ~/workspace/cf-networking-release
$ ./scripts/deploy-to-bosh-lite
- (Optional) Try the Cats and Dogs example in the Container-to-Container Networking Release repository. In this tutorial, you deploy two apps and create a Container-to-Container Networking policy that allows them to communicate directly with each other.
Policies for Container-to-Container Networking
This section describes how to create and modify Container-to-Container Networking policies using the Cloud Foundry Command Line Interface (cf CLI).
Ensure that you are using cf CLI v6.30 or higher:
$ cf version
For more information about updating the cf CLI, see the Installing the cf CLI topic.
To use the commands, you must have either the network.write or network.admin UAA scope.
Add a Network Policy
To add a policy that allows direct network traffic from one app to another, run the following command:
cf add-network-policy SOURCE_APP --destination-app DESTINATION_APP --protocol (tcp | udp) --port RANGE
Replace the placeholders in the above command as follows:
SOURCE_APP is the name of the app that sends traffic.
DESTINATION_APP is the name of the app that will receive traffic. The example below uses the values frontend, backend, tcp, and 8080.
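For instance, with those values, the following command allows the frontend app to send TCP traffic to port 8080 of the backend app:
cf add-network-policy frontend --destination-app backend --protocol tcp --port 8080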
Remove a Network Policy
To remove a policy that allows direct network traffic from an app, run the following command:
cf remove-network-policy SOURCE_APP --destination-app DESTINATION_APP --protocol PROTOCOL --port RANGE
Replace the placeholders in the above command to match an existing policy, as follows:
SOURCE_APP is the name of the app that sends traffic.
DESTINATION_APP is the name of the app that receives traffic.
Welcome to a Little Book of R for Biomedical Statistics!¶
By Avril Coghlan, Parasite Genomics Group, Wellcome Trust Sanger Institute, Cambridge, U.K. Email: [email protected]
This is a simple introduction to biomedical statistics using the R statistics software.
There is a pdf version of this booklet available.
If you like this booklet, you may also like to check out my booklet on using R for time series analysis, and my booklet on using R for multivariate analysis.
Contents:
- How to install R
- Introduction to R
- Installing R
- Installing R packages
- Running R
- A brief introduction to R
- Links and Further Reading
- Acknowledgements
- License
- Using R for Biomedical Statistics
- Biomedical statistics
- Calculating Relative Risks for a Cohort Study
- Calculating Odds Ratios for a Cohort or Case-Control Study
- Testing for an Association Between Disease and Exposure, in a Cohort or Case-Control Study
- Calculating the (Mantel-Haenszel) Odds Ratio when there is a Stratifying Variable
- Testing for an Association Between Exposure and Disease in a Matched Case-Control Study
- Dose-response analysis:
- Calculating the Sample Size Required for a Randomised Control Trial
- Calculating the Power of a Randomised Control Trial
- Making a Forest Plot for a Meta-analysis of Several Different Randomised Control Trials:
- Links and Further Reading
- Acknowledgements
- License
Acknowledgements¶
Thank you to Noel O'Boyle for helping in using Sphinx to create this document, GitHub to store different versions of the document as I was writing it, and readthedocs to build and distribute this document.
For very helpful comments and suggestions for improvements, thank you very much to: Tony Burton, Richard A. Friedman, Duleep Samuel, R.Heberto Ghezzo, David Levine, Lavinia Gordon, Friedrich Leisch, and Phil Spector.
I will be grateful if you will send me (Avril Coghlan) corrections or suggestions for improvements to my email address [email protected]
License¶
The content in this book is licensed under a Creative Commons Attribution 3.0 License.
Checkbox tutorials¶
Creating an empty provider¶
Plainbox Providers are bundles containing information how to run tests.
To create an empty provider run:
$ plainbox startprovider --empty com.example:myprovider
plainbox is the internal tool of checkbox. It’s used on rare occasions,
like creating a new provider.
--empty informs plainbox that you want to
start from scratch.
com.example:myprovider is the name of the provider.
Providers use IQN naming; it helps in tracking down ownership of the provider.
Plainbox Jobs are the things that describe how tests are run. Those Jobs are defined in .pxu files, in ‘units’ directory of the provider.
The provider we've just created doesn't have that directory, so let's create it:
$ cd com.example\:myprovider
$ mkdir units
Adding a simple job to a provider¶
Jobs loosely follow RFC822 syntax, i.e. most content follows the key: value pattern.
Let’s add a simple job that runs a command.
Open any .pxu file in the units directory of the provider (if there isn't any, just create one, like units.pxu).
And add following content:
id: my-first-job
flags: simple
command: mycommand
id is used for identification purposes.
flags enables extra features. In the case of simple, it lets us not specify all the typical fields - Checkbox will infer some values for us.
command specifies which command to run. Here it's mycommand.
In order for jobs to be visible in Checkbox they have to be included in some test plan. Let's add a test plan definition to the same .pxu file:
unit: test plan
id: first-tp
name: My first test plan
include: my-first-job
Warning
Entities in the .pxu file have to be separated by at least one empty line.
Running jobs from a newly created provider¶
In order for Checkbox to see the provider we have to install it. To do so run:
$ sudo ./manage.py install
Now we’re ready to launch Checkbox! Start the command line version with:
$ checkbox-cli
Follow the instructions on the screen. The test will (probably) fail, because of mycommand missing in your system. Let's change the job definition to do something meaningful instead. Open units.pxu, and change the line:
command: mycommand
to
command: [ `df -B 1G --output=avail $HOME |tail -n1` -gt 10 ]
Note
This command checks if there’s at least 10GB of free space in $HOME
This change won't be available just yet, as we still have an old version of the provider installed in the system. Let's remove the previous version, and install the new one:
$ sudo rm -rf /usr/local/lib/plainbox-providers-1/com.example\:myprovider/
$ sudo ./manage.py install
These sudo operations (hopefully) look dangerous to you. See the next part to learn how to avoid them.
Developing provider without constantly reinstalling it¶
Instead of reinstalling the provider every time you change anything in it, you can make Checkbox read it directly from the place you’re changing it in.:
$ ./manage.py develop
Because now Checkbox may see two instances of the same provider, make sure you remove the previous one.
Note
./manage.py develop doesn't require sudo, as it makes all the references in the user's home.
Improving job definition¶
When you run Checkbox you see the job displayed as 'my-first-job', which is the id of the job and not very human-friendly. This is because of the simple flag. Let's improve our job definition. Open units.pxu and replace the job definition with:
id: my-first-job
_summary: 10GB available in $HOME
_description: this test checks if there's at least 10gb of free space in user's home directory
plugin: shell
estimated_duration: 0.01
command: [ `df -B 1G --output=avail $HOME |tail -n1` -gt 10 ]
New stuff:
_summary: 10GB available in $HOME
Summary is shown in Checkbox screens where jobs are selected. It's a human-friendly identification of the job. It should be short (50-70 chars), as it's printed in one line. A leading _ means the field is translatable.
_purpose: this test checks if there's at least 10gb of free space in user's home directory
Purpose, as the name suggests, should describe the purpose of the test.
plugin: shell
Plugin tells Checkbox what kind of job it is. shell means it's an automated test that runs a command and uses its return code to determine the job's outcome.
estimated_duration: 0.01
Tells Checkbox how long the test is expected to run. This field is currently informative only.
Counts chars encoded as bytes up to a certain limit (the capacity of the byte buffer). size() returns the number of bytes; it will return -1 if the capacity was reached or an error occurred. This class is useful for calculating the content length of an HttpServletResponse before the response has been committed.
base classes — bpy_struct, ID, NodeTree
Node tree consisting of linked nodes used for compositing
Max size of a tile (smaller values give better distribution of multiple threads, but more overhead)
Quality when editing
Quality when rendering
Enable buffering of group nodes
Enable GPU calculations
Use two pass execution during editing: first calculate fast nodes, second pass calculate all nodes
Use boundaries for viewer nodes and composite backdrop
Inherited Properties
Inherited Functions
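As a rough usage sketch, these settings correspond to properties of the compositor node tree in the Python API (the enum values shown are assumptions and may differ between Blender versions):

import bpy

scene = bpy.context.scene
scene.use_nodes = True            # make sure a compositor node tree exists
tree = scene.node_tree            # a CompositorNodeTree

tree.use_opencl = True            # Enable GPU calculations
tree.use_two_pass = True          # fast first pass while editing
tree.render_quality = 'HIGH'      # Quality when rendering
tree.edit_quality = 'MEDIUM'      # Quality when editing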
Track¶
- Name
The track name can be changed with this field. Track names are used for linking tracking data to other areas, like a Follow Track constraint.
- Enable (eye icon)
This toggle controls if the marker is enabled. If a marker is disabled, its position is not used either by solver nor by constraints.
- Lock (padlock icon)
The toggle controls whether the track is locked. Locked tracks cannot be edited at all. This helps to prevent accidental changes to tracks which are “finished” (tracked accurate along the whole footage).
Track Preview Widget¶
The widget in this panel is called "Track Preview" and it displays the content of the pattern area. This helps to check how accurately the feature is being tracked (for example, verifying that there is no good feature at the position where the mask corner should be placed). Details of this technique will be written later.
There is small area below the preview widget which can be used to enlarge the vertical size of preview widget (the area is highlighted with two horizontal lines).
Further Options¶
- Channels
Tracking happens in gray-scale space, so a high contrast between the feature and its background yields more accurate tracking. In such cases disabling some color channels can help.
- Grayscale Preview (B/W)
Display the preview image as gray-scale even if all channels are enabled.
- Mask Preview (black/white icon)
Applies mask defined by an annotation tool in the preview widget.
- Weight
When several tracks are used for 3D camera reconstruction, it is possible to assign a reduced weight to some tracks to control their influence on the solution result. This parameter can (and often needs to) be animated.
Altering the weights of problem tracking markers can correct or greatly reduce undesirable jumps as features disappear or become difficult to track.
Another use of Track Weights: set the weight of new tracks to zero and use the feature detection to quickly add lots of markers. Now track them and solve the scene again. Since their weight is zero they will not influence your solution at all, but you will have lots of good reference points in your scene.
- Stabilization Weight
While Weight parameter is used for 3D reconstruction, the Stabilization Weight is used to control 2D stabilization.
- Color Presets
The preset for the Custom Color.
- Custom Color.
Reference
Draw Mode
Toolbar ‣ Cutter
The Cutter tool deletes points in between intersecting strokes.
Draw a dotted line around the strokes you want to be cut.
After releasing the mouse/pen all the points on the selected strokes
will be deleted until another intersecting stroke is found.
Original drawing.¶
Lasso Selecting the strokes to be cut.¶
Final result.¶
WSO2 IoT Server is 100% API driven. Therefore, you can create, publish and install the application using the application management APIs.
Obtain the access token
You can obtain an access token by providing the resource owner's username and password as an authorization grant. It requires the base64 encoded string of the consumer-key:consumer-secret combination. Let's take a look at how it's done.
ESP32 EthernetKit.
ESP32-Ethernet-Kit is an ESP32 microcontroller based development board produced by Espressif.
It consists of two development boards, the Ethernet board and the PoE board. The Ethernet board contains Bluetooth / Wi-Fi dual-mode ESP32-WROVER-B module and IP101GRI, a Single Port 10/100 Fast Ethernet Transceiver (PHY). The PoE board provides power over Ethernet functionality. The Ethernet board can work independently, without the PoE board installed.
PoE Board¶
This board converts power delivered over the Ethernet cable (PoE) to provide a power supply for the Ethernet board. The main components of the PoE board are shown on the block diagram under Functionality Overview.
The PoE board has the following features:
- Support for IEEE 802.3at
- Power output: 5 V, 1.4 A
To take advantage of the PoE functionality the RJ45 Port of the Ethernet board should be connected with an Ethernet cable to a switch that supports PoE. When the Ethernet board detects 5 V power output from the PoE board, the USB power will be automatically cut off.
Power¶
Power to the ESP32 Ethernet Kit is supplied via the on-board USB Micro B connector.
The device can operate on an external supply of 6 to 20 volts. If using more than 12V, the voltage regulator may overheat and damage the device. The recommended range is 7 to 12 volts.
Connect, Register, Virtualize and Program¶
The ESP32 Ethernet Kit device is recognized by Zerynth Studio. The next steps are:
- Select the ESP32 Ethernet Kit device.
NB IoT support is not ready yet.
The communication with BG96 is performed via UART without hardware flow control at 115200 baud.
This module provides the
bg96Exception to signal errors related to the hardware initialization and management.
init(serial, dtr, rts, power, reset, status, power_on=LOW, reset_on=LOW, status_on=HIGH)¶
Initializes the BG96 module, given the serial port used for communication and the control pins (dtr, rts, power, reset, status) together with their active levels.
Network Time¶
The BG96 has an internal Real Time Clock that is automatically synchronized with the Network Time; the current time can be retrieved from it.
gnss_init(fix_rate=1, use_uart=0)¶
Initializes the GNSS subsystem, given the following parameters:
- fix_rate, configure GNSS fix or NMEA output rate in seconds
- use_uart, use the secondary serial port (UART3) of the BG96 to output NMEA sentences
fix()¶
Return a tuple of 10 elements of fix data; the elements listed here include:
- Not supported
- Not supported
- UTC time as a tuple (yyyy,MM,dd,hh,mm,ss)
- The function returns None if a fix can't be obtained.
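A minimal usage sketch based on the functions listed above (the serial port, pin names, and timing are assumptions that depend on your board wiring):

from quectel.bg96 import bg96

# Hypothetical wiring: adjust SERIAL1 and the control pins to your board.
bg96.init(SERIAL1, D23, D24, D25, D26, D27)   # serial, dtr, rts, power, reset, status
bg96.gnss_init(fix_rate=1)                    # one fix per second

while True:
    f = bg96.fix()
    if f is not None:
        print("fix:", f)
    sleep(2000)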
Spline IK¶
A full description of the constraint's settings can be found on the Spline IK page.
Basic Setup¶
The Spline IK Constraint is not strictly an Inverse Kinematics method (i.e. IK Constraint), but rather a Forward Kinematics method (i.e. normal bone posing). However, it still shares some characteristics of the IK Constraint, such as operating on multiple bones, not being usable for Objects, and being evaluated after all other constraints have been evaluated. It should be noted that if a Standard IK chain and a Spline IK chain both affect a bone at the same time the Standard IK chain takes priority. Such setups are best avoided though, since the results may be difficult to control.
To setup Spline IK, it is necessary to have a chain of connected bones and a curve to constrain these bones to:
With the last bone in the chain selected, add a Spline IK Constraint from the Bone Constraints tab in the Properties Editor.¶
For the precise list of options, see Spline IK constraint. This section is intended to introduce the workflow.
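If you prefer to script this setup, a minimal hedged sketch with the Python API (the armature, bone, and curve names are assumptions):

import bpy

arm = bpy.data.objects["Armature"]       # assumed armature object
curve = bpy.data.objects["TailCurve"]    # assumed curve object

last_bone = arm.pose.bones["Tail.004"]   # last bone in the chain
con = last_bone.constraints.new('SPLINE_IK')
con.target = curve
con.chain_count = 5                      # how many bones up the chain are affected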
Roll Control¶.
Note
There are a couple of limitations to consider:
Bones do not inherit a curve’s tilt value to control their roll.
There is no way of automatically creating a twisting effect where a dampened rotation is inherited up the chain. Consider using Bendy Bones instead.
Offset Controls¶
Thickness Controls¶
The thickness of the bones in the chain is controlled using the constraint’s XZ Scale Mode setting. This setting determines the method used for determining the scaling on the X and Z axes of each bone in the chain.
The available modes are:
- None
This option keeps the X and Z scaling factors as 1.0.
- Volume Preserve
The X and Z scaling factors are taken as the inverse of the Y scaling factor (length of the bone), maintaining the „volume“ of the bone.
- Bone Original
This options just uses the X and Z scaling factors the bone would have after being evaluated in the standard way.
In addition to these modes, there is an option, Use Curve Radius. When this option is enabled, the average radius of the radii of the points on the curve where the joints of each bone are placed, are used to derive X and Z scaling factors. This allows the scaling effects, determined using the modes above, to be tweaked as necessary for artistic control.
Tips for Nice Setups¶
For optimal deformations, it is recommended that the bones are roughly the same length, and that they are not too long, to facilitate a better fit to the curve. Also, bones should ideally be created in a way that follows the shape of the curve in its „rest pose“ shape, to minimize the problems in areas where the curve has sharp bends which may be especially noticeable when stretching is disabled.
For control of the curve, it is recommended that hooks (in particular, Bone Hooks) are used to control the control points of the curve, with one hook per control point. In general, only a few control points should be needed for the curve (e.g. one for every 3-5 bones offers decent control).
The type of curve used does not really matter, as long as a path can be extracted from it that could also be used by the Follow Path Constraint. This really depends on the level of control required from the hooks. In general, only a few control points should be needed for the curve (e.g. one for every 3-5 bones offers decent control).
Verifying the configuration
For an HTTP callout to work correctly, all the HTTP callout parameters and the entities associated with the callout must be configured correctly. While the Citrix ADC appliance does not check the validity of the HTTP callout parameters, it indicates the state of the bound entities, namely the server or virtual server to which the HTTP callout is sent. The following table lists the icons and describes the conditions under which the icons are displayed.
Table 1. Icons That Indicate the States of Entities Bound to an HTTP Callout
For an HTTP callout to function correctly, the icon must be green at all times. If the icon is not green, check the state of the callout server or virtual server to which the HTTP callout is sent. If the HTTP callout is not working as expected even though the icon is green, check the parameters configured for the callout.
You can also verify the configuration by sending test requests that match the policy from which the HTTP callout is invoked, checking the hits counter for the policy and the HTTP callout, and verifying the responses that the Citrix ADC appliance sends to the client.
Note: An HTTP callout can sometimes invoke itself recursively a second time. If this happens, the hits counter is incremented by two counts for each callout that is generated by the appliance. For the hits counter to display the correct value, you must configure the HTTP callout in such a way that it does not invoke itself a second time. For more information about how you can avoid HTTP callout recursion, see Avoiding HTTP Callout Recursion.
To view the hits counter for an HTTP callout
- Navigate to AppExpert > HTTP Callouts.
- In the details pane, click the HTTP callout for which you want to view the hits counter, and then view the hits in the Details area.
Understanding the Command Center
The Command Center gives you more control over a Coveo for Sitecore installation through an administration user interface. The Command Center also displays index related information and provides a convenient means to perform common configurations and actions.
Accessing the Command Center
To access the Command Center as a Sitecore administrator
Open the Coveo Search section of the Sitecore Control Panel (see Opening the Coveo Search Control Panel Section).
Choose Indexing Manager.
You can also access the Command Center using the URL http://[INSTANCE NAME]/coveo/command-center/index.html, where [INSTANCE NAME] is the name of your Sitecore instance. Non Sitecore administrators can only access the Command Center by URL (see Giving Access to the Command Center).
Command Center Sections
The following is a summary of the information you can access and the actions you can perform in each section of the Command Center.
Indexing Manager
The Indexing Manager section of the Command Center provides an overview of index-related information. It lists your Coveo indexes, displays the index crawlers and the content each goes through, and the database where the information is stored.
You can initiate index rebuilding from the interface and subsequently monitor the status of the rebuild. The Indexing Manager also allows you to handpick the Sitecore fields that you want to index.
For more information about the Indexing Manager, see Understanding the Indexing Manager.
Relevance Manager
The Relevance Manager section of the Command Center provides general information regarding Coveo Cloud relevance features and direct links to those features in the Coveo Cloud Administration Console.
For more information about the Relevance Manager, see Understanding the Relevance Manager.
Coveo Cloud Manager
The Coveo Cloud Manager section of the Command Center allows you to change the Coveo Cloud organization your Sitecore instance is linked to.
You can also modify security and content indexing settings in this section.
For more information about the Coveo Cloud Manager, see Updating Coveo for Sitecore Settings.
Getting Help
In each Command Center section, just next to the page title, you can click the help button to reach the related documentation article.
The Organization Status Dialog
When a Coveo Cloud trial organization has been idle for some time, it’s paused automatically. When you try to access the Command Center and your organization has been idle for some time, one of the following dialogs is displayed:
If you click the Resume button, the organization status then changes to Resuming.
The dialog automatically closes once the organization has finished resuming and is, once again, fully functional.
You can also resume your cloud organization through the Coveo Cloud Administration Console, in the main menu, in the Status section.
You can use Engagespot REST API to send push notifications (both in-site and off-site) to your users. This API can be integrated with your app backend irrespective of the programming language. If you prefer an SDK for your programming language, see the SDK sections instead.
API Endpoint -
Request Method - POST
Content-Type - application/json
Authorization - For authorization, you need to pass your app's API-Key via API-Key header.
You can find your app's API-Key in your Engagespot dashboard. Go to App Settings.
Please don't confuse API Key with Site Key. Both are different. You should keep your API Key secret.
Request Body
Request body should be in json format. A sample request body is given below.
{
  "campaign_name": "User Message Alert",
  "notification": {
    "title": "You have a new message from John!",
    "message": "Hey Dave, Wassup...",
    "icon": "",
    "url": ""
  },
  "send_to": "identifiers",
  "identifiers": [
    "83647520"
  ]
}
Parameters
Success Response
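As a hedged sketch in Python (the endpoint URL and API key are placeholders; use the endpoint and key described above):

import requests

API_KEY = "YOUR-API-KEY"       # from Engagespot dashboard > App Settings
ENDPOINT = "https://..."       # the API endpoint given at the top of this page

payload = {
    "campaign_name": "User Message Alert",
    "notification": {
        "title": "You have a new message from John!",
        "message": "Hey Dave, Wassup...",
        "icon": "",
        "url": ""
    },
    "send_to": "identifiers",
    "identifiers": ["83647520"]
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"API-Key": API_KEY, "Content-Type": "application/json"},
)
print(response.status_code, response.text)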
Here is the breakdown for Solo, Basic, Pro, and Custom.
These pricing plans are made to meet the budget and needs of operations of all sizes.
IMPORTANT: If you are not subscribed to one of the plans listed below, your default royalty share (payable to you) becomes 80%. Once you are subscribed to a plan, your royalty share will increase to 85% or 90% and higher depending on your account type.
The WordPress plugin is included FREE for all paying LabelGrid customers; however for new customers who are not yet subscribed to a LabelGrid plan, but who are only looking to use the WP integration, we have a special plan called the "demo plan" just for this purpose. So, if you are not using any other parts of LG and just want to use the WP plugin, the "demo plan" is what you need. This plan will only cost you $3/month. The demo plan is not listed on the subscription form, however if you are interested in this plan, simply send us a support ticket. Also note: The demo plan comes with very minimal features, so we highly recommend you sign-up for at least a solo plan to get the most out of LabelGrid.
Most plans allow a yearly discount. Prices may change, so make sure to look at the yearly discount options when you purchase your subscription.
First you must register here, then login and go to LabelGrid Dashboard > Billing.
We accept all major credit cards as well as PayPal.
Windows Update (WSUS) failed with error 0x80080005? Try this solution!
Some customers reported that they got error 0x80080005 on their WSUS client. A simple fix is adding the following registry key:
HKLM\SYSTEM\CurrentControlSet\Control
Key = RegistrySizeLimit
Type = DWORD
Value = 4294967295
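If you prefer the command line, the same value can be added with reg.exe from an elevated prompt (a sketch of the registry edit described above; a reboot is usually needed for the change to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v RegistrySizeLimit /t REG_DWORD /d 4294967295 /f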
If the above solution cannot resolve your failure, please send me the log files.
One of the fundamental questions when designing the array schema is "what are my dimensions and what are my attributes"? The answer depends on and is rather related to whether your array is dense or sparse. Two good rules of thumb that apply to both dense and sparse arrays:
If you frequently perform range slicing over a field/column of your dataset, you should consider making it a dimension.
The order of the dimensions in the array schema matters. More selective dimensions (i.e., with greater pruning power) should be defined before less selective ones.
In dense arrays, telling the dimensions from attributes is potentially more straightforward. If you can model your data in a space where every cell has a value (e.g., image, video), then your array is dense and the dimensions will be easy to discern (e.g., width and height in an image, or width, height and time in video).
It is important to remember that dense arrays in TileDB do not explicitly materialize the coordinates of the cells (in dense fragments), which may result in significant savings if you are able to model your array as dense. Moreover, reads in dense arrays may be faster than in sparse arrays, as dense arrays use implicit spatial indexing and, therefore, the internal structures and state are much more lightweight.
It is always good to think of a sparse dataset as a dataframe before you start designing your array, i.e., as a set of "flattened" columns with no notion of dimensionality. Then follow the two rules of thumb above.
Recall that TileDB explicitly materializes the coordinates of the sparse cells. Therefore, make sure that the array sparsity is large enough. If the array is rather dense, then you may consider defining it as dense, filling the empty cells with some user-defined "dummy" values that you can recognize (so that you can filter them out manually after receiving the results from TileDB).
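As a hedged illustration with the TileDB Python API (the field names are made up), a dataframe-like sparse array where the frequently range-sliced columns become dimensions and the rest become attributes:

import numpy as np
import tiledb

# Frequently sliced fields become dimensions; the more selective one goes first.
dom = tiledb.Domain(
    tiledb.Dim(name="sensor_id", domain=(0, 10**6), tile=1000, dtype=np.int64),
    tiledb.Dim(name="timestamp", domain=(0, 10**12), tile=10**6, dtype=np.int64),
)

schema = tiledb.ArraySchema(
    domain=dom,
    sparse=True,
    attrs=[tiledb.Attr(name="reading", dtype=np.float64)],
)

tiledb.Array.create("my_sparse_array", schema)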
KaplanMeierFitter¶
- class
lifelines.fitters.kaplan_meier_fitter.
KaplanMeierFitter(alpha: float = 0.05, label: str = None)¶
Bases:
lifelines.fitters.UnivariateFitter
Class for fitting the Kaplan-Meier estimate for the survival function.
Examples
from lifelines import KaplanMeierFitter
from lifelines.datasets import load_waltons

waltons = load_waltons()
kmf = KaplanMeierFitter(label="waltons_data")
kmf.fit(waltons['T'], waltons['E'])
kmf.plot()
confidence_interval_¶
The lower and upper confidence intervals for the survival function. An alias of
confidence_interval_survival_function_. Uses Greenwood’s Exponential formula (“log-log” in R).
confidence_interval_survival_function_¶
The lower and upper confidence intervals for the survival function. An alias of
confidence_interval_. Uses Greenwood’s Exponential formula (“log-log” in R).
confidence_interval_cumulative_density_¶
The lower and upper confidence intervals for the cumulative density.
conditional_time_to_event_¶
Return a DataFrame, with index equal to survival_function_, that estimates the median duration remaining until the death event, given survival up until time t. For example, if an individual exists until age 1, their expected life remaining given they lived to time 1 might be 9 years.
cumulative_density_at_times(times, label=None) → pandas.core.series.Series¶
Return a Pandas series of the predicted cumulative density at specific times
fit(durations, event_observed=None, timeline=None, entry=None, label=None, alpha=None, ci_labels=None, weights=None)¶
Fit the model to a right-censored dataset
fit_interval_censoring(lower_bound, upper_bound, event_observed=None, timeline=None, label=None, alpha=None, ci_labels=None, show_progress=False, entry=None, weights=None, tol=1e-07) → KaplanMeierFitter¶
Fit the model to a interval-censored dataset using non-parametric MLE. This estimator is also called the Turball Estimator.
Currently, only closed interval are supported. However, it’s easy to create open intervals by adding (or subtracting) a very small value from the lower-bound (or upper bound). For example, the following turns closed intervals into open intervals.
>>> left, right = df['left'], df['right']
>>> KaplanMeierFitter().fit_interval_censoring(left + 0.00001, right - 0.00001)
Note
This is new and experimental, and many features are missing.
fit_left_censoring(durations, event_observed=None, timeline=None, entry=None, label=None, alpha=None, ci_labels=None, weights=None)¶
Fit the model to a left-censored dataset
median_survival_time_
Return the unique time point, t, such that S(t) = 0.5. This is the “half-life” of the population, and a robust summary statistic for the population, if it exists.
plot(**kwargs)¶
Plots a pretty figure of the model
Matplotlib plot arguments can be passed in inside the kwargs.
plot_cumulative_density(**kwargs)¶
Plots a pretty figure of the cumulative density function.
Matplotlib plot arguments can be passed in inside the kwargs.
predict(times: Union[Iterable[float], float], interpolate=False) → pandas.core.series.Series¶
Predict the survival function at certain points in time. Uses a linear interpolation if points in time are not in the index.
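A short usage sketch building on the fit shown in the class docstring above:

from lifelines import KaplanMeierFitter
from lifelines.datasets import load_waltons

waltons = load_waltons()
kmf = KaplanMeierFitter(label="waltons_data")
kmf.fit(waltons["T"], waltons["E"])

print(kmf.median_survival_time_)                      # "half-life" of the population
print(kmf.predict(30))                                # survival estimate at t=30
print(kmf.cumulative_density_at_times([10, 30, 60]))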
How Matic Works?
Matic Network is a blockchain application platform that provides hybrid Proof-of-Stake and Plasma-enabled sidechains.
Matic has a three-layer architecture:
- Staking and Plasma smart contracts on Ethereum
- Heimdall (Proof of Stake layer)
- Bor (Block producer layer)
The below image will help you understand how the core components interact with each other.
New-OfflineAddressBook
Use the New-OfflineAddressBook cmdlet to create offline address books (OABs).
New-OfflineAddressBook [-Name] <String> -AddressLists <AddressBookBaseIdParameter[]> [-Confirm] [-DiffRetentionPeriod <Unlimited>] [-DomainController <Fqdn>] [-GeneratingMailbox <MailboxIdParameter>] [-GlobalWebDistributionEnabled <Boolean>] [-IsDefault <Boolean>] [-PublicFolderDatabase <DatabaseIdParameter>] [-PublicFolderDistributionEnabled <Boolean>] [-Schedule <Schedule>] [-Server <ServerIdParameter>] [-ShadowMailboxDistributionEnabled <Boolean>] [-SkipPublicFolderInitialization] [-Versions <MultiValuedProperty>] [-VirtualDirectories <VirtualDirectory
$a = Get-AddressList | Where {$_.Name -Like "*AgencyB*"}; New-OfflineAddressBook -Name "OAB_AgencyB" -Server myserver.contoso.com -AddressLists $a -Schedule "Mon.01:00-Mon.02:00, Wed.01:00-Wed.02:00"
In Exchange Server 2010 and 2013, this example uses the Get-AddressList cmdlet to find all address lists whose names contain AgencyB, and then creates the OAB named OAB_AgencyB for those address lists on the server myserver.contoso.com, with generation scheduled for 01:00 to 02:00 on Mondays and Wednesdays.
Example 2
New-OfflineAddressBook -Name "Contoso Executives OAB" -AddressLists "Default Global Address List","Contoso Executives Address List" -GlobalWebDistributionEnabled $true
This example creates a new OAB named Contoso Executives OAB with the following properties:
Address lists included in the OAB: Default Global Address List and Contoso Executives Address List
All OAB virtual directories in the organization can accept requests to download the OAB.
The organization mailbox that's responsible for generating the OAB is SystemMailbox{bb558c35-97f1-4cb9-8ff7-d53741dc928c} (we didn't use the GeneratingMailbox parameter to specify a different organization mailbox).
The OAB isn't used by mailboxes and mailbox databases that don't have an OAB specified (we didn't use the IsDefault parameter with the value $true).
Example 3
New-OfflineAddressBook -Name "New OAB" -AddressLists "\Default Global Address List" -Server SERVER01 -VirtualDirectories "SERVER01\OAB (Default Web Site)"
In Exchange Server 2010, this example creates the OAB New OAB that uses Web-based distribution for Microsoft Office Outlook 2007 or later clients on SERVER01 by using the default virtual directory.
Example 4
New-OfflineAddressBook -Name "Legacy OAB" -AddressLists "\Default Global Address List" -Server SERVER01 -PublicFolderDatabase "PFDatabase" -PublicFolderDistributionEnabled $true -Versions Version1,Version2
In Exchange Server 2010, this example creates the OAB Legacy OAB that uses public folder distribution for Outlook 2003 Service Pack 1 (SP1) and Outlook 98 Service Pack 2 (SP2) clients on SERVER01.
If you configure OABs to use public folder distribution, but your organization doesn't have any public folder infrastructure, an error will be returned. For more information, see Managing Public Folders.
The Name parameter specifies the unique name of the OAB. The maximum length is 64 characters. If the value contains spaces, enclose the value in quotation marks.
This parameter is available or functional only in Exchange Server 2010.
The PublicFolderDatabase parameter specifies the public folder database that's used to distribute the OAB. You can use any value that uniquely identifies the database. For example:
Name
Distinguished name (DN)
GUID
To use this parameter, the PublicFolderDistributionEnabled parameter must be set to $true.
This parameter is available or functional only in Exchange Server 2010.
The PublicFolderDistributionEnabled parameter specifies whether the OAB is distributed via public folders. If the value of the PublicFolderDistributionEnabled parameter is $true, the OAB is distributed via public folders.
This parameter is available or functional only in Exchange Server 2010.
This parameter is available or functional only in Exchange Server 2010.
The Server parameter specifies the Exchange server where you want to run this command. You can use any value that uniquely identifies the server. For example:
Name
FQDN
Distinguished name (DN)
ExchangeLegacyDN
If you don't use this parameter, the command is run on the local server. available or functional only in Exchange Server 2010.
The SkipPublicFolderInitialization parameter specifies whether to skip the immediate creation of the OAB public folders if you're creating an OAB that uses public folder distribution. The OAB isn't available for download until the next site folder maintenance cycle has completed. You don't have to specify a value with the SkipPublicFolderInitialization parameter. Omitting this parameter may cause the task to pause while it contacts the responsible public folder server to create the necessary public folders. If the server is presently unreachable, or is otherwise costly to contact, the pause could be significant.
This parameter is available or functional only in Exchange Server 2010.
The Versions parameter specifies what version of OAB to generate. The allowed values are:
Version1
Version2
Version3
Version4
The VirtualDirectories parameter specifies the OAB virtual directories that accept requests to download the OAB, for example, Mailbox01\OAB (Default Web Site),Mailbox01\OAB (Exchange Back End).
To use this parameter, the value of the GlobalWebDistributionEnabled parameter must be $false.
In Exchange 2013 CU7 or later, we recommend that you use the GlobalWebDistributionEnabled parameter instead of the VirtualDirectories parameter.
Matrix that transforms from world to camera space.
This matrix is often referred to as "view matrix" in graphics literature.
Use this to calculate the Camera space position of GameObjects or to provide a custom Camera's location that is not based on the transform.
// Offsets camera's rendering from the transform's position.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Vector3 offset = new Vector3(0, 1, 0);
    Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
    }

    void LateUpdate()
    {
        Vector3 camoffset = new Vector3(-offset.x, -offset.y, offset.z);
        Matrix4x4 m = Matrix4x4.TRS(camoffset, Quaternion.identity, new Vector3(1, 1, -1));
        cam.worldToCameraMatrix = m * transform.worldToLocalMatrix;
    }
}
See Also: Matrix4x4.LookAt, CommandBuffer.SetViewMatrix. | https://docs.unity3d.com/2017.4/Documentation/ScriptReference/Camera-worldToCameraMatrix.html | 2020-05-25T12:24:29 | CC-MAIN-2020-24 | 1590347388427.15 | [] | docs.unity3d.com |
Logo Change:
For changing the logo, navigate to the Appearance > Customizer. Go to the Site Identity tab. Here you can upload your Logo.
For best result use resized logo image, Suggested size is 145×53 pixel.
This theme support another header type called transparent header. For transparent header you can use another logo image.
For uploading the transparent header logo go to the Theme Option > Header Settings. Here you can upload the transparent header logo.. | https://docs.xoothemes.com/docs/rising/logo-and-site-information-change/ | 2020-05-25T10:16:23 | CC-MAIN-2020-24 | 1590347388427.15 | [] | docs.xoothemes.com |
Using the TeamPulse TFS Process Template
Successful synchronization in TeamPulse depends on work items (stories, bugs, issues, risks.
This article will walk you through installing the TeamPulse project template on your TFS server.
When should you use the TeamPulse Work Item Template?
The TeamPulse v1.0 TFS process template should be used if both of the following conditions are met.
- You have an existing TeamPulse project that was created with the default “TeamPulse” project template (see screenshot above).
- You want to synchronize with TFS and you do not have an existing TFS project.
How to install and use it
Installing and using the TeamPulse v1.0 TFS process template involves two main steps - uploading the process template to your project collection, and creating a new TFS project using this template.
Note: You must be a project collection administrator to upload process templates.
- Download template the TeamPulse v1.0 TFS process template. >> TeamPulse v1.0
- Unzip the file TeamPulse v1.0.zip to a folder on your computer.
- Ensure Microsoft Visual Studio Team Explorer 2012 is installed.
- Run Visual Studio as an administrator.
- Using Team Explorer, connect to your project collection.
- From the menu, go to Team > Team Project Collection Settings > Process Template Manager…
- Click Upload then browse to the folder that was extracted from the TeamPulse v1.0 zip file.
- Click Select Folder to begin uploading the process template.
- A message will be displayed on success or failure.
- To create a new project using the new process template, go to Team Explorer, right click on the root project collection node and click New Team Project…
- In the process template drop down, select TeamPulse v1.0.
- Go through the rest of the project creation steps by clicking Next, or click Finish.
Template information
Created from the MSF for Agile Software Development v5.0 process template.
The states for user stories, tasks, and bugs have been significantly modified.
Includes the following work item types and states:
- User Story – Not Done, In Progress, Ready for Test, Done, Deferred, Deleted.
- Task – Not Done, In Progress, Done, Deleted.
- Bug – Not Done, In Progress, Ready for Test, Done, Deferred, Deleted.
- Feedback – Not Done, Requires Follow-up, Requires Analysis, Ready, In Progress, Done, Deleted.
- Issue – Active, Closed.
- Test Case – Ready, Closed, Design.
- Shared Step – Active, Closed.
For more information about synchronization in TeamPulse, please read the synchronization documentation. | http://docs.telerik.com/teampulse/knowledge-base/using-the-teampulse-tfs-process-template | 2016-07-23T11:08:00 | CC-MAIN-2016-30 | 1469257822172.7 | [array(['/teampulse/images/using-the-teampulse-tfs-process-template/tfs-kb-png.png',
'Create TeamPulse Project'], dtype=object) ] | docs.telerik.com |
Ink Tool Properties
When you select the Ink tool, its properties and options appears in the Tools Properties view.
Lasso and Marquee.
Show Inkable Lines
The Show Inkable Lines
option highlights all pencil lines (so no brush strokes) on the selected layer. Pencil line segments that are already inked with the selected swatch colour from the colour palette are also not highlighted.
Be Smart on Connecting Lines
With this
option selected, as you hover and move the cursor across intersecting pencil lines, the path that you create will get highlighted. When you click on your mouse or stylus the highlighted segments will get inked.
With this option disabled, all the intersecting segments that your cursor comes near will get highlighted and become part of the selection, even if they were not situated in the direction of the chosen path.
This option only works if the Ink tool is in Hover Mode and not Select Mode.
Select Mode
Use this
mode instead of the Hover Mode. In the Hover Mode, any potentially inkable pencil line will have its central vector line highlighted as the Ink tool’s cursor hovers over it. Use [Ctrl] (Windows) or [⌘] (Mac OS X) to toggle between the two modes.
Arrange Ink Lines
Use this
option to have every newly inked line be brought to the front. Disable this option to have every newly inked line be sent to the back. Use [Alt] to toggle between the two options.
Mitre
As you hover over two perpendicular or nearly perpendicular segments a highlighted path with a corner is created. Clicking on these highlighted segments inks both segments and makes them appear as a single stroke with a corner or bend.
Click on the Mitre
button to reveal four options from its drop down menu. Select either Round, Mitre, Bevel or As Is before creating corner selections to make a bend in the path either round, sharp, bevelled or gapped.
Tip Style
Use the Tip Style option to customize the edge of the Ink tool.
Related Topics | http://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/006_Colour/029_H2_Ink_Tool_Properties.html | 2017-08-16T17:23:05 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['../../../Resources/Images/HAR/Stage/Colours/HAR_InktoolProperties_001.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Drawing/HAR_BevelOptions_001.png',
None], dtype=object) ] | docs.toonboom.com |
In Harmony, you can use expressions to automate the calculation of effect values based on the values in another function. An expression is a mathematical formula that allows you to manipulate the value in the source function to create new values for the destination effect.
For example, if you take two characters and one is walking across the stage and the other is following the same path two steps behind. Without expressions, you would have to manually enter the values for the position of the peg so that it was one frame behind the original element. However, you can save time by building an expression that does this process for you. Then, if you change the position of the element in the original column, Harmony automatically updates the Expression columns linked to it.
Related Topics | http://docs.toonboom.com/help/harmony/Content/HAR/Stage/016_Animation_Paths/074_H1_Expression_Columns_.html | 2017-08-16T17:25:25 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.toonboom.com |
Once all your pieces are completed, it is time to attach them to each other, the same as you would do with the brass fasteners. To do so, you will connect your layers one to another in the Timeline view.
This process is divided in the following steps:
Once you have finished drawing all your pieces, you may find that your drawings need to be reordered. You can place these pieces in the correct order before attaching your pieces to each other.
To reorder your layers:
When you animate, you will often want the forearm and hand to follow the upper arm when you select and move it. To do this, you must attach the forearm to the upper arm and the hand to the forearm.
To attach a layer to another one, drag the layers one onto the other in the Timeline view.
To attach a layer to another one:
Once the layer is attached to another one, it is pushed to the right.
A good element to add to your puppet is a Master Peg. This is a trajectory on which you attach your puppet to make it travel through your scene. t is also used when you want to scale the entire character up or down without doing it on each individual piece. For example, you could animate your puppet walking on the spot then use the Master Peg to get it to move from left to right.
To add a Master Peg:
All the layers are connected to the peg and moved slightly to the right.
In order for your pieces to rotate properly, it is important to set the pivots at the right location. The pivot is the point from where the body part will rotate, they generally correspond to an articulation. To know where to position a pivot point, think of your own body. If you are moving an arm, notice that your own arm rotates from the shoulder. So, the pivot point for the puppet's arm must be located at the shoulder. The pivot point for the forearm will be the elbow, etc.
You can apply this technique to anything you want to animate that contains a joint or pivot. For example, if you are animating a spider, study a real spider or a film of a spider moving to determine where you should place the pivot points correctly in your animation.
To position the pivots:
Do not forget to do this for your master peg. Positioning the pivot between the feet is often a good option. | http://docs.toonboom.com/help/toon-boom-studio/Content/TBS/User_Guide/009_Cut-out/004_H1_Attaching.html | 2017-08-16T17:26:45 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.toonboom.com |
(my choice):
$ cd aiohttp $ mkvirtualenv --python=`which python3` aiohttp
There are other tools like pyvenv but you know the rule of thumb now: create a python3 virtual environment and activate it.
After that please install libraries required for development:
$ pip install -r requirements-dev.txt
We also recommend to install ipdb but it’s on your own:
$ pip install ipdb
Note
If you plan to use
ipdb within the test suite, execute:
$ py.test tests -s -p no:timeout command to run the tests with disabled timeout guard and output capturing.
Congratulations, you are ready to run the test suite
Run aiohttp test suite¶
After all the preconditions are met you can run tests typing the next command:
$ make test.
The End¶
After finishing all steps make a GitHub Pull Request, thanks. | http://aiohttp.readthedocs.io/en/stable/contributing.html | 2017-08-16T17:10:09 | CC-MAIN-2017-34 | 1502886102309.55 | [] | aiohttp.readthedocs.io |
. This is the scenario discussed in this topic.
Web-based single sign-on (SSO) to the AWS Management Console from your organization. Users can sign in to a portal in your organization hosted by a SAML 2.0–compatible IdP, select an option to go to AWS, and be redirected to the console without having to provide additional sign-in information. In addition to being able to use a third-party SAML IdP to establish SSO access to the console, you can alternatively create a custom IdP to enable console access for your external users. For more information about building a custom IdP, see Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker).
Using SAML-Based Federation for API following process is used:
A user in your organization uses a client app to request authentication from your organization's IdP.
The IdP authenticates the user against your organization's identity store.
The IdP constructs a SAML assertion with information about the user and sends the assertion to the client app.
The client app calls the AWS STS
AssumeRoleWithSAMLAPI, passing the ARN of the SAML provider, the ARN of the role to assume, and the SAML assertion from IdP..
To configure your organization's IdP and AWS to trust each other.
If your IdP enables SSO to the AWS console, then you can configure the maximum duration of the console sessions. For more information, see Enabling SAML 2.0 Federated Users to Access the AWS Management Console.
Note
The AWS implementation of SAML 2.0 federation does not support encrypted SAML assertions between the identity provider and AWS. However, the traffic between the customer's systems and AWS is transmitted over an encrypted (TLS) channel.:
Copy
{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Principal": {"AWS": "arn:aws:iam:. A hash value based on the concatenation of the
Issuerresponse value (
saml:iss) and a string with the
AWSaccount ID and the friendly name (the last part of the ARN) of the SAML provider in IAM. The concatenation of the account ID and friendly name of the SAML provider is available to IAM policies as the key
saml:doc. The account ID and provider name must be separated by a '/' as in "123456789012/provider_name". For more information, see the
saml:dockey at Available Keys for SAML-Based Federation.
The combination of
NameQualifierand
Subjectcan be used to uniquely identify a federated ( "" + "123456789012" + "/MySAMLIdP" ) )
For more information about the policy keys that are available for SAML-based federation, see Available Keys for SAML-Based Federation.,
transient, or the full
FormatURI from the
Subjectand
NameIDelements used in your SAML assertion. A value of
persistentindicates that the value in
saml:subis the same for a user across all sessions. If the value is
transient, the user has a different
saml:subvalue for each session. For information about the
NameIDelement's
Formatattribute, see Configuring SAML Assertions for the Authentication Response..
Copy
{ . | http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html | 2017-08-16T17:33:51 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['images/saml-based-federation.diagram.png',
'Getting temporary security credentials based on a SAML assertion'],
dtype=object) ] | docs.aws.amazon.com |
August 10, 2016
This release is for the Private Cloud Appliance only.
Application
New features
- FlowView support for v35 appliance model—AppNeta now offers FlowView product in a KVM virtualized environment.
- Embeddable charts—We're busting PathView out of its cage! Grab the code for loss, latency, jitter, RTT, or MOS charts and embed it on any webpage just like you would embed a YouTube video. Embeddable charts means you construct a central dashboard outside of our service to display data from multiple monitoring solutions. One thing to consider is that viewers of the charts should be able to access AWS cloud.
- Branded contact info—The support email address on the login page can now be customized as part of system branding.
- Notification email enhancement—Alert profile violations are now highlighted in red so that they're easier to visually process.
- Branded Swagger—Custom branding has been extended to the web service API.
Resolved issues
8.x Appliances Only
You have the features and fixes in this release when your appliance version is 8.4.7.x.
New features
This is an appliance-only upgrade in which typically only fixes are introduced.
Resolved issues
9.x Appliances Only
You have the features and fixes in this release when your appliance version is 9.4.1.x.
New features
- New web UI—AppNeta now offers new web-based UI called 'web admin' to manage your appliance. The following tasks can be performed from 'web admin': changing hostname, timezone, proxy settings, password and AppNeta connection. Other tasks will be added in subsequent releases. To access 'web admin', login at https://<hostname> with the admin credentials. | https://docs.appneta.com/release-notes/2016-08-10-pca.html | 2017-08-16T17:12:58 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.appneta.com |
AWS Flow (Ruby) Layer
Note
This layer is available only for Linux-based stacks.
An AWS Flow (Ruby) layer is an AWS OpsWorks Stacks layer that provides a blueprint for instances that host Amazon SWF activity and workflow workers. The workers are implemented by using the AWS Flow Framework for Ruby, which is a programming framework that simplifies the process of implementing a distributed asynchronous application while providing all the benefits of Amazon SWF. It is ideal for implementing applications to address a broad range of scenarios, including business processes, media encoding, long-running tasks, and background processing.
The AWS Flow (Ruby) layer includes the following configuration settings.
- RubyGems version
The framework's Gem version.
- Bundler version
-
- EC2 Instance profile
A user-defined Amazon EC2 instance profile to be used by the layer's instances. This profile must grant permissions for applications running on the layer's instances to access Amazon SWF.
If your account does not have an appropriate profile, you can select New profile with SWF access to have AWS OpsWorks Stacks update the profile for or you can update it yourself by using the IAM console. You can then use the updated profile for all subsequent AWS Flow layers. The following is a brief description of how to create the profile by using the IAM console. For more information, see Using IAM to Manage Access to Amazon SWF Resources.
Creating a profile for AWS Flow (Ruby) instances
Open the IAM console at.
Click Policies in the navigation pane and click Create Policy to create a new customer-managed policy.
Click Select next to Policy Generator and then specify the following policy generator settings:
Effect – Allow.
AWS Service – Amazon Simple Workflow Service.
Actions – All Actions (*).
Amazon Resource Name (ARN) – An ARN that specifies which Amazon SWF domains the workers can access. Type
*to provide access to all domains.
When you are finished, click Add Statement, Next Step, and then Create Policy.
Click Roles in the navigation pane and click Create New Role.
Specify the role name and click Next Step. You cannot change the name after the role has been created.
Select AWS Service Roles and then select Amazon EC2.
Click Customer Managed Policies from the Policy Type list and select the policy that you created earlier.
Specify this profile when you create an AWS Flow (Ruby) layer in AWS OpsWorks Stacks.
Note
For more information on how to use the AWS Flow (Ruby) layer, including a detailed walkthrough that describes how to deploy a sample application, see Tutorial: Hello AWS OpsWorks!. | http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-awsflow.html | 2017-08-16T17:34:25 | CC-MAIN-2017-34 | 1502886102309.55 | [] | docs.aws.amazon.com |
Structure of the Spartan Base Theme.
- css folder contains files for all cascading style sheets. Spartan has the ability to determine the width of the browser and responds accordingly. Styles are inherited from one layer to the next.
- global.css = global – Bottommost layer and if not overwritten, it will have all of its styles inherited by all of the other CSS layers.
- spartan-alpha-default.css = default
- spartan-alpha-default-fluid.css = narrow
- spartan-alpha-default-normal.css = normal
- img folder contains image elements used in the theme. It is further divided into: 'admin', 'banners', 'buttons', and 'icons' sub-folders.
- preprocess folder allows you to easily store and organize your preprocess functions
- process folder allows you to easily store and organize your process functions
- templates contains tpl files for the various nodes and regions on your site
- logo.png and logo-sm.png are the logo image that gets placed in the header and footer of the theme
- openomega.info declares basic information about the theme (it is discussed in more detail on the "Your First OpenPublic Subtheme". | http://docs.openpublicapp.com/documentation/structure-spartan-base-theme | 2017-08-16T17:14:40 | CC-MAIN-2017-34 | 1502886102309.55 | [array(['http://products.phase2technology.com/sites/default/files/styles/panopoly_image_original/public/spartan-files.png?itok=S04rElMG',
None], dtype=object) ] | docs.openpublicapp.com |
An Act to amend 69.21 (1) (a) 1. and 69.21 (1) (b) 3. of the statutes; Relating to: copies of certain vital records. (FE)
Amendment Histories
2015 Wisconsin Act 157 (PDF: )
2015 Wisconsin Act 157: LC Act Memo
Bill Text (PDF: )
LC Amendment Memo
Fiscal Estimates
AB633 ROCP for Committee on State Affairs and Government Operations On 2/3/2016 (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2015 Senate Bill 507 - Hold (Available for Scheduling) | http://docs.legis.wisconsin.gov/2015/proposals/ab633 | 2018-10-15T14:36:05 | CC-MAIN-2018-43 | 1539583509326.21 | [] | docs.legis.wisconsin.gov |
Link Building Outreach: How To Run Successful Outreach Campaigns That Get Results
Link building in 2018
Some websites still rely heavily on directory links, PBNs, and paid links. Do these methods work? If done properly, then sometimes. Are these methods risky? Yes.
But come on, we all know that these methods of link building are frowned upon. When it comes to PBNs Nathan Gotch's summary is pretty accurate in my opinion:
That is because these sites will be creating content and reaching out to suitable bloggers for relevant links.
Why is outreach so vital for link building?
Editorial links gained through outreach have always been safe from regular Google purges, and there is no evidence to suggest that this will change.
This raises the question "why doesn't everyone use outreach?"
Well to answer that question – it is quite hard! The trick is to find websites that will find your content interesting and promote your content to that website's owner or editor.
Outreach brick walls
#1: Small pool of websites
For example, let's say your niche was a very specific type of flooring. Simply widen your niche to include all types of flooring or to include the even broader niche of home improvements.
You should try and be as niche and relevant as possible. However, if you are not getting much traction being extremely niche then don't be scared to widen your niche.
#2: Time investment
I have run outreach campaigns where I have sent over 50 personalized emails, received 6 replies and ended up with 2 links as a result. All in all, this campaign took me just under 15 hours but the two high quality links I ended up with were editorial mentions from high DA sites in an incredibly competitive market that we now dominate online.
#3: Rejection
Take my last example, I sent 50 emails and received 6 replies. Was it worth it for 2 high quality links? Hell yeah!
#4: No suitable content
I learned the hard way, through burning bridges like this by being too abrupt, that link building is a process that is vital to SEO success and not a task that you carry out in bursts on a Friday afternoon.
My advice would be to plan a gap in the prospect website's content and reach out once you are fully armed with your awesome piece of content.
Why outreach fails
Too many webmasters are spammed with 'hey check out this content that I am sure you will love' emails. These emails get deleted as there is no effort or real content behind them. So the outreach ultimately fails because your emails suck.
The biggest reason your emails suck is most likely due to either:
Hello,
I had a look at your post [insert post name]
[insert post url]
We have a simiar article that goes into a lot more depth, check it out here:
[insert post url]
Please link to this if you think it is valuable.
Thanks,
This example had very little effort and the sender is most definitely focused on quantity rather than quality.
How to research a link opportunity properly?
If you are a website owner and someone reaches out to you for a link you don't want the hassle of finding a place for the link in your content. That is why a content hook is so important – find information that will help you nudge the website owner towards where your link will fit in.
How to write the perfect outreach email?
"Hey [Name], You might remember we spoke about [subject] a few weeks ago…."
Here is an example of an email I recently sent out that got me a solid link:
Check out an interview, where Ann Smarty, a former Editor-in-chief at SEJ, tells about the main points editors of the popular blogs pay their attention to while reading the guest posting request and how to make your letter stand out of hundreds of other requests.
If you want to take your outreach emails to the next level, read this post by Neil Patel. In this article, he goes through 7 things that make him delete outreach emails he receives. Avoid these at all cost! Also, if you fancy a bit of a laugh, read John Doherty's examples of bad outreach emails.
Important: Follow Up!
A lot of people use Boomerang to schedule a reply if the receiver does not respond. I instead change this to send me the email back if I have not received a response. This way I can craft a more personalized response.
To use Boomerang for Gmail, install the extension on Chrome and follow the steps below:
Final thoughts
Use this guide to make sure you are getting the most out of your outreach campaigns and please give me a shout in the comments with any questions :)
Recommended posts
How To Find And Remove Bad Backlinks
Ankit Mishra
Nice Article on Outreach!! i would really recommend to others ..
The Product 420
thanks for sharing. Nice topic! | http://docs.serpstat.com/blog/link-building-outreach-how-to-run-successful-outreach-campaigns-that-get-results/ | 2018-10-15T16:10:05 | CC-MAIN-2018-43 | 1539583509326.21 | [array(['https://static.tildacdn.com/tild6162-6430-4931-a431-396162336466/-/empty/tild6230-6265-4738-b.jpg',
'Link Building Outreach: How To Run Successful Outreach Campaigns That Get Results'],
dtype=object)
array(['https://static.tildacdn.com/tild6331-3465-4462-b230-393231386530/-/empty/gr2_1.jpg',
None], dtype=object)
array(['https://static.tildacdn.com/tild3835-3030-4239-b533-333730613262/-/empty/image3.jpg',
None], dtype=object)
array(['https://static.tildacdn.com/tild3236-3938-4733-b331-646133336636/-/empty/image2.jpg',
None], dtype=object)
array(['https://static.tildacdn.com/tild3337-3865-4365-a463-393632616334/-/empty/image5.png',
None], dtype=object)
array(['https://static.tildacdn.com/tild3534-3737-4434-b566-323832376236/-/empty/image4.png',
None], dtype=object) ] | docs.serpstat.com |
Object
Object Components, a subset of all Components, are used to create and render 3D scenes within TouchDesigner (also called Objects).
There are 11 object component types, with Geometry, Camera, Light and Null being most common:
Shared Mem Out COMP, Shared Mem In COMP
The component types that are used to render 3D scenes: contain the 3D shapes to render, plus , , Ambient Light, Null, Bone, Handle and other component types.
The 3D data held in SOPs and passed for rendering by the . | https://docs.derivative.ca/Object | 2018-10-15T16:14:31 | CC-MAIN-2018-43 | 1539583509326.21 | [] | docs.derivative.ca |
To turn on call tracing and enable logging to a file, use the Registry Editor (regedit or regedt32) to enter a fully qualified file name in the registry entry:
HKEY_LOCAL_MACHINE\SOFTWARE\OpenLink Software\OpenLink OLE DB Provider\DebugFile
To turn off call tracing simply leave this entry blank. A separate log file is opened for each process which uses the provider. Each file opened is named using the file base name specified in the DebugFile entry above with a three digit process ID suffix. | http://docs.openlinksw.com/uda/mt/mt_oldedbdebug/ | 2017-03-23T08:17:38 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.openlinksw.com |
VoipBox is one of the core VoipSwitch components, responsible for processing multimedia files and streams. In every case, when multimedia content flows from server, VoipBox acts as a streamer or transcoder, besides exceptions when media over web interface (such as VUP or VUC), can be processed as they are (without transcoding).
Files structure:
The engine, configuration and media files are located in its installation directory - usually: C:\Program Files (x86)\VoipBox*, unless other location was selected during installation. VoipBox installation path needs to be included in configuration files of other VoipSwitch modules, as they communicate, write and read files directly to VoipBox environment. In case of deployment distributed to several servers its recommended to install VoipBox on the machine that handles web services due to the need of mentioned direct communication, otherwise required access to VoipBox directories must be supplied over network resources.
Folders:
- \application-data\language - Language media files evoked by IVR scenarios ie |You have|five|dollars|. Stored in WAVE format.
- \application-data\scenario - Files containing IVR scenario instructions, stored in XML format. In most cases factory set of scenarios is sufficient for carriers to support their desired IVR services, but its possible to customize existing or write new scenario in case of needs that can not be covered by factory set.
- \call_recordings - Customers calls recorded by Call Recording service. Default format AMR, configurable in voipbox_config.xml/<listener type="call-recorder">
- \faxes - Fax files received by FaxBox service and sent over web (VUP, VUC) interface or Email2Fax service. Files are being written and stored by VoipBox in .FAX format and converted with help of ImageMagick to other common graphic formats required by web modules.
- \grammar - Grammar libraries of languages supported by VoipBox .
- \greetings - Enduser media files used as custom recordings for Voicemail, Music On Hold, Call Waiting services. Managed usually from My Profile level of user portal. Stored by default in Mp3 format.
- \pbx_waves - PBX enduser media files used in PBX scenarios. Stored by deault in Mp3 format.
- \logs - VoipBox logs folder.
- \temp - temp directory used for media conversions.
Setup:
VSM: Settings : System : VoipBox
- Files storage - path to the folder where relevant VoipBox files are stored. Such folder should contain following sub-folders: scenarios, languages, messages, greetings, call_recordings, faxes, pbx_waves (required by a PBX module)
- Maximum number length - maximum length of a number processed by the VoipBox scenario, either a PIN or a dialed number ("0" means unlimited
- Non activity timeout - the time after which either the announcement will be repeated, passed number will be processed (if not followed by the finish key) or a call to VoipBox scenario will be disconnected. Should be set to non-zero value
- Finish key - a key with which a person calling to the VoipBox scenario should confirm the typed number (by default set as a pound key "#"), eg. after registering with a PIN or dialing a number via the scenario, if the Finish key is set to "#", dialed number should look like "12345#"
- Redial string - a string of characters indicating the last dialed number should be redialed again - if the calling person is connected to the VoipBox scenario (by default set as "*#")
- End call string - a string of characters using which a call made via the VoipBox scenario can be disconnected without disconnecting the calling party from the scenario so the next number can be dialed (by default set as "##")
- Non activity retries - a number determining how many times the announcement should be repeated if there is no action from the calling party end
- Wrong pin retries - number of retries available if the incorrect PIN has been passed
- Time multiplier - changes the time announced to the customer that can be utilized for a call (dependent on the account state and voice rate for the called number) by multiplying the original value Time multiplier allows defining different time multipliers for different client LOTs.
- Time addition - similar to the Time multiplier but adds a number of seconds to the announced time (it can't be assigned to different LOTs like a previous feature)
- Round time to minutes - rounds the announced time to minutes (doesn't announce seconds)
- Silence duration - time in seconds after which the calling person will hear the announcement
- Use client's account to recharge - enables the feature of recharging customer's account using other customer account, like IVR or Retail, using one of VoipBox Recharge scenarios. In such case IVR/Retail client account that is a source of the recharge is treated as a recharge PIN
- Ani storing exceptions- disables customers from adding ANI numbers specified in the field. Subsequent number should be separated by "|" sign, for example: anonymous|none|442081369011;
Remaining configuration settings are contained in voipbox_config.xml file, stored in main VoipBox directory. The file defines listeners configuration, audio codecs parameters and database connection details. As VoipBox reads configuration only during startup, any configuration change should be followed by restart of the module. Its important to copy final state of voipbox_config.xml to voipbox_routes stored in main VoipSwitch installation directory and remember to reflect any further change to this location.
Listeners:
- voipbox - Port where VoipSwitch sends requests to evoke scenarios, used also by web modules to trigger media conversions.
- audio-rtp - Port used for media streaming VoipBox<->VoipSwitch while executing scenarios.
- audio-mergerOFF - Port handling “audio events”
- audiotranscoderOFF - Port handling transcoding for certain audio codec. One port is dedicated only for one type of codec. For several codecs the port must be defined multiple times on following ports (as on example below). Transcoding service consumes significant volume of CPU resources, therefore in case of need to transcode many concurrent calls, its recommended to run several VoipBox instances on separate / dedicated servers where each instance is dedicated only for selected codec. In such case it's important to reflect the change of listening ports configuration in voipbox_routes/voipbox_config.xml stored in main VoipSwitch installation directory.
Example where one VoipSwitch instance used two VoipBox instances for transcoding:
- codecs - defines payload definition for codecs supported by VoipBox.
- database - contains database connection details.
Scenarios
The main VoipBox purpose is to handle IVR services defined by scenarios. Each scenario is a script containing instruction which can be described as conditional or unconditional sequence of events. Most of scenarios required intended configuration, starting from dedicated DID number in routing plan. Other, system scenarios are triggered automatically under certain conditions. In many cases, for a particular type of IVR service you can choose between several scenarios, where the main common thread bears various options as described below.
IVR services and dedicated scenario groups
Calling cards, Callback.
Scenarios containing element "PIN" as a part of their name are dedicated for calling cards services requiring customers authorization by providing their PIN number.
Scenarios containing element "Ask for number" without element "PIN" as a part of their name are dedicated for callback services in case when customers are recognized by their CLI number
Other scenarios containing above, common elements as a part of their names allow to enable additional options for certain service
Scenario names consisted of different elements are joined by + character. Usually their names are self described.
Possible options:
- PIN - basing scenario with authorization. Customer is requested for authorization by PIN number and then for destination number.
Additional options for PIN scenario:
- Account - provides account state
- Time - provider maximal time of connection for destination number basing on user account state and tariff rate.
- Register - saves customer CLI to the system, on further calls customer will be recognized by CLI without a need to provide PIN number.
- Only once - by default, after call is disconnected by remote party, customer is requested to provide destination number for next call. With option "only once" disconnection by remote party will finish IVR scenario and disconnect originating side.
- Select language - provides ability to select language for IVR prompts.
- PIN ANI_only - optional scenario that authorizes customers only by CLI number. Despite "PIN" element as a part if its name, PIN authorization is not allowed.
- no ANI - omits authorization by CLI number, always requires PIN number
- Recharge - provides ability to recharge account
- One stage - specific scenario for services provided in cooperation with source carrier. In case of "one stage" based services, destination number is being indicated directly by source carrier, therefore "ask for number" element is ommited in its basic sequence.
Sip clients scenarios:
- Account state - provides account state
- Recharge - provides ability to recharge by PIN code.
- Recharge + account state provides ability to recharge by PIN code and informs about account state
- Voicemail Management - Provides ability to check voicemail messages and to manage voicemail greeting (after greeting record, there is need to set recorded welcome message in Find Me rule for use it)
- Voicemail welcome greeting - provides ability to record voicemail greeting (there is need to set recorded welcome message in Find Me rule for use it)
- Fax send - scenario dedicated for web2fax and email2fax service
- Say verification code - dedicated for RCS account activation service
- Time + call - provides maximal possible time of connection for destination number basing on account state and tariff rate
- PBX is scenario handling DID numbers for PBX customers. It's part of extended structure, controlled by VUC, therefore it shouldn't be configured manually. Basically number is being assigned to the scenario by system when it's purchased by customer over VUC. From technical point of view, after purchase, system inserts the number to tables: 'portal_clientdids' and 'dialingplan' and then destination point of the number is added to table 'pbx_dialingplan'. In case of any problem related to inbound routing on PBX numbers, these are three basic tables that should be checked for diagnostic purposes.
Simple IVR scenarios configuration
- Basic sip clients scenario configuration such as Voicemail Management, Recharge or Account state, requires 2 simple steps
-add local (short) number to routing plan
-include same number in customers tariff (usually with free rate
Detailed configuration for scenarios belonging to IVR services has been described on dedicated pages. | http://docs.voipswitch.com/display/doc/1.14+Voipbox | 2017-03-23T08:08:52 | CC-MAIN-2017-13 | 1490218186841.66 | [array(['/download/attachments/34803236/VoipBoxVSMconfig.png?version=1&modificationDate=1464244733000&api=v2',
None], dtype=object) ] | docs.voipswitch.com |
What You Need To Know
Before you create a campaign, it's a good idea to review the concepts in this chapter to understand how CiviCampaign can best help you manage your work, and consider the key questions in relation to your organisation's specific needs.
Key Concepts
Your organisation will likely have its own campaign strategies and processes. CiviCampaign is a tool that you can use in conjunction with your existing methods, to streamline and automate certain processes.
Campaign Goals and Revenue
Define and document the concrete goals of the campaign, and what you hope to raise in funds (if applicable), and record it in the campaign information. This will enable you to use reports to analyze the effectiveness of a campaign at its conclusion.
Planning Your Campaign Activities.
Key Questions
Answer these questions in the context of your organisation or a specific campaign:
- Who are your target audience, and how will you reach them? Remember that the audience for your campaign activities may not be the same as the audience you are trying to reach with the actual campaign itself. Understanding your target audience will help you to choose the most appropriate strategies and communication activities to achieve the goal(s) of your campaign.
- What activities and strategies, such as events and mailings, will be associated with this campaign?
- How will you be gathering data during the campaign (e.g. surveys, petitions, event registrations) and who will be responsible for entering the data into CiviCRM?
- What kind of reports will be useful for monitoring progress and evaluating the campaign at its conclusion?
CiviEngage and CiviCampaign
CiviEngage is a Drupal only feature that enhances CiviCampaign with more functionality for surveys and pre-configures your installation of CiviCRM with custom data sets, profiles and enhancements to reports. See the section Civic Engagement for more details about CiviEngage. | https://docs.civicrm.org/user/en/4.6/campaign/what-you-need-to-know/ | 2017-03-23T08:16:58 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.civicrm.org |
This is an iframe, to view it upgrade your browser or enable iframe display.
Prev
9.16.
Add/Remove Software
application to make desired changes.
Choose which package groups you want to install.
Figure 9.49. Package Group Selection
By default, the Fedora installation process loads a selection of software that is suitable for a system deployed as a basic server. Note that this installation does not include a graphical environment. To include a selection of software suitable for other roles, click the radio button that corresponds to one of the following options:.17, “Installing Packages”
.
To select a component, click on the checkbox beside it (refer to
Figure 9.49, “Package Group Selection”
).
To customize your package set further, select the
Customize now
option on the screen. Clicking
takes you to the
Package Group Selection
screen.
9.16 16 - i386
repository contains the complete collection of software that was released as Fedora 16, with the various pieces of software in their versions that were current at the time of release. If you are installing from the Fedora 16 16 - i386 - Updates
repository contains the complete collection of software that was released as Fedora 16, 16, refer to the
Fedora 16 Cluster Suite Overview
, available from
.
Enter the details of additional software repositories.
Figure 9.50..51. Select network interface
Select an interface from the drop-down menu.
Click
OK
.
Anaconda
activates the interface that you selected, then starts
NetworkManager
to allow you to configure the interface.
Configuring network connections.
Figure 9.52. Network Connections
For details of how to use
NetworkManager
, refer to
Section 9.5, “Setting the Hostname”
If you select
Add additional software repositories
, the
Edit repository
dialog appears. Provide a
Repository name
and the
Repository URL
for its location.
Fedora Software Mirrors
To find a Fedora software mirror near you, refer to
..
Backtracking Removes Repository Metadata
If you choose
Back
from the package selection screen, any extra repository data you may have entered is lost. This allows you to effectively cancel extra repositories. Currently there is no way to cancel only a single repository once entered.
Prev
9.15.3. Alternative Boot Loaders
Up
9.16.2. Customizing the Software Selection | https://docs.fedoraproject.org/en-US/Fedora/16/html/Installation_Guide/s1-pkgselection-x86.html | 2017-03-23T08:11:38 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.fedoraproject.org |
WordPress
Information about running WordPress on Webarchitects, Ecohost and Ecodissident servers.
We have a low-volume announcement email list for clients using WordPress.
Our newer Webarchitects servers, have an option of an automatic WordPress install when hosting accounts are created.
See also the other pages in the Category:WordPress on this wiki.
Contents
- 1 Changing the site URL
- 2 HTTPS
- 3 WordPress Multisites
- 4 WordPress Table Prefix
- 5 Piwik
- 6 Brute Force Attacks
Changing the site URL
When WordPress sites are automatically installed on the
host1.webarch.net or
host2.webarch.net servers they are set up on sub-domains based on usernames, for example. This is fine for development purposes but when a site is to be made live the main domain name for the site needs to be changed. There is an article on the WordPress site which documents various ways to do this, however we find that the easiest thing is if clients ask us to do it.
The method we use is a wp-cli search and replace, this updates serialized entries in the WordPress database, for example:
su - user -s /bin/bash cd sites/default wp search-replace "user.host2.webarch.net" "example.org"
There are other options for editing serialized entries in the database, which you can use without our help, listed on the WordPress documentation site.
HTTPS
If your WordPress site doesn't use HTTPS then your password is vulnerable to being compromised, especially if you login to your site from un-encrypted public WIFI hotspots.
Since 2014 Google have been ranking sites that use HTTPS higher and since 2015 free HTTPS certificates from Let's Encrypt have been available.
Starting in 2017 Google will be marking HTTP pages with password forms as non-secure and they plan to eventually:
label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS
Due to these factors it makes sense to use HTTPS for your WordPress site, all our new WordPress hosting comes with HTTPS enabled by default.
The simple way to set up a WordPress site so that unathenticated users (people who simply read the site without a login) get a HTTP version and authenticated users only use HTTPS is to add this variable to
wp-config.php:
// define('FORCE_SSL_ADMIN', true);
Note that this should come before wp-settings.php is required, but there are limitations to this approach:
Assuming the front end is using non-secure http protocol, this can result in mixed protocol usage. Further, any cookies returned by AJAX calls to URLs built using
admin_url('admin-ajax.php')will be secure and thus unavailable to other parts of the front end.
For this reason we strongly suggest you make your WordPress site HTTPS only (with a redirect for people accessing it using HTTP).
Once the HTTPS certificate has been installed (we need to do this, it can't be done by clients) the
~/.htaccess file can have the rules documented on the htaccess wiki page added to the start of it. The other step that needs to be done is to update the WordPress internal links, we find the easiest way to do this is using the wp-cli search-replace function, you need to ask us to undertake this task, for example:
su - user -s /bin/bash cd sites/default wp search-replace "" ""
WordPress Multisites
A WordPress multisite is one WordPress instance which hosts multiple seperate sites, if you would like us to host a multisite please get in touch and we would be happy to set this up for you. See the documentation on WordPress.org for more infomation.
WordPress Table Prefix
You can host multiple WordPress sites using one MySQL database if each site has a different table prefix. The advantage of this is that it can dramatically reduce your hosting costs, however it is not without drawbacks.
The primary problem with this approach is when one site is compromised (we see several sites compromised a year) either via an insecure plugin, or brute force attack on the main admin account (these are the most usual causes) — often one site being compromised results in all the others on the same hosting also being compromised and then rolling back to the version of files and database prior to the breach becomes an awful lot more complicated and therefore expensive. We would advise against this approach for these reasons unless you are running very secure sites (using HTTTP authentication for logins) with minimal, well maintained and updated plugins to mitigate the risk of compromise.
If you want to have a development copy of your WordPress site to do things like work on the theme and test plugins then you are also better off with a separate database as you will no doubt want to copy the live database to the development site more than once and it would be safer doing this with separate database.
Piwik
We have a Piwik server available for use by members of our co-operative, if you would like to use it please get in contact to ask for an account to be set up and then install the WP-Piwik plugin.
Brute Force Attacks
WordPress sites are vulnerable to brute force attacks on admin accounts via the
wp-login.php page and the XML-RPC interface, this is where botnets try multiple password combinations against the admin username (they are able to find this out) to try to gain access to sites in order to post spam to them. This is why it is important to make sure you use good passwords.
WP Disable XML-RPC Pingback
On our newest servers we have configured support for wp-disable-xml-rpc-pingback, this prevents the abuse of your site's XML-RPC by simply removing some methods used by attackers.
WP Stop XML-RPC Attack
On our newest servers we have configured support for wp-disable-xml-rpc-pingback, this disables all access to your
xmlrpc.php, except for JetPack and Automattic and checks with ARIN for Automattic's subnets and updates your
.htaccess file.
WP Fail2Ban
On our newest servers we have configured support for the WP-fail2ban plugin and we automatically install and configure this with each WordPress site install. The result of this is that if there are more than 5 failed login attempts on one site on the server then the remote IP address is banned from accessing any site on the server for 24 hours
If you believe an IP address has been blocked in error please contact us to unblock it.
There is a danger of false positives and malicious side effects of banning IP addresses, if for example someone or several people make more than 5 failed login attempts from the same IP address it will be banned, or if someone deliberately makes login attempts which fail in order to get a IP address banned, this is especially a danger with shared proxy servers, for example Tor exit nodes or if you set your site up to use CloudFlare.
The best way to mitigate the danger of false positives is to use HTTP Authentication to add an additional layer of security, a username and password you can share with other editors of the site, this isn't an option if your site allows anyone to create accounts to post content, it is only suitable when there is a small number of trusted editors, see the instructions for password protecting wp-login.php.
For servers which don't have WP-fail2ban support configured we suggest that you install a plugin to limit the rate at which these attacks can be run (though note that there is still the danger of false positives, as mentioned above, with these), for example:
Brute Force Login Protection
Brute Force Login Protection writes to your .htaccess file however there are reports of it failing when servers are under high load, this shouldn't be an issue with our servers and the developer is working on a solution, but create a backup of your
.htaccess file and revert to it if your site starts displaying server errors.
BruteProtect
BruteProtect is used to track every failed login attempt across all installed users of the plugin and it blocks that IP across the entire BruteProtect network.
All In One WP Security & Firewall
All In One WP Security & Firewall haso lots of options, some which have the potential to break your site, if in doubt only enable the brute force login attack prevention feature.
WordFence Security
WordFence Security has lots of features (don't install this if you want something simple) including two factor authentication with the Premium (paid for) version.
However there have been some issues with WordFence — it can create a
wp-content/uploads/.htaccess file containing the following:
#
This will cause internal server errors when files are accessed as we don't allow any
php_ options or
Options to be set by clients in HTAccess files for security reasons, furthermore we disable PHP code from running in the uploads directory at an Apache config file level for WordPress sites with the following directives, so the WordFence
.htaccess file is unnecessary:
<Directory /home/example/sites/default/wp-content/uploads/> Options None +SymLinksifOwnerMatch SetHandler None <Files *> SetHandler None </Files> <IfModule mod_php5.c> php_flag engine off </IfModule> AddType text/plain .php .phtml .cgi .pl </Directory>
If in doubt best not use the WordFence plugin, we configure sites to use HTTPS for logins and deploy wp-disable-xml-rpc-pingback, WordPress#WP_Stop_XML-RPC_Attackwp-stop-xmlrpc-attack]] and wp-fail2ban, and these mitigate the thread caused by brute force attacks. | https://docs.webarch.net/wiki/WordPress | 2017-03-23T08:08:18 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.webarch.net |
WooCommerce 2.1 Endpoints and the BuddyPress Profile
From version 2.1 on WooCommerce got rid of most pages and uses endpoints instead.
If you are interested in understanding WooCommerce endpoints, see here:
In earlier versions of WooCommerce we could add any sub page (now endpoints) as page to BuddyPress or everywhere else.
Before, every endpoint was a page in the WordPress system. Now with the new endpoints, we have one main page "my-account" for example and many endpoints. The url endings after the page slug 'my-account' are the endpoints.
How to integrate only one endpoint of my WooCommerce 'my-account' into BuddyPress.
By default, we have done all for you and moved all my account parts into their BuddyPress places.
Some users do not want to have their addresses synced with BuddyPress and only want to have this endpoint removed from the BuddyPress profile.
For now, the only solution is to use a function to add a filter.
See the video.
This is the example function from the video | http://docs.themekraft.com/article/321-woocommerce-2-1-endpoints-and-the-buddypress-profile | 2017-03-23T08:11:56 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.themekraft.com |
Octopus
If you have a question that is not asked and answered on these pages, please contact us at [email protected]
If the Octopus Gateway and the I/O Extension unit are on the same CANBus then they already share the GND, so you just need to connect it on one of the units.
The Octopus Gateway only accepts the Standard SIM Card type.
The Octopus Gateway has a 150,000-entry circular log, and each parameter measurement corresponds to one log entry. For example, a gateway logging 10 parameters every 15 minutes writes 10 entries per interval, so the buffer would cover roughly 150,000 / 10 = 15,000 intervals, i.e. about 156 days, before the oldest entries are overwritten.
You can use the CDT.exe tool to verify the status of the Octopus’ data push to Wattics. The CDT.exe tool is located in the same directory as the WatticsTool.exe:
Launch the CDT.exe Software Tool, tick ‘Serial Number’, enter your Octopus serial number (e.g. 230001AB) and click Connect.
Once connected, click on the System Messages tab and check the logs:
- LAN connection: if you see a message starting with [ETH] showing up every 30 seconds, it means that the connection to the Wattics platform is failing and the Octopus keeps trying to reconnect. If no such message pops up within a minute, the connection is established.
- GPRS connection: if you see messages with ATModem::Connect (ConnectionID: 0) and DISCONNECT: 0 | RX: 0 | TX 0 | LN: 227 showing up alternately every 30 seconds, it means that the connection fails, either because the SIM card's PIN code is still active or because the wrong APN credentials are used. Logs showing new connections may also appear should the network signal strength be low. A successful connection will only show updates on network signal strength, with no disconnection or reset logs.
Troubleshooting
1. If using a LAN connection, make sure that the IT team has configured their firewall to allow TCP communication from the Octopus to octopus.wattics.com (52.50.202.103) on port 4401. You can also double check that the Octopus network settings are configured correctly; the connectivity sketch after this list shows a quick way to test the firewall rule from another machine on the same network.
2. When using GPRS SIM cards, or when your firewall does not support the use of URLs, you must enter 52.50.202.103 as the Service Host Address in your Octopus project file. To check this, run the Wattics Tool and open your project, then go to Output > Services. Replace octopus.wattics.com with 52.50.202.103, save and redeploy your project. Double check your APN credentials, and disable your SIM card's 4-digit PIN code using any unlocked mobile phone.
3. To check if data is being received on Wattics’ end, you must go to the Breakdown tab in your Dashboard, click on your data point and check the control panel on the right hand side. If Jan 1970 is shown then it means that no data has been received yet (please wait for at least 10-15mn). The panel will otherwise update to today’s date when data starts being collected.
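A quick way to confirm that the firewall rule in point 1 is in place is to open a plain TCP connection to the service host from any machine on the same network segment as the Octopus. The short Python sketch below is only an illustration of that check (the host and port are the ones quoted above); it does not speak the Octopus protocol, it simply verifies that the TCP handshake succeeds.

```python
# Minimal TCP reachability check for the Wattics Octopus service host.
# It only verifies that an outbound connection on port 4401 is allowed;
# it does not push any data and does not speak the Octopus protocol.
import socket

HOST = "52.50.202.103"   # octopus.wattics.com, as quoted in point 1 above
PORT = 4401              # TCP port used by the Octopus data push

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as err:
        print(f"Connection failed: {err}")
        return False

if __name__ == "__main__":
    if can_reach(HOST, PORT):
        print("TCP connection established - port 4401 is open to the Wattics service.")
    else:
        print("TCP connection failed - check the firewall rules and network settings.")
```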
Please contact us at [email protected] if you have any question regarding this verification process.
First, a clarification: the Octopus and I/O Extension units can accept 4 types of pulses. Please refer to the correct wiring diagram below depending on your metering setup.
Dry Contact Pulses
Dry Contacts (with no power source):
1. Power the Octopus off
2. Connect the Octopus V+ line to the meter’s input line
3. Connect the meter’s pulse output line to the Octopus digital input line
3. Power up the Octopus
Dry Contacts (already with power supply on common):
The first check is to ensure that the 3rd party controller’s power supply is DC and does not exceed the maximum voltage of 24VDC. Once this is confirmed, follow the steps below:
1. Power Octopus and controller off
2. Connect the GND from Octopus Power Supply to the GND of the 3rd party controller’s power supply
3. Connect pulse output(s) from the 3rd party’s controller to Octopus digital inputs
4. Power up both units
The problem with not having a common GND is that the voltage at the 3rd party controller’s input can be higher than 24VDC.
Diode Pulses
Same connection as the dry contact pulses but with polarity. That means the 24VDC and the pulse unit need to be connected to the right terminal input/output of the pulse meter.
Transistor (PNP/NPN) Pulses
Pulses types need to be verified before the installation to avoid any problems.
Should pulses not be counted by the Octopus devices, you can follow the following steps to troubleshoot the issue:
1 – Use a multimeter to verify that:
- Pulses are generated by the pulse-emitting device and visible at the multimeter. No pulses means an issue with the pulse-emitting device and not with the Octopus.
- The output voltage line of the pulse-emitting device goes back to near 0V between pulses. Any base voltage over 1V can possibly mean that the 0-24V transitions are not captured by the Octopus, requiring remote assistance to update the pulse threshold on the Octopus.
2 – Short the V+ terminal outputs of the Octopus devices with any of its terminal inputs to simulate pulses and check if these are counted by the Octopus. If they are not counted then it means that there is a misconfiguration with your Octopus software project, please check you have used the correct driver and the correct input number.
API
If you have a question that is not asked and answered on these pages, please contact us at [email protected]
How can I upload data via the Wattics REST API
This page contains the JSON format specifications as well as the rules that your software must follow when pushing data to Wattics (e.g. units, timestamp format, authentication etc). Please make sure to read everything carefully first to ensure we get it right the first time. Any clarification needed by your software team, please email us at [email protected].
You must first read our API documentation:
How can I upload data via the Wattics REST API
Then you need to request an API Startup Package at [email protected] providing information about yourproject. Our team will send you:
- HTTPs credentials (username and password)
- Unique Data Stream IDs to use for three test data points (electricity, gas and environmental data)
- Dashboard demo account in our dev environment (to verify that data is coming in nicely when you push it)
You must log in to your Wattics Dashboard using the credentials received as part of your API Startup Package. After log in you will find the three data points in your menu tree under the Breakdown tab.
To check if data is being received on Wattics’ end, you must go to the Breakdown tab, click on your data point and check the control panel on the right hand side. If Jan 1970 is shown then it means that no data has been received yet (please wait for at least two time intervals). The panel will otherwise update to today’s date,.
In case you want to validate your JSON format, you can use third party REST plugins for Firefox and Chrome which are great to push data and check if any error is returned. Sometimes you can spot a missing parenthesis or an error code indicating an incorrect password. You can also use GET calls on our API to check the last data packet received and stored on Wattics’ end to confirm that packets have been received. You just need to a standard HTTP GET request creating the URL in the following way (you will need to remove ‘dev-‘ from the url when pushing data to our production environment):
Finally, when debugging, please remember to also check the timestamps and values shown in our dashboard, as these could reveal incorrect time and unit settings on your end. Once all is verified we can experiment pushing a batch of historical data and confirm that testing is done before moving to the production environment.
The data push frequency can be adapted to your application requirements, as this frequency will define the granularity of the readings shown in the dashboard (e.g. if you push every 5 minutes you will have a 5-minute minimum granularity in the graphs).
It is important that you let us know what your data push frequency is at the start or whenever you decide to change it. This indeed allows us to configure our platform to detect broken communication when no data is coming in and issue real-time notifications.
If your software client does not push at regular intervals, you must choose the highest time period acceptable after which data communication can be considered as broken. All packets received during that time interval will be aggregated by our API and shown aggregated in your graphs.
Not at the moment, but these may be supported in the next version of our API. Please register your interest with so we can update you when these are available.
Our API expects the standard ISO8601 yyyy-mm-ddThh:mm:ss.sss+|-hh:mm where “yyyy-mm-ddThh:mm:ss.sss” represents the local time and “+|-hh:mm” is the OFFSET to apply in order to obtain the UTC timestamp. You can find more information here: “”.
For example, 1994-11-05T08:15:30-05:00 corresponds to November 5, 1994, 8:15:30 am, US Eastern Standard Time.
If your meter or data system does not record individual readings per phase, you must use the _1 entries and omit the _2 and _3 entries. For example, you must push your total kWh to the pC_1 parameter. | http://docs.wattics.com/faq/ | 2017-03-23T08:16:49 | CC-MAIN-2017-13 | 1490218186841.66 | [array(['/wp-content/uploads/2017/01/CDTfolder.png', None], dtype=object)
array(['/wp-content/uploads/2017/01/CDTconnect.png', None], dtype=object)
array(['/wp-content/uploads/2016/04/OctopusTool-20.png', None],
dtype=object)
array(['/wp-content/uploads/2016/07/API-ORG.jpg', None], dtype=object)
array(['/wp-content/uploads/2016/07/API-ControlPanel.jpg', None],
dtype=object)
array(['/wp-content/uploads/2016/07/API-NoData.jpg', None], dtype=object)
array(['/wp-content/uploads/2016/08/sim-card-types.jpg', None],
dtype=object)
array(['/wp-content/uploads/2016/08/Octopus-SIM1.JPG.png', None],
dtype=object)
array(['/wp-content/uploads/2016/08/Octopus-SIM.jpg', None], dtype=object)
array(['/wp-content/uploads/2016/08/CLEAN_M2MGATEWAY_PULSE_METER_CONNECTION.png',
None], dtype=object)
array(['/wp-content/uploads/2016/08/CLEAN_IOCONTROLLER_PULSE_METER_CONNECTION.png',
None], dtype=object)
array(['/wp-content/uploads/2016/08/M2M_GATEWAY_PULSE_PNP_NPN.png', None],
dtype=object)
array(['/wp-content/uploads/2016/08/IO_CONTROLLER_PULSE_PNP_NPN.png',
None], dtype=object)
array(['/wp-content/uploads/2016/12/turnkeysolutions-api.jpg', None],
dtype=object)
array(['/wp-content/uploads/2016/07/API-ORG.jpg', None], dtype=object)
array(['/wp-content/uploads/2016/07/API-ControlPanel.jpg', None],
dtype=object)
array(['/wp-content/uploads/2016/07/API-NoData.jpg', None], dtype=object)] | docs.wattics.com |
To import your existing website to Acquia Cloud using a site archive file, complete the following steps:
Prepare your website before you export it from your current environment.
Create a Drupal site archive file for import by exporting your existing Drupal website.
You can use the tools provided in your existing website environment (including Acquia Dev Desktop), or you can use the Drush
archive-dumpcommand. The
archive-dumpcommand is available in Drush 4.5 or later. For more information about importing your website using Drush, see Importing an existing website using Drush.
Import your website into the Dev environment of your Acquia Cloud account. Acquia Cloud commits your website code to your Acquia Cloud code repository, installs your database, and uploads your files.
Check out a local copy of the repository using either Git or Subversion (SVN).
You can also import an existing website manually, using the
mysqldump command and importing your files using SFTP, scp, or rsync. | https://docs.acquia.com/cloud/site/import/archive | 2016-09-24T22:33:05 | CC-MAIN-2016-40 | 1474738659512.19 | [] | docs.acquia.com |
Charges
Charges are transactions paid to an Organization by a constituent’s credit card or bank account (ACH). Charges must be initiated from within your Application; they cannot be created from the Merchant Portal. Charges can be captured and follow this flow:
They can also be authorized first and captured (or voided) later, using this flow:
Creating Charges
Charges are created with the Neon Pay API. Here's a simple example of how a charge can be created with a single call:
POST /api/charges HEADERS: Content-Type: application/json Accept: application/json X-API-Key: key_abcdef123456 X-App-ID: 12 BODY: { "merchant_id": 32, "amount": 5000, "type": "cc", "currency": "usd", "funding_currency": "usd", "origin": "ecommerce", "token": "token_0934029jdsfgjd", }
Notice that card or banking information is not sent directly through the API. Instead, payment method tokens are used. Please refer to the the Tokenization section below for more information.
Assigning Charges to Merchants
All transactional records in Neon Pay (charges, refunds, disputes, and payouts) are associated with a Merchant Account owned by one of your customers (Organizations). When creating charges or other records, you must include a
merchant_id parameter in your request body.
Neon Pay will only allow you to use the
merchant_id of an account already associated with your application. Your application's API credentials are not specific to each merchant (as is the case with some payment processors). Instead, you use a single set of API credentials for your application and supply a
merchant_id with each API request you make.
Recurring Charges
Neon Pay does not provide any logic to facilitate scheduled payments; your Application must manage payment schedules. Payment tokens can be created and then saved in your Application and used later. Simply create a new charge using an existing token, and (using the
recurring field on the Charge object) specify the payment as having been part of a recurring schedule.
Tokenization
NeonPay employs client-side tokenization for securing sensitive data (such as credit cards or bank accounts) in the processing of payments. When collecting payment data on forms, implement NeonPay.js as a means to tokenize this data prior to sending it to NeonPay's API.
The primary benefit to your application is the reduction of PCI compliance risk. Since sensitive data never touches your application directly, you have a much smaller risk profile.
This illustration shows the process of tokenizing credit card or bank account data:
Fees
Fees are charged to organizations in a number of scenarios. All of these fees are configurable in the Merchant Portal by Application or System Administrators on a Merchant-by-Merchant basis. We set default fee amounts for each Application that are applied to an Application’s new Merchants, but the fees can then be overridden for any merchant manually.
Fees are are collected from merchant accounts by applications. Fees are collected from applications by Neon Pay.
Standard Processing Fees
These are the standard fees associated with all merchant accounts. Your application charges these fees to merchants. You may set defaults for these rates from the Neon Pay Portal, but you may also override the rates on a merchant-by-merchant basis. The rates you set as default or specifically to a merchant will be the rates charged to a customer. Neon Pay will collect transaction fees at agreed-upon rates from your application.
Platform Fees
Platform fees may not be used by all applications. You may use these to add an arbitrary additional fee, either in percentage or flat rate, to charges.
Refunds
Existing charges can be refunded. Refunds can be initiated either through the API or from the Merchant Portal. Using the Refunds API, simply specify the ID of the charge to be refunded and specify the amount to refund. Neon Pay supports partial refunds, and a single charge can be refunded multiple times (until the original amount has been fully refunded).
Neon Pay provides API methods and webhooks related to Refunds. We recommend that you support refunding from within your Application, but also listen for notifications of refunds initiated from the Merchant Portal to ensure updates are reflected in your Application.
Refunding Fees
At this moment, it is not possible to refund transaction fees to a customer. This capability is under development and will be added in the future. This document will be updated with the recommended process for doing so. | https://docs.neononepay.com/components/charges/ | 2021-09-17T04:43:47 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.neononepay.com |
A project (an entity created by Eclipse) can contain any number of SOAtest-specific .tst files. They can also contain source files you want to analyze with SOAtest, and any other resources that make sense for your environment.
Each .tst file can include any number of test suites/scenarios, tools, and inputs. The organization and structure is up to you. To keep file size down and to improve maintainability, we recommend using one .tst file for each distinct testing requirement.
For best practices related to projects, test files, and workspaces, see Workspaces, Projects, and Test Files.
Test Suites and Scenarios
A test suite is any collection of tests that are individually runnable, and has the following setting in the test suite configuration panel:
A scenario is any collection of tests that are not individually runnable because they have dependencies. One example of a scenario is when a series of API tests extracts a value from one test’s response and uses it as part of subsequent test message. Another example is a sequence of web scenarios recorded from a browser.
SOAtest allows you to create a new Eclipse Java project that has access to SOAtest's Extensibility API, then configure SOAtest scripts and Extension tools to invoke classes from the new Java project.
To create a new SOAtest Java project:
- Choose File> New> Project.
- Select SOAtest> Custom Development> SOAtest Java Project, then click Next.
- Complete this wizard, which has the same options as Eclipse’s Java Project wizard.
- Click Finish.
Your new Java project will be shown in the Package Explorer view in the Eclipse Java development perspective. The project's build path will automatically have the jar files needed in order to use SOAtest's Extensibility API. Any Java classes added to your project can be accessed by Extension tools in your SOAtest test suite. For an example of how to do this, see "Java Example" in Extensibility and Scripting Basics..
The selected Java Project's build output folder and build path entries will be added to the classpath table.
If the Automatically reload classes option is enabled, then SOAtest will attempt to reload classes from your Eclipse project after being modified or recompiled. The Reload button can also be used to force SOAtest to reload classes from the classpath entries. | https://docs.parasoft.com/display/SOA9105/Adding+Projects%2C+.tst+files%2C+and+Test+Suites | 2021-09-17T04:10:06 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.parasoft.com |
Create a flow property
This section demonstrates how to create a custom property for a flow. You must first create a flow before you can create a custom flow property.
You can create a flow property using one of the following methods:
Show—The most common properties aggregate events, such as counting the unique number of session IDs.
Filter—You can group and label one segment of your flows, for example, "Did reach shopping cart".
Label—You can group your flows into multiple labels. For example, if "Num5MinSessions > 10" then "heavy user" otherwise "light user".
Calculate—You can perform a mathematical function and apply it as a property. For example, if you have Num5MinSessions and Num60MinSessions, you can create a flow property that applies a function that returns a ratio of the two.
Flow time—You can create a time-based flow property based on the time spent in the flow or between steps within the flow. (For example, the time between Add to cart and Purchase.)
In this example, we create a flow property for our "Watch a movie" flow that counts the number of posts that occur during each flow instance.
To create a flow property, do the following:
Click Data in the left navigation bar, then the Flows tab.
Select the flow (for which you want to create a property) from the list on the left, then click Properties in the top-right menu bar.
Click +New Flow Property in the top right corner.
Enter a unique name for the property at the top of the window.
In the Definition tab, select a method from the dropdown. In our example we use Show. For information about the available methods, see the method lexicon entry.
Choose the appropriate options from the drop-down lists. In this example, we chose to show a count of events, filtered to events with action that matches post_comment.
Click GO to generate results for the flow property.
More information
For more information about flows, see the following information in the User's Guide: | https://docs.scuba.io/guides/Create-a-flow-property.1302332268.html | 2021-09-17T03:38:22 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scuba.io |
Changes to CDH and HDP Components in CDP Private Cloud Base CDH and HDP components that have changed or been removed in CDP Private Cloud Base. Updated CDH ComponentsCDH Component Changes in CDP Private Cloud Base 7.Updated HDP ComponentsHDP Component Changes in CDP Private Cloud Base 7. HDP Core component version changesVersion number changes for the core components included in HDP 2.6.5.x. Changes to Ambari and HDP services During the process of upgrading to Ambari 7.1.x and HDP intermediate bits, additional components are added to your cluster, and deprecated services and views are removed. | https://docs.cloudera.com/cdp-private-cloud/latest/release-guide/topics/cdp-component-info.html | 2021-05-06T02:13:38 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.cloudera.com |
Creating SSL certificates, keystores, and truststores
Guidelines for creating and configuring SSL dependencies.
Before configuring SSL for DataStax Enterprise (DSE) services, you must create SSL certificates, keystores, and truststores. DSE supports both remote keystore SSL providers and local keystore files.
Complete the procedure for Creating local SSL certificate and keystore files.
After creating and configuring SSL dependencies, configure SSL for node-to-node connections and client-to-node connections. | https://docs.datastax.com/en/security/6.0/security/secSslCertificatesKeystores.html | 2021-05-06T01:33:56 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.datastax.com |
Performing a Test List¶
Selecting a test list¶
To perform a test list, first login to QATrack+ and then select the Choose Unit option from the Perform Q
On the next page, all the test lists with the chosen frequency will be displayed along with relevant information about the last time that test list was performed and when the test list is next due on this unit.
Click on the Perform QC button next to the list that you would like to complete.
Choose a test list to perform
Tree Views¶
In addition to the Choose Unit method for selecting a test list, QATrack+ has two “Tree Views” that present the QC available to perform in a tree structure grouped either by Unit & Frequency, or by Unit, Frequency, and Category. These views are found in the Perform QC menu:
The menu options for selecting a tree view
An example of the Unit, Frequency, and Category view is shown here:
An example of a Unit, Frequency, Category tree view
Performing a test list¶
An example test list is shown below. Details about all the features will given below but briefly, you can see all the tests completed and ready to be submitted. The shaded input boxes for the last two tests indicate that they are composite tests i.e. they are test values calculated based on the other 4 input values. Passing, tolerance and failing tests are displayed with a green, yellow or red status, respectively. Tests which have no reference or tolerance set for them are shown in blue.
If for some reason you need to finish a test list at a later time, you can click the Mark this list as still in progress checkbox next to the Submit QC Results button. When this box is checked, the test list will not be considered complete and will not be marked for review.C sessions for the current test list.
Continue an in progress test list from the sidebar
Auto Save¶
As of version 3.1.0, QATrack+ now auto-saves your data in the background every time you enter a new test result. This helps prevent data loss in the case that a user mistakenly navigates away from a test list page without submiting the data, or due to a browser crash, power failure, etc.
You can see the last auto-save time in the top right hand portion of the form for entering QC data.
Autosave status showing last saved time
When performing a test list with autosaved data available, the left hand drawer menu will also show any autosaved sessions which you can click to load and continue.
Continue an autosaved test list instance
Autosaved sessions will be automatically deleted either:
- When the QC session is submitted succesfully -or-
- After 30 days has passed since the auto-saved session was last modified. (To change the 30 day interval, you may change the AUTOSAVE_DAYS_TO_KEEP setting). | https://docs.qatrackplus.com/en/stable/user/qa/performing_a_test_list.html | 2021-05-05T23:55:01 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['../../_images/choose_unit_menu.png', 'Choose Unit Menu Option'],
dtype=object)
array(['../../_images/select_unit_button.png', 'Select unit button'],
dtype=object)
array(['../../_images/select_unit_dropdown.png',
'Select unit dropdown button'], dtype=object)
array(['../../_images/choose_test_list.png',
'Choose a test list to perform'], dtype=object)
array(['../../_images/tree_view_menu.png',
'The menu options for selecting a tree view'], dtype=object)
array(['../../_images/category_tree_view.png',
'An example of a Unit, Frequency, Category tree view'],
dtype=object)
array(['../../_images/example_test_list.png', 'Example test list'],
dtype=object)
array(['../../_images/test_procedure.png', 'Embedded test procedure'],
dtype=object)
array(['../../_images/add_comment.png', 'Adding comments to test lists'],
dtype=object)
array(['../../_images/skip_test.png', 'Skipping a single test'],
dtype=object)
array(['../../_images/perform_subset.png',
'Performing a subset (Dosimetry & AQA) of tests within a test list'],
dtype=object)
array(['../../_images/attach_button.png',
'Attaching files to a test list instance'], dtype=object)
array(['../../_images/save_for_later.png',
'Save a test list to complete later'], dtype=object)
array(['../../_images/in_progress_menu.png', 'In progress menu'],
dtype=object)
array(['../../_images/continue_in_progress.png',
'Continue an in progress test list'], dtype=object)
array(['../../_images/in_progress_sidebar.png',
'Continue an in progress test list from the sidebar'], dtype=object)
array(['../../_images/autosave_status.png',
'Autosave status showing last saved time'], dtype=object)
array(['../../_images/autosave_load.png',
'Continue an autosaved test list instance'], dtype=object)] | docs.qatrackplus.com |
SciPy 0.13.0 Release Notes¶
Contents
- SciPy 0.13.0 Release Notes
- New features.ioimprovements
scipy.interpolateimprovements
scipy.statsimprovements
- Deprecated features
- Backwards incompatible changes
- Other changes
- Authors
SciPy 0¶¶
Trust-region unconstrained minimization algorithms¶
The
minimize function gained two trust-region solvers for unconstrained
minimization:
dogleg and
trust-ncg.
scipy.sparse improvements¶
Boolean comparisons and sparse matrices¶¶¶
explicitly¶
B-spline derivatives and antiderivatives¶¶¶
LIL matrix assignment¶¶. | https://docs.scipy.org/doc/scipy-1.1.0/reference/release.0.13.0.html | 2021-05-06T01:50:48 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.scipy.org |
.
Monitoring Messages between Client and Server:
As an intermediary, TCPMon only receives messages and forwards them to the back end server. Therefore, it is a safe tool to be used for debugging purposes.
Note that TCPMon cannot be used to view messages transferred over https protocol.
- Start,. | https://docs.wso2.com/pages/viewpage.action?pageId=34617821&navigatingVersions=true | 2021-05-06T00:41:55 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.wso2.com |
View audit trail
From Genesys Documentation
This topic is part of the manual Manage your Contact Center in Agent Setup for version Current of Agent Setup.
The Audit Trail, located in the Audit section, details the actions taken in your Agent Setup application, including update, delete, import, login, and logout activities.
Related pages:
TipAll Users who have the required roles and permissions set in Access Groups can perform these tasks.
ImportantThere are currently no limits placed on the number of audit logs or how long they are kept in the Audit Trail.
The summary table on the Audit tab contains an entry for every action taken in the application. The table lists the following details:
- Username - Of the user who made the change.
- Action - The type of action made. See a description of each action in the table below.
- Message - Specific details about the action. This could be the exact file name of an imported file or the name of a skill that was updated or created.
- Date & Time - The date and time that the action took place.
- Refresh - Updates/refreshes the audit search results.
- Download Audit Data - Exports the audit logs to .xlsx files.
Each action type is documented in the table below. Note that an 'object' can mean an agent, an agent group, a skill, or a transaction. | https://all.docs.genesys.com/PEC-AS/Current/ManageCC/View_audit_trail | 2021-05-06T01:06:56 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['/images-supersite/b/ba/AS_AuditScreen_GAPI20783.png', '1'],
dtype=object) ] | all.docs.genesys.com |
Teradata FastLoad Error Conditions
While processing a Teradata FastLoad job script, Teradata FastLoad tracks and records information about five types of error conditions that cause the Teradata Database to reject an input data record. Table 22 describes these error conditions.
Note: Teradata FastLoad does not store duplicate rows in an error table.
Additionally, when operating in batch mode, Teradata FastLoad returns the system error codes for error conditions encountered during Teradata FastLoad operations. (Teradata FastLoad does not return system error codes when operating in interactive mode.) This subsection describes the procedures for handling the five types of Teradata FastLoad error conditions.
See Messages (B035‑1096) for more information about system error messages. | https://docs.teradata.com/r/PE6mc9dhvMF3BuuHeZzsGg/70inotveU4Ktk3Cbzx2iJg | 2021-05-06T00:50:58 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.teradata.com |
7.5.1 Business Purpose
Data captured and transferred using the wellbore markers object has these uses:
- Helps with drilling planning and operations; for example:
- Guide geosteering (the act of adjusting the borehole position (inclination and azimuth angles) “on the fly” while drilling a borehole) to reach one or more geological targets, when encountering obstacles that require deviation from the drilling plan.
- Optimize drilling operations based on the formation angle, for example, a horizontal reservoir could be better exploited with a horizontal well.
- Helps with assessing hydrocarbon potential and reservoir quality, by contributing information to determine:
- Chronostratigraphic age. The maturity of the formation is useful information in estimating the potential of source rock (where hydrocarbons were initially produced).
- Formation angles (dip information) and continuity of the reservoir based on the depth and angles of the formation in different wells.
- Sequence stratigraphy, which is important in identification of a petroleum system and its potential for production of hydrocarbons. | http://docs.energistics.org/WITSML/WITSML_TOPICS/WITSML-000-087-0-C-sv2000.html | 2021-05-06T01:44:00 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.energistics.org |
Path is a tunnel between two endpoints. Path visibility is a report on utilization and quality of the paths between an Edge and its SD-WAN peers. SD-WAN Orchestrator enables an Enterprise user to monitor the Path visibility using the monitoring dashboard.
You can monitor the Path information for the SD-WAN peers connected to an Edge.
Procedure
- In the Enterprise portal, click the Open New Orchestrator UI option available at the top of the Window.
- Click Launch New Orchestrator UI in the pop-up window. The UI opens in a new tab displaying the monitoring options.
- Click Edges to view the Edges associated with the Enterprise.
- Click the link to an Edge and click the Paths tab.
Results
At the top of the page, you can choose a specific time period to view the path information for the edge.
To get a report of an SD-WAN peer in CSV format, select the SD-WAN peer and click Export Path Statistics.
Click the link to an SD-WAN peer to view the corresponding Path details as follows:
- All the SD-WAN peers that have established paths during the selected time period
- The status of the paths available for a selected peer
- Overall Quality score of the paths for a selected peer for video, voice, transactional traffic
- Time series data for each path by metrics like: Throughput, Latency, Packet loss, Jitter, and so on. For more information on the parameters, see Monitor Edges.
The metrics time-series data is displayed in graphical format. You can select and view the details of a maximum of 4 paths at a time.
Hover the mouse on the graphs to view more details.
You can choose the metrics from the drop-down list to view the corresponding graphical information. By default the Scale Y-axis evenly checkbox is enabled. This option synchronizes the Y-axis between the charts. If required, you can disable this option.
Click the DOWN arrow in the Quality Score pane at the top, to view the Path score by the traffic types.
You can click an SD-WAN peer displayed at the left pane to view the corresponding Path details. | https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-6347C210-1C0E-4A62-A6A1-1FADA0153B7D.html | 2021-05-06T01:39:43 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['images/GUID-15D49F14-C629-47DF-BA84-B224180BA594-low.png', None],
dtype=object)
array(['images/GUID-F1DD551E-60DD-47B0-87FD-4A86CDF51AE3-low.png', None],
dtype=object)
array(['images/GUID-73A0993F-AF60-46EE-9833-F5D1470F3617-low.png', None],
dtype=object) ] | docs.vmware.com |
Troubleshooting
The Nuclear Option.
Corrupt/missing Cocoapods Specs repository
You run
rake pm:install on a freshly created redpotion app and it hangs on
Updating spec repo master. Presumably, you've already run
pod setup one time on your machine, so what gives?
If you see an error message about pod not being able to find the master spec repo when you run
rake pm:install --verbose, you can perform a clean pod setup:
> pod repo remove master > pod setup
Now you should be able to run rake pm:install. | http://docs.redpotion.org/en/latest/cookbook/troubleshooting/ | 2021-05-06T01:47:17 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.redpotion.org |
HTTP REST Block
Contents
Use this block to access an external system using a RESTful API over HTTP.
You can use the HTTP REST block for accessing external systems using a RESTful API, over HTTP. You can read or write to these web services, although routing applications typically read from web services.
You can read or write to any external system that houses and exposes data through a REST web service. This could be a generic web service, such as one that returns the weather forecast for a specific location, or one that converts a monetary value from one currency to another. Or this could be a company's internal web service that fetches a customer's account details and billing history from the company's internal databases.
This block can be used in all four phases of the application.
- Check that the RESTful API you are accessing will return data in the format that you expect. While most web services typically return JSON data, there are some that may not. You may want to use an external tool to test the RESTful API outside of Designer to ensure it behaves the way you expect, before attempting to access it within your application.
- If the request timeout period is reached and no response is received from the REST web service, the output variables have a value of undefined.
Service Details tab
Enter the URL of the RESTful web service in the HTTP URL field. Enable the check box to use a variable, or disable the check box to use a string.
In the drop-down menu beside the HTTP URL field, select the HTTP method to access the web service: get, post, put, or delete.
If you are using post or put as the HTTP method, select an Encoding Type. (Otherwise, you will not see this option.)
In the Request Timeout field, enter the time, in seconds, that the application waits for a response from the web service before moving on to the next block.
If you want to post the results of a recording captured by the Record Utterance block to the specified URL, you can specify the variable that holds a recorded file in the Upload Record Utterance field. (This option is only supported in the Self Service phase, as the recording file captured by the Record Utterance block is no longer available after the Self Service phase.)
Select Disable DTMF buffering if you want to prevent any DTMF inputs made during fetch audio playback from being buffered and carried forward into subsequent User Input or Menu blocks.
Select Play fetch audio if you want to specify an audio resource to play to the caller while the data is fetched.
- Enable the check box beside the Play fetch audio check box to specify a variable.
- In the Play fetch audio minimum for field, you can enter the minimum length of time to play the audio, even if the document arrives in the meantime.
- In the Start fetch audio after field, you can enter a period of time to wait before audio is played.
Input Parameters
In the Input Parameters tab, specify the inputs to the web service. You can choose either:
- JSON Payload — Send a JSON value from a variable as an input to the web service. This option is applicable only for put and post methods.
- Key Value pairs — Click Add Parameters and enter the Name of the parameter expected by the web service, and the Value to pass to the input. You can toggle the Value between a string and a variable.
Output Parameters
In the Output Parameters tab, click Add Parameters to specify how and where to store the results of the web service call. The Variable Name is the application variable in which to store the data, and the JSON Expression is the key in which you expect the result to be in the response object.
See the code sample and table below for an example:
{ "thing": { "otherthing": "abc" }, "arrayofthings": [ "thing1", "thing2" ] }
Authentication tab
Enable the Enable Basic Authentication check box to use HTTP basic authentication as part of the web service request. When enabled, the User Name and Password fields are displayed. Optionally, click the check box to select a variable for either of these fields.
Results tab
Select a variable to store the outcome status (true or false) of the HTTP fetch.
You can also select variables in which to store the data and headers of the HTTP response, and the HTTP error code if the operation failed.
You must also select an action to take if the fetch operation is not successful. You can choose to "Continue with normal processing" or "Execute error handler blocks".
If you select "Execute error handler blocks", an Error Handler child block appears under the HTTP REST block.
Use the Error Handler block to send the application to another target block that you select from the Navigation tab, or add child blocks that will perform the actual error handling..
Advanced tab
The Use Designer service to make this request check box is enabled by default. This allows the fetch request to use a HTTP proxy, which is typically required when sending requests to external resources.
Select Internal Genesys Service if the application is sending a fetch request to an internal Genesys service. This type of request does not go through a HTTP proxy.
Click Add Header if you want to use a custom HTTP header.
Test tab
The Test tab lets you test an API call from the block without making an actual test call.
Select the variables to be used as Input Parameters (make sure you specify them in the requested format, using single quotes for strings and "()" for JSON values) and any other variables to be used.
If the variables had a default value set in the Initialize phase, you can choose to keep those values or provide your own. The application will remember the values used the next time you open the application.
Click Send Test Request to run the test and generate the results.
Scenarios
If you want to:
- Play weather information for a customer for whom you have a profile and address:
- This scenario assumes that the weather API expects two input parameters (date and location) and provides its output in JSON format, under the key result. The corresponding input information is stored in two variables: currentdate and zipcode.
- Add the HTTP REST block to the Self Service portion of the application, in a position after you have retrieved the customer location.
- In the HTTP URL field, enter the URL of the weather web service (for example,).
- Select get as the HTTP method.
- In the Input Parameters tab, click Add Parameters twice.
- For the first parameter, use the following information:
- Name: date
- Type: variable
- Value: currentdate
- For the second parameter, use the following information:
- Name: location
- Type: variable
- Value: zipcode
- In the Output Parameters tab, click Add Parameters and use the following information:
- Variable Name: weather
- JSON Expression: result | https://all.docs.genesys.com/PEC-ROU/Current/Designer/HTTPREST | 2021-05-06T00:59:24 | CC-MAIN-2021-21 | 1620243988724.75 | [] | all.docs.genesys.com |
In Cloud Elements, you can build formula templates, reusable workflow templates that are independent of API providers. Formula templates include triggers, such as events or schedules, that kick off a series of steps. Formulas support a large variety of different use cases across different services. For example, they can keep systems in sync, migrate data between systems, or automate business workflows.
After you build formula templates, you can use the templates to create formula instances. In formula instances, you replace the variables in the templates with actual elements and values.
Formulas are a great way to move the logic out of your apps and into Cloud Elements. This helps keep your code less complex and more maintainable so you can focus on meeting your customers' needs.
Example
We give detailed examples of formulas in the Examples article, but to help you understand the power of formulas, here's a common example.
A common use case is keeping contacts synced across many systems. You might need to make sure that whenever a contact is added to Salesforce, it also syncs to HubSpot. To do this, you must first transform the data. Then, create a formula template that listens for updates to contacts in one API provider, and then pushes those contacts to another. After you set up the template, create a formula instance where you plug in Salesforce as the source element and HubSpot as the target element.
Definitions
To help you understand formulas, review the definitions in this section.
- formula template
- A reusable workflow that is independent of the element and includes the triggers, steps, and variables for a formula instance to execute the workflow.
- formula instance
- A specific instance of a formula template configured with explicit variables and associated with specific element instances.
- trigger
- An action that occurs and kicks off a formula. Triggers can be events set up on an element instance, an API call to an element instance, a scheduled occurrence, or manually triggered.
- step
- An individual step within a formula workflow that can include branches to subsequent success and failure steps.
- variable
- Variables that represent either element instances or specific values that must be supplied for each formula instance.
Working with Formulas
Formula Execution Timeouts
The maximum time that a formula execution without sub-formula steps can run is 100 minutes. Although some executions could run for longer than 100 minutes, there are no guarantees that the execution will complete if it runs for longer than 100 minutes.
Restarting Formulas Mid-subformula
If a formula execution stops or times out during an in-progress sub-formula step, the parent formula will restart from the beginning of the subformula, regardless of how much of the subformula was completed.
Formula Step Timeouts
For consistent performance, a single formula step should not last longer than 5 minutes. | https://docs.cloud-elements.com/home/introduction-formulas | 2021-05-06T00:23:03 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.cloud-elements.com |
This guide was written to integrate Atlassian Jira
v7.1.4 into FOSSA.
There are 2 main features in this integration.
1 ) Allowing FOSSA to export issues as new tickets in Jira automatically.
2) Resolve the issues within Jira and update the results in FOSSA automatically.
Jira Permission
To enable these two features, two different permission will be needed.
Export Issue - Jira "Product Access" is needed to create the issue.
Resolve Issue - Jira "Administration Access" is needed to create the webhook.
A permission group can be created to inherit both Product and Administration permission to give the user of that group access to both.
Jira Cloud: Create an API token
Jira Cloud allows users to create API tokens that integrations such as FOSSA can use to communicate with Jira on the user's behalf. FOSSA requires that a Jira user with permission to create, resolve, and modify users create the API token.
Please see the Jira documentation for help creating an API token.
When configuring your Jira site in FOSSA, use the username of the Jira user that created the API token, and use the API token as the user's password.
Jira On-Prem: Create a user for FOSSA on Jira
FOSSA requires an admin account on Jira to manage the creation and modification of new tickets from issues.
- Create a new user
Specify whatever username/password you'd like in the screen above. By default, we keep the combination
fossabot/fossa123.
- Add user as admin
Configure FOSSA with Jira
In FOSSA, click your username in the top-right corner and select Settings > Integrations > Jira. If this is your first time configuring Jira, click the "Click to Configure Jira" button, otherwise click "Add Jira" to add an additional Jira site.
The Name field is used by FOSSA to identify this Jira site in issue export dialogues.
The Jira site URL field should be the address we can use to reach your Jira site. It's usually the same address you use to access Jira.
The Resolved Workflow Statuses are statuses in your Jira site that indicate than an issue has been resolved. If an issue on Jira is in one of these states, then the issue on FOSSA will be closed. If an issue on Jira transitions out of one of these states, the FOSSA issue will be reopened.
Add in the username and password or API token created above, and click save in the top right corner.
Important: Take note of the generated Webhook URL. You'll need to use this value when configuring the webhook in Jira in the next section.
Linking FOSSA projects to Jira projects
Now that both services are configured, you may associate projects in FOSSA with projects in Jira.
Inside of FOSSA under a project's settings, set:
- Issue Tracker Type to
Jira
- Jira Project Key
You can find the project key by looking:
- At Jira project URLs:
- Issue keys:
PROJECTKEY-173,
PROJECTKEY-244
- The right-hand details panel of each project summary page under
Key
Add Webhooks to Jira
FOSSA requires webhooks to sync issue status with Jira. This means that when a user closes an issue in Jira, the corresponding issue will be resolved in FOSSA.
- Navigate to Admin > System > Advanced > WebHooks
- Create a new webhook
Enter in your FOSSA IP/Port with the path specified below. Note that the webhook URL is different for each Jira site that you configure in FOSSA.
Define events for updating/deleting issues:
Then click the Create on the bottom of the form. The created webhook should look something like:
Troubleshooting
If issues aren't getting exported, please check your logs for common errors:
- "The issue type selected is invalid."
When FOSSA generates a ticket, by default it sets the Jira
issueType to be
Task. This is one of the default issue types for new Jira installations, but your admin may have deleted/configured out this issue type or your installation could just be missing it. Check on the existing issue types in Jira and create the
Task type if it is missing.
See the Jira help doc for more instructions.
Updated 5 months ago | https://docs.fossa.com/docs/atlassian-jira | 2021-05-06T00:07:24 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['https://files.readme.io/b118fbb-Screen_Shot_2020-07-14_at_5.13.14_PM.png',
'Screen Shot 2020-07-14 at 5.13.14 PM.png'], dtype=object)
array(['https://files.readme.io/b118fbb-Screen_Shot_2020-07-14_at_5.13.14_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/e3a4ac9-jira-user-nav.png',
'jira-user-nav.png'], dtype=object)
array(['https://files.readme.io/e3a4ac9-jira-user-nav.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/811fe5b-jira-create-user-btn.png',
'jira-create-user-btn.png'], dtype=object)
array(['https://files.readme.io/811fe5b-jira-create-user-btn.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/296d5f1-jira-create-user.png',
'jira-create-user.png'], dtype=object)
array(['https://files.readme.io/296d5f1-jira-create-user.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/83f4847-jira-edit-members.png',
'jira-edit-members.png'], dtype=object)
array(['https://files.readme.io/83f4847-jira-edit-members.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/ea1562a-jira-add-admin.png',
'jira-add-admin.png'], dtype=object)
array(['https://files.readme.io/ea1562a-jira-add-admin.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/0e358d9-Screen_Shot_2019-09-23_at_3.31.21_PM.png',
'Screen Shot 2019-09-23 at 3.31.21 PM.png'], dtype=object)
array(['https://files.readme.io/0e358d9-Screen_Shot_2019-09-23_at_3.31.21_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/f67cb74-jira-webhook-nav.png',
'jira-webhook-nav.png'], dtype=object)
array(['https://files.readme.io/f67cb74-jira-webhook-nav.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/dbb9b02-jira-create-webhook.png',
'jira-create-webhook.png'], dtype=object)
array(['https://files.readme.io/dbb9b02-jira-create-webhook.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/6f913b2-Screen_Shot_2019-09-23_at_3.11.00_PM.png',
'Screen Shot 2019-09-23 at 3.11.00 PM.png'], dtype=object)
array(['https://files.readme.io/6f913b2-Screen_Shot_2019-09-23_at_3.11.00_PM.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/e5ba3f2-jira-webhook-permissions.png',
'jira-webhook-permissions.png'], dtype=object)
array(['https://files.readme.io/e5ba3f2-jira-webhook-permissions.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/926fa01-Screen_Shot_2019-09-23_at_3.12.48_PM.png',
'Screen Shot 2019-09-23 at 3.12.48 PM.png'], dtype=object)
array(['https://files.readme.io/926fa01-Screen_Shot_2019-09-23_at_3.12.48_PM.png',
'Click to close...'], dtype=object) ] | docs.fossa.com |
jsmn (community library)
Summary
Minimalistic JSON parser in C
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.). Git installed. Just run:
$ git clone
Repository layout is simple: jsmn.c and jsmn.h are library files, tests are in the jsmn_test.c,_UNDEFINED = 0, JSMN_OBJECT = 1, JSMN_ARRAY = 2, JSMN_STRING = 3, JSMN_PRIMITIVE = 4 }:
jsmn_parser parser; jsmntok_t tokens[10];
jsmn_init(&parser);
// js - pointer to JSON string // tokens - an array of tokens available // 10 - number of tokens available jsmn_parse(&parser, js, strlen(js), tokens, 10);
This will create a parser, and then it tries to parse up to 10 JSON tokens from
the
js string.
A non-negative return value of
jsmn_parse is the number of tokens actually
used by the parser.
Passing NULL instead of the tokens array would not store parsing results, but
instead the function will return the value of tokens needed to parse the given
string. This can be useful if you don't know yet how many tokens to allocate.
If something goes wrong, you will get an error. Error will be one of these:.
Browse Library Files | https://docs.particle.io/cards/libraries/j/jsmn/ | 2021-05-06T01:40:01 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.particle.io |
- Go to Appearance>Customize > Home Page Settings > Contact Section
- Select the Header Style either Page or Text
- Select the page for Contact Section Title from drop-down box
- Enter the shortcode of the contact form in the Contact Form Shortcode field
- Select Background Type either Image or Color
- Select the desired Background Color
- Select the desired Text Color
- Click Publish button
Note: To get the above contact form follow the steps below.
- Go to Admin Dashboard >Contact> Contact Form > Add New
- Copy the following shortcode
- Paste this code in form field via Admin Dashboard > Contact Form > Add New
| https://docs.prosysthemes.com/business-times-pro/front-page-settings/how-to-configure-contact-section/ | 2021-05-06T01:11:43 | CC-MAIN-2021-21 | 1620243988724.75 | [array(['http://docs.prosysthemes.com/wp-content/uploads/2020/01/contact-us-1024x435.png',
None], dtype=object)
array(['http://docs.prosysthemes.com/wp-content/uploads/2020/01/contact-section.png',
None], dtype=object)
array(['http://docs.prosysthemes.com/wp-content/uploads/2020/01/contact-forms-1.png',
None], dtype=object)
array(['http://docs.prosysthemes.com/wp-content/uploads/2020/01/contact-form-2-1024x361.png',
None], dtype=object) ] | docs.prosysthemes.com |
lists. Note that the Assigned To property is used for display only and users not part of the Assigned To group will still be able to see and perform the test list.
Visible To¶
Choose the groups you want this test list on this unit to be visible to by moving the groups from the Available visible to to the Chosen. | https://docs.qatrackplus.com/en/stable/admin/qa/assign_to_unit.html | 2021-05-06T00:57:58 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.qatrackplus.com |
SharpDustSensor (community library)
Summary
Library for the Sharp Dust Sensor GP2Y10.
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
Library for the Sharp Dust Sensor GP2Y10.
This driver is for the Library for the Sharp Dust Sensor GP2Y10 and is based on Adafruit's Unified Sensor Library (Adafruit_Sensor). when.
Browse Library Files | https://docs.particle.io/cards/libraries/s/SharpDustSensor/ | 2021-05-06T01:30:23 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.particle.io |
Remedy IT Service Management application. You can still perform collision management and impact analysis in
You can disable collision management or impact analysis from the Centralized configuration by using the disableCollisionManagement configuration parameter. For more information, see Centralized configuration. | https://docs.bmc.com/docs/smartit1808/disabling-collision-management-and-impact-analysis-in-smart-it-818731766.html | 2021-05-05T23:54:48 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.bmc.com |
Punchcard
In the Punchcard tab you can see a commit summary by day of the week and hour. You can select a period of time and a user.. | https://docs.stiltsoft.com/awesome-graphs/server/features/graphs/punchcard?preview=%2F8323664%2F8553277%2FPunchcard.png | 2021-05-06T01:23:28 | CC-MAIN-2021-21 | 1620243988724.75 | [] | docs.stiltsoft.com |
How Things Work in Genesys Engage Multicloud
From Genesys Documentation
Find How It Works and Getting Started articles for the various Genesys Engage Multicloud applications and features.
Agent Desktop
Agent Setup
Callback
Chat
Cloud Data Download Service
Cloud iWD.
Digital Channels
- both content analysis and your Categories and Prioritization schemas for Engage cloud Email
- Near real-time dashboards for monitoring your backlog
Genesys Softphone
Gplus Adapter for Salesforce
IVR
Outbound
- Run aggressive sales campaigns
- Send automated alerts, notifications, or reminders without ever engaging agents
- Run collections campaigns that target high-risk accounts
- Run SMS or Email campaigns
- Run multi-channel blended campaigns
Predictive Routing
Recording, Quality Management and Speech Analytics
Genesys Recording, QM and Speech Analytics solution evaluates recorded customer interactions for data about what is happening in your organization. SpeechMiner UI reviews and analyzes this data to uncover the cause and effect relationships that influence business issues and contact center performance. For more information refer to: Recording, Quality Management and Speech Analytics (SpeechMiner UI).
Reporting
Routing
Voice
Widgets
Workforce Management
Contents
- 1 Agent Desktop
- 2 Agent Setup
- 3 Callback
- 4 Chat
- 5 Cloud Data Download Service
- 6 Cloud iWD
- 7 Co-browse
- 8 Digital Channels
- 9 Email
- 10 Genesys Softphone
- 11 Gplus Adapter for Salesforce
- 12 IVR
- 13 Outbound
- 14 Predictive Routing
- 15 Recording, Quality Management and Speech Analytics
- 16 Reporting
- 17 Routing
- 18 Voice
- 19 Widgets
- 20 Workforce Management | https://all.docs.genesys.com/PEC-Admin/HIW | 2021-05-06T00:26:42 | CC-MAIN-2021-21 | 1620243988724.75 | [] | all.docs.genesys.com |
To edit a location click on it in the location overview:
Once you enter the dashboard, you click on the little pencil icon in the location logo / picture.
Now you can change the name of the location, chose another picture and edit the standard schedule settings. You can also delete the location if desired. All data stored here (employees, schedules, reports) will be erased.
| http://docs.staffomatic.com/staffomatic-help-center/locations/how-can-i-edit-the-location | 2018-05-20T13:56:20 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://uploads.intercomcdn.com/i/o/27150330/8512534504036f2ce437c02e/Bildschirmfoto+2017-06-23+um+10.51.19.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/27150725/f44331d438554d4378f35f9c/Bildschirmfoto+2017-06-23+um+10.57.05.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/27150998/792bf36e63d4a5383607874b/Bildschirmfoto+2017-06-23+um+11.00.31.png',
None], dtype=object) ] | docs.staffomatic.com |
Fugue.AWS.EC2.NetworkInterface¶
Table of Contents¶
Module Members¶
external¶
(Function)
Create a reference to an externally managed NetworkInterface.
Example usage:
eni: EC2.NetworkInterface.external("eni-1234abcd", AWS.Us-east-1)
Type Signature
fun (String, Region) -> NetworkInterface
- Argument:
networkInterfaceId
The ID of the target NetworkInterface. Must be of the form “eni-” followed by 8 characters from a-z and 0-9.
- Argument:
region
The Region containing the target NetworkInterface.
- Returns:
A reference to the specified NetworkInterface.
Type: NetworkInterface
new¶
(Function)
new NetworkInterface (Constructor)
Call this constructor to create a new Fugue.AWS.EC2.NetworkInterface value.
Type Signature
fun { subnet: Subnet, description: Optional<String>, privateIpAddress: Optional<String>, sourceDestCheck: Optional<Bool>, privateIpAddresses: Optional<List<String>>, tags: Optional<List<Tag>>, securityGroups: Optional<List<SecurityGroup>>, numSecondaryPrivateIpAddresses: Optional<Int>, elasticIPs: Optional<List<ElasticIpAttachment>>, resourceId: Optional<String> } -> NetworkInterface
- Argument:
subnet
The EC2 Subnet the Network Interface should associate with.
- Argument:
description
A plaintext description for the network interface. Mutable.
- Argument:
privateIpAddress
The primary private IPv4 address of the network interface. If no IP address is specified, AWS will select one at random from the subnet’s IPv4 CIDR range.
- Argument:
sourceDestCheck
Defaults to True. When set to False disables the source/destination checking. Disabling checking allows the NetworkInterface to forward traffic that it is not the source or destination of, allowing the NetworkInterface to work with a NAT (network address translation) instance. Mutable.
- Argument:
privateIpAddresses
A list of secondary private IPv4 addresses. Mutable.
Type: Optional<List<String>>
- Argument:
tags
A list of EC2 tags. Mutable.
Type: Optional<List<Tag>>
- Argument:
securityGroups
A list of EC2 Security Groups to associate with this network interface. Mutable.
Type: Optional<List<SecurityGroup>>
- Argument:
numSecondaryPrivateIpAddresses
The total number of secondary private IPv4 addresses to assign to this network interface. This is the number of randomly selected addresses plus the number of addresses specified in privateIpAddresses. The random addresses are chosen from the subnet’s CIDR block. If not specified, this will default to the total number of addresses in privateIpAddresses. Mutable.
- Argument:
elasticIPs
A list of EC2 ElasticIpAttachments describing associations between private IPs associated with this network interface and EC2 ElasticIPs. Mutable.
Type: Optional<List<ElasticIpAttachment>>
- Argument:
resourceId
Resource ID of the resource to import with Fugue Import. This field is only honored on
fugue run. The resource ID is the AWS ID. Mutable. Example:
eni-1234abcd
- Returns:
A Fugue.Core.AWS.EC2.NetworkInterface value.
Type: NetworkInterface
region¶
(Function)
Retrieve the region from a NetworkInterface value.
Works for NetworkInterfaces defined in the composition as well external values.
Example usage:
vpc: EC2.Vpc.new { region: AWS.Us-west-2, cidrBlock: "10.0.0.0/16", } subnet: EC2.Subnet.new { vpc: vpc, cidrBlock: "10.0.1.0/24", } networkInterface1: EC2.NetworkInterface.new { subnet: subnet, } region1: EC2.NetworkInterface.region(networkInterface1) # => AWS.Us-west-2 networkInterface2: EC2.NetworkInterface.external("eni-01234567", AWS.Us-east-1) region2: EC2.NetworkInterface.region(networkInterface2) # => AWS.Us-east-1
Type Signature
fun (NetworkInterface) -> Region
- Argument:
networkInterface
The network interface from which to get the region.
Type: NetworkInterface
- Returns:
The region containing the network interface. | https://docs.fugue.co/Fugue.AWS.EC2.NetworkInterface.html | 2018-05-20T13:59:17 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.fugue.co |
Mesh Warp
Use the Mesh Warp effect to distort your drawings. With this module you can create effects such as a character in a warped mirror and looking through a glass jar. You can also animate the position of the grid to perform the distortion over time.
The Mesh Warp module is a position module, same as a Peg module.
Use the Mesh Warp editor to adjust the grid size, deformation quality and the region of interest.
Overview¶
Purpose and Scope¶
The WeatherOps Insight API is an interface to WDT's Weather as a Service® analytics platform, allowing application developers to leverage high quality weather information for applications and products. The WeatherOps Insight API provides access to historical, current, and forecast data for any region of interest, such as an agriculture field, urban area, or utility service area. API response formats can include time-series.
Data is available for up to the past 365 days and up to 10 days in the future.
Methods¶
Requests must use HTTPS and one of the following methods. Details are provided in the endpoint descriptions.
Parameters¶
Some requests may have parameters in the URI query string or request body..
Data¶
WDT’s Platform API provides the data underlying the Insight API. Responses list the Platform API product(s) used to create the result. Platform API Products receive regular updates to improve accuracy; this can cause response values to improve over time.
Missing Data¶
In cases where there are gaps in data coverage, Insight API will return a response with values based on the data that was available to it at the time of the request. This response will still have an HTTP status code of 200, but will also include a list of ‘missingValidTimes’ that indicate where the gaps in data coverage occurred.
{
    'missingValidTimes': ['2017-01-01T06:00:00Z', '2017-01-01T07:00:00Z']
    .
    .
    .
}
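As an illustrative sketch (not taken from the official endpoint documentation), the following Python snippet shows how a client could check a response for such gaps; the endpoint URL and credential parameters are placeholders, since the real paths and authentication scheme are described elsewhere:

```python
import requests

# Placeholder endpoint and credentials -- substitute a real Insight API
# endpoint and your own authentication details.
url = "https://insight.example.com/some-endpoint"
params = {"app_id": "YOUR_APP_ID", "app_key": "YOUR_APP_KEY"}

response = requests.get(url, params=params)
response.raise_for_status()
payload = response.json()

# Gaps in data coverage are reported alongside the normal result fields.
missing = payload.get("missingValidTimes", [])
if missing:
    print(f"{len(missing)} valid times had no data:", missing)
```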
Jupyter configuration files¶
Jupyter configuration is based on the traitlets.config module.
Essentially, each Jupyter application (e.g. notebook, or nbconvert) has a number of configurable values which:
- have default values
- can be altered from their default by values read from configuration files, which can be a) .json static files b) .py config python scripts
- can be overridden by command-line arguments
Jupyter config path¶
Jupyter applications search for configuration files in each directory in the jupyter config path. This path includes different locations in different operating systems, but you can use the root jupyter command to find a list of all jupyter paths, and look for the config section:
jupyter --paths
There are at least three configuration directories
- a per-user directory
- a directory in the sys.prefix directory for the python installation in use
- a system-wide directory
Note that writing to the system-wide config directory is likely to require elevated (admin) privileges. This may also be true for the sys-prefix directory, depending on the python installation in use.
Finally, you can also specify a configuration file as a command line argument, for example:
jupyter notebook --config=/home/john/mystuff/jupyter_notebook_config.json
Note that this can change which filenames are searched for, as noted below.
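The same search path can also be queried programmatically via the jupyter_core package (which the jupyter command is built on); a small sketch:

```python
from jupyter_core.paths import jupyter_config_dir, jupyter_config_path

# Directory used for per-user configuration files.
print(jupyter_config_dir())

# Full list of directories searched for config files, in priority order.
for directory in jupyter_config_path():
    print(directory)
```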
Jupyter configuration filenames¶
Jupyter applications search the Jupyter config path for config files with names derived from the application name, with file extension of either .json (loaded as json) or .py (run as a python script).
For example, the jupyter notebook application searches for config files called jupyter_notebook_config, while the jupyter nbconvert application searches for config files named jupyter_nbconvert_config, with the file extensions mentioned above.
In addition, all jupyter applications will load config files named jupyter_config.json or jupyter_config.py.
Specifying a config file on the command line has the additional slightly subtle effect that it will also change the filename that the application searches for. For example, if I call the notebook using
jupyter notebook --config=/home/john/mystuff/special_config_ftw.json
then instead of searching the Jupyter config path for files named jupyter_notebook_config, the notebook application will search the config path for other files also named special_config_ftw, which can mean that the normal config files get missed. As a result, it may be preferable to name any custom config files with the standard filename for the jupyter application they pertain to.
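For reference, a minimal .py config script looks like the sketch below; the two options shown are just illustrative examples of traitlets-style settings (run jupyter notebook --help-all to see the names your version actually exposes):

```python
# jupyter_notebook_config.py -- place in one of the directories reported
# by `jupyter --paths` under the "config" section.
c = get_config()  # `c` is injected by the config loader at load time

# Illustrative traitlets-style assignments:
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8888
```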
Config files edited by jupyter_contrib_nbextensions¶
The jupyter contrib nbextensions install command edits some config files as part of the install:
- jupyter_notebook_config.json is edited in order to:
  - enable the jupyter_nbextensions_configurator serverextension
  - enable the contrib_nbextensions_help_item nbextension, which adds a link to readthedocs to the help menu
- jupyter_nbconvert_config.json is edited in order to:
  - edit the nbconvert template path, adding the jupyter_contrib_nbextensions.nbconvert_support templates directory
  - add the preprocessors CodeFoldingPreprocessor and PyMarkdownPreprocessor from the jupyter_contrib_nbextensions.nbconvert_support module
Create Orchestration ROI labor rate cards
The labor rate card defines the hourly cost of performing a task manually.
Before you begin: Role required: orchestration_manager
About this task: Before calculating your Orchestration ROI, you must create labor rate cards for the manual work that would be required to complete the tasks correlated to the ROI calculations.
Create new automated test
Create a named automated test containing a series of steps to execute.
Before you begin: Role required: [atf_test_admin] or [atf_test_designer].
Procedure
1. Navigate to Automated Test Framework > Tests.
2. Click New. The system displays the Test new record form.
3. On the Test new record form, enter a name for your test in the Name field. The system will identify this test by this name wherever it displays a list of tests, for example, under the Tests module.
4. In the Description field, enter a description for your test.
5. Click Save. The system creates a new test record and returns to the list of tests.
What to do next: Add the steps for the new test.
Introduction
App and game ratings describe the content, appropriateness and overall experience offered by an app or game, and serve to help gamers and parents make choices about what they're downloading and playing.
Every app and game published on iTunes, Google Play and the Amazon Apps Store is required to have a rating. For the most part, developers self-regulate and agree to adhere to specific guidelines as set out by the various Ratings Agencies.
Please note that while we attempt to maintain information that is current and up to date, the 3rd parties described here may change their policies and programs at any time and without notice. Please be sure to follow the links for further and up-to-date information.
ESRB Ratings – Entertainment Software Rating Board
The mission of the ESRB is to empower both parents and consumers to make informed decisions about the suitability and appropriateness of Apps and Games through their voluntary ratings systems.
The ESRB’s App and Game Ratings are found on software boxes, images, and icons. The ESRB has guidelines and rules for developers to follow. These guidelines help ensure consumers will understand the ratings.
Early Childhood
The eC (Early Childhood) icon indicates that the App or Game is safe for young children. The target audience is young children.
Everyone
The E (Everyone) icon indicates that the App or Game is safe for all audiences. The ESRB does warn that some content may include mild cartoon or fantasy violence or mild language that may be offensive to some.
Everyone 10+
The E10+ (Everyone 10+) icon indicates that the App or Game is safe for audiences of the age 10 or older. These Apps or Games may also include mild cartoon or fantasy violence, mild language, or suggestive themes that may be offensive to some.
Teen
Apps or Games with the T (Teen) rating may contain violence, language, themes, crude humor, blood, or gambling that is inappropriate for audiences under the age of 13.
Mature 17+
Apps or Games with the M (Mature 17+) rating are intended for a more mature audience aged 17 or older. These Apps or Games typically contain intense violence, sexual content and themes, blood and gore, and offensive language.
Adults Only
Apps or Games with the Ao (Adults Only) rating usually contain gambling with real money. Adult Only Apps and Games may include graphic sexual content and themes, intense or over the top violence. Adult Only content is appropriate for adults aged 18 or older.
Rating Pending
The RP (Rating Pending) icon appears in Apps, Games or advertising, marketing and promotional materials that are going to have a rating but the ratings have not yet been assigned.
ESRB Content Descriptors
- Alcohol: The consumption of alcoholic beverages
- Use of Drugs: The consumption or use of illegal drugs
- Use of Tobacco: The consumption of tobacco products
- Violence: Scenes involving aggressive conflict. May contain bloodless dismemberment
- Violent References: References to violent acts
Digital Purchases: Enables purchases of digital goods completed directly from within the app (e.g., purchases of additional game content, levels, downloadable music, etc.)
Unrestricted Internet: Product provides access to the internet
PEGI – Pan European Game Information
PEGI is used and recognised throughout Europe and has the enthusiastic support of the European Commission. It is considered to be a model of European harmonisation in the field of the protection of children.”
The PEGI App and Game Ratings are visible on software boxes, images, and icons. PEGI has guidelines and rules for developers to follow. These guidelines help ensure consumers will understand the ratings.
Detailed information about PEGI and PEGI App and Game Ratings is available on the PEGI website at
PEGI Ratings
PEGI 3
The PEGI 3 icon indicates the apps or games are suitable for all ages from 3 and above. Some content may contain cartoon violence but not in a manner that a child could associate with real life. PEGI 3 content does not have sounds or pictures that will frighten children, nor offensive language.
PEGI 7
The PEGI 7 rating is a more restricted version of the PEGI 3 rating. PEGI 7 includes Apps or Games which contain scenes or sounds that may frighten younger children.
PEGI 12
The PEGI 12 rating applies to Apps or Games which contain scenes of violence toward humanoids or animals, sexuality, or mildly offensive language.
PEGI 16
The PEGI 16 rating applies to Apps or Games which depict violence in a realistic manner. PEGI 16 content contains offensive language, possible use of tobacco, alcohol, drugs and the depiction of criminal acts.
PEGI 18
The PEGI 18 is a ratings classification for Apps or Games which depict violence or sexuality in a manner that is considered extreme and repulsive.
PEGI OK
The PEGI OK rating is for Apps or Games which contain nothing that leads to a higher score than PEGI 3. Developers are required to provide a declaration to PEGI that their content does not include violence, sexual activity or innuendo, nudity, bad language, gambling, the use of drugs, alcohol or tobacco, or scary scenes.
PEGI Content Descriptors
- Bad Language: The App or Game contains bad or offensive language
- Discrimination: The App or Game contains matter which may encourage discrimination
- Drugs: The App or Game refers to or depicts the use of drugs
- Fear: The App or Game contains images or sounds that may scare young children
- Gambling: The App or Game encourages or teaches gambling
- Sex: The App or Game depicts nudity, sexuality or sexual references
- Violence: The App or Game depicts violence
- Online Gameplay: The App or Game can be played online
IARC – International Age Rating Coalition
“A ground-breaking global rating and age classification system for digitally delivered games and apps that reflects the unique cultural differences among nations and regions.
Administered by many of the world’s game rating authorities, the International Age Rating Coalition (IARC) provides a globally streamlined age classification process for digital games and mobile apps, helping to ensure that today’s digital consumers have consistent access to established and trusted age ratings across game devices. Established in 2013, IARC simplifies the process by which developers obtain age ratings from different regions around the world by reducing it to a single set of questions about their products’ content and interactive elements. The questionnaire is programmed with unique algorithms that generate ratings reflecting each participating rating authority’s distinct standards, along with a generic rating for the rest of the world. IARC rating assignments also include content descriptors and interactive elements identifying apps that collect and share location or personal information, enable user interaction, share user-generated content, and/or offer in-app digital purchases. The IARC system currently includes rating authorities which collectively represent regions serving approximately 1.5 billion people, with more expected to participate in the future.”
IARC is unique in that developers complete a questionnaire about their App or Game and submit it to a participating storefront. The survey includes information like sharing, location, UGC and in-app purchases.
IARC uses this information to calculate and assign a rating that is appropriate to the particular world region the content is being downloaded, viewed or played.
Because IARC utilizes automation algorithms to calculate App or Game ratings, IARC ratings are applied only to digitally distributed apps and games.
At Disrupted Logic and ctalyst, we deeply respect the content rating systems and we encourage our publishers, developers, and advertisers to adhere to the policies and guidelines to help protect children around the world and to ensure a rich and rewarding entertainment experience for everyone. | http://docs.ctalyst.com/article/ctalyst-explains-mobile-app-game-ratings/ | 2018-05-20T14:03:15 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.ctalyst.com |
Request plan
Create an operational resource plan
Request resources
Confirm a resource plan
Confirm and allocate
{"_id":"5997286999900500197dff28",-08-18T17:48:25.754Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":3,"body":"# Overview\nThe IKONOS dataset includes more than 12 years of panchromatic and multispectral satellite imagery. The IKONOS satelite launched in 1999 and collected imagery with .80-meter resolution and multispectral imagery with 3.2-meter resolution until early 2015.\n\n**Uses**: Useful for image analysis, map creation, and change detection.\n\n**Spectral Bands**: Panchromatic band, Multispectral 4-band\n**Date Range**: 2000-2014\n\n**Products available through GBDX**: IKONOS Ortho-ready 2A\n\nSee the [IKONOS Data Sheet]() for more details.\n\n\nIKONOS Ortho-ready 2A products are cataloged on GBDX. They do not need to be ordered. See the [S3 Location](#section-s3-location) section to learn how to find the location of an IKONOS acquisition. \n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"ikonos-data-post.jpeg\",\n 875,\n 485,\n \"#36373d\"\n ],\n \"caption\": \"IKONOS Multispectral Image\"\n }\n ]\n}\n[/block]\n# Search \n\nUsing the GBDX Catalog API, search by \"types\" to find IKONOS data. To narrow the search results set, include area, date range, or both. Search results can be further filtered by the properties of an\n\n\n## Catalog V2 API Request\n\nSend a ```POST``` request to `````` with a request body. \n\nFor gbdxtools, see <a href=\"\">Catalog Search Overview</a>\n\n## Search Example 1: Search by Types\n\nSearch by the type \"IKONOS\". \n[block:code]\n{\n \"codes\": [\n {\n \"code\": \" {\\n \ \\t\\\"startDate\\\": \\\"2013-06-01T12:00:00.000Z\\\",\\n \\t\\\"endDate\\\": \\\"2014-06-06T12:00:00.000Z\\\",\\n \\t\\\"types\\\": [\\\"IKONOS\\\"]\\n }\",\n \"language\": \"json\"\n }\n ]\n}\n[/block]\n\n\n## Search Result Example \n[block:code]\n{\n \"codes\": [\n {\n \"code\": \" {\\n \\\"stats\\\": {\\n \\\"recordsReturned\\\": 4,\\n \\\"totalRecords\\\": 4,\\n \\\"typeCounts\\\": {\\n \\\"GBDXCatalogRecord\\\": 4,\\n \\\"IKONOSAcquisition\\\": 4,\\n \\\"Acquisition\\\": 4,\\n \\\"IKONOS\\\": 4\\n }\\n },\\n \\\"results\\\": [\\n {\\n \\\"identifier\\\": \\\"2014051819045100000011602271\\\",\\n \\\"type\\\": [\\n \\\"GBDXCatalogRecord\\\",\\n \\\"Acquisition\\\",\\n \\\"IKONOS\\\",\\n \\\"IKONOSAcquisition\\\"\\n ],\\n \\\"properties\\\": {\\n \\\"sunAzimuth\\\": 138.05568,\\n \\\"cloudCover\\\": 14,\\n \\\"targetAzimuth\\\": 5.8030877,\\n \\\"multiResolution\\\": 3.363062,\\n \\\"zone\\\": \\\"10N\\\",\\n \\\"catalogID\\\": \\\"2014051819045100000011602271\\\",\\n \\\"offNadirAngle\\\": 8.466732771390973,\\n \\\"platformName\\\": \\\"IKONOS\\\",\\n \\\"sunElevation\\\": 67.75093,\\n \\\"vendor\\\": \\\"DigitalGlobe\\\",\\n \\\"timestamp\\\": \\\"2014-05-18T19:04:51.000Z\\\",\\n \\\"bucketPrefix\\\": \\\"po_1589873\\\",\\n \\\"bucketName\\\": \\\"ikonos-product\\\",\\n \\\"panResolution\\\": 0.8407655,\\n \\\"footprintWkt\\\": \\\"MULTIPOLYGON(((-122.447100942 38.009674487, -122.405065387 38.0099824057, -122.360801534 38.0104292797, -122.360888189 37.9366972827, -122.360915202 37.8629486558, -122.360995306 37.7893586117, -122.361062053 37.7158250194, -122.361085278 37.6423306511, -122.361179125 37.5690723099, -122.361178606 37.4959540291, -122.361273151 37.4229310949, -122.361323759 37.3493866053, -122.36138463 37.27629299, -122.36146581 37.2032024157, -122.361508363 37.130061414, -122.361554153 37.0571233329, 
-122.361606515 36.9842365938, -122.404941998 36.9841393043, -122.446074055 36.984146552, -122.489463558 36.9842658, -122.489578501 37.0570998457, -122.489705039 37.1299801272, -122.489829829 37.2029128581, -122.489934273 37.2758806469, -122.490068076 37.3489377906, -122.490225408 37.4220740983, -122.490333326 37.4952226093, -122.490409701 37.5690203212, -122.490602637 37.6419039354, -122.490799032 37.7152885378, -122.490949706 37.7887401763, -122.49109447 37.8622395224, -122.491322445 37.9358165729, -122.491436876 38.0095536189, -122.447100942 38.009674487)))\\\",\\n \\\"components\\\": 2,\\n \\\"imageBands\\\": \\\"PAN_MS1\\\",\\n \\\"sensorPlatformName\\\": \\\"IKONOS\\\"\\n }\\n }\\n \\n truncated to show a single IKONOS acquisition\",\n \"language\": \"json\",\n \"name\": null\n }\n ]\n}\n[/block]\n# Types\n\nThere are four \"types\" associated with IKONOS records. \n\nType | Definition\n--- | ---\nGBDXCatalogRecord | The parent type for all GBDX catalog records\nAcquisition | The parent type for all Acquistions\nIKONOS | All IKONOS records. \nIKONOSAcquisition | All IKONOS Acqusitions\n\n***Note: All IKONOS records are IKONOS acquisitions. A search for types \"IKONOS\" or \"IKONOSAcquisitions\" will return the same results set.***\n\n\n# Properties\n\nThe following properties and metadata files are associated with an IKONOS record in the GBDX catalog. \n\nProperty | Description | Values\n--- | --- | ---\nbucketName | The AWS bucket name where IKONOS data is stored. Bucket name + bucket location make up the S3 location for IKONOS data. This url may be used as input to some processing tasks | ikonos-product\nbucketPrefix | The AWS prefix for the location where IKONOS data is stored. Bucket name + bucket location make up the S3 location for IKONOS data. This url may be used as input to some processing tasks | po_1549873\"\ncatalogID | The record ID provided by the vendor | 2014051819045100000011602271\ncloudCover| Estimate of the max cloud-covered fraction of the product component | 0.000 to 1.000, -999.000 if not assessed\ncomponents | IKONOS scans are broken into components. A directory will typically include multiple components | 2 (as part of the directory name, this component would be shown as a 7 digit number; \"0000002\"\nfootprintWkt| The geometry that defines the location of the record| MULTIPOLYGON \nimageBands | The type of image band of the image. For all IKONOS imagery imageBands = PAN_MS1 | PAN_MS1\nmultiResolution | The multispectral resolution for IKONOS imagery is 3.2 - 4m | ex: 3.363062\noffNadirAngle | The spacecraft elevation angle measured from nadir to the image center as seen from the spacecraft at the time the strip or substrip was acquired| 8.466732771390973\npanResolution |The panchromatic resolution for IKONOS imagery is .80 - 1 m | ex: 0.8407655\nplatformName |The name of the sensor platform that acquired the data. 
The properties \"platformName\" and \"sensorPlatformName\" have the same value | IKONOS\nsensorPlatformName | The name of the satellite that acquired the image | IKONOS\nsunAzimuth |The azimuth of the sun as seen by an observer sitting on the target measured in a clockwise direction from north | 138.05568\nsunElevation |The angle of the sun above the horizon | 67.75093\ntargetAzimuth|The azimuth of the target as seen by an observer sitting on the spacecraft measured in a clockwise direction from north | 5.8030877\ntimestamp | The timestamp indicates the date and time data was acquired by the satellite | 2014-05-18T19:04:51.000Z\nvendor | The name of the data provider. | DigitalGlobe\nzone |All IKONOS data is project into UTM. The zone attribute is the UTM zone | 10N\n\n# Catalog ID\nThe Catalog ID is the product ID assigned by the vendor. \n\nThe IKONOS catalog ID is prepended by the date and time of collection. For example:\n\n```200702021935150000001162644```\n\n# Directory Structure and Contents\nIKONOS imagery is located in the bucket s3://ikonos-product. Each scan has its own prefix. These prefixes are named by an internal production order identifier( i.e. po_1642618). \n\nWithin each prefix there is a directory named \"meta\" containing a cloud mask as well as the original metadata properties file. This properties file is a human readable text file containing pertinent collection information.\n\nIn addition to the metadata directory, one or more product directories will exist. The IKONOS scans are broken into components. Each directory is named by the prefix name followed by a 7 digit component number. The following example shows a scan directory containing 3 components.\nmeta\npo_1642618_0000000\npo_1642618_0000001\npo_1642618_0000002\n\nEach component directory contains two files in tif format. One contains the multispectral image and the other contains the pancromatic image.\npo_1642618_bgrn_0000000.tif\npo_1642618_pan_0000000.tif\nThere is also one RPC file per image file. The RPCs define the camera model and needed to orthorectify the image.\npo_1642618_bgrn_0000000_rpc.txt\npo_1642618_pan_0000000_rpc.txt\n\nIn addition several shapefiles are included for each component. The aoi shapefile is the AOI of the original order to produce the strip. It's footprint is the same and the footprint of the image. The image shapefile contains the footprint of the scan as well as metdata related to the scan.\npo_1642618_aoi.shp\npo_1642618_component.shp\npo_1642618_image.shp\n\n\n# S3 location\nThe S3 path to the files for an IKONOS scan is constructed from the following fields: \nbucket\nbucketPrefix\ncomponents\n\nThese three fields can be used to construct the paths to all of the files for a scan. For example the path to the multispectral tif file is:\n\n<bucket>/<bucketPrefix>_<component>/<filePrefix>_bgrn_<component>.tif\ns3://ikonos-product/po_1642618/po_1642618_0000003/po_1642618_bgrn_0000003.tif\n\n\n# Processing IKONOS Data\n\nTo ortho-rectify an IKONOS image, use [ENVI® RPC Orthorectification](doc:envi-rpc-orthorectification).\n\nTo pan-sharpen an IKONOS image, use [ENVI® NNDiffuse PanSharpening](doc:envi-nndiffuse-pansharpening).","excerpt":"","slug":"ikonos","type":"basic","title":"IKONOS"} | https://gbdxdocs.digitalglobe.com/docs/ikonos | 2018-05-20T13:58:31 | CC-MAIN-2018-22 | 1526794863570.21 | [] | gbdxdocs.digitalglobe.com |
Current Version: 0.70.0
Released: May 25, 2018
Alexa, turn on the lights
Use Alexa to control Home Assistant.
Ok Google, turn on the AC
Use Google Assistant to control Home Assistant.
Recent Blog Posts
Join the Home Assistant t-shirt revolution!
All proceeds will be donated to the Electronic Frontier Foundation. …
View examples by the community. | https://rc--home-assistant-docs.netlify.com/ | 2018-05-20T13:19:43 | CC-MAIN-2018-22 | 1526794863570.21 | [] | rc--home-assistant-docs.netlify.com |
To handle a wide variety of resolutions, Gideros provides a functionality called Automatic Screen Scaling.
Before starting your project, you determine your logical resolution and position all your sprites according to this. For example, if you determine your logical resolution as 320x480, your upper left corner will be (0, 0), your lower right corner will be (319, 479) and the center coordinate of your screen will be (160, 240). Then according to your scale mode, Gideros automatically scales your screen according to the real resolution of your hardware.
There are 8 types of scaling modes: | http://docs.giderosmobile.com/automatic_screen_scaling.html | 2018-05-20T14:09:25 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.giderosmobile.com |
WorkItemStore.CultureInfo Property
Gets the localization environment that is used by the client.
Namespace: Microsoft.TeamFoundation.WorkItemTracking.Client
Assembly: Microsoft.TeamFoundation.WorkItemTracking.Client (in Microsoft.TeamFoundation.WorkItemTracking.Client.dll)
Syntax
'Declaration
Public ReadOnly Property CultureInfo As CultureInfo
public CultureInfo CultureInfo { get; }
public: property CultureInfo^ CultureInfo { CultureInfo^ get (); }
member CultureInfo : CultureInfo with get
function get CultureInfo () : CultureInfo
Property Value
Type: System.Globalization.CultureInfo
The localization environment that is used by the client.
.NET Framework Security
- Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see Using Libraries from Partially Trusted Code.
See Also
Reference
Microsoft.TeamFoundation.WorkItemTracking.Client Namespace | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/bb164773(v=vs.120) | 2018-05-20T14:39:41 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
Important Release Information
Microsoft.
The version of Authenticode that was released with Internet Explorer 3.02 UPD added several new, important code-signing features that improved on the initial implementation of Authenticode. Both the code-signing tools and browsers were updated with a new infrastructure that provides for these new features. The two most important features are:
- The addition of a verifiable signature time stamp. When a software publisher's certificate expires, it is impossible to determine if the software was signed during the valid period of the certificate without incorporation of a verifiable signature time stamp. Authenticode version 2.0 incorporates time stamping support in both the signing and verification tools. In addition, VeriSign will be supporting a verifiable time stamping service for Authenticode signing purposes.
- Inclusion of certificates in the certification authority verification hierarchy that expired on June 30, 1997. Earlier versions of Windows Internet Explorer are now unable to verify Authenticode signatures after that date. Internet Explorer version 3.02 UPD and later versions resolve this by eliminating these short-lived certificates. Signatures on certificates issued by VeriSign will properly verify until expiration of the VeriSign root certificate.
The Authenticode version for Internet Explorer 4.0 contains the same infrastructure and features as the Internet Explorer 3.02 UPD release. However, to provide a more consistent user interface, many of the command line option flags have been renamed or changed, and a few new ones have been added.
As a result of these Authenticode improvements, the following steps need to be taken:
- Software publishers need to re-sign their code using the Authenticode version 2.0 tools for Internet Explorer 3.02 UPD or later in order for users to be able to verify their signed files after June 30, 1997.
- Users need to upgrade to Internet Explorer 3.02 UPD or later in order to verify signed files after June 30, 1997.
Note that once files are re-signed, users of Internet Explorer versions earlier than 3.02 UPD will not be able to verify the re-signed files. But after July 1, 1997, users of Internet Explorer versions earlier than 3.02 UPD will not be able to verify any signed files, whether the files have been re-signed with the new tools or not. It is clearly in the users' best interest to upgrade to Internet Explorer 3.02 UPD or later to be able to continue to verify signed files. So software publishers should be able to re-sign their code using the new tools with confidence that users will be able to verify the files.
Additionally, by using the VeriSign service to time stamp the new signatures, software publishers gain the added benefit that the digital signatures will not need to be re-signed when their own software publishing certificate expires. | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms537362(v=vs.85) | 2018-05-20T14:27:46 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
Refer to the Scripting guide to learn more about scripting with Harmony.
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stage.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/draw.png',
'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/sketch.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Interface/HAR11/HAR11_ScriptEditor_View.png',
'Script Editor View Script Editor View'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object) ] | docs.toonboom.com |
Available Packages¶
The CernVM-FS software is available in the form of several packages:
- cvmfs-release
- Adds the CernVM-FS yum/apt repository.
- cvmfs-devel
- Contains the libcvmfs_cache.a static library and libcvmfs_cache.h header in order to develop cache plugins.
- cvmfs-server
- Contains the server tool kit for maintaining publishers and Stratum 1 servers.
- cvmfs-gateway
- The publishing gateway services are installed on a node with access to the authoritative storage.
- cvmfs-ducc
- Daemon that unpacks container images into a repository. Supposed to run on a publisher node.
- cvmfs-notify
- Websockets frontend used for repository update notifications. Supposed to be co-located with a RabbitMQ service.
- kernel-…-.aufs21
- Scientific Linux 6 kernel with aufs. Required for SL6 based Stratum 0 servers.
- cvmfs-shrinkwrap
- Stand-alone utility to export file system trees into containers for HPC use cases.
- cvmfs-unittests
- Contains the cvmfs_unittests binary. Only required for testing.
Working with Microsoft Active Directory in Amazon FSx for Windows File Server
Amazon FSx works with Microsoft Active Directory (AD) to integrate with your existing Microsoft Windows environments. Active Directory is the Microsoft directory service used to store information about objects on the network and make this information easy for administrators and users to find and use. These objects typically include shared resources such as file servers and network user and computer accounts.
When you create a file system with Amazon FSx, you join it to your Active Directory domain to provide user authentication and file- and folder-level access control. Your users can then use their existing user identities in Active Directory to authenticate themselves and access the Amazon FSx file system. Users can also use their existing identities to control access to individual files and folders. In addition, you can migrate your existing files and folders and these items' security access control list (ACL) configuration to Amazon FSx without any modifications.
Amazon FSx provides you with two options for using your Amazon FSx for Windows File Server file system with Active Directory: Using Amazon FSx with AWS Directory Service for Microsoft Active Directory and Using Amazon FSx with your self-managed Microsoft Active Directory.
Amazon FSx supports Microsoft Azure Active Directory Domain Services.
After you create a joined Active Directory configuration for a file system, you can update only the following properties:
Service user credentials
DNS server IP addresses
You cannot change the following properties for your joined Microsoft AD:
DomainName
OrganizationalUnitDistinguishedName
FileSystemAdministratorsGroup
However, you can create a new file system from a backup and change these properties in the Microsoft Active Directory integration configuration for that file system. For more information, see Walkthrough 2: Create a File System from a Backup.
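As a rough sketch of where these properties live when a file system is created programmatically, the boto3 call below shows the self-managed Active Directory block; all identifiers, credentials, and sizing values are placeholders and the surrounding parameters are reduced to a minimum:

```python
import boto3

fsx = boto3.client("fsx")

# Placeholder values throughout -- adjust subnet, sizing, and AD details.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,
    SubnetIds=["subnet-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",
            "OrganizationalUnitDistinguishedName": "OU=FSx,DC=corp,DC=example,DC=com",
            "FileSystemAdministratorsGroup": "FSxAdmins",
            "UserName": "FSxServiceAccount",
            "Password": "REPLACE_ME",
            "DnsIps": ["10.0.0.10", "10.0.0.11"],
        },
    },
)
print(response["FileSystem"]["FileSystemId"])
```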
Amazon FSx does not support Active Directory Connector and Simple Active Directory.
Topics | https://docs.aws.amazon.com/fsx/latest/WindowsGuide/aws-ad-integration-fsxW.html | 2021-06-12T21:30:00 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.aws.amazon.com |
WSO2 API Microgateway 3.0.0 (WSO2 MGW) is a lightweight gateway distribution that can be used to deploy a single API or multiple APIs. In summary, WSO2 API Microgateway is a specialized form of WSO2 API Gateway.
In this Quick Start Guide let's see how a service can be securely proxied via the microgateway. Let's expose the publicly available petstore services ( ) using the microgateway.
Try out the following Quick Start Guides based on your preference. | https://docs.wso2.com/pages/viewpage.action?pageId=126568001 | 2021-06-12T20:52:34 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.wso2.com |
Crate rasters
Version 0.6.1
Library to efficiently process GDAL rasters.
Align a pair of rasters by their geo. transform.
Process rasters in memory-efficient chunks.
Geometry manipulation utilities
Utilities to compute histogram
Abstractions to safely read GDAL datasets from multiple threads.
Utilities to accumulate first and second moments, min, and max of a f64 statistic incrementally.
The error type returned by this crate. Currently this is a synonym for anyhow::Error.
The Result type returned by this crate.
Crate rw_lease
Version 0.1.0
The DrainGuard represents waiting for the readers to release their leases so we can take a write lease.
An RWLock, but:
This guard signifies read access. When it drops, it will release the read lock.
This guard signifies write access. When it drops, it will release the write lock.
Can happen when we try to take a read lease. | https://docs.rs/rw_lease/0.1.0/rw_lease/ | 2021-06-12T20:58:05 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.rs |
Siddhi allows you to manage any faults that may occur when handling streaming data in a graceful manner. This section explains the different ways in which the faults can be handled gracefully.
Handling runtime errors
To specify how errors that occur at runtime should be handled, you need to add an
@OnError annotation to a stream definition as shown below.
@OnError(action='on_error_action')
define stream <stream name> (<attribute name> <attribute type>, <attribute name> <attribute type>, ... );
The on_error_action parameter specifies the action to be executed during failure scenarios. The possible action types are as follows:
LOG: This logs the event with an error, and then drops the event. If you do not specify the fault handling action via the @OnError annotation, LOG is considered the default action.
STREAM: This automatically creates a fault stream for the base stream. The definition of the fault stream includes all the attributes of the base stream as well as an additional attribute named _error. The events are inserted into the fault stream during a failure. The error identified is captured as the value for the _error attribute.
e.g., The following is a Siddhi application that includes the
@OnError annotation to handle failures during runtime.
@OnError(name='STREAM')
define stream StreamA (symbol string, volume long);
from StreamA[custom:fault() > volume]
insert into StreamB;
from !StreamA#log("Error Occured")
select symbol, volume long, _error
insert into tempStream;
Here, if an error occurs for the base stream named
StreamA, a stream named
!StreamA is automatically created. The base stream has two attributes named symbol and volume. Therefore,
!StreamA has the same two attributes, and in addition, another attribute named
_error.
The Siddhi query uses the custom:fault() extension to generate the error based on the specified condition (i.e., if the volume is less than a specified amount). If no error is detected, the output is inserted into the StreamB stream. However, if an error is detected, it is logged with the "Error Occured" text. The output is inserted into a stream named tempStream, and the error details are presented via the _error stream attribute (which is automatically included in the !StreamA fault stream and then inserted into tempStream, the inferred output stream).
Handling errors that occur when publishing the output
To specify the error handling methods for errors that occur at the time the output is published, you can include the on.error parameter in the sink configuration as shown below.
@sink(type='sink_type', on.error='on.error.action')
define stream <stream name> (<attribute name> <attribute type>, <attribute name> <attribute type>, ... );
The action types that can be specified via the
on.error parameter when configuring a sink are as follows. If this parameter is not included in the sink configuration,
LOG is the action type by default.
Security in Amazon DocumentDB
Cloud security at AWS is the highest priority. As an AWS customer, you benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.
Security is a shared responsibility between AWS and you. This documentation helps you understand how to apply the shared responsibility model when using Amazon DocumentDB. To learn about the compliance programs that apply to Amazon DocumentDB (with MongoDB compatibility), see AWS Services in Scope by Compliance Program.
Security in the cloud — Your responsibility is determined by the AWS service that you use. You are also responsible for other factors including the sensitivity of your data, your organization’s requirements, and applicable laws and regulations.
You also learn how to use other AWS services that help you monitor and secure your Amazon DocumentDB resources. The following topics show you how to configure Amazon DocumentDB to meet your security and compliance objectives.
Topics
- Data Protection in Amazon DocumentDB
- Identity and Access Management in Amazon DocumentDB
- Managing Amazon DocumentDB Users
- Restricting Database Access Using Role-Based Access Control (Built-In Roles)
- Logging and Monitoring in Amazon DocumentDB
- Updating Your Amazon DocumentDB TLS Certificates
- Compliance Validation in Amazon DocumentDB
- Resilience in Amazon DocumentDB
- Infrastructure Security in Amazon DocumentDB
- Security Best Practices for Amazon DocumentDB
- Auditing Amazon DocumentDB Events | https://docs.aws.amazon.com/documentdb/latest/developerguide/security.html | 2021-06-12T20:35:48 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.aws.amazon.com |
OpenZeppelin Defender¶
Introduction¶
OpenZeppelin Defender is a web-based application that allows developers to perform and automate smart contract operations in a secure way. Defender offers different components:
- Admin — to automate and secure all your smart contract operations such as access controls, upgrades, and pausing
- Relay — to build with a private and secure transaction infrastructure with the implementation of private relayers
- Autotasks — to create automated scripts to interact with your smart contracts
- Sentinel — to monitor your smart contract's events, functions, and transactions, and receive notifications via email
- Advisor — to learn and implement best practices around development, testing, monitoring, and operations
OpenZeppelin Defender can now be used on the Moonbase Alpha TestNet. This guide will show you how to get started with Defender and demo the Admin component to pause a smart contract. You can find more information in regards to the other components in the links mentioned above.
For more information, the OpenZeppelin team has written a great documentation site for Defender.
Getting Started with Defender on Moonbase Alpha¶
This section goes through the steps for getting started with OpenZeppelin Defender on Moonbase Alpha.
Checking Prerequisites¶
The steps described in this section assume you have MetaMask installed and connected to the Moonbase Alpha TestNet. If you haven't connected MetaMask to the TestNet, check out our MetaMask integration guide.
In addition, you need to sign up for a free OpenZeppelin Defender account, which you can do on the main Defender website.
The contract used in this guide is an extension of the
Box.sol contract used in the upgrading smart contracts guide, from the OpenZeppelin documentation. Also, the contract was made upgradable and pausable to take full advantage of the Admin component. You can deploy your contract using the following code and following the upgrading smart contracts guide:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts-upgradeable/security/PausableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";

contract PausableBox is Initializable, PausableUpgradeable, OwnableUpgradeable {
    uint256 private value;

    // Emitted when the stored value changes
    event ValueChanged(uint256 newValue);

    // Initialize
    function initialize() initializer public {
        __Ownable_init();
        __Pausable_init_unchained();
    }

    // Stores a new value in the contract
    function store(uint256 newValue) whenNotPaused public {
        value = newValue;
        emit ValueChanged(newValue);
    }

    // Reads the last stored value
    function retrieve() public view returns (uint256) {
        return value;
    }

    function pause() public onlyOwner {
        _pause();
    }

    function unpause() public onlyOwner {
        _unpause();
    }
}
Connecting Defender to Moonbase Alpha¶
Once you have an OpenZeppelin Defender account, log into the Defender App. In the main screen, with MetaMask connected to Moonbase Alpha click on the top right corner "Connect wallet" button:
If successful, you should see your address and a text stating "Connected to Moonbase Alpha."
Using the Admin Component¶
This section goes through the steps for getting started with OpenZeppelin Defender Admin component to manage smart contracts on Moonbase Alpha.
Importing your Contract¶
The first step to using Defender Admin is to add the contract you want to manage. To do so, click on the "Add contract" button near the top right corner. This will take you to the "import contract" screen, where you need to:
- Set a contract name. This is only for display purposes
- Select the network where the contract that you want to manage is deployed. This is particularly useful when a contract is deployed with the same address to multiple networks. For this example, enter
Moonbase Alpha
- Enter the contract address
- Paste the contract ABI. This can be obtained either in Remix or in the
.jsonfile generally created after the compilation process (for example, in Truffle or HardHat)
- Check that the contract features were detected correctly
- Once you've checked all the information, click on the "Add" button
If everything was successfully imported, you should see your contract in the Admin component main screen:
Create a Contract Proposal¶
Proposals are actions to be carried out in the contract. At the time of writing, there are three main proposals/actions that can take place:
- Pause — available if the pause feature is detected. Pauses token transfers, minting and burning
- Upgrade — available if the upgrade feature is detected. Allows for a contract to be upgraded via a proxy contract
- Admin action — call to any function in the managed contract
In this case, a new proposal is created to pause the contract. To do so, take the following steps:
- Click on the "New proposal" button to see all the available options
- Click on "Pause"
This will open the proposal page, where all the details regarding the proposal need to be filled in. In this example, you need to provide the following information:
- Admin account address. You can also leave this field empty if you want to run the action from your current wallet (if it has all the necessary permissions)
- Title of the proposal
- Description of the proposal. In here, you should provide as much detail as possible for other members/managers of the contract (if using a MultiSig wallet)
- Click on "Create pause proposal"
Once the proposal is successfully created, it should be listed in the contract's admin dashboard.
Approve a Contract Proposal¶
With the contract proposal created, the next step is to approve and execute it. To do so, go to the proposal and click on "Approve and Execute."
This will initiate a transaction that needs to be signed using MetaMask, after which the proposal state should change to "Executed (confirmation pending)." Once the transaction is processed, the status should show "Executed."
You can also see that the contract's status has changed from "Running" to "Paused." Great! You now know how to use the Admin component to manage your smart contracts. | https://docs.moonbeam.network/integrations/openzeppelin/defender/ | 2021-06-12T20:31:39 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/images/openzeppelin/ozdefender-banner.png',
'OpenZeppelin Defender Banner'], dtype=object)
array(['/images/openzeppelin/ozdefender-images1.png',
'OpenZeppelin Defender Connect'], dtype=object)
array(['/images/openzeppelin/ozdefender-images2.png',
'OpenZeppelin Defender Admin Add Contract'], dtype=object)
array(['/images/openzeppelin/ozdefender-images3.png',
'OpenZeppelin Defender Admin Contract Added'], dtype=object)
array(['/images/openzeppelin/ozdefender-images4.png',
'OpenZeppelin Defender Admin Contract New Pause Proposal'],
dtype=object)
array(['/images/openzeppelin/ozdefender-images5.png',
'OpenZeppelin Defender Admin Contract Pause Proposal Details'],
dtype=object)
array(['/images/openzeppelin/ozdefender-images6.png',
'OpenZeppelin Defender Admin Contract Proposal List'], dtype=object)
array(['/images/openzeppelin/ozdefender-images7.png',
'OpenZeppelin Defender Admin Contract Proposal Pause Approve'],
dtype=object)
array(['/images/openzeppelin/ozdefender-images8.png',
'OpenZeppelin Defender Admin Contract Proposal Pause Executed'],
dtype=object) ] | docs.moonbeam.network |
Activate/Deactivate Watches by Tenant API
Endpoint
PUT /_signals/tenant/{tenant}/_active
DELETE /_signals/tenant/{tenant}/_active
These endpoints can be used to activate and deactivate the execution of all watches configured for a Signals tenant.
Using the PUT verb activates the execution, using the DELETE verb deactivates the execution.
This is equivalent to changing the value of the Signals setting tenant.{tenant}.active. However, this API requires a distinct permission. Thus, it is possible to allow a user to activate and deactivate a tenant while the user cannot change other settings.
Path Parameters
{tenant}: The name of the tenant to be activated or deactivated.
_main refers to the default tenant. Users of the community edition can only use _main here.
Request Body
No request body is required for this endpoint.
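For illustration, calling the endpoint for the default tenant could look like the Python sketch below; the host, TLS settings, and basic-auth credentials are placeholders for your own cluster setup:

```python
import requests

base_url = "https://localhost:9200"   # placeholder host
auth = ("admin", "admin")             # placeholder credentials

# Activate watch execution for the default tenant (_main).
resp = requests.put(f"{base_url}/_signals/tenant/_main/_active", auth=auth)
print(resp.status_code)   # 200 on success, 403 if the permission is missing

# Deactivate it again.
resp = requests.delete(f"{base_url}/_signals/tenant/_main/_active", auth=auth)
print(resp.status_code)
```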
Responses
200 OK
The execution was successfully enabled or disabled.
403 Forbidden
The user does not have the permission to activate or deactivate the execution of a tenant.
Permissions
To access this endpoint, the user needs the privilege cluster:admin:searchguard:tenant:signals:tenant/start_stop.
This permission is included in the following built-in action groups:
- SGS_SIGNALS_ALL | https://docs.search-guard.com/latest/elasticsearch-alerting-rest-api-tenant-activate | 2021-06-12T20:35:59 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.search-guard.com |
Folders for Medicine and Healthcare Documents
Medical Aesthetics & Spas
Including cosmetic procedures, vein treatment, laser hair removal, body contouring and other medical aesthetic treatments
Healthcare Counseling & Services
For physical and occupational therapy, psychology, mental health and social services, nutrition, genetic and fertility, and other counseling services
Diagnostic Services
For outpatient testing, including MRI and CT scans, bone density, echocardiography and EKG, and other clinical procedures
Dental Professionals
For professionals offering dental services such as bridges, implants, bonding and veneers, orthodontics, and oral surgery
Rehabilitation and Palliative Care
For companies that specialize in post-operative patient care, geriatric care, or hospice care
Our innovative Standard, Deluxe and Premium document folders are the perfect vehicle for storing important healthcare information.
Our folders neatly store paperwork, such as patient results, post-operative instructions, at-home exercises or other information along with future appointment cards and a space for promoting other services or your mission statement.
Customized to your Market
What really sets us apart is the customization you'll receive, included in our low pricing, whether you want to highlight your services, provide post-visit instructions, or promote additional offerings.

Source: https://folders4docs.com/folders-for-health-medicine/
The http filter will call an endpoint of your choice and capture the response.
We are going to call a dummy endpoint with the following parameters: a POST request, a JSON Content-Type header, and a small JSON body.
Using the http filter, it will look like this:
```liquid
{% assign endpoint = "" %}

{% capture request_options %}
{
  "url": {{ endpoint | json }},
  "method": "POST",
  "headers": { "Content-Type": "application/json" },
  "body": { "foo": "bar" }
}
{% endcapture %}

{% assign response = request_options | http %}

{% if response.ok %}
  {{ "Success!" | log }}
{% else %}
  {{ "Fail!" | log }}
{% endif %}
```
Columns
- Automatic Column Generation
- Add and Remove Columns Manually
- Column Header
- Hide Vertical Column Borders
- Column Width
- Best Fit
- Auto-Fill Column
- Fixed Columns
- End-User Capabilities
- Identify and Access Grid Columns in Code
Automatic Column Generation
When the Data Grid is bound to a data source, it automatically generates columns for all fields found in this data source.
See also: Obtaining Fields Available in Data Source.
Add and Remove Columns Manually
To add, remove, and re-arrange columns at design time, invoke the Grid Designer and switch to its "Columns" page.
Column Header
To modify a column header caption and add an image, select a column at design time and invoke its smart tag. If a column has no caption assigned, the column generates its caption based on the name of a related data field.
NOTE
Related API:
- GridColumn.Caption - a column header caption.
- GridColumn.Image - a column header image.
- GridColumn.ImageIndex - allows you to select an image from an image collection, assigned to the ColumnView.Images property.
- GridColumn.ImageAlignment - specifies the caption icon alignment.
- GridOptionsView.ShowColumnHeaders - allows you to hide all column headers.
- GridOptionsView.ColumnHeaderAutoHeight - enables multi-line headers that do not trim captions.
Hide Vertical Column Borders
You can hide column and row borders by disabling the GridOptionsView.ShowVerticalLines and GridOptionsView.ShowHorizontalLines settings.
Column Width
If the GridOptionsView.ColumnAutoWidth option is enabled (the default), the total width of all columns matches the View width and columns are resized proportionally when the grid is resized. If this option is disabled, each column keeps its own width and a horizontal scrollbar is displayed when the columns do not fit the View.
NOTE
Related API:
- GridColumn.MinWidth, GridColumn.Width, GridColumn.Resize - allow you to manually set the column width. In case the GridOptionsView.ColumnAutoWidth property is not disabled, the actual column width may differ from your custom settings.
- GridColumn.VisibleWidth - retrieves the actual column width.
- GridView.ColumnWidthChanged - occurs after column width has been changed.
- BaseView.IsSizingState / GridView.IsSizingState - allows you to identify whether or not an end-user is currently resizing a grid column.
- OptionsColumn.FixedWidth - in column auto-width mode, enabling this setting prevents the column from automatic resizing.
Best Fit
To ensure a column (or all the View columns) has enough width for its cells to entirely display their content, end-users can right-click a column header and choose the “Best Fit” (or “Best Fit (all columns)”) option.
NOTE
Related API:
- GridView.BestFitColumns - applies best fit to all columns.
- GridColumn.BestFit - applies best fit to one specific column.
- GridOptionsView.BestFitMaxRowCount - best fit operations scan all grid rows to determine the optimal column width. This property allows you to limit the number of processed rows and thus, improve overall Grid performance.
- GridOptionsView.BestFitMode - allows you to select whether best fit operations prefer precision or calculation speed.
Auto-Fill Column
A grid column assigned to the GridView.AutoFillColumn property automatically resizes to fill in any free space a View provides. In the animation below, the auto-fill column is “Address”.
Fixed Columns
Modify a column's GridColumn.Fixed property to anchor the column to the left or right side of the View. Fixed columns always remain visible while end-users scroll the View horizontally.
The Fixed Columns demo illustrates how to supply grid columns with custom popup menus that allow users to anchor columns at runtime.
NOTE
Related API:
- GridView.FixedLineWidth
End-User Capabilities
By default, end-users can do the following:
Drag a right column edge to resize it.
Related API: OptionsColumn.AllowSize, GridOptionsCustomization.AllowColumnResizing.
Drag-and-drop column headers to re-arrange columns.
Related API: OptionsColumn.AllowMove, GridOptionsCustomization.AllowColumnMoving.
Hide columns by either dragging column headers down, or by right-clicking headers and selecting the “Hide This Column” option.
Related API: OptionsColumn.AllowShowHide, GridOptionsCustomization.AllowQuickHideColumns.
Right-click a column header and select the “Column Chooser” option to invoke a dialog that allows one to drag hidden columns back to the View.
Related API: GridView.CustomizationFormBounds, GridOptionsCustomization.CustomizationFormSearchBoxVisible, GridView.CustomizationForm, GridView.ShowCustomizationForm
Drag a column header into a group area to apply grouping.
Related API: OptionsColumn.AllowGroup, GridOptionsCustomization.AllowGroup
Click a column header to sort data by values of this column. Subsequent clicks change the sort order from ascending to descending and back.
Related API: OptionsColumn.AllowSort, GridOptionsCustomization.AllowSort
Click the filter button within the column header to filter grid data.
Related API: OptionsColumnFilter.AllowFilter, GridOptionsCustomization.AllowFilter
Identify and Access Grid Columns in Code
To retrieve specific columns, utilize the following API:
- ColumnView.Columns - stores all columns that belong to this View and provides access to them by indexes or related data field names.
- ColumnView.FocusedColumn - retrieves the column to which the currently focused cell belongs.
- ColumnView.GetVisibleColumn - returns a column by its visible index (the GridColumn.VisibleIndex property).

Source: https://docs.devexpress.com/WindowsForms/3483/controls-and-libraries/data-grid/views/grid-view/columns?v=18.2
Use the ATtiny3217 ID for the board option in "platformio.ini" (Project Configuration File):
```ini
[env:ATtiny3217]
platform = atmelmegaavr
board = ATtiny3217
```
You can override the default ATtiny3217 settings per build environment using the board_*** option, where *** is a JSON object path from the board manifest ATtiny3217.json. For example, board_build.mcu, board_build.f_cpu, etc.
```ini
[env:ATtiny3217]
platform = atmelmegaavr
board = ATtiny3217

; change microcontroller
board_build.mcu = attiny3217

; change MCU frequency
board_build.f_cpu = 16000000L
```
Crate init_array
A library for initializing arrays itemwise.
Normally, when using fixed size arrays, you can only initialize them with a const value. Example:
```rust
// Literals work.
let arr = [0; 5];

// Const values work too.
const STRING: String = String::new();
let arr = [STRING; 5];
```
```rust
// Function calls don't work.
let arr = [computation(); 5];
```
There are a few different ways of initializing an array itemwise, including:

- Using an array of `Option`s, initializing them all to `None` and then initializing each one to `Some(computation())`.
- Using a `Vec` and incrementally pushing items to it.
- Using an array of `MaybeUninit`s, gradually initializing them and then transmuting the array. This requires usage of `unsafe` code.
This crate uses the third method but hides it behind a safe interface, so that no unsafe code is needed on the user's end. It provides three functions to initialize arrays itemwise:

- `init_array` to initialize a stack-based fixed-size array.
- `init_boxed_array` to initialize a heap-allocated fixed-size array.
- `init_boxed_slice` to initialize a heap-allocated dynamically-sized slice.
If you have the `nightly` feature enabled, you will have access to additional versions of the `init_boxed_...` functions compliant with the new Allocator API.

If you turn off the `alloc` feature, which is enabled by default, you can use this crate in a `#[no_std]` context without an allocator.

The crate is fully `#[no_std]` compatible.
All of these functions share the property that, if the initialization of any item panics (i.e. if the stack unwinds), all the already initialized items are dropped, minimizing the risk of a memory leak. | https://docs.rs/init_array/0.1.2/init_array/ | 2021-06-12T21:36:54 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.rs |
AZ-101: Microsoft Azure Integration and Security
Languages: English
Retirement date: this exam has been retired.
This exam measures your ability to accomplish the following technical tasks: secure identities, evaluate and perform server migration to Azure, implement and manage application services, and implement advanced virtual networking.
Price based on the country in which the exam is proctored. | https://docs.microsoft.com/en-us/learn/certifications/exams/az-101?tab=tab-instructor-led | 2021-06-12T21:56:29 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.microsoft.com |
specialist-publisher: Timestamps
There are three different timestamps that play different roles:
- first_published_at
- last_edited_at
- public_updated_at
In addition to these, some formats have timestamps that can be set by writers/editors when content items are created, e.g. 'Date of occurrence'.
first_published_at
This timestamp is set automatically by the Publishing API the first time a content item is published. For some of the formats, we present this timestamp next to each document in the finder if it has been specified in the schemas:
Internally, we use this field for checking whether the content is a 'first draft', which determines whether we prompt the user for an update_type or whether we automatically set it to 'First published.' Research For Development Outputs work a bit differently because that field can be explicitly set by a user.
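A rough sketch of that check, assuming a content item payload with a nullable first_published_at field (this is illustrative, not the actual specialist-publisher code):

```typescript
// Illustrative "first draft" check based on the behaviour described above.
interface ContentItem {
  first_published_at: string | null; // ISO 8601 timestamp once published, otherwise null
}

function shouldPromptForUpdateType(item: ContentItem): boolean {
  // A document that has already been published needs an explicit update_type;
  // a first draft gets "First published." set automatically.
  return item.first_published_at !== null;
}
```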
last_edited_at
This timestamp is set automatically by the Publishing API on a PUT /v2/content request with an update_type of 'minor' or 'major'. This field is used to order content items on the index pages of the publishing app so that writers/editors see a chronological list of content they have worked on:
Initially, we used the 'updated_at' field for this, but ran into trouble with republishing, which affected this field.
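As a rough illustration of the mechanism, a publishing app controls this field simply through the update_type it sends when writing a draft. The sketch below is illustrative only: the host, path parameters, and payload fields other than update_type are assumptions rather than the actual Publishing API contract.

```typescript
// Illustrative PUT /v2/content request carrying an update_type.
const PUBLISHING_API = 'https://publishing-api.example'; // placeholder host
const contentId = '00000000-0000-0000-0000-000000000000'; // placeholder content ID

async function putDraft(updateType: 'major' | 'minor'): Promise<void> {
  const response = await fetch(`${PUBLISHING_API}/v2/content/${contentId}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      // 'minor' or 'major' refreshes last_edited_at; public_updated_at only
      // moves when a 'major' update is published.
      update_type: updateType,
      title: 'Example document',
    }),
  });

  if (!response.ok) {
    throw new Error(`Publishing API returned ${response.status}`);
  }
}

await putDraft('major');
```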
public_updated_at
This timestamp is set automatically by the Publishing API on publish if the update_type is 'major'. This field is presented to users of GOV.UK when viewing content. This appears in the 'metadata' of content items in the frontend:
In the image above, you can also see a bespoke timestamp that's used by some of the finders. In this case 'Date of occurrence' is a field that can be set by writers/editors for the RAIB Reports format.

Source: https://docs.publishing.service.gov.uk/apps/specialist-publisher/phase-2-migration/timestamps.html