PyElastica Documentation
Contents
PyElastica Documentation#
Elastica is a free and open-source software project for the simulation of assemblies of slender, one-dimensional structures using Cosserat Rod theory.
More information about Elastica is available at the project website
PyElastica#
PyElastica is the python implementation of Elastica. The easiest way to install PyElastica is with PIP:
$ pip install pyelastica
Or download the source code from the GitHub repo
Elastica++#
Elastica++ is a C++ implementation of Elastica. The expected release date for the beta version is 2022 Q2.
Community#
We mainly use git-issue to communicate the roadmap, updates, help requests, and bug fixes. If you have a problem using PyElastica, check whether a similar issue has already been reported in git-issue.
We have also opened a gitter channel for short and immediate feedback.
Contributing#
If you are interested in contributing, please read the contribution-guide first.
Elastica Overview
API Documentation
- Rods
- Rigid Body
- Constraints
- External Forces / Interactions
- Connections / Contact / Joints
- Callback Functions
- Time steppers
- Simulator
- Utility Functions
Advanced Guide
Archive
Using containers (Docker/Podman)
Since the NeuroFedora packages are available in the Fedora repositories, they can also be used in customised containers using the Fedora base containers. The Fedora community releases container images for all Fedora releases, which can be obtained from standard public container image registries like Docker Hub.
Podman is a replacement for Docker that does not require administrative access.
It can be used as a drop-in replacement for Docker in a majority of cases.
On a Fedora system, Podman can be installed using
dnf.
sudo dnf install podman
To use Docker, please refer to the Docker documentation.
Fedora also includes the Toolbox software, which allows the use of containerised command line environments.
Toolbox can be installed using
dnf:
sudo dnf install toolbox
You can learn more about it on the Fedora Silverblue documentation page.
Using the CompNeuro container image
In parallel with the CompNeuro lab installation media, a container image that includes the same set of software is also provided. Whereas the lab image is a full operating system image that can either be used "live" or installed in a virtual machine or a computer, the container image allows us to use the same software with container technologies like Podman and Docker.
The container image can be obtained from the Fedora Container Registry.
podman pull registry.fedoraproject.org/compneuro
It can then be used interactively:
podman run -it compneuro:latest /bin/bash
# terminal in the container
[root@95b9db71272f /]#
Using the Fedora release containers interactively
Even though the CompNeuro container image includes a plethora of tools for computational neuroscience, any package from the Fedora repositories can be used in a container by using the base Fedora release containers. A simple example of using a Fedora container to use a NeuroFedora package is shown below. Here, we use the Fedora base container image, install the required package, and test that it works.
First, we pull the base Fedora container image:
podman pull fedora:latest
Resolved short name "fedora" to a recorded short-name alias (origin: /etc/containers/registries.conf.d/shortnames.conf)
Trying to pull registry.fedoraproject.org/fedora:latest...
Getting image source signatures
Copying blob 8fde7942e775 [--------------------------------------] 0.0b / 0.0b
Copying config 79fd58dc76 done
Writing manifest to image destination
Storing signatures
79fd58dc76113dac76a120f22cadecc3b2d1794b414f90ea368cf66096700053
We then run the image interactively.
podman run -it fedora:latest /bin/bash
# terminal in the container
[root@95b9db71272f /]#
This gives us a container that we can work with interactively. We can install a package here as we would on a Fedora installation, for example:
[root@95b9db71272f /]# sudo dnf install python3-nest
Last metadata expiration check: 0:06:14 ago on Wed Jan 6 10:41:28 2021.
Dependencies resolved.
================================================================================
 Package          Arch       Version           Repo        Size
================================================================================
Installing:
 python3-nest     x86_64     2.20.1-5.fc33     updates     518 k
Installing dependencies:
....
....
Complete!
We can then run commands normally:
[root@95b9db71272f /]# ipython
Python 3.9.0 (default, Oct 6 2020, 00:00:00)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import nest
[INFO] [2021.1.6 11:10:43 /builddir/build/BUILD/nest-simulator-2.20.1/nest-simulator-2.20.1/nestkernel/rng_manager.cpp:217 @ Network::create_rngs_] : Creating default RNGs
[INFO] [2021.1.6 11:10:43 /builddir/build/BUILD/nest-simulator-2.20.1/nest-simulator-2.20.1/nestkernel/rng_manager.cpp:260 @ Network::create_grng_] : Creating new default global RNG
Jan 06 11:10:43

In [2]: nest.version()
Out[2]: 'NEST nest-2.20.1'

In [3]:
Creating container images
While working interactively is quite useful, it is even more useful to create container images with specific sets of packages that can then be used and re-used regularly. For reproducible research, for example, a container image that includes all the necessary software and sources for the model, its simulation, and the analysis of the generated data is most useful.
A container image can be created using a standard Containerfile (Dockerfile for Docker). The complete reference for the Containerfile/Dockerfile can be found here.
Let us create an example container that runs this short code-snippet.
#!/usr/bin/env python3
# example Python source file
# saved in the current directory as nest-test.py
import nest
nest.version()
Our simple Containerfile looks like this:
FROM fedora:33 as fedora-33
# Install the required packages, in this case: NEST
RUN sudo dnf install python3-nest -y
COPY nest-test.py .
# Default command to run
CMD ["python3", "nest-test.py"]
We can then build our container image:
ls
Containerfile  nest-test.py

podman build -f Containerfile -t neurofedora/nest-test
STEP 1: FROM fedora:33 AS fedora-33
STEP 2: RUN sudo dnf install python3-nest -y
Fedora 33 openh264 (From Cisco) - x86_64  1.9 kB/s | 2.5 kB  00:01
....
Complete!
--> 2efea29a8db
STEP 3: COPY nest-test.py .
--> b23a5c6f90d
STEP 4: CMD ["python3", "nest-test.py"]
STEP 5: COMMIT neurofedora/nest-test
--> da9240e572b
da9240e572b4c08ac010001cbc15cb81ae879c63dca70afa4b3e6f313254b218
Our image is now ready to use:
$ podman image list
REPOSITORY                        TAG      IMAGE ID      CREATED          SIZE
localhost/neurofedora/nest-test   latest   da9240e572b4  17 seconds ago   911 MB
When run, it runs our simple script:
$ podman run neurofedora/nest-test
[INFO] [2021.1.6 11:36:36 /builddir/build/BUILD/nest-simulator-2.20.1/nest-simulator-2.20.1/nestkernel/rng_manager.cpp:217 @ Network::create_rngs_] : Creating default RNGs
[INFO] [2021.1.6 11:36:36 /builddir/build/BUILD/nest-simulator-2.20.1/nest-simulator-2.20.1/nestkernel/rng_manager.cpp:260 @ Network::create_grng_] : Creating new default global RNG
Jan 06 11:36:36

In a similar way, any package from the Fedora repositories can be used in containers using dnf (not just NeuroFedora packages).
Additionally, we can also include software using
pip and other package managers, just as we would on a normal system.
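For example, a Containerfile that layers a pip-installed package on top of the Fedora base image could look like the following sketch. The pip package used here (requests) is just an illustrative choice, not something the NeuroFedora documentation prescribes:

FROM fedora:33 as fedora-33
# Distribution packages come from dnf ...
RUN sudo dnf install python3-pip -y
# ... and anything not packaged in Fedora can come from pip
RUN pip3 install requests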
Variable types
This page documents an earlier version of InfluxDB. InfluxDB v2.3 is the latest stable version. View this page in the v2.3 documentation.
Use custom dashboard variables
Use the Flux
v record and dot or bracket notation to access custom dashboard variables.
For example, to use a custom dashboard variable named
exampleVar in a query,
reference the variable with
v.exampleVar:
from(bucket: "telegraf") |> range(start: v.timeRangeStart, stop: v.timeRangeStop) |> filter(fn: (r) => r._measurement == "cpu" ) |> filter(fn: (r) => r._field == "usage_user" ) |> filter(fn: (r) => r.cpu == v.exampleVar)
To select variable values:
- In a dashboard: Use the dashboard variable drop-down menus at the top of your dashboard.
- In the Script Editor: Click the Variables tab on the right of the Script Editor, click the name of the variable, and then select the variable value from the drop-down menu.
For more on using dashboard variables, see Use and manage. | https://test2.docs.influxdata.com/influxdb/v2.0/visualize-data/variables/variable-types/ | 2022-06-25T05:05:23 | CC-MAIN-2022-27 | 1656103034170.1 | [] | test2.docs.influxdata.com |
Modules¶
Zend Framework 2 uses a module system to organise your main application-specific code within each module. The Application module provided by the skeleton is used to provide bootstrapping, error and routing configuration to the whole application. It is usually used to provide application level controllers for, say, the home page of an application, but we are not going to use the default one provided in this tutorial as we want our album list to be the home page, which will live in our own module.
We are going to put all our code into the Album module which will contain our controllers, models, forms and views, along with configuration. We’ll also tweak the Application module as required.
Let’s start with the directories required.
Setting up the Album module¶
Start by creating a directory called Album under module with the following subdirectories to hold the module’s files:
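A directory layout consistent with this description (and with the skeleton used by the official tutorial) is:

zf2-tutorial/
    /module
        /Album
            /config
            /src
                /Album
                    /Controller
                    /Form
                    /Model
            /view
                /album
                    /album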
As you can see the Album module has separate directories for the different types of files we will have. The PHP files that contain classes within the Album namespace live in the src/Album directory so that we can have multiple namespaces within our module should we require it. The view directory also has a sub-folder called album for our module’s view scripts.
In order to load and configure a module, Zend Framework 2 has a ModuleManager. This will look for Module.php in the root of the module directory (module/Album) and expect to find a class called Album\Module within it. That is, the classes within a given module will have the namespace of the module’s name, which is the directory name of the module.
Create Module.php in the Album module: Create a file called Module.php under zf2-tutorial/module/Album:
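A minimal Module.php matching the description that follows is sketched here; it mirrors the standard ZF2 tutorial skeleton:

<?php
namespace Album;

class Module
{
    public function getAutoloaderConfig()
    {
        return array(
            'Zend\Loader\ClassMapAutoloader' => array(
                __DIR__ . '/autoload_classmap.php',
            ),
            'Zend\Loader\StandardAutoloader' => array(
                'namespaces' => array(
                    // autoload all classes from src/Album via the Album namespace
                    __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
                ),
            ),
        );
    }

    public function getConfig()
    {
        return include __DIR__ . '/config/module.config.php';
    }
}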
The ModuleManager will call getAutoloaderConfig() and getConfig() automatically for us.
Autoloading files¶
Our getAutoloaderConfig() method returns an array that is compatible with ZF2’s AutoloaderFactory. We configure it so that we add a class map file to the ClassMapAutoloader and also add this module’s namespace to the StandardAutoloader. The standard autoloader requires a namespace and the path where to find the files for that namespace. It is PSR-0 compliant and so classes map directly to files as per the PSR-0 rules.
As we are in development, we don't need to load files via the classmap, so we provide an empty array for the classmap autoloader. Create a file called autoload_classmap.php under zf2-tutorial/module/Album:
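For now the file simply returns an empty array:

<?php
// autoload_classmap.php -- empty while developing
return array();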
The config information is passed to the relevant components by the ServiceManager. We need two initial sections: controllers and view_manager. The controllers section provides a list of all the controllers provided by the module. We will need one controller, AlbumController, which we’ll reference as Album\Controller\Album. The controller key must be unique across all modules, so we prefix it with our module name.
Within the view_manager section, we add our view directory to the TemplatePathStack configuration. This will allow it to find the view scripts for the Album module that are stored in our view/ directory.
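A module/Album/config/module.config.php consistent with this description could look like the following sketch:

<?php
return array(
    'controllers' => array(
        'invokables' => array(
            // unique controller key, prefixed with the module name
            'Album\Controller\Album' => 'Album\Controller\AlbumController',
        ),
    ),
    'view_manager' => array(
        'template_path_stack' => array(
            // lets the TemplatePathStack find the module's view scripts
            'album' => __DIR__ . '/../view',
        ),
    ),
);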
Informing the application about our new module¶
We now need to tell the ModuleManager that this new module exists. This is done in the application’s config/application.config.php file which is provided by the skeleton application. Update this file so that its modules section contains the Album module as well, so the file now looks like this:
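The relevant part is sketched below, abridged to the modules section; the rest of the skeleton's application.config.php stays unchanged:

<?php
return array(
    'modules' => array(
        'Application',
        'Album',                 // <-- Add this line
    ),
    // ... module_listener_options and other settings remain as provided by the skeleton ...
);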
(Changes required are highlighted using comments.)
As you can see, we have added our Album module into the list of modules after the Application module.
We have now set up the module ready for putting our custom code into it.
Retrieves recommended strategies and tools for the specified server.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-server-strategies
--server-id <value>
[--cli-input-json <value>]
serverStrategies -> (list)
A list of strategy recommendations for the server.
(structure)
Contains information about a strategy recommendation for a server.
isPreferred -> (boolean)Set to true if the recommendation is set as preferred.
numberOfApplicationComponents -> (integer)The number of application components with this strategy recommendation running on the server.
recommendation -> (structure)

Strategy recommendation for the server.

strategy -> (string)The strategy for the server.
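A hypothetical invocation and response are sketched below, shaped only by the fields documented above; the server ID and the strategy value are placeholders, not values taken from this reference:

aws migrationhubstrategy get-server-strategies \
    --server-id "example-server-id"

{
    "serverStrategies": [
        {
            "isPreferred": true,
            "numberOfApplicationComponents": 2,
            "recommendation": {
                "strategy": "Rehost"
            }
        }
    ]
}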
marathon VCLx / marathon deepVCL#
marathon VCLx is fit for especially complex image processing as well as deep learning applications based on CNN (Convolutional Neuronal Networks).
marathon VCLx is also available as product variant marathon deepVCL. marathon deepVCL is already pre-licensed for running CNN applications developed by Basler. As marathon deepVCL is based on the hardware of marathon VCLx, all information provided below also applies to marathon deepVCL where not explicitly stated otherwise.
Two mini Camera Link connectors (SDR 26) are located directly on the slot bracket.
For connection to the computer, marathon VCLx#
FPGA Programming Environment#
Trigger Boards#
Application Notes#
SDK Documentation#
- SDK Manual
- SDK Function Reference
- Camera Link Serial Interface
- SDK Functions Reference - Camera Link Serial Interface.
Default Calendars for New Entries
DayBack lets each user decide which calendar should be used for new events. This setting is available for each calendar by clicking on the gear icon beside each calendar in DayBack's left-hand sidebar.
The "for me" option is available for each of your users provided you have NOT set one of the calendars to be the default "for everyone". Only admins can see the "for everyone" option.
So if you set a calendar as the default "for everyone" then your users will always use that calendar by default for new events. If you leave that setting at "no" then each user can pick their own default calendar for new events.
How can I create events for a specific source?
If you have multiple sources visible in the calendar, DayBack follows a simple rule to determine in which source an event is created:
If you want to find your ideal partner abroad, there are several foreign dating sites to choose from. JollyRomance specializes in Slavic true love, so you won't discover many Latina women on this site, but it does have one of the highest monthly customer bases. With this significant number of users comes a better chance of finding the perfect partner for you. Search through thousands of dating profiles to find the perfect overseas partner and start a new relationship.
To ensure a very good foreign internet dating experience, locate a site which has professional customer care, including a help section. Be sure to look for a internet site that is open 24 hours a day. If you can’t reach a live person, you can send out a gift or use the FAQ section to answer any problems you might have. Many foreign online dating sites will have a membership that gives localized single profiles and solutions. The best international dating sites will even offer local membership.
When foreign online dating can be tricky, there are several best-in-class foreign internet dating sites that offer reliability features and free accounts. Moreover, leading foreign dating sites boast of massive user facets and huge consumer bases. If you would like to find the perfect match abroad, examine top foreign dating sites today! It’s incredibly easy to find love online. You could be surprised by international connection that you may strike up with! You may end up producing new good friends and creating a reliable relationship by simply joining another dating website.
Another well-liked international going out with site is certainly Ashley Madison. While most affiliates are from United States or Europe, the majority are looking for a significant relationship. It’s not hard to register on this website and search members’ experience with mobile platform. Just remember, though, that you should never use a site that produces critical human relationships. Instead, select a dating site that suits people with a similar way of life. You’ll never regret your option.
Among the many various other foreign seeing websites, OkCupid is a popular choice for individuals seeking ambiance. The website possesses millions of associates from all over the world. If you’re a guy looking for a partner overseas, you may meet beautiful women who speak your language. You can also exchange text messages with these kinds of women, and learn about their way of life. This way, you may not risk cultural clashes. If you choose end up assembly someone who converse your language, you will not have to worry about the clumsiness of vocabulary and customs.
If you’re searching for a relationship in foreign countries, the first step is to find a foreign internet dating site. Several sites offer intercontinental matchmaking and are completely free. However , some of these sites do require you to cover membership. You may either get a paid membership or possibly a trial regular membership. You don’t have to give until it’s ready to marry, but you can constantly opt to join a free trial.
Zoosk is another popular international dating internet site, which is also no cost. This internet dating website welcomes both immediate and long term relationships. With over fourty million users, Zoosk is an excellent decision for international dating. Zoosk allows you to flick through singles by simply location and interests. When you’re an associate of Zoosk, you can also match other users through social networking users. If you have been searching for your best partner abroad, Zoosk is the site suitable for you.
AdultFriendFinder is yet another international online dating site that focuses on bringing together lonely people worldwide. The site's advanced search characteristic allows users to narrow their search by grow old, religion, profession, and even family pet title. Yourself your match, you can begin communicating with your new spouse as soon as you join. Whether you are thinking about a long term companion or a one-night stand, you will discover your perfect match.
Ship your Azure activity logs using an automated deployment process. At the end of this process, you’ll have configured an event hub namespace, an event hub, and 2 storage blobs.
The resources set up by the automated deployment can collect data for a single Azure region.
Overview of the services you’ll be setting up in)
Determining how many automated deployments to deploy
You’ll need an event hub in the same region as your services.
How many automated deployments you will need, depends on the number of regions involved.
You’ll need at least 1 automated deployment for each region where you want to collect logs.This is because Azure requires an event hub in the same region as your services. The good news is you can stream data from multiple services to the same event hub, just as long as they are in the same region..
In the BASICS section
In the SETTINGS section, open Diagnostic settings, and then click + Add diagnostic setting. This takes you to the Diagnostic settings page. Once data is streaming, your logs will appear in Kibana.
If you still don’t see your logs, see log shipping troubleshooting. | https://docs.logz.io/shipping/security-sources/azure-activity-logs.html | 2022-06-25T03:56:09 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/azure-event-hubs/customized-template.png',
'Customized template'], dtype=object)
array(['https://dytvr9ot2sszz.cloudfront.net/logz-docs/azure-event-hubs/azure-blob-storage-outputblob.png',
'New Blob output'], dtype=object) ] | docs.logz.io |
Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP
POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP
POST request to an event sink.
There are multiple broker implementations available for use with OpenShift Serverless, each of which have different event delivery guarantees and use different underlying technologies. You can choose the broker implementation when creating a broker by specifying a broker class, otherwise the default broker class is used. The default broker class can be configured by cluster administrators.
The channel-based broker implementation internally uses channels for event delivery. Channel-based brokers provide different event delivery guarantees based on the channel implementation a broker instance uses, for example:
A broker using the
InMemoryChannel implementation is useful for development and testing purposes, but does not provide adequate event delivery guarantees for production environments.
A broker using the
KafkaChannel implementation provides the event delivery guarantees required for a production environment.
OpenShift Serverless provides a
default Knative broker that you can create by using the
kn CLI. You can also create the
default broker by adding the
eventing.knative.dev/injection: enabled annotation to a trigger, or by adding the
eventing.knative.dev/injection=enabled label to a namespace.
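For example, a trigger that requests broker injection via the annotation might look like the following sketch; the trigger, namespace, and subscriber names are placeholders:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  annotations:
    eventing.knative.dev/injection: enabled
  name: example-trigger
  namespace: example-namespace
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-service

Applying such a trigger in a namespace that has no broker causes the default broker to be created automatically.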
Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Using the
kn CLI to create brokers provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the
kn broker create command to create a broker by using the
kn CLI.
The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster.
You have installed the Knative (
kn) CLI.
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform.
Create the
default broker:
$ kn broker create default
Use the
kn command to list all existing brokers:
$ kn broker list
NAME      URL                                                                      AGE   CONDITIONS   READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/test/default   45s   5 OK / 5     True
Optional: If you are using the OpenShift Container Platform web console, you can navigate to the Topology view in the Developer perspective, and observe that the broker exists:
Jira
Feature availability
This feature is available with all paid plans. See
pricing plans
for more details. For availability of this feature for Snyk Infrastructure as Code, see
Jira Integration
.
Set up your Jira integration
Our Jira integration allows you to manually raise Jira issues in the Snyk UI for vulnerabilities or license issues, and also includes an API (
see our API docs
).
Caution
if your Jira instance is private, you'll need to set up with Snyk Broker and then follow our brokered Jira setup instructions.
Prerequisites
Snyk requires Jira version 5 or above.
"Browse Projects" and "Create Issues"
permissions are needed.
How to set up your Jira integration
Jira account credentials are configured in
Organization Settings > Integrations
. Best practice suggests setting up a new user in Jira for this, rather than using an existing account's credentials.
Cloud-hosted Jira implementations require a username and API token authentication. Jira API tokens are generated in
Atlassian API tokens
. Self-hosted implementations are able to authenticate with a username and password.
Create a Jira issue
Once you’ve set up the connection, visit one of your Snyk projects. You’ll now see a new button at the bottom of each vulnerability and license issue card that allows you to create a Jira issue.
When you click on this, a Jira issue creation form will appear with the Snyk issue details copied across into the relevant fields. You can review and edit this before creating the issue.
Select which Jira project you’d like to send the issue to. The fields that we display below are based on the fields that the project has, so switching between projects may show different options.
Note
Snyk currently supports non-Epic Jira ticket creation. Epics will need to be added manually to the ticket once it has been created.
Once you’ve created a Jira issue, the Jira key with a link will display on the issue card. If you’re using the Jira API, you can generate multiple Jira issues for the same issue in Snyk.
You can also see which Jira issues have been created from the Issues view in your reports.
See also:
Enable permissions for Snyk Broker from your third-party tool
Slack integration
Vulnerability management tools
Invoice Ninja User Guide¶
Want to find out everything there is to know about how to use your Invoice Ninja account? Look no further than our User Guide, where you'll learn all about creating and sending invoices, receiving payments, creating tasks, converting quotes to invoices, recurring invoices, entering credits and much, much more.
Delete an organization
This page documents an earlier version of InfluxDB. InfluxDB v2.3 is the latest stable version. View this page in the v2.3 documentation.
Use the
influx command line interface (CLI)
to delete an organization.
Delete an organization using the influx CLI
Use the
influx org delete command
to delete an organization. Deleting an organization requires the following:
- The organization ID (provided in the output of
influx org list)
# Syntax
influx org delete -i <org-id>

# Example
influx org delete -i 034ad714fdd6
System Configuration
This section describes how to configure the
varfish-docker-compose setup.
When running with the
varfish-docker-compose files and the provided database files, VarFish comes preconfigured with sensible default settings and also contains some example datasets to try out.
There are a few things that you might want to tweak.
Please note that there might be more settings that you can change when exploring the VarFish source code but right now their use is not supported for external users.
VarFish & Docker Compose
The recommended (and supported) way to deploy VarFish is using Docker compose.
The VarFish server and its component are not installed on the system itself but rather a number of Docker containers with fixed Docker images are run and work together.
The base
docker-compose.yml file starts a fully functional VarFish server.
Docker Compose supports using so-called override files.
Basically, the mechanism works by providing an
docker-compose.override.yml file that is automatically read at startup when running
docker-compose up.
This file is put into the .gitignore so it is not in the
varfish-docker-compose repository but rather created in the checkouts (e.g., manually or using a configuration management tool such as Ansible).
On startup, Docker Compose will read first the base
docker-compose.yml file.
It will then read the override file (if it exists) and recursively merge both YAML files with the override file overriding taking precedence over the base file.
Note that the recursive merging will be done on YAML dicts only, lists will overwritten.
The mechanism in detail is described in the official documentation.
We provide the following files that you can use/combine into the local
docker-compose.override.yml file of your installation.
docker-compose.override.yml-cert– use TLS encryption with your own certificate from your favourite certificate provider (by default an automatically generated self-signed certificate will be used by traefik, the reverse proxy).
docker-compose.override.yml-letsencrypt– use letsencrypt to obtain a certificate.
docker-compose.override.yml-cadd– spawn Docker containers for allowing pathogenicity annotation of your variants with CADD.
The overall process is to copy any of the *.override.yml-* files to docker-compose.override.yml and adjust it to your needs (e.g., merging with another such file).
Note that you could also explicitly provide multiple override files but we do not consider this further. For more information on the override mechanism see the official documentation.
The following sections describe the possible adjustment with Docker Compose override files.
TLS / SSL Configuration
The
varfish-docker-compose setup uses traefik as a reverse proxy and must be reconfigured if you want to change the default behaviour of using self-signed certificates.
Use the contents of
docker-compose.override.yml-cert for providing your own certificate.
You have to put the server certificate and key into
config/traefik/tls/server.crt and
server.key and then restart the
traefik container.
Make sure to provide the full certificate chain if needed (e.g., for DFN issued certificates).
If your site is reachable from the internet then you can also use the contents of
docker-compose.override.yml-letsencrypt which will use letsencrypt to obtain the certificates.
Make sure to adjust the line with
--certificatesresolvers.le.acme.email= to your email address.
Note well that if you make your site reachable from the internet then you should be aware of the implications.
VarFish is MIT licensed software which means that it comes “without any warranty of any kind”, see the
LICENSE file for details.
After changing the configuration, restart the site (e.g., with
docker-compose down && docker-compose up -d if it is running in detached mode).
LDAP Configuration
VarFish can be configured to use up to two upstream LDAP servers (e.g., OpenLDAP or Microsoft Active Directory).
For this, you have to set the following environment variables in the file
.env in your
varfish-docker-compose checkout and restart the site.
The variables are given with their default values.
ENABLE_LDAP=0
Enable primary LDAP authentication server (values:
0,
1).
AUTH_LDAP_SERVER_URI=
URI for primary LDAP server (e.g.,
ldap://ldap.example.com:port or
ldaps://...).
AUTH_LDAP_BIND_DN=
Distinguished name (DN) to use for binding to the LDAP server.
AUTH_LDAP_BIND_PASSWORD=
Password to use for binding to the LDAP server.
AUTH_LDAP_USER_SEARCH_BASE=
DN to use for the search base, e.g.,
DC=com,DC=example,DC=ldap
AUTH_LDAP_USERNAME_DOMAIN=
Domain to use for user names, e.g. with
EXAMPLE, users from this domain can log in with
user@EXAMPLE.
AUTH_LDAP_DOMAIN_PRINTABLE=${AUTH_LDAP_USERNAME_DOMAIN}
Domain used for printing the user name.
If you have the first LDAP configured then you can also enable the second one and configure it.
ENABLE_LDAP_SECONDARY=0
Enable secondary LDAP authentication server (values:
0,
1).
The remaining variable names are derived from the ones of the primary server but using the prefix
AUTH_LDAP2 instead of
AUTH_LDAP.
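As a concrete illustration, a primary-server configuration in .env could look like this; every value is a placeholder that must be replaced with your own settings:

ENABLE_LDAP=1
AUTH_LDAP_SERVER_URI=ldaps://ldap.example.com:636
AUTH_LDAP_BIND_DN=cn=varfish,ou=services,dc=example,dc=com
AUTH_LDAP_BIND_PASSWORD=changeme
AUTH_LDAP_USER_SEARCH_BASE=dc=example,dc=com
AUTH_LDAP_USERNAME_DOMAIN=EXAMPLE
AUTH_LDAP_DOMAIN_PRINTABLE=EXAMPLE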
SAML Configuration
Besides LDAP configuration, it is also possible to authenticate with existing SAML 2.0 ID Providers (e.g. Keycloak). Since varfish is built on top of sodar core, you can also refer to the sodar-core documentation for further help in configuring the ID Providers.
To enable SAML authentication with your ID Provider, a few steps are necessary. First, add a SAML Client for your ID Provider of choice. The sodar-core documentation features examples for Keycloak. Make sure you have assertion signing turned on and allow redirects to your varfish site.
The SAML processing URL should be set to the externally visible address of your varfish deployment, e.g..
Next, you need to obtain your metadata.xml as well as the signing certificate and key file from the ID Provider. Make sure you convert these keys to standard OpenSSL format before starting your varfish instance (you can find more details here).
If you deploy varfish without docker, you can pass the file paths of your metadata.xml and key pair directly. Otherwise, make sure that you have placed them in a single folder and added the corresponding folder to your docker-compose.yml (or add it as a docker-compose.override.yml), like in the following snippet.
varfish-web:
  # ...
  volumes:
    - "/path/to/my/secrets:/secrets:ro"
Then, define at least the following variables in your docker-compose
.env file (or the environment variables when running the server natively).
ENABLE_SAML
[Default 0] Enable [1] or Disable [0] SAML authentication
SAML_CLIENT_ENTITY_ID
The SAML client ID set in the ID Provider config (e.g. “varfish”)
SAML_CLIENT_ENTITY_URL
The externally visible URL of your varfish deployment
SAML_CLIENT_METADATA_FILE
The path to the metadata.xml file retrieved from your ID Provider. If you deploy using docker, this must be a path inside the container.
SAML_CLIENT_IDP
The URL to your IDP. In the case of Keycloak, it can look something like https://keycloak.example.com/auth/realms/<my_varfish_realm>
SAML_CLIENT_KEY_FILE
Path to the SAML signing key for the client.
SAML_CLIENT_CERT_FILE
Path to the SAML certificate for the client.
SAML_CLIENT_XMLSEC1
[Default /usr/bin/xmlsec1] Path to the xmlsec executable.
By default, the SAML attributes map is configured to work with Keycloak as SAML Auth provider. If you are using a different ID Provider,
or different settings you also need to adjust the
SAML_ATTRIBUTES_MAP option.
SAML_ATTRIBUTES_MAP
A dictionary identifying the SAML claims needed to retrieve user information. You need to set at least username, first_name, and last_name. Example:
SAML_ATTRIBUTES_MAP="email=email,username=uid,first_name=firstName,last_name=name"
To set initial user permissions on first login, you can use the following options:
SAML_NEW_USER_GROUPS
Comma separated list of groups for a new user to join.
SAML_NEW_USER_ACTIVE_STATUS
[Default True] Whether a new user is considered active.
SAML_NEW_USER_STAFF_STATUS
[Default True] New users get the staff status.
SAML_NEW_USER_SUPERUSER_STATUS
[Default False] New users are marked superusers (I advise leaving this one alone).
If you encounter any troubles with this rather involved procedure, feel free to take a look at the discussion forums on github and open a thread.
Sending of Emails
You can configure VarFish to send out emails, e.g., when permissions are granted to users.
PROJECTROLES_SEND_EMAIL=0
Enable sending of emails.
String to use for the sender, e.g.,
[email protected].
Prefix to use for email subjects, e.g.,
[VarFish].
URL to the SMTP server to use, e.g.,
smtp://user:[email protected]:1234.
External Postgres Server
In some setups, it might make sense to run your own Postgres server. The most common use case would be that you want to run VarFish in a setting where fast disks are not available (virtual machines or in a “cloud” setting). You might still have a dedicated, fast Postgres server running (or available as a service from your cloud provider). In this case, you can configure the database connection settings as follows.
DATABASE_URL=postgresql://postgres:password@postgres/varfish
Adjust to the credentials, server, and database name that you want to use.
The default settings do not make for secure settings in the general case.
However, Docker Compose will create a private network that is only available to the Docker containers.
In the default
docker-compose setup, postgres server is thus not exposed to the outside and only reachable by the VarFish web server and queue workers.
Miscellaneous Configuration
VARFISH_LOGIN_PAGE_TEXT
Text to display on the login page.
FIELD_ENCRYPTION_KEY
Key to use for encrypting secrets in the database (such as saved public keys for the Beacon Site feature). You can generate such a key with the following command:
python -c 'import os, base64; print(base64.urlsafe_b64encode(os.urandom(32)))'.
VARFISH_QUERY_MAX_UNION
Maximal number of cases to query for at the same time for joint queries. Default is
20.
Sentry Configuration
Sentry is a service for monitoring web apps. Their open source version can be installed on premise. You can configure sentry support as follows
ENABLE_SENTRY=0
Enable Sentry support.
SENTRY_DSN=
A sentry DSN to report to. See Sentry documentation for details.
System and Docker (Compose) Tweaks
A number of customizations of the installation can be done using Docker or Docker Compose. Other customizations have to be done on the system level. This section lists those that the authors are aware of, but in particular network-related settings can be done on many levels.
Using Non-Default HTTP(S) Ports
If you want to use non-standard HTTP and HTTPS ports (defaults are 80 and 443) then you can tweak this in the
traefik container section.
You have to adjust two parts, below we give them separately with full YAML “key” paths.
services: traefik: ports: - "80:80" - "443:443"
To listen on ports
8080 and
8443 instead, your override file should have:
services:
  traefik:
    ports:
      - "8080:80"
      - "8443:443"
Also, you have to adjust the command line arguments to traefik for the
web (HTTP) and
websecure (HTTPS) entrypoints.
services: traefik: command: # ... - "--entrypoints.web.address=:80" - "--entrypoints.websecure.address=:443"
Use the following in your override file.
services: traefik: command: # ... - "--entrypoints.web.address=:8080" - "--entrypoints.websecure.address=:8443"
Based on the
docker-compose.yml file alone, your
docker-compose.override.yml file should contain the following lines.
You will have to adjust the file accordingly if you want to use a custom static certificate or letsencrypt by incorporating the files from the provided example
docker-compose.override.yml-* files.
services: traefik: ports: - "8080:80" - "8443:443" command: - "--providers.docker=true" - "--providers.docker.exposedbydefault=false" - "--entrypoints.web.address=:80" - "--entrypoints.web.http.redirections.entryPoint.to=websecure" - "--entrypoints.web.http.redirections.entryPoint.scheme=https" - "--entrypoints.web.http.redirections.entrypoint.permanent=true" - "--entrypoints.web.address=:80" - "--entrypoints.websecure.address=:443"
Then, restart by calling
docker-compose up -d in the directory with the
docker-compose.yml file.
Listing on Specific IPs
By default, the
traefik container will listen on all IPs and interfaces of the host machine.
You can change this by prefixing the
ports list with the IPs to listen on.
The settings to adjust here are:
services: traefik: ports: - "80:80" - "443:443"
And they need to be overwritten as follows in your override file.
services: traefik: ports: - "10.0.0.1:80:80" - "10.0.0.1:443:443"
More details can be found in the corresponding section of the Docker Compose manual.
Of course, you can combine this with adjusting the ports, e.g., to
10.0.0.1:8080:80 etc.
Limit Incoming Traffic
In some settings you might want to limit incoming traffic to certain networks / IP ranges.
In principle, this is possible with adjusting the Traefik load balancer/reverse proxy.
However, we would recommend you to use the firewall of your operating system or your overall network for this purpose.
Consult the corresponding manual (e.g., of
firewalld for CentOS/Red Hat or of
ufw for Debian/Ubuntu) for instructions.
We remark that in most cases it is better to perform an actual separation of networks and place each (virtual) machine into one network only.
Understanding Volumes
The
volumes sub directory of the
varfish-docker-compose directory contains the data for the containers.
These are as follows.
cadd-rest-api
Databases for variant annotation with CADD (large).
exomiser
Databases for variant prioritization (medium)
jannovar
Transcript databases for annotation (small).
minio
Storage for files uploaded from client via REST API (big).
postgres
PostgreSQL databases (very big).
redis
Storage for the work queues (small).
traefik
Configuration and certificates for load balancer (very small).
In principle, you can put these on different storages systems (e.g., some over the network and some on directly attached disks).
The main motivation is that fast storage is expensive.
Putting the small and medium sized directories on slower, cheaper storage will have little or no effect on storage efficiency.
At the same time, access to
redis and
exomiser directories should be fast.
As for
postgres, this storage is accessed most heavily and should be on storage as fast as you can afford.
cadd-rest-api should also be on fast storage but it is accessed almost only read-only.
You can put the
minio folder on slower storage to shave off some storage costs from your VarFish installation.
To summarize:
You can put
minio on cheaper storage.
As for
cadd-rest-api, you can probably get away to put this on cheaper storage.
Put everything else, in particular
postgres, on storage as fast as you can afford.
As described in the section Performance Tuning, the authors recommend using an advanced file system such as ZFS on multiple SSDs for large, fast storage and enabling compression. You will get excellent performance and can expect storage saving of 50%.
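As a rough sketch only (pool name, devices, and dataset layout below are placeholders, not part of the VarFish documentation), a compressed ZFS dataset for the volumes directory could be created like this:

# create a mirrored pool on two SSDs and a compressed dataset for VarFish data
zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1
zfs create tank/varfish
zfs set compression=lz4 tank/varfish
# then place the volumes/ directory (or selected sub-directories) on this dataset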
Beacon Site (Experimental)
An experimental support for the GA4GH beacon protocol.
VARFISH_ENABLE_BEACON_SITE=
Whether or not to enable experimental beacon site support.
Undocumented Configuration
The following points remain to be implemented with Docker Compose and documented.
Kiosk Mode
Updating Extras Data
Bugs fixed
The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users.
Reporter is a multilingual template, so when you want to create a new post you need to define the language. For example, to create a new post in English, the command is
hugo new english/articles/new-post.md and for French it’s
hugo new french/articles/new-post.md
Configure Post
You can configure your article's post from the front matter. Front matter starts with --- and ends with ---. In this front matter you can give description (= meta description), image (= post image, e.g., from images/post), and other fields. For example:

---
title: "Top 7 Reasons to Visit Denver this Summer"
date: 2021-05-29T11:07:10+06:00
image: "images/post/post-4.jpg"
description: "this is meta description"
# define subcategories using "/"
# mouse is a subcategory of computer
categories: ["destination"]
draft: false
---
Additional dimensions in Item Tracking as foundation for vertical solutions
Important
This content is archived and is not being updated. For the latest documentation, go to New and Planned for Dynamics 365 Business Central. For the latest release plans, go to Dynamics 365 and Microsoft Power Platform release plans.
Business value
With 2021 release wave 1, we introduce Package No., support for a third dimension for item tracking that you can use as is to keep track of simple WMS packages or pallets, or which you can use as a foundation for advanced vertical solutions.
Feature details
For business users
- Enable Feature Update: Use tracking by package number in reservation and tracking system on the Feature Management page to activate this functionality and make fields and actions visible.
- Specify exactly how you want to track items on the Item Tracking Code page, in the Package section. It follows the same principles and limitations as lot tracking.
- Choose the Package Caption field on the Inventory Setup page to replace the default term package with another that better fits your processes, such as Container, License Plate, or Pallet.
For ISV developers
Here is the list of changes and possibilities for extension in the Item Tracking area with recommendations for developers:
- The item tracking code now relies on the new Item Tracking Setup table instead of the Serial/Lot Nos table. Extend the new table with additional fields if you want to pass additional parameters.
- The Package No. field was added to various item tracking tables, including Tracking Specification, Reservation Entry, and Item Ledger Entry. See the full list of item tracking-enabled tables below.
- All business logic for package tracking is implemented with a subscriber model, as you can see in the Package Management codeunit.
- Use OnAfter events, such as OnAfterCopyTrackingFrom, OnAfterSetTrackingFilterFrom, and OnAfterTrackingExist, in tracking-related tables to add your code in subscribers. (A hedged sketch of such a subscriber follows this list.)
- CaptionClass ‘6,X’ allows user-defined captions and is added to all package tracking fields. Remember to use it with fields that you add with your solution.
- Visibility of package tracking fields and actions is managed by the feature key PackageTracking. Use the procedure PackageManagement.IsEnabled() to check and manage the visibility of package fields programmatically.
- The Custom Declaration feature from Russian localization was reworked to use package tracking and is now implemented as the Custom Declaration Tracking extension. Take a look at it if you need a real-life sample for a tracking solution that is based on package tracking. Find the extension on the product media for the Russian version.
- Add more fields to the Package No. Information for advanced solutions, such as for quality management.
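The sketch below illustrates the subscriber pattern described above. The codeunit ID, the publisher (event) name, and the parameter list are assumptions chosen for illustration only; look up the actual OnAfter* publishers in the tracking-related tables for the real names and signatures.

codeunit 50100 "Package Tracking Subscribers"
{
    // Hypothetical subscriber: the event name and parameters are placeholders,
    // not taken from the base application. Replace with a real OnAfter* publisher.
    [EventSubscriber(ObjectType::Table, Database::"Tracking Specification", 'OnAfterCopyTrackingFromSpec', '', false, false)]
    local procedure HandleCopyTracking(var TrackingSpecification: Record "Tracking Specification")
    begin
        // add custom package tracking logic here
    end;
}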
Item tracking tables
Here are the tables in the base application where the item tracking fields Serial No., Lot No., and Package No. have been added:
- BinContent.Table.al
- BinContentBuffer.Table.al
- EntrySummary.Table.al
- InventoryBuffer.Table.al
- InventoryProfile.Table.al
- Item.Table.al
- ItemEntryRelation.Table.al
- ItemJournalLine.Table.al
- ItemLedgerEntry.Table.al
- ItemTracingBuffer.Table.al
- ItemTrackingCode.Table.al
- ItemTrackingSetup.Table.al
- JobJournalLine.Table.al
- JobLedgerEntry.Table.al
- JobPlanningLine.Table.al
- PostedInvtPickLine.Table.al
- PostedInvtPutawayLine.Table.al
- PostedWhseReceiptLine.Table.al
- RecordBuffer.Table.al
- RegisteredInvtMovementLine.Table.al
- RegisteredWhseActivityLine.Table.al
- ReservationEntry.Table.al
- TrackingSpecification.Table.al
- WarehouseActivityLine.Table.al
- WarehouseEntry.Table.al
- WarehouseJournalLine.Table.al
- WhseItemEntryRelation.Table.al
- WhseItemTrackingLine.Table.al
See also
Track Items with Serial, Lot, and Package Numbers (docs)
Valhalla NFT Metaverse Game
Valhalla is FLOKI's NFT Metaverse game that aims to tap into the $1 trillion Metaverse industry. FLOKI will be the main utility token of the Valhalla Metaverse.
The Metaverse industry is a $1 trillion annual revenue opportunity
according to Grayscale
, and with projected annual revenue of $400 billion for the Metaverse gaming industry by 2025, Metaverse games will benefit mostly from this growth.
Valhalla is FLOKI's NFT Metaverse game that will be powered by the FLOKI token. Valhalla will feature A-level game mechanics on the blockchain. This includes on-chain gaming interactions and upgradeable NFTs. Valhalla will be quite unique due to the robust PlayToEarn mechanics it will feature:
A gardening system.
In-game characters known as Vera.
A robust battle system.
An in-game ship system.
An items system.
These PlayToEarn mechanics will allow users to earn and collect FLOKI tokens by playing Valhalla, thereby making Valhalla a viable way for millions of people to generate an income. Players will be able to communicate, monitor each other’s progress, and encounter one another in the Valhalla Metaverse.
While the first major release of Valhalla on mainnet with P2E mechanics is slated to be live in Q4 2022, the alpha version of the Valhalla "Battle Arena" is live and playable on the Optimistic Kovan testnet.
Here's how to play:
1. Add the Metamask browser extension to your browser.
2. Visit "Settings," "Networks" and then click "Add Network" to add the required network:
- Network name: Optimism Kovan
- RPC Url:
- Chain ID: 69
- Block Explorer:
3. Visit the test-token faucet and claim a test token to play the Alpha. Make sure to check the box below the wallet address field. You will receive one test token displayed as "ETH" in your wallet.
4. Make sure to switch to the Optimism Kovan network in Metamask.
5. Visit the Valhalla site and click "Play."
It’s that simple. Now loaded with test tokens, you can enter Valhalla and get an early taste of the most exciting play-to-earn metaverse project in crypto. Skol, viking!
While the first major release of Valhalla on mainnet with P2E mechanics is slated to be live in Q4 2022, the Battle Arena alpha, which is currently live on testnet, gives an early idea of what the full game will offer.
Valhalla Will Be Powered By FLOKI Tokens
Valhalla will also incentivize people to spend FLOKI to advance through the game more easily. This enhances utility and drives actual demand for the FLOKI token.
The Valhalla Team
Valhalla is currently being developed by a team of 11 people with combined team experience of over 50 years and plans to aggressively expand to a team of 20 to fast-track development. This team is led by MrBrownWhale, a renowned crypto veteran, NFT expert, and Ether Cards council member, and Jackie Xu, a blockchain veteran who has been in the industry for a decade and has worked with some of the biggest names in crypto. Jackie Xu has worked on blockchain since 2012 and has actively worked on smart contracts since 2017. Before that, Jackie worked as a software engineer and technical lead on projects ranging from large social networks to anti-fraud transaction processing systems for the traditional finance industry. The team developing Valhalla is made up of two sound engineers with extensive experience not only in gaming, but also in creating musical scores for, among others, Netflix, an illustrator, a character modeler, an animator and an overseeing art director.
In addition, there are two Unity game developers working on Valhalla that have been working with that engine since ~2008. They’ve seen the good and the bad, and will be working on a wonderful game client for Valhalla.
Then there’s a game designer and a “lore master” with more than a decade’s worth of D&D roleplaying experience that is being used to enhance the Valhalla narrative and world history, as well as writing out the lives of the various in-game NPCs.
Finally, there is a QA guy whose focus is to ensure the game development is flawless.
NetDRMS Useful Debugging Checks
This is a collection of short checks that can be done on the SUMS or DRMS databases to debug problems. It should be fairly clear from the context if the query should be run on the DRMD database or on the SUMS one. If in doubt, try one, and fail over to the other.
Check sunum_queue size
This checks the size of the sunum_queue - the sunums waiting to be processed. This should ideally be 0 unless a lot of sunums have come in at once. For a busy system, it could be that this value hovers around a few hundred.
select count(*) from sunum_queue;
Check sunum_queue entries older than 1 day
This checks the number of entries in sunum_queue that are older than a day. This should be 0.
select count(*) from sunum_queue where timestamp < now() - interval '1 days';
See what partitions SUMS has available
This shows what partitions SUMS has available. The last entry in the table - pds_set_num - should be 0. If it is not, then perhaps the disk is unmounted, or SUMS sees it as having filled up (note that SUMS sees a disk as full slightly before the disk is at 100% use). You will have to work with sum_rm to clear up some space and then set pds_set_num to 0 again.
select * from sum_partn_avail;
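Once space has been freed, the flag can be reset with an UPDATE along these lines. The partn_name column name and the partition path are assumptions; check your sum_partn_avail schema before running it:

-- reset the partition set number so SUMS uses the partition again
UPDATE sum_partn_avail SET pds_set_num = 0 WHERE partn_name = '/SUM01';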
Temporal coverage
When data are written to disk, they have an "effective date" - a date after which they can be deleted by sum_rm. This returns the latest effective date that is still available.
select min(effective_date) from sum_partn_alloc;
slony updates
This shows the time of the last slony update and the time it was last applied. It should be very recent, at least on the current day.
select * from _jsoc.sl_archive_tracking;
Show data on disk
This shows data that are on disk. Note that you can be subscribed to a dataset and yet not have data for it on disk (no trigger to get the data).
select owning_series, sum(bytes), count(*) from sum_main group by owning_series order by sum(bytes);
Adding a plan to a group calendar makes that plan available to everyone in the group, but it won't automatically apply it to their own calendars (unless it is a Default Plan). That's where Group Plan Subscriptions come in. To apply a group plan to a group member, you subscribe them to that plan. To do so:
Navigate to the Group Profile
Click "Plans"
Locate the plan and click "Subscribe Group Member"
Enter the group member's name
Click "Subscribe"
The group plan and its activities will now be present on the group member's calendar, and any changes you make to the group plan will be sync'ed there as well.
Did you know?
You can mark plans as default so that every new, non-coach/admin group member automatically gets subscribed to that plan.
In one click you can subscribe every non-coach/admin group member to a plan.
You can Archive plans to remove them from the calendar and prevent their subscriptions from being modified.
You can use the "No Plan Activities" plan to control whether or not Activities with no Plan Association are assigned to group members. | http://docs.sixcycle.com/en/articles/2848875-group-plan-subscriptions | 2022-06-25T05:07:11 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.sixcycle.com |
Class NewtonFailure
Defined in File exception.h
Inheritance Relationships
Base Type
public amici::AmiException
Class Documentation
class amici::NewtonFailure : public amici::AmiException¶
Newton failure exception.
This exception should be thrown when the steady-state computation fails to converge. For this exception we can assume that we can recover from it and return a solution struct to the user.
Public Functions
NewtonFailure(int code, const char *function)¶
Constructor, simply calls AmiException constructor.
- Parameters
function: name of the function in which the error occurred
code: error code | https://amici.readthedocs.io/en/v0.11.10/_exhale_cpp_api/classamici_1_1NewtonFailure.html | 2022-06-25T05:41:38 | CC-MAIN-2022-27 | 1656103034170.1 | [] | amici.readthedocs.io |
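A minimal sketch of how calling code might handle this exception (the include path and the surrounding solver call are illustrative, not prescribed by the AMICI API):

#include <iostream>
#include "amici/exception.h"

void runSteadyState() {
    try {
        // ... invoke the steady-state computation here ...
    } catch (amici::NewtonFailure const &e) {
        // recoverable: report the failure and fall back to an alternative strategy
        std::cerr << "Newton solver failed: " << e.what() << std::endl;
    }
}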
How to create Threaded Tweets on Twitter using Content Studio
Here's a quick video tutorial to create threaded Tweets.
A Twitter thread is a set of Tweets by the same user, numbered and linked one after the other. With the help of a Twitter thread, you can expand on a topic that can't be written in 280 characters or less. Now you can add up to 25 threaded Tweets giving you 7000 extra characters to write your posts.
Here's how to make a Twitter thread using Content Studio:
- From the composer section, select your Twitter profile and type in your main Tweet to compose your posts.
2. Enable the ''Add threaded Tweets'' toggle as shown in the image below.
3. Threaded Tweet Composer Explained
Here is the overview of the features that you can use while creating a Threaded Tweet:
A. You can toggle the Twitter thread on or off, as shown in the above image
B. Add images to make your content more appealing from the composer section.
C. You can also add videos to your Threaded Tweets.
D. If you wish to choose images or videos from the Media Library or from other sources you can use the Media Library option.
E. Increase engagement by adding hashtags in your Threaded Tweets.
F. You can also boost your threaded Tweets by adding emojis as well from the composer section.
G. Create another threaded Tweet by clicking on the + icon as shown in the image.
H. The profile icon on the top right corner will show you the Twitter profile to post the thread on.
I. The hashtag count in the composer section will show you the total number of hashtags in the Tweet.
J. From the characters left option you can see the number of characters in the Tweet.
4. Here is a complete overview of what the Threaded Tweet composer will look like. (see the image below)
5. The post preview will look something like as shown in the image below:
Here's a gif to make it easy for you:
| https://docs.contentstudio.io/article/888-how-to-create-threaded-tweets-on-twitter-using-content-studio | 2022-06-25T05:10:22 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/6182746f9ccf62287e5f27f2/file-8zfoaDnSu1.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/618275362b380503dfe00fa1/file-mxROXo8Mxm.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61827773efc78d0553e5643e/file-m0xadiRZxQ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61850c8e64e42a671b633d9e/file-Dyc4mArWm2.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61850de812c07c18afde4d68/file-e12O0dr1Pp.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/61af01237a3b8c03913d488f/file-NMeazqCaN0.gif',
None], dtype=object) ] | docs.contentstudio.io |
Configure...
APPLIES TO: ✔️ SQL Server Reporting Services (2016)
The Web.config file is located in \Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\Reportserver\Web.config.
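For reference, the pasted element has this general shape (a sketch only — generate your own key values; for RSReportServer.config in 2017 and later the element and attribute names are capitalized, e.g. <MachineKey ValidationKey=... />):

<machineKey validationKey="[your generated validation key]"
            decryptionKey="[your generated decryption key]"
            validation="AES"
            decryption="AES" />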
Save the file.
Repeat the previous step for each report server in the scale-out deployment.
Verify that all Web.config files for all report servers in the scale-out deployment contain identical <machineKey> elements in the <system.web> section.
APPLIES TO: ✔️ SQL Server Reporting Services (2017 and later) ✔️ Power BI Report Server
Open the RSReportServer.config file for Reportserver, and in the <Configuration> section paste the <MachineKey> element that you generated. By default, the RSReportServer.config file is located in \Program Files\Microsoft SQL Server Reporting Services\SSRS\ReportServer\RSReportServer.config for Reporting Services and \Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer\RSReportServer.config for Power BI Report Server.
Save the file.
Repeat the previous step for each report server in the scale-out deployment.
Verify that all RSReportServer.config files for all report servers in the scale-out deployment contain identical <MachineKey> elements in the <Configuration> section.
How to Configure Hostname and UrlRoot
See Also
Configure a URL (Report Server Configuration Manager)
Configure a Native Mode Report Server Scale-Out Deployment (Report Server Configuration Manager)
Report Server Configuration Manager (Native Mode)
Manage a Reporting Services Native Mode Report Server | https://docs.microsoft.com/en-us/sql/reporting-services/report-server/configure-a-report-server-on-a-network-load-balancing-cluster?view=sql-server-ver15 | 2022-06-25T06:00:15 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.microsoft.com |
sbt plugins or examining your build.sbt, and compares the versions of every direct and transitive dependency in your project against Snyk's Maven vulnerability database.
the sbt-dependency-graph plugin, which has been included in sbt as a built-in plugin since sbt 1.4.
addSbtPlugin(), instead (see below).
sbt-dependency-graph as a global plugin so you can use it in any sbt project.
~/.sbt/0.13/plugins/plugins.sbt for sbt 0.13 or ~/.sbt/1.0/plugins/plugins.sbt for sbt 1.0+.
project/plugins.sbt of your project instead.
Whichever sbt version you are using, you must use the following command in the relevant plugins.sbt file:
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.10.0-RC1")
the addDependencyTreePlugin command, which the sbt-dependency-graph plugin docs recommend for sbt 1.4+. This is incompatible with the Snyk CLI; use the addSbtPlugin() command as given above.
sbt as a package manager: Snyk analyzes your build.sbt file, and so you must have this file in your repository before importing.
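With the plugin line above in place, you can also scan an sbt project locally from its root directory using the Snyk CLI (a sketch; assumes the CLI is installed and authenticated):

snyk test      # check the resolved dependency graph against Snyk's vulnerability database
snyk monitor   # record a snapshot of the dependency graph for ongoing monitoring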
Architecture and Security in Salesforce
Overview
DayBack is a Canvas App in Salesforce which means the application code is not inside your Salesforce pages but is instead hosted on DayBack's application servers at dayback.com This means DayBack can be updated very frequently and without the need for customers to install new packages: bug fixes and new features are pushed to the server and customers can take advantage of them right away.
This means that some of the application settings are also stored on DayBack's servers and this document describes the division of labor between DayBack and Salesforce and which information is stored where.
Details
Security review
DayBack passed the thorough security review required of all AppExchange apps. In the case of Canvas Apps, this review also includes testing DayBack's servers, penetration testing, and probing DayBack's configuration for injection vulnerabilities, and examining all traffic between dayback.com and Salesforce.
Where is my event data stored?
In everything that follows, we'll use the word "event" to mean any Salesforce record showing on DayBack calendar: like an appointment. This could be a record from the native Event object in Salesforce, a Task or a Campaign, or a record from any custom object you've elected to show on the calendar.
Events are only stored in Salesforce and don't pass through DayBack's servers on the way to being displayed on the calendar. DayBack uses the REST API via the Canvas SDK to query Salesforce and this is all done inside your Salesforce pages. DayBack doesn't have an event database of its own or a shadow table on DayBack's servers.
Does DayBack respect our profiles and role hierarchy?
Yes. The REST API runs under the authentication of your logged-in user, using the Signed Request authentication flow provided by the Canvas SDK. So a DayBack user has no more and no less access to their Salesforce data than when they're on your other Salesforce pages. In the case of resources, it is the name of your resource stored in DayBack, not the resource's ID, in those cases where your resource represents a Salesforce record.
If you've created custom actions as part of customizing DayBack, the code for those actions is stored on DayBack's servers.
Finally, DayBack records the email address of each Salesforce user who is authorized to use DayBack and actually uses the app. You may also have designated some users as DayBack admins, and those email addresses are recorded as well. Note that the only identifying aspects of the user are their org ID, email address, and their Salesforce record ID. No passwords or other identification about the user is stored (and this email address is not the Salesforce user's username/account name). Here is an example of the actual data recorded for the users of DayBack's test drive org in Salesforce:
{ " } },
Here's a diagram of how data moves between Salesforce, the user's browser, and DayBack's servers:
This data is backed up daily and retained for 30 days.
What about sharing?
The sharing feature in DayBack is explicitly designed to publish calendar data to folks outside your Salesforce org. You can turn this capability off or restrict it to certain users. When you manually create a public bookmark, that is the only time event data can leave Salesforce. Details on how this works and what data is actually published can be found here: sharing. Sharing is like exporting your event data. A share recipient has no access to your Salesforce org. Bookmarks that are "shared" with "just me" or with "my group" are not available outside Salesforce in this manner: only bookmarks set to "public" see their data leave Salesforce.
DigitalOcean is certified in the international standard ISO/IEC 27001:2013. Details. | https://docs.dayback.com/article/138-architecture-and-security | 2022-06-25T04:29:47 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.dayback.com |
Leap Seconds
A leap second is inserted every few years so that the rotation of the earth according to the time is aligned with the stars. In NetDRMS, leap seconds are hard coded in an array. This means that NetDRMS needs to be recompiled to cope with the introduction of a leap second.
First, an update has to be made to the file base/libs/timeio/timeio.c, which has the hard coded array ut_leap_time[] encoded as :
static double ut_leap_time[] = {
/*
 * Note: the times and amounts of adjustment prior to 1972.01.01 may be
 * erroneous (they do not agree with those in the USNO list at
 * *), but they should not be
 * changed without due care, as the calculation of utc_adjustment is
 * based on a count of assumed whole second changes.
 */
  -536543999.0, /* 1960.01.01 */
  -457747198.0, /* 1962.07.01 */
  -394588797.0, /* 1964.07.01 */
  -363052796.0, /* 1965.07.01 */
  -331516795.0, /* 1966.07.01 */
  -284083194.0, /* 1968.01.01 */
  -252460793.0, /* 1969.01.01 */
  -220924792.0, /* 1970.01.01 */
  -189388791.0, /* 1971.01.01 */
  -157852790.0, /* 1972.01.01 */
  -142127989.0, /* 1972.07.01 */
  -126230388.0, /* 1973.01.01 */
   -94694387.0, /* 1974.01.01 */
   -63158386.0, /* 1975.01.01 */
   -31622385.0, /* 1976.01.01 */
          16.0, /* 1977.01.01 */
    31536017.0, /* 1978.01.01 */
    63072018.0, /* 1979.01.01 */
    94608019.0, /* 1980.01.01 */
   141868820.0, /* 1981.07.01 */
   173404821.0, /* 1982.07.01 */
   204940822.0, /* 1983.07.01 */
   268099223.0, /* 1985.07.01 */
   347068824.0, /* 1988.01.01 */
   410227225.0, /* 1990.01.01 */
   441763226.0, /* 1991.01.01 */
   489024027.0, /* 1992.07.01 */
   520560028.0, /* 1993.07.01 */
   552096029.0, /* 1994.07.01 */
   599529630.0, /* 1996.01.01 */
   646790431.0, /* 1997.07.01 */
   694224032.0, /* 1999.01.01 */
   915148833.0, /* 2006.01.01 */
  1009843234.0, /* 2009.01.01 */
  1120176035.0  /* 2012.07.01 */
/*
 * IMPORTANT NOTE ---
 * When adding a new leap second add time be sure to make changes in BOTH
 * this file and its near clone in /CM/src/timeio via your STAGING directory.
 *
 * The value to list is the time of the first second after the leap second.
 * So, before the addition is made, get the seconds of the time in the
 * comment via e.g. time_index or time_convert then add 1 to it before
 * adding it to the table.
 *
 */
/*
 * *** NOTE Please notify [email protected] at EOF
 * whenever any of these times are updated! ***
 */
};
Generally one more line will have to be added, something like :
1214784036.0 /* 2015.07.01 */
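Concretely, the tail of the array would then look like this (note the comma that must be added after what was previously the last entry):

  1009843234.0, /* 2009.01.01 */
  1120176035.0, /* 2012.07.01 */
  1214784036.0  /* 2015.07.01 */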
Recompiling should involve 'make clean' followed by 'make' and followed again by 'make sums'. Each of these make steps should be repeated several times, until they give the same result on successive runs, before moving on to the next stage. Note that if NetDRMS has been customized, it is possible that the customizations may be lost in the 'make clean' step. | https://docs.virtualsolar.org/wiki/drmsLeapSeconds | 2022-06-25T04:47:11 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.virtualsolar.org |
Object “contentView”
Object > NativeObject > Widget > Composite > ContentView
A composite that does not require (or support) a parent to be visible. It also can not be disposed. Every instance of ContentView is controlled by an associated non-widget object, either an instance of Popover or the global tabris object.
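A minimal usage sketch in TypeScript (contentView and append follow the standard Tabris.js API, but check the version you target):

import {contentView, TextView} from 'tabris';

// contentView is always visible, so widgets can be attached to it directly
contentView.append(
  new TextView({centerX: 0, centerY: 0, text: 'Hello World'})
);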
Before you can develop functions on OpenShift Serverless, you must complete the set up steps.
To enable the use of OpenShift Serverless Functions on your cluster, you must complete the following steps:
The OpenShift Serverless Operator and Knative Serving are installed on your cluster.
The oc CLI is installed on your cluster.
The Knative (kn) CLI is installed on your cluster. Installing the kn CLI enables the use of kn func commands, which you can use to create and manage functions.
You have installed Docker Container Engine or podman version 3.3 or higher, and have access to an available image registry.
If you are using Quay.io as the image registry, you must ensure that either the repository is not private, or that you have followed the OpenShift Container Platform documentation on Allowing pods to reference images from other secured registries.
If you are using the OpenShift Container Registry, a cluster administrator must expose the registry.
If you are using podman, you must run the following commands before getting started with OpenShift Serverless Functions:
Start the podman service that serves the Docker API on a UNIX socket at ${XDG_RUNTIME_DIR}/podman/podman.sock:
$ systemctl start --user podman.socket
Establish the environment variable that is used to build a function:
$ export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"
Run the build command with -v to see verbose output. You should see a connection to your local UNIX socket:
$ kn func build -v
For more information about Docker Container Engine or podman, see Container build tool options.
See Getting started with functions. | https://docs.openshift.com/container-platform/4.10/serverless/functions/serverless-functions-setup.html | 2022-06-25T05:31:41 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.openshift.com |
There are many types of first-date ideas for people who are new to each other. The food you eat on your first date should be a main consideration, but you can also try something entirely out of the ordinary. You could take your date to a music concert, which will give you some quiet time to talk. If you both like the same kind of music, this is a great idea! You can even try making your own sushi together!
One other fun first of all date thought is to visit a shut-in. It is going to definitely lessen your spirit, and you can also get to know the date better. Try to avoid unkind dates, as you may become the sufferer of the same. Also, try to avoid saying activities, as they may just bring about awkwardness. If you need to avoid cumbersome moments, consider a first date activity that is fun and adventurous. Some first particular date ideas include:
A picnic is an excellent way to spend time with your time. It’s a great first date thought, since you aren’t expected to make an impression your day, but it could still thrilling can be an good approach to time mutually. You can also get a local recreation area or organic garden. During your visit, you can talk about what works best about the region and make it a great place to invest some time. A picnic is often a fun, low-class way to spend period with your day.
Another entertaining first particular date idea is likely to a art gallery together. Museums usually have inexpensive admission and is great for communicating. If your date is known as a history buff, visit a past site or perhaps landmark. You may also do a entertaining activity in concert, such as ice skating! The activity is definitely both thrilling conversational. With respect to something a little more unusual, try a movie night. Generally, a mild comedy is most beneficial, but you can at all times pick a thing more amazing if you’d like to win over your particular date.
Regardless of what sort of date you choose, remember to handle your partner very well. Choose a task or site that is easy on the jean pocket and inexpensive for both of you. If the girl offers to pay for the experience, decline the offer. Ladies will take pleasure in you for your thoughtfulness. The first time frame is not really supposed to be best. You can always include a Plan N just in case something goes wrong. In the event all else falters, the time frame will still be enjoyable and remarkable.
Remember that first dates happen to be stressful because you do not know the person well yet. Package activities which is interesting for you, but will not take up the entire particular date. Try to look for a fun activity that won’t take up too much of your time and is also sure to keep a lasting impression! Ultimately, this will lead to a second date. You can’t afford to waste time on a uninteresting date – pick a initial date idea that you and your lover will both have fun with! | https://docs.jagoanhosting.com/initial-date-delete-word-people-who-are-new-to-each-other/ | 2022-06-25T04:51:45 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.jagoanhosting.com |
If you’re interested in dating a Russian woman, you might be wondering the right way to meet her. Thankfully, there are several approaches to meet Russian women totally free. You can also utilize the Russian Brides to be application. This kind of app features an attractive design and works on most contemporary mobile devices. The site also has a handy characteristic set. If you are thinking of getting a premium account, you can sign up for one or two extra features to increase your awareness and get better access to details.
Some individuals like to meet Russian girls by clubs. Organizations such as Forte Music Team and Exercise Life will be popular locations. Russian women happen to be friendly and straightforward to strategy and are generally eager to dance with guys. You can also meet an european girl at an Internet tavern. However , you ought to be careful about uncovering personal details on a seeing website. Irrespective with the method you decide on, you should be aware from the risks included in sharing personal details with strangers.
When you enroll for that dating internet site, remember to post a photo. There are a few users whom hide the look of them by simply posting photographs from unknown angles. By simply posting a photo of your self, you improve your chances of obtaining a meeting with a Russian woman. Despite the fact that don’t want to risk going on a less than comfortable date, a photograph helps you connect with the right woman and steer clear of awkward circumstances. So , be sure to post a picture of yourself plus your personality with your dating web page.
Russian ladies are likewise good at making a residence a heaven. A Russian young lady can prepare great Russian food and make your kids happy. Her nanny expertise are first rate. She will even care for your children, and you can rest assured that you’re going to never be used up of things to do. Beneath the thick be tired of a Russian girl – she is going to do anything she can to keep you happy. It’s a win win situation for the purpose of both of you!
You can also want to be allowed to communicate with her easily, and she will be able to answer any inquiries you might have. This kind of provides you with the chance to construct a relationship with her and revel in the country jane is from. You will to have her gorgeous smile as well as the warmth and care that she’ll demonstrate you. You’ll have fun meeting an eastern european girl, and she’ll become more than thrilled to chat with you for hours.
Russian females are very seriously interested in finding a compatible Traditional western man. Their very own population outnumbers men, and many of them prefer to get married to someone out of a Developed country. Despite the language screen, you’ll find that Russian females are ready to accept marriage, it is therefore important to exhibit your involvement in her ahead of approaching her. It might be intimidating at first, but they’ll be very open to your bracelets. The culture and language of their country make it unachievable to way them while not genuine curiosity. | https://docs.jagoanhosting.com/tips-on-how-to-meet-russian-women/ | 2022-06-25T04:34:20 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['https://thumbs.dreamstime.com/t/young-elegant-woman-drinking-coffee-cafe-paris-france-young-elegant-woman-drinking-coffee-traditional-cafe-paris-107748531.jpg',
Orders containing items from multiple shops will be sent for authorization in multiple transactions, each transaction containing items for a single shop.
Credit card transactions are authorized and their status is communicated on the following route: Acquirer -> Nuvei -> Mirakl Marketplace.
Transactions should be captured within 7 days from authorization. For more information regarding capturing payments, please go to our section: Capture a Payment.
Tokenization, 3D and fraud prevention solutions are available upon request. For more information visit: Credit Card Payments.
Alternative payment methods are authorized and captured in one step and their status is communicated on the following route: Payment method provider -> Nuvei -> Mirakl Marketplace.
You need to integrate our REST API described here:. | https://docs.smart2pay.com/transactional-flow-for-mirakl-marketplace/ | 2022-06-25T04:25:11 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.smart2pay.com |
(PHP 4, PHP 5, PHP 7)
setcookie — Send a cookie
setcookie ( string $name [, string $value = "" [, int $expires = 0 [, string $path = "" [, string $domain = "" [, bool $secure = FALSE [, bool $httponly = FALSE ]]]]]] ) : bool

setcookie ( string $name [, string $value = "" [, array $options = [] ]] ) : bool

Once the cookies have been set, they can be accessed on the next page load with the $_COOKIE array. Cookie values may also exist in $_REQUEST.
»']
expires
expiresparameter
options
An associative array which may have any of the keys
expires,
path,
domain,
secure,
httponly and
samesite.
If any other key is present an error of level
E_WARNING
is generated. The values have the same meaning as described for the
parameters with the same name. The value of the
samesite
element should be either
None,
Lax
or
Strict.
If any of the allowed options are not given, their default values are the
same as the default values of the explicit parameters. If the
samesite element is omitted, no SameSite cookie
attribute is set.
If output exists prior to calling this function,
setcookie() will fail and return
FALSE. If
setcookie() successfully runs, it will return
TRUE.
This does not indicate whether the user accepted the cookie. />\n";
}
}
?>
The above example will output:
three : cookiethree
two : cookietwo
one : cookieone

…or setting the output_buffering configuration directive on in your php.ini or server configuration files.
Note:
If the PHP directive register_globals is set to
onthen cookie values will also be made into variables. In our examples below, $TestCookie will exist. It's recommended to use $_COOKIE.
Common Pitfalls:
expires. | https://php-legacy-docs.zend.com/manual/php5/en/function.setcookie | 2022-06-25T04:28:46 | CC-MAIN-2022-27 | 1656103034170.1 | [] | php-legacy-docs.zend.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Contains the output of CopySnapshot.
Namespace: Amazon.EC2.Model
Assembly: AWSSDK.EC2.dll
Version: 3.x.y.z
The CopySnapshotResponse type exposes the following members
This example copies a snapshot with the snapshot ID of ``snap-066877671789bd71b`` from the ``us-west-2`` region to the ``us-east-1`` region and adds a short description to identify the snapshot.
var response = client.CopySnapshot(new CopySnapshotRequest
{
    Description = "This is my copied snapshot.",
    DestinationRegion = "us-east-1",
    SourceRegion = "us-west-2",
    SourceSnapshotId = "snap-066877671789bd71b"
});

string snapshotId = response.Snapshot
Update a type ahead suggestion The knowledge base and global text searches provide suggestions as you type. These type-ahead suggestions are compiled on a nightly basis by a scheduled job. About this task Use the following procedure if you need to refresh this list sooner. Procedure Navigate to System Scheduler > Scheduled Jobs. Open TS Search Stats. Run the scheduled job. For more about how suggestions are generated and maintained, see the blog post Global Text Search Suggestions by a ServiceNow Technical Support Engineer in the ServiceNow Community. Related TasksConfigure a "Did You Mean?" suggestion | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/search-administration/task/t_UpdateATypeAheadSuggestion.html | 2018-03-17T14:44:33 | CC-MAIN-2018-13 | 1521257645177.12 | [] | docs.servicenow.com |
Attrib
Applies To: Windows Server 2008, Windows Vista
Displays, sets, or removes attributes assigned to files or directories. If used without parameters, attrib displays attributes of all files in the current directory.
For examples of how to use this command, see Examples.
Syntax
attrib [{+|-}r] [{+|-}a] [{+|-}s] [{+|-}h] [{+|-}i] [<Drive>:][<Path>][<FileName>] [/s [/d] [/l]]:
attrib news86
To assign the Read-only attribute to the file named Report.txt, type:
attrib +r report.txt
To remove the Read-only attribute from files in the Public directory and its subdirectories on a disk in drive B, type:
attrib -r b:\public\*.* /s
To set the Archive attribute for all files on drive A, and then clear the Archive attribute for files with the .bak extension, type:
attrib +a a:*.* & attrib -a a:*.bak | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754186(v=ws.10) | 2018-05-20T20:37:24 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.microsoft.com |
We currently offer 3 ways to do this.
1. Simple Social Icons
This is a highly customizable widget. From your dashboard, go to Appearance->Widgets and simply drag it into your sidebar or other widget area and enter the URLs to your favorite social media sites. You can even change the size and colors of the icons.
These icons allow your readers an easy way to find and follow you on social media.
Note: the icons will not show up until you've added the links.
2. AddToAny Social Sharing Buttons
The AddToAny sharing tool is a little different. It displays "share" buttons inside individual posts so that your readers can quickly like or share a post on their feed.
From your dashboard, go to Add-Ons->AddToAny from the left hand menu. Here you will find the settings for this widget.
I recommend using the Large Standalone buttons for your favorite social media sites. They look better than the default "share" to everywhere tool.
3. AddToAny Follow Widget
This is a simple way to add follow buttons to any widget area on your blog. It's not as robust as the Simple Social Icons widget, but it's easier to set up and more colorful.
Just drag the widget to the sidebar and enter ONLY the last part of the links to your social media accounts. You DO NOT need the full URL: | http://docs.theblogpress.com/plugins-and-widgets/add-social-media-buttons-to-your-blog | 2018-05-20T19:18:43 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.theblogpress.com |
Cache on User Context¶
Some applications differentiate the content between types of users. For instance, on one and the same URL a guest sees a ‘Log in’ message; an editor sees an ‘Edit’ button and the administrator a link to the admin backend.
The FOSHttpCache library includes a solution to cache responses per user context (whether the user is authenticated, groups the user is in, or other information), rather than individually.
If every user has their own hash, you probably don’t want to cache at all. Or if you found out its worth it, vary on the credentials and don’t use the context hash mechanism.
Caution
Whenever you share caches, make sure to not output any individual content like the user name. If you have individual parts of a page, you can load those parts over AJAX requests or look into ESI receives the request. It sends a request (the hash request) with a special accept header (
application/vnd.fos.user-context-hash) to a specific URL, e.g.,
/_fos_user_context_hash.
- The application receives the hash request. The application knows the client’s user context (roles, permissions, etc.) and generates a hash based on that information. The application then returns a response containing that hash in a custom header (
X-User-Context-Hash) and with
Content-Type
application/vnd.fos.user-context-hash.
- The proxy server receives the hash response, copies the hash header to the client’s original request for
/foo.phpand restarts that request.
- If the response to this request should differ per user context, the application specifies so by setting a
Vary: X-User-Context-Hashheader. The appropriate user role dependent representation of
/foo.phpwill then be returned to the client..
Proxy Client Configuration¶
Currently, user context caching is only supported by Varnish and by the Symfony HttpCache. See the Varnish Configuration or Symfony HttpCache Configuration.
User Context Hash from Your Application¶
It is your application’s responsibility to determine the hash for a user. Only your application can know what is relevant for the hash. You can use the path or the accept header to detect that a hash was requested.
Warning
Treat the hash lookup path like the login path so that anonymous users also can get a hash. That means that your cache can access the hash lookup even with no user provided credential and that the hash lookup never redirects to a login page.
Calculating the User Context Hash¶
The user context hash calculation (step 3 above) is managed by { protected $userService; public function __construct(YourUserService $userService) { $this->userService = $userService; } public function updateUserContext(UserContext $userContext) { $userContext->addParameter('authenticated', $this->userService->isAuthenticated()); } }
Returning the User Context Hash¶
It is up to you to return the user context hash in response to the hash request
(
/_fos_user_context_hash in step 3 above):
// <web-root>/_fos_user_context_hash/index.php $hash = $hashGenerator->generateHash(); if ('application/vnd.fos.user-context-hash' == strtolower($_SERVER['HTTP_ACCEPT'])) { header(sprintf('X-User-Context-Hash: %s', $hash)); header('Content-Type: application/vnd.fos.user-context-hash'); exit; } // 406 Not acceptable in case of an incorrect accept header header('HTTP/1.1 406');
If you use Symfony, the FOSHttpCacheBundle will set the correct response headers for you.
Caching the Hash Response¶
To optimize user context hashing performance, you should cache the hash response. By varying on the Cookie and Authorization header, the application will return the correct hash for each user. This way, subsequent hash requests (step 3 above) will be served from cache instead of requiring a roundtrip to the application.
// The application listens for hash request (by checking the accept header) // and creates an X-User-Context-Hash based on parameters in the request. // In this case it's based on Cookie. if ('application/vnd.fos.user-context-hash' === strtolower($_SERVER['HTTP_ACCEPT'])) { header(sprintf('X-User-Context-Hash: %s', $_COOKIE[0])); header('Content-Type: application/vnd.fos.user-context-hash'); header('Cache-Control: max-age=3600'); header('Vary: cookie, authorization'); exit; }
Here we say that the hash is valid for one hour. Keep in mind, however, that you need to invalidate the hash response when the parameters that determine the context change for a user, for instance, when the user logs in or out, or is granted extra permissions by an administrator.
Note
If you base the user hash on the Cookie header, you should clean up that header to make the hash request properly cacheable.
The Original Request¶
After following the steps above, the following code renders a homepage differently depending on whether the user is logged in or not, using the credentials of the particular user:
// /index.php file header('Cache-Control: max-age=3600'); header('Vary: X-User-Context-Hash'); $authenticationService = new AuthenticationService(); if ($authenticationService->isAuthenticated()) { echo "You are authenticated"; } else { echo "You are anonymous"; } | http://foshttpcache.readthedocs.io/en/latest/user-context.html | 2018-05-20T19:07:16 | CC-MAIN-2018-22 | 1526794863684.0 | [] | foshttpcache.readthedocs.io |
Change Password
Important:
Use a secure password. A secure password is not a dictionary word, and it contains uppercase and lowercase letters, numbers, and symbols.
To change a password, perform the following steps:
- Click Password for the appropriate email account.
-
Each row in the Mail Account section of the interface displays two status icons.
The first status icon indicates whether the user can log in to, send mail from, and read their mail account. The second status icon indicates whether the mail account can receive incoming email.
To suspend logins, incoming email, or both for an email account, perform the following steps:
- Click the appropriate More link that corresponds to the email account to suspend.
- Click the appropriate suspension link.
- Click Suspend to suspend both incoming mail and logins.
- Click Suspend Logins to suspend logins.
- Click Suspend Incoming to suspend incoming email.
To unsuspend logins, incoming email, or both for the email account, click More and then click the apropriate Suspend link.
Note:
When you suspend an email account, the system also suspends any aliases or forwarders that redirect email to the account.
To delete an email address, perform the following steps:
-
This feature automatically configures your email client to access your cPanel email addresses. An email client allows you to access your email account from an application on your computer (for example, Outlook® Express and Apple® Mail).
To access this feature, click More for the appropriate email account, and then select Set Up Email Client.
Notes:
-.
The actual address of the account is
[email protected], where
account represents your account username. You cannot rename, delete, or place a quota on the default account. We recommend that you create a separate email account for daily use.
This address is also the default From and Reply-to address of outgoing email that your account's PHP scripts send. | https://docs.cpanel.net/display/60Docs/Email+Accounts | 2018-05-20T19:40:25 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.cpanel.net |
Source.
Warning
The
xml.dom.minidom module is not secure against maliciously constructed data. If you need to parse untrusted or unauthenticated data see XML vulnerabilities.=None, bufsize=None)=None)
Return a
Document that represents the string. This method creates an
io.
See also.
You can avoid calling this method explicitly by using the
with statement. The following code will automatically unlink dom when the
with block is exited:
with xml.dom.minidom.parse(datasource) as dom: ... # Work with dom.
Node node, an additional keyword argument encoding can be used to specify the encoding field of the XML header.
Node.toxml(encoding=None)="")
Return a pretty-printed version of the document. indent specifies the indentation string and defaults to a tabulator; newl specifies the string emitted at the end of each line and defaults to
\n.
The encoding argument behaves like the corresponding argument of
toxml().:
Documentobject. Derived interfaces support all operations (and attributes) from the base interfaces, plus any new operations.
inparameters, the arguments are passed in normal order (from left to right). There are no optional arguments.
voidoperations return
None.
foocan also be accessed through accessor methods
_get_foo()and
_set_foo().
readonlyattributes must not be changed; this is not enforced at runtime.
short int,
unsigned int,
unsigned long long, and
booleanall map to Python integer objects.
DOMStringmaps to Python strings.
xml.dom.minidomsupports either bytes or strings, but will normally produce
DOMImplementation
CharacterData
CDATASection
Notation
Entity
EntityReference
DocumentFragment
Most of these reflect information in the XML document that is not of general utility to most DOM users.
© 2001–2018 Python Software Foundation
Licensed under the PSF License. | http://docs.w3cub.com/python~3.6/library/xml.dom.minidom/ | 2018-05-20T19:17:34 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.w3cub.com |
Creating an event property knowledge object
First, determine which type of knowledge object you want to create:
- Event property: Use event properties to "clean up" your data by transforming an attribute's names and existing values post-ingest. You can use event properties in queries to summarize, group by, and filter on. An event property is a re-usable expression that can be a direct reference to physical data columns, a reference to values derived from lookups, or a logical expression evaluating multiple physical columns.
- Flow: Use flows to analyze user actions over time, a sequence of actions, or a sequence of actions over time.
Upcoming versions of Interana 3.0 will include additional knowledge objects.
Creating a context
Use the following steps to begin creating either type of context:
- Click the Knowledge Object icon in the navigation bar.
- Make sure you have the correct dataspace selected, then click Event Properties.
- In the Contexts list, click New Event Property.
- Enter a name for the event property. We recommend creating a descriptive name that will allow you to recognize the function of the context in lists when creating queries or other knowledge objects.
- In the Definition tab, select either Defined Value or Function.
- Optionally, you can add a description in the About tab.
Creating a defined value event property
To create a defined value for an event property, enter a name for the value, then select the value conditions. You can specify multiple values for an event property, and multiple AND/OR conditions for each value.
Click the AND icon to switch between AND and OR.
You must specify at least one value for a defined value event property, and optionally a value to assign if the event property definition does not return any results.
Creating a function event property
To create a function for an event property, specify the actor or context used to evaluate the value, the mathematical operator to perform (add, subtract, multiply, or divide). Then specify whether this is for all events (the default value), or a specific action, event property, or flow that is compared to another action, event property, or flow. | https://docs.interana.com/3/Getting_Started/Creating_a_event_property_knowledge_object | 2018-05-20T20:13:47 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.interana.com |
A backend is a way to run the commands of your workflow. Cromwell allows for backends conforming to the Cromwell backend specification to be plugged into the Cromwell engine. Additionally, backends are included with the Cromwell distribution:
- Local
- HPC, including Sun Grid Engine, LSF, HTCondor & SLURM
- Run jobs as subprocesses or via a dispatcher.
- Supports launching in Docker containers.
- Use
bash,
qsub, and
bsubto run scripts.
- Google Cloud
- Launch jobs on Google Compute Engine through the Google Genomics Pipelines API.
- GA4GH TES
- Launch jobs on servers that support the GA4GH Task Execution Schema (TES).
- Spark
- Supports execution of Spark jobs.
- Alibaba Cloud
- Launch jobs on Alibaba Cloud BatchCompute service.
HPC backends are put under the same umbrella because they all use the same generic configuration that can be specialized to fit the need of a particular technology.
Backends are specified in the
backend.providers configuration. Each backend has a configuration that looks like:
BackendName {
  actor-factory = "FQN of BackendLifecycleActorFactory class"
  config {
    ...
  }
}
The structure within the
config block will vary from one backend to another; it is the backend implementation's responsibility
to be able to interpret its configuration.
The providers section can contain multiple backends which will all be available to Cromwell.
Backend Job Limits
All backends support limiting the number of concurrent jobs by specifying the following option in the backend's configuration stanza:
backend {
  ...
  providers {
    BackendName {
      actor-factory = ...
      config {
        concurrent-job-limit = 5
      }
    }
  }
}
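For instance, a configuration file that keeps the default Local backend but caps it at five concurrent jobs could look like this (a sketch; the actor-factory class shown here is the one used for the shared-filesystem backends in Cromwell's example configuration — verify it against your Cromwell version):

include required(classpath("application"))

backend {
  default = Local
  providers {
    Local {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        concurrent-job-limit = 5
      }
    }
  }
}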
Backend Filesystems
Each backend will utilize a filesystem to store the directory structure and results of an executed workflow. The backend/filesystem pairings are as follows:
- Local, HPC and Spark backend use the Shared Local Filesystem.
- Google backend uses the Google Cloud Storage Filesystem.
- Alibaba Cloud backend uses the OSS Storage FileSystem.
Additional filesystems capabilities can be added depending on the backend. For instance, an HPC backend can be configured to work with files on Google Cloud Storage. See the HPC documentation for more details. | http://cromwell.readthedocs.io/en/develop/backends/Backends/ | 2018-05-20T19:09:54 | CC-MAIN-2018-22 | 1526794863684.0 | [] | cromwell.readthedocs.io |
Getting started on Google Pipelines API
Prerequisites
This tutorial page relies on completing the previous tutorial:
Goals
At the end of this tutorial you'll have run your first workflow against the Google Pipelines API.
Let's get started!
Configuring a Google Project
Install the Google Cloud SDK.
Create a Google Cloud Project and give it a project id (e.g. sample-project). We’ll refer to this as
<google-project-id> and your user login (e.g. [email protected]) as
<google-user-id>.
On your Google project, open up the API Manager and enable the following APIs:
- Google Compute Engine
- Google Cloud Storage
- Genomics API
Authenticate to Google Cloud Platform
gcloud auth login <google-user-id>
Set your default account (will require to login again)
gcloud auth application-default login
Set your default project
gcloud config set project <google-project-id>
Create a Google Cloud Storage (GCS) bucket to hold Cromwell execution directories.
We will refer to this bucket as
google-bucket-name, and the full identifier as
gs://google-bucket-name.
gsutil mb gs://<google-bucket-name>
Workflow Source Files
Copy over the sample
hello.wdl and
hello.inputs files to the same directory as the Cromwell jar.
This workflow takes a string value as specified in the inputs file and writes it to stdout.
hello.wdl
task hello {
  String addressee
  command {
    echo "Hello ${addressee}! Welcome to Cromwell . . . on Google Cloud!"
  }
  output {
    String message = read_string(stdout())
  }
  runtime {
    docker: "ubuntu:latest"
  }
}

workflow wf_hello {
  call hello
  output {
    hello.message
  }
}
hello.inputs
{ "wf_hello.hello.addressee": "World" }
Google Configuration File
Copy over the sample
google.conf file utilizing Application Default credentials to the same directory that contains your sample WDL, inputs and Cromwell jar.
Replace
<google-project-id> and
<google-bucket-name>in the configuration file with the project id and bucket name.
google.conf>" // Base bucket for workflow executions root = "gs://<google-bucket-name> = "" // This allows you to use an alternative service account to launch jobs, by default uses default service account compute-service-account = "default" } filesystems { gcs { // A reference to a potentially different auth for manipulating files via engine functions. auth = "application-default" } } } } } }
Run Workflow
java -Dconfig.file=google.conf -jar cromwell-29.jar run hello.wdl -i hello.inputs
Outputs
The end of your workflow logs should report the workflow outputs.
[info] SingleWorkflowRunnerActor workflow finished with status 'Succeeded'. { "outputs": { "wf_hello.hello.message": "Hello World! Welcome to Cromwell . . . on Google Cloud!" }, "id": "08213b40-bcf5-470d-b8b7-1d1a9dccb10e" }
Success!
Next steps
You might find the following tutorials interesting to tackle next: | http://cromwell.readthedocs.io/en/develop/tutorials/PipelinesApi101/ | 2018-05-20T19:22:12 | CC-MAIN-2018-22 | 1526794863684.0 | [] | cromwell.readthedocs.io |
You can log in by clicking the link at the top of the BlogPress home page or from the link in the footer of the site (available on all pages).
- Enter your email address or username.
- Enter your password
- Complete the CAPTCHA (Spam Protection)
- Click the button to LOGIN.
- If you do not remember your username or password, click the "Lost your password?" link to reset your password.
- If you need additional assistance, click "Contact Support" (or the blue smiley in the lower right-hand corner of the screen) to reach our support team and we'll be happy to help. Make sure to tell us if you're seeing a specific error message. | http://docs.theblogpress.com/get-started/how-to-access-your-blogpress-account | 2018-05-20T19:29:49 | CC-MAIN-2018-22 | 1526794863684.0 | [array(['https://d2mckvlpm046l3.cloudfront.net/6e6b6784096eb4edbe1420ca9e3e3e23298e5852/http%3A%2F%2Fcdn.danandjennifermedia.com%2FBlogPress%2FTraining3.0%2FLogin%2520Links.jpg',
None], dtype=object) ] | docs.theblogpress.com |
QueryRun Class
The QueryRun class traverses tables in the database while fetching records that satisfy constraints given by the user, and helps gather such constraints from user input.
class QueryRun extends ObjectRun
Run On
Called
Methods
Remarks
QueryRun objects are used to traverse tables in the database while fetching records that satisfy constraints given by the user. A QueryRun object may interact with the user to allow the user to enter such constraints.
Queries are used internally by reports to delineate and fetch the data to be presented in the report. A QueryRun object relies on a Query Class object to define the structure of the query (for example, which tables are searched and how the records are sorted). A QueryRun object defines the dynamic behavior of the query, while a Query object defines the static characteristics of the query.
Example
In the following example, it is assumed that there is a query named "Customer" in the AOT, and that it has one datasource, the CustTable table.
static void example()
{
    // Create a QueryRun object from a query stored in the AOT.
    QueryRun qr = new QueryRun ("Customer");
    CustTable customerRecord;

    // Display a window enabling the user to choose which records to print.
    if (qr.prompt())
    {
        // The user clicked OK.
        while (qr.next())
        {
            // Get the fetched record.
            CustomerRecord = qr.GetNo(1);

            // Do something with it
            print CustomerRecord.AccountNum;
        }
    }
    else
    {
        // The user pressed Cancel, so do nothing.
    }
}
Configuration Files
Prerequisites
This tutorial page relies on completing the previous tutorials:
Goals
At the end of this tutorial you'll have set up a configuration file for Cromwell and used it to modify Cromwell's behavior.
Let's get started
Customizing Cromwell with Configuration Files
When Cromwell runs, it contains a large number of default options useful for getting started. For example, by default Cromwell doesn't require an external database while running all workflow jobs on your local machine.
Soon you may want to start storing the results of your Cromwell runs in an external MySQL database. Or, you may want to run jobs on your organizations compute farm, or even run jobs in the cloud via the Pipelines API. All of these changes to the defaults will done by setting configuration values.
When you have many configuration settings you would like to set, you specify them in custom configuration file. See the configuration page for more specific information on the configuration file, and for links to the example configuration file.
Configuration file syntax
Cromwell configuration files are written in a syntax called HOCON. See the HOCON documentation for more information on all the ways one can create a valid configuration file.
Creating your first configuration file
To get started customizing Cromwell via a configuration file, create a new empty text file, say
your.conf. Then add this include at the top:
include required(classpath("application"))
The default Cromwell configuration values are set via Cromwell's
application.conf. To ensure that you always have the defaults from the
application.conf, you must include it at the top of your new configuration file.
Running Cromwell with your configuration file
Once you have created a new configuration file, you can pass the path to Cromwell by setting the system property
config.file:
java -Dconfig.file=/path/to/your.conf -jar cromwell-[VERSION].jar server
Cromwell should start up as normal. As you haven't actually overridden any values yet, Cromwell should be running with the same settings.
Setting a configuration value
To override a configuration value, you can specify new values in your configuration file. For example, say you want to change the default port that cromwell listens from
8000 to
8080. In your config file you can set:
# below the include line from before
webservice {
  port = 8080
}
When you then run Cromwell with the updated config file, Cromwell will now be listening on 8080 instead of 8000.
Finding more configuration properties
In addition to the common configuration properties listed on the configuration page, there are also a large number of example configuration stanzas commented in cromwell.examples.conf.
Next Steps
After completing this tutorial you might find the following page interesting: | http://cromwell.readthedocs.io/en/develop/tutorials/ConfigurationFiles/ | 2018-05-20T19:11:01 | CC-MAIN-2018-22 | 1526794863684.0 | [] | cromwell.readthedocs.io |
Handling unquoted arguments as variables
Handle unquoted tag attribute values as strings
Unquoted values for tag attributes are handled as strings by default; however, Lucee 5 adds a setting in the Administrator (under Settings - Language/Compiler) where you can choose to handle these values as variables instead.
Take this example:
<cfmail subject=subject from=from to=to />
With the default setting, this is interpreted as strings, e.g.:
<cfmail subject="subject" from="from" to="to" />
However when this setting is enabled, it is interpreted as variables, e.g.:
<cfmail subject="#subject#" from="#from#" to="#to#" /> | http://docs.lucee.org/guides/lucee-5/unquoted-arguments.html | 2018-05-20T19:42:28 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.lucee.org |
Oraide is a Python library to help presenters with live coding, demonstrations, and recording screencasts. Oraide uses tmux to create the illusion that someone is manually typing in a terminal session. Oraide is free software, provided under a BSD-style license.
Contents:
If you’d like to review the source, contribute changes, or file a bug report, please see Oraide on GitHub. | http://oraide.readthedocs.io/en/v0.2/ | 2018-05-20T19:25:18 | CC-MAIN-2018-22 | 1526794863684.0 | [] | oraide.readthedocs.io |
Introduction
This guide provides step-by-step instructions on how to add WSO2 as an identity provider in JIRA using Kantega Single Sign-on.
The guide can also be used when setting up SAML with Confluence, Bitbucket, Bamboo and FeCru.
-
Choosing the username attribute
- Select the desired username attribute.
- At the next logon [email protected] will be created in the JIRA Internal Directory.
- If a user with the user name [email protected] already exists, the following message will appear.
Redirect mode
After setting up SSO choose a redirect mode that best fit your use case. | https://docs.kantega.no/display/KA/WSO2 | 2018-05-20T19:22:42 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.kantega.no |
Test-OutlookWebServices
Syntax
Test-OutlookWebServices [[-Identity] <RecipientIdParameter>] [-ClientAccessServer <ClientAccessServerIdParameter>] [-Confirm] [-DomainController <Fqdn>] [-MonitoringContext <$true | $false>] [-TargetAddress <RecipientIdParameter[]>] [-WhatIf] [<CommonParameters>]
Description
The Test-OutlookWebServices cmdlet uses a specified e-mail address to verify that the Outlook provider is configured correctly.

Test-OutlookWebServices -Identity:[email protected]
This example verifies the service information that's returned to the Outlook client from the Autodiscover service for the user [email protected]. The code example verifies information for the following services:
Availability service
Outlook Anywhere
Offline address book
Unified Messaging
The example tests for a connection to each service. The example also submits a request to the Availability service for the user [email protected] to determine whether the user's free/busy information is being returned correctly from the Client Access server to the Outlook client.
Optional Parameters
The ClientAccessServer parameter specifies the Client Access server that the client accesses.'s needed to authenticate.
The MonitoringContext parameter specifies whether the results of the command include monitoring events and performance counters. The two possible values for this parameter are $true or $false. If you specify $true, the results include monitoring events and performance counters, in addition to information about the MAPI transaction.
The TargetAddress parameter specifies the recipient that's used to test whether Availability service data can be retrieved.. | https://docs.microsoft.com/en-us/powershell/module/exchange/client-access/Test-OutlookWebServices?view=exchange-ps | 2018-05-20T20:37:26 | CC-MAIN-2018-22 | 1526794863684.0 | [] | docs.microsoft.com |
Adding a new user to your MIVA Merchant system does not give them any privileges, unless you specifically assign privileges. Once they are a user, you can make them a store manager, or assign them to any store administration group.
Any administrator can return at any time to update the information. | http://docs.mivamerchant.com/en-US/merchant/5/webhelp/Add_User.htm | 2009-07-04T19:22:57 | crawl-002 | crawl-002-021 | [] | docs.mivamerchant.com |
Let's say that new shipping company goes out of business after 6 months. You no longer offer it as a shipping option in your store. You might as well delete the module, as it is only taking up space on the server.
First, unassign the module from any store(s) where it is being used. In the case of our shipping module, click Shipping Configuration. Clear the check box and click Update to unassign it.
Next go to the Modules screen at the domain level and locate the module. Click Edit then Delete.
You will be asked to confirm the removal of the module from the Miva Merchant system. Also, select whether to Delete the module file from the server. In the case of the shipping module for the defunct shipping company, you would delete the file, because there is no chance you will be using that module again. Click Delete to confirm or Cancel if you do not want to remove the module. | http://docs.mivamerchant.com/en-US/merchant/5/webhelp/Delete_Module.htm | 2009-07-04T19:25:36 | crawl-002 | crawl-002-021 | [] | docs.mivamerchant.com |
Revision history of "Manual 1 6"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 02:37, 5 December 2012 Tom Hutchison (Talk | contribs) deleted page Manual 1 6 (Redundant page: page redundant and no longer needed) | https://docs.joomla.org/index.php?title=Manual_1_6&action=history | 2015-07-28T04:38:22 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
public interface Cache
null).
String getName()
Object getNativeCache()
Cache.ValueWrapper get(Object key)
Returns
null if the cache contains no mapping for this key;
otherwise, the cached value (which may be
null itself) will
be returned in a
Cache.ValueWrapper.
key- the key whose associated value is to be returned
Cache.ValueWrapperwhich may also hold a cached
nullvalue. A straight
nullbeing returned means() | http://docs.spring.io/spring/docs/3.2.x/javadoc-api/org/springframework/cache/Cache.html | 2015-07-28T03:31:38 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.spring.io |
Robots.txt file
From Joomla! Documentation
Revision as of 18:20, 14 March 2013 by Samwilson/
Robot Exclusion
You can exclude directories or block robots from your site adding Disallow rule to | https://docs.joomla.org/index.php?title=Robots.txt_file&direction=prev&oldid=82567 | 2015-07-28T04:50:25 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
Requirements¶
Let:
-.7" # Windows c:\> %VENV%\Scripts\easy_install "pyramid==1.5.7". | http://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tutorial/requirements.html | 2015-07-28T03:31:36 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.pylonsproject.org |
Difference between revisions of "Module"
From Joomla! Documentation
Revision as of 21:23, 8 May 2010
Position
Modules can be added to a Module Position. Positions are defined in a Joomla template. Additionally you can see what positions are available in the template you are using by adding either ?tp=1 or &tp=1 to the end of the URL in the frontend. | https://docs.joomla.org/index.php?title=Module&diff=27210&oldid=12367 | 2015-07-28T04:06:28 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
Information for "Release Specific FAQ's?" Basic information Display titleRelease Specific FAQ's? Default sort keyRelease Specific FAQ's? Page length (in bytes)434 Page ID298310:51, 18 October 2008 Latest editorJoomlaWikiBot (Talk | contribs) Date of latest edit16:12, 1 September 2012 Total number of edits3 Total number of distinct authors3 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Release_Specific_FAQ's%3F&action=info | 2015-07-28T04:26:19 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.joomla.org |
java.lang.Object
org.jboss.seam.solder.beanManager.BeanManagerLocatororg.jboss.seam.solder.beanManager.BeanManagerLocator
public class BeanManagerLocator
A utility for use in non-managed classes, which are not able to obtain a
reference to the
BeanManager using injection.
BeanManagerProvider is an SPI that may be implemented to allow third
parties to register custom methods of looking up the BeanManager in an
external context. This class will consult implementations according to
precedence.
**WARNING** This class is NOT a clever way to get the
BeanManager, and should be avoided at all costs. If you need a handle
to the
BeanManager you should probably register an
Extension
instead of using this class; have you tried using @
Inject?
If you think you need to use this class, chat to the community and make sure you aren't missing a trick!
BeanManagerProvider,
BeanManagerAware
public BeanManagerLocator()
public BeanManager getBeanManager()
BeanManagerProviderimplementations to locate the
BeanManagerand return the result. If the
BeanManagercannot be resolved, throw a
BeanManagerUnavailableException.
BeanManagerUnavailableException- if the BeanManager cannot be resolved
public boolean isBeanManagerAvailable()
BeanManagerProviderimplementations to locate the
BeanManagerand return whether it was found, caching the result.
trueif the bean manager has been found, otherwise
false
public BeanManagerProvider getLocatingProvider()
BeanManagerProviderthat was used to locate the BeanManager. This method will not attempt a lookup. | http://docs.jboss.org/seam/3/solder/3.0.0.Beta1/api/org/jboss/seam/solder/beanManager/BeanManagerLocator.html | 2015-07-28T04:54:57 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.jboss.org |
Help Center
Local Navigation
Key HTTP headers included with browser requests
The BlackBerry® Browser includesa number of HTTP headers with each request. You can use the following headers to ensure that you send the optimal content or resources for the BlackBerry device, the network, and the user.
Next topic: BlackBerry Browser architecture
Previous topic: Key URI link schemes supported by the BlackBerry Browser
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/18169/HTTP_headers_sent_by_BB_Browser_1234911_11.jsp | 2015-07-28T03:40:10 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.blackberry.com |
java.lang.Object
org.springframework.conversation.manager.AbstractConversationRepositoryorg.springframework.conversation.manager.AbstractConversationRepository
public abstract class AbstractConversationRepository
An abstract implementation for a conversation repository. Its implementation is based on the
DefaultConversation and manages its initial timeout and provides
easy removal functionality. Internally, there is no explicit check for the conversation object implementing the
MutableConversation interface, it is assumed to be implemented as the abstract repository also creates the
conversation objects.
public AbstractConversationRepository()
public MutableConversation createNewConversation()
createNewConversationin interface
ConversationRepository
public MutableConversation createNewChildConversation(MutableConversation parentConversation, boolean isIsolated)
createNewChildConversationin interface
ConversationRepository
parentConversation- the parent conversation to create and attach a new child conversation to
isIsolated-
trueif the new child conversation has to be isolated from its parent state,
falseif it will inherit the state from the given parent
public void removeConversation(String id, boolean root)
rootflag automatically by invoking the
#removeConversation(org.springframework.conversation.Conversation)by either passing in the root conversation or just the given conversation.
#removeConversation(org.springframework.conversation.Conversation)method to finally remove the conversation object or they might provide their own custom implementation for the remove operation by overwriting this method completely.
removeConversationin interface
ConversationRepository
id- the id of the conversation to be removed
root- flag indicating whether the whole conversation hierarchy should be removed (
true) or just the specified conversation
protected final void removeConversation(MutableConversation conversation)
conversation- the conversation to be removed, including its children, if any
protected abstract void removeSingleConversationObject(MutableConversation conversation)
conversation- the single conversation object to be removed from this repository
public void setDefaultConversationTimeout(int defaultTimeout)
public int getDefaultConversationTimeout() | http://docs.spring.io/spring/docs/3.1.0.M2/javadoc-api/org/springframework/conversation/manager/AbstractConversationRepository.html | 2015-07-28T03:34:42 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.spring.io |
alias of ResourceDirectory
A Resource Site that is suited for administrative purposes. By
default the user must be a staff user.
alias of AuthResource
A Resource Site that is meant for globally registering endpoints
without needing to explicitly create a Resource Site.
Full-text doc search.
States
Throttle
Enter search terms or a module, class or function name. | http://django-hyperadmin.readthedocs.io/en/latest/manual/api/sites.html | 2017-11-18T02:57:53 | CC-MAIN-2017-47 | 1510934804518.38 | [] | django-hyperadmin.readthedocs.io |
NEO node introduction
Nodes that store all of the blockchain are called “full-nodes”. They are connected to the blockchain through a P2P network. All the nodes in the blockchain network are equal, they act both as a client interface and as a server.
There are two full-node programs. The first one is Neo-GUI, it has all the basic functions of a user-client including a graphical user interface and is intended for NEO users. The second one is Neo-CLI, it provides an external API for basic wallet functions and is intended for NEO developers. It will also help other nodes achieve consensus with the network and will be involved in generating new blocks.
The NEO network protocol will provide a low level API for some transaction types that are not currently supported by the CLI, such as claiming GAS or sending NEO without an open wallet.
NEO node download address
Comparison of GUI node and CLI node functions
Port description
If you want an external program to access the node's API, an open firewall port is required. The following is a port description that can be set to fully open or open-on-demand.
For more information, please refer to test network. | http://docs.neo.org/en-us/node/introduction.html | 2017-11-18T02:49:54 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.neo.org |
NOTE: the construction of the sender is based on the urlname and the event.
This allows for listeners to be registered independently of resource construction.
Sent by the endpoint when an event occurs
Sent by the resource when an event occurs
Full-text doc search.
Throttle
Contributing
Enter search terms or a module, class or function name. | http://django-hyperadmin.readthedocs.io/en/latest/manual/api/signals.html | 2017-11-18T02:55:50 | CC-MAIN-2017-47 | 1510934804518.38 | [] | django-hyperadmin.readthedocs.io |
these games to notify their users when
events occur.:
A local push notification is local to the device that the game
is installed on, and requires no backend server. following functions are available for local notifications:
- push_local_notification
- push_get_first_local_notification
- push_get_next_local_notification
- push_cancel_local_notification
NOTE: This function is limited to the iOS and Android target modules.
Remote notification messages are sent by a server to a service
provided by the device platform App Store, and this then forwards
those messages onto all the devices on which your application is
installed. This is supported by GameMaker: Studio on the
iOS, Android, Tizen (Native and JaveScript) target modules.
There are no functions in GameMaker: Studio to deal with remote notifications, as they must all be generated by your server and handled by the respective App Stores. However, once set up correctly, GameMaker: Studio games will receive these notifications, which can then be dealt with in the Asynchronous Push Event, as you would a local notification. (or an error message in the "error" key, if "status" is 0). You must then send this registration id to your server, and every device that your game is installed on will have a different registration id. Your server must maintain a list of ids for registered devices, as when when you send a push notification message from your server, you use the registration ids to send the message to the registered devices.
Please note that there is no guarantee that remote push notifications will be delivered, and that the allowed data payload is fairly small. This varies between platforms, but iOS is particularly limited - the apple service only delivers the most recent notification, which must be selected by the recipient for the payload data to be delivered to your async event, and these notifications have a mximum payload size of 256bytes. Typically a remote push notification would just indicate that new data is available from your server for example.
NOTE: Android requires that you add the GCM Sender ID into the Global Game Settings. This is the Project Number that is assigned when you create your Google Play API Project.
For further details on how to go about setting up a server, as
well as information specific to the available platforms, please see
the following pages on the YoYo Games
Knowledge Base:
NOTE: Implementing the server-side is entirely up to the end user, and YoYo Games do not provide any support for that side of things, other than basic set-up information available from their Knowledge Base. | http://docs.yoyogames.com/source/dadiospice/002_reference/push%20notifications/index.html | 2017-11-18T03:06:12 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.yoyogames.com |
4.1. Set traffic control (
tcset command)¶
tcset is a command to add traffic control rule to a network interface (device).
You can delete rule(s) from a network interface by Delete traffic control (tcdel command).
4.1.1.
tcset command help¶
usage: tcset [-h] [--version] [--tc-command | --tc-script] [--debug | --quiet] [--stacktrace] (-d DEVICE | -f CONFIG_FILE) [--overwrite | --change | --add] [--rate BANDWIDTH_RATE] [--delay NETWORK_LATENCY] [--delay-distro LATENCY_DISTRO_TIME] [--loss PACKET_LOSS_RATE] [--duplicate PACKET_DUPLICATE_RATE] [--corrupt CORRUPTION_RATE] [--reordering REORDERING_RATE] [--shaping-algo {htb,htb}] [--iptables] [--direction {outgoing,incoming}] [--network DST_NETWORK] [--src-network SRC_NETWORK] [--port DST_PORT] [--src-port SRC_PORT] [--ipv6] [--exclude-dst-network EXCLUDE_DST_NETWORK] [--exclude-src-network EXCLUDE_SRC_NETWORK] [--exclude-dst-port EXCLUDE_DST_PORT] [--exclude-src-port EXCLUDE_SRC_PORT] optional arguments: -h, --help show this help message and exit --version show program's version number and exit --tc-command display tc commands to be executed and exit. these commands are not executed. --tc-script generate a script file that described tc commands which equivalent with execution tcconfig command. the script can be execute without tcconfig package installation. --debug for debug print. --quiet suppress execution log messages. -d DEVICE, --device DEVICE network device name (e.g. eth0) -f CONFIG_FILE, --config-file CONFIG_FILE setting traffic controls from a configuration file. output file of the tcshow. --overwrite overwrite existing traffic shaping rules. --change change existing traffic shaping rules to the new one. this option reduces the shaping rule switching side effect (such as traffic spike) compared to --overwrite option. note: the tcset command adds a shaping rule if there are no existing shaping rules. --add add a traffic shaping rule in addition to existing rules. Debug: --stacktrace print stack trace for debug information. --debug option required to see the debug print. Traffic Control Parameters: --rate BANDWIDTH_RATE, --bandwidth-rate BANDWIDTH_RATE network bandwidth rate [bit per second]. valid units are either: K/M/G/Kbps/Mbps/Gbps e.g. --rate 10Mbps --delay NETWORK_LATENCY round trip network delay. the valid range is from 0ms to 60min. valid time units are: m/min/mins/minute/minu tes/s/sec/secs/second/seconds/ms/msec/msecs/millisecon d/milliseconds/us/usec/usecs/microsecond/microseconds. if no unit string found, considered milliseconds as the time unit. (default=0ms) --delay-distro LATENCY_DISTRO_TIME distribution of network latency becomes X +- Y (normal distribution). Here X is the value of --delay option and Y is the value of --delay-dist option). network latency distribution is uniform, without this option. valid time units are: m/min/mins/minute/minutes/s/sec/ secs/second/seconds/ms/msec/msecs/millisecond/millisec onds/us/usec/usecs/microsecond/microseconds. if no unit string found, considered milliseconds as the time unit. --loss PACKET_LOSS_RATE round trip packet loss rate [%]. the valid range is from 0 to 100. (default=0) --duplicate PACKET_DUPLICATE_RATE round trip packet duplicate rate [%]. the valid range is from 0 to 100. (default=0) --corrupt CORRUPTION_RATE packet corruption rate [%]. the valid range is from 0 to 100. packet corruption means single bit error at a random offset in the packet. (default=0) --reordering REORDERING_RATE packet reordering rate [%]. the valid range is from 0 to 100. (default=0) --shaping-algo {htb,htb} shaping algorithm. defaults to htb (recommended). --iptables use iptables to traffic control. Routing: --direction {outgoing,incoming} the direction of network communication that impose traffic control. 
'incoming' requires Linux kernel version 2.6.20 or later. (default = outgoing) --network DST_NETWORK, --dst-network DST_NETWORK target IP-address/network to control traffic --src-network SRC_NETWORK set a traffic shaping rule to specific packets that routed from --src-network to --dst-network. this option required to execute with the --iptables option when you use tbf. the shaping rule only affect to outgoing packets (no effect to if you execute with "-- direction incoming" option) --port DST_PORT, --dst-port DST_PORT target destination port number to control traffic. --src-port SRC_PORT target source port number to control traffic. --ipv6 apply traffic control to IPv6 packets rather than IPv4. --exclude-dst-network EXCLUDE_DST_NETWORK exclude a shaping rule for a specific destination IP- address/network. --exclude-src-network EXCLUDE_SRC_NETWORK exclude a shaping rule for a specific source IP- address/network. --exclude-dst-port EXCLUDE_DST_PORT exclude a shaping rule for a specific destination port. --exclude-src-port EXCLUDE_SRC_PORT exclude a shaping rule for a specific source port. Issue tracker:
4.1.2. Basic usage¶
Examples of outgoing packet traffic control settings are as follows.
4.1.2.2. e.g. Set network latency¶
You can use time units (such as us/sec/min/etc.) to designate delay time.
4.1.2.5. e.g. Specify the IP address of traffic control¶
# tcset --device eth0 --delay 100 --network 192.168.0.10
4.1.3. Advanced usage¶
4.1.3.1. Traffic control of incoming packets¶
You can set traffic shaping rule to incoming packets by executing
tcset command with
--direction incoming option.
Other options are the same as in the case of the basic usage.
4.1.3.1.1. e.g. Set traffic control for both incoming and outgoing network¶
# tcset --device eth0 --direction outgoing --rate 200K --network 192.168.0.0/24 # tcset --device eth0 --direction incoming --rate 1M --network 192.168.0.0/24
4.1.3.2. Set latency distribution¶
Network latency setting by
--delay option is a uniform distribution.
If you are using
--delay-distro option, latency decided by a normal distribution.
4.1.3.3. Set multiple traffic shaping rules per interface¶
You can set multiple shaping rules to a network interface with
--add option.
tcset --device eth0 --rate 500M --network 192.168.2.0/24 tcset --device eth0 --rate 100M --network 192.168.0.0/24 --add
4.1.3.4. Using IPv6¶
IPv6 addresses can be used at
tcset/
tcshow commands with
--ipv6 option.
# tcset --device eth0 --delay 100 --network 2001:db00::0/24 --ipv6 # tcshow --device eth0 --ipv6 { "eth0": { "outgoing": { "dst-network=2001:db00::/24, protocol=ipv6": { "delay": "100.0", "rate": "1G" } }, "incoming": {} } }
4.1.3.5. Get
tc commands¶
You can get
tc commands to be executed by
tcconfig commands by
executing with
--tc-command option
(no
tc configuration have made to the execution server by this command).
4.1.3.6. Generate a
tc script file¶
--tc-script option generates an executable script which includes
tc commands to be executed by
tcconfig commands.
The created script can execute at other servers where tcconfig not installed (however, you need the tc command to run the script).
4.1.3.7. Set a shaping rule for multiple destinations¶
4.1.3.7.1. Example Environment¶
Multiple hosts (
A,
B,
C,
D) are on the same network.
A (192.168.0.100) --+--B (192.168.0.2) | +--C (192.168.0.3) | +--D (192.168.0.4)
4.1.3.7.2. Set a shaping rule to multiple hosts¶
--dst-network/
--src-network option can specify not only a host but also network.
The following command executed at host
A will set a shaping rule that incurs 100 msec network latency to packets
from
A (192.168.0.100) to specific network (
192.168.0.0/28 which include
B/
C/
D).
You can exclude hosts from shaping rules by
--exclude-dst-network/
--exclude-src-network option.
The following command executed at host
A will set a shaping rule that incurs 100 msec network latency to packets
from host
A (192.168.0.100) to host
B (192.168.0.2)/
D (192.168.0.4).
4.1.3.8. Shaping rules for between multiple hosts¶
4.1.3.8.1. Example Environment¶
Existed multiple networks (
192.168.0.0/24,
192.168.10.1/24).
Host
A (192.168.0.100) and host
C (192.168.0.100) belong to a different network.
Host
B (192.168.0.2/192.168.1.2) belong to both networks.
A (192.168.0.100) -- (192.168.0.2) B (192.168.1.2) -- C (192.168.1.10) | http://tcconfig.readthedocs.io/en/latest/pages/usage/tcset/index.html | 2017-11-18T02:37:34 | CC-MAIN-2017-47 | 1510934804518.38 | [] | tcconfig.readthedocs.io |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
For more information, see Suspending and Resuming Auto Scaling Processes in the Auto Scaling Developer Guide.
Namespace: Amazon.AutoScaling
Assembly: AWSSDK.dll
Version: (assembly version)
Container for the necessary parameters to execute the ResumeProcesses service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MAutoScalingIAutoScalingResumeProcessesResumeProcessesRequestNET35.html | 2017-11-18T03:17:52 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
PENDINGor
RUNNINGstate.
Namespace: Amazon.CloudWatchLogs.Model
Assembly: AWSSDK.dll
Version: (assembly version)
The CancelExportTaskRequest type exposes the following members
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TCloudWatchLogsCancelExportTaskRequestNET35.html | 2017-11-18T03:17:55 | CC-MAIN-2017-47 | 1510934804518.38 | [] | docs.aws.amazon.com |
mod_translation¶
This module provides support for dealing with multiple languages.
How content and static strings are translated is explained in full in Translation.
Language as part of the URL¶ labelled “Show the language in the URL”.
Alternatively you can set the config key
mod_translation.rewrite_url to
false.
Programmatically switching languages¶` %}
Supporting right-to-left languages¶¶
If you write your own templates, you can add the
lang tag in the html or body tag, for instance:
" %} >
To create individual right-to-left elements, you can use the same principle:
<div {% include "_language_attrs.tpl" %}></div>
And when you want to force a specific language:
<div {% include "_language_attrs.tpl" language=`en` %} >This is English content</div> | http://docs.zotonic.com/en/latest/ref/modules/mod_translation.html | 2017-02-19T14:15:10 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.zotonic.com |
Fog Fog and Chef provisioning:
- fog_key_pair
fog_key_pair¶
The fog_key_pair resource is a driver-specific resource used by Chef provisioning for use with Fog, a Ruby gem for interacting with various cloud providers, such as Amazon EC2, CloudStack, DigitalOcean, Google Cloud Platform, Joyent, OpenStack, Rackspace, SoftLayer, and vCloud Air.
Syntax¶
A fog_key_pair resource block typically declares a key pair for use with Fog, a Ruby gem for interacting with various cloud providers. For example:
fog_key_pair 'name' do private_key_options({ :format => :pem, :type => :rsa, :regenerate_if_different => true }) allow_overwrite true end
The full syntax for all of the properties that are available to the fog_key_pair resource is:
fog_key_pair 'name' do allow_overwrite TrueClass, FalseClass driver Chef::Provisioning::Driver private_key_options Hash private_key_path String public_key_path String end
where
- fog_key_pair is the resource
- name is the name of the resource block and also the name of an instance in Amazon EC2
- allow_overwrite, driver, private_key_options, private_key_path, and public_key_path are properties of this resource, with the Ruby type shown. See “Properties” section below for more information about all of the properties that may be used with this resource. | https://docs.chef.io/provisioning_fog.html | 2017-02-19T14:18:53 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.chef.io |
The price database editor allows you to enter exchange rates between currencies or prices for investments. You can pull it up using the → menu option.
button, you get the New Price Entry dialog. Enter the first currency (USD in the example above) in the Security field. Enter the second currency (EUR in the example above) in the Currency field. Enter the effective date of the price in the Date field.
KMyMoney will fetch currency conversions from the web. Once you have entered a single price for a pair of currencies, the online quote feature will always list that pair amongst its options.
See the section on Online Price Quotes in the Investments chapter for more details. | https://docs.kde.org/stable4/en/extragear-office/kmymoney/details.currencies.prices.html | 2017-02-19T14:24:15 | CC-MAIN-2017-09 | 1487501169776.21 | [array(['/stable4/common/top-kde.jpg', None], dtype=object)
array(['currency_priceeditor.png', 'Currency Price Editor'], dtype=object)
array(['currency_newpriceentry.png', 'New Price Entry'], dtype=object)] | docs.kde.org |
cookiecutter,
Create
tutorial/tests/test_views.py such that it appears as follows:
Create
tutorial/tests/test_functional.py such that it appears as follows:
Create
tutorial/tests/test_initdb.py such that it appears as follows:
Create
tutorial/tests/test_security.py such that it appears as follows:
Create
tutorial/tests/test_user_model.py such that it appears as follows:
Running the tests¶
We can run these tests similarly to how we did in Run the tests, but first delete the SQLite database
tutorial.sqlite. If you do not delete the database, then you will see an integrity error when running the tests.
On UNIX:
$ rm tutorial.sqlite $ $VENV/bin/py.test -q
On Windows:
c:\tutorial> del tutorial.sqlite c:\tutorial> %VENV%\Scripts\py.test -q
The expected result should look like the following:
................................ 32 passed in 9.90 seconds | http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/wiki2/tests.html | 2017-02-19T14:21:39 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.pylonsproject.org |
This is the latest version of the ChemAxon Documentation.
Compliance Checker is a combined software system and content package providing a way to check whether your compounds are controlled according to the relevant laws of the countries of interest.
With Web Services, API, Pipeline Pilot nodes and a user interface you have many choices in building and integrating systems to automatically consider and identify molecules requiring controlled handling.
Search within Compliance Checker documentation | https://docs.chemaxon.com/display/docs/Compliance+Checker | 2017-02-19T14:17:47 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.chemaxon.com |
Welcome to the Actifio Public Documentation Library page. Here you’ll find a wealth of technical documentation for our current release that provides a view into the radically simple world of Actifio copy data management —- the virtual data pipeline.
Actifio customers have access to Actifio Now, our customer support portal. Actifio Now provides complete and comprehensive technical product documentation that includes proprietary information not available here. In addition, it includes support for all releases of all Actifio products: the what, why, and how information you need to keep your data and Actifio systems in tip-top shape. | http://docs.actifio.com/ | 2017-02-19T14:18:32 | CC-MAIN-2017-09 | 1487501169776.21 | [] | docs.actifio.com |
You can create a corridor surface and then add the required feature line or link codes. Also, you can create a separate corridor surface from each link code in a single operation.
When creating corridor surfaces, sometimes you may need to correct overhanging surfaces in the corridor. Overhanging surfaces are corrected in the Surfaces tab, Corridor Properties dialog box. For more information, see Resolving an Overhanging Surface.
To create a corridor surface
The Corridor Surfaces dialog box is displayed.
If the check box beside the surface name is selected, the surface is added to the Surfaces collection on the Toolspace Prospector tab.
To create a corridor surface for each link
If the check box next to a surface name is selected, the surface is added to Surfaces collection on the Toolspace Prospector tab. | http://docs.autodesk.com/CIV3D/2013/ENU/filesCUG/GUID-6633D4D3-EDCE-48CD-9EC7-34A1C1520B21.htm | 2016-06-24T23:58:55 | CC-MAIN-2016-26 | 1466783391634.7 | [] | docs.autodesk.com |
Datastore 2.3.0 User Guide Configuration objects The Designer enables you to specify: Formats that define the Collection structure. See:About Collection TypesAbout Object Types Dictionaries that define the localized labels that will be displayed in the Datastore Datastore application. General configuration for the system and user interface:Global settings. See About global settings.Object Type SelectionsCollection Type Selections Related topics Data model Manage the Datastore model Storage model Related Links | https://docs.axway.com/bundle/Datastore_230_UserGuide_allOS_en_HTML5/page/Content/UserGuide/Common/Concepts/Configuration_objects/Configuration_objects_overview.htm | 2019-02-15T20:53:46 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.axway.com |
Sub-Domains in SSSDÂś
Currently SSSD assumes that each domain configured in sssd.conf represents exactly one domain in the backend. For example if only the domains DOM1 and DOM2 are configured in sssd.conf a request for a user for DOMX will return an error message and no backend is queried.
In an environment where different domains can trust each other and SSSD shall handle user from domains trusting each other every single domain must be configured in sssd.conf. Besides that this is cumbersome there is an additional issue with respect to group memberships. SSSD by design does not support group memberships between different configured domains, e.g. a user A from domain DOM1 cannot be a member of group G from domain DOM2.
It would be nice if SSSD can support trusted domains in the sense that
- only one domain has to be configured in sssd.conf and all trusted domains are available through SSSD
- a user can be a member of a group in a different trusted domain
To achieve this SSSD must support the concept of domains inside of a configured domain which we like to call sub-domain in the following. Instead of creating a list of know domains from the data in sssd.conf the PAM and NSS responder must query each backend for the names of the domains the backend can handle. If the backend does not support the new request the domain name from sssd.conf must be used as a fallback.
If a request for a simple user name (without @domain_name, i.e. no domain name is know) is received the first configured domain in sssd.conf and all its sub-domains is queried first before moving to the next configured domains and its sub-domains.
If a request with a fully qualified user name is received the backend handling this (sub-)domain is queried directly. If the requested domain is not know the configured domains are asked again for a list a supported domains with a
- force flag to indicate the the backed should try to updated the list of trusted domains unconditionally
- the name of the unknown domain which can be used as a hint in the backend to find the specific domain and see if it is a trusted domain (the backend may pass this hint on to a configured server and let the server do the work)
This process might take some but since it will only happen once for each unknown domain and there may be environment where it is only possible to find a trusted domain with the help of the domain name this is acceptable. Nevertheless, since a search for an unknown domain will lead to some amount of network activity and system load there should be some precaution implemented to avoid attacks based on random domain names (maybe blacklists and timeouts).
With these considerations three development tasks can be identified to add sub-domain support to SSSD
- new get_domains method: a new method to get the list of supported domains from the backend must be defined so that the responders and providers can use them
- add get_domains to providers: providers which can handled trusted domains, currently IPA and winbind, must implement the new method
- add get_domains to the responders: the responders must call get_domains to get a list of supported domains and use the configured domain name as a fallback (this might be split into two task, first call get_domains once at startup without force flag and name of searched domain; second call get_domains if domain cannot be found with force flag and name of searched domain)
The first task must be solved first but is only a minor effort. The other two must wait for the first but also require some more work.
For the first implementation it is sufficient that sub-domains work only if the user name is fully qualified and that the domain name has to be given in full and that short domain names are not supported. But it should be kept in mind user names in general are not fully qualified and that there are trust environments where short names are available to safe some typing for the users. | https://docs.pagure.org/SSSD.sssd/design_pages/subdomains.html | 2019-02-15T21:44:05 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pagure.org |
Designing your site in TERMINALFOUR
- Last Modified:
- 15 Jan 2019
- User Level:
- Power User
The purpose of this article is to help you understand how pages are constructed in TERMINALFOUR so you can code pages that will work neatly once built into TERMINALFOUR.
If you are undertaking your first build with the system then this is aimed at you.
Page Layouts
A single Page Layout can be used on multiple pages on your site. You can think of it as the template for pages in your site. Each Page Layout is made up of Header and Footer code. In this example, the Header contains the logo, main header text and the navigation. Both the Header and Footer will be consistent across all the pages in your site that use this Page Layout.
These two pages use the same Page Layout, so while the content is different on each, the Header and Footer are the same:
Changes to the Page Layout are reflected across all pages using that Page Layout:
Design consistency not only makes your site easier to use, it also reduces the build and any maintenance time for your site in TERMINALFOUR.
The Header and Footer code for your Page Layouts is just plain HTML (with as much JavaScript and CSS as you need). If your site is configured to use a server-side scripting language such as PHP, you can use this too.
Using T4 Tags lets you add useful features like Navigation Objects, metadata and Media Library assets, so your markup can be as simple or complex as you need it.
It's also a good idea to use consistent stylesheets & JavaScript across the Page Layouts to keep things as simple as possible.
For more on Page Layouts, check out the Page Layouts page in the documentation.
Content Types
Each content item added to your site with TERMINALFOUR uses a Content Type. A Content Type is a template for a Content Item.
For instance, you could create a Content Type called "Article" that will be used for all the news articles published on your site:
The Content Type specifies the fields (called Content Type Elements) that can be populated in order to create an article. In this example, the ‘Article’ Content Type has a heading, body text and an image. Each article that is written has some or all of these elements.
For more on Content Types, check out the Content Types page in the documentation.
Content Layouts
Content Layouts let you display a single Content Item in multiple ways. For instance, maybe we want our news article to appear on our site as a full article and within an RSS feed. While both use the same "Article" Content Type we can use different Content Layouts to present that same content in two different ways.
Like Page Layouts, Content Layouts use plain HTML. You add T4 Tags as placeholders for the content that will display on publish.
Try to be consistent & reuse designs across the site. Having a news preview with an h2 title on one page, and h3 title tags on another page will require two separate layouts
For more on Content Layouts, check out the Content Types page in the documentation.
Link Menus and Lists
TERMINALFOUR has a number of built-in Navigation Objects to help you add features like sitemaps, breadcrumbs and, of course, navigation menus to your pages quickly.
Each site in TERMINALFOUR is made up of Sections in a Site Structure. Sections, like folders, help you arrange the pages of your site. The can be used to create a menu listing and link to Sections. If new Sections are added or existing ones are removed or renamed, the Link Menu will be updated.
The most common way to set this up is to use an unordered list.
Single-Level Menu
Here you might want to create main navigation that will list and link to all Sections in the ‘Top Nav’ Branch (a Branch is just a Section that contains Child Sections).
- creating a link menu using an ordered list will only work on a single level
- linking to content such as media or PDFs from within a link menu is not possible
- you can link to external URLs if one of your Sections is a Link Section
You can create a Link Menu Navigation Object that points to that Branch. In the Page Layout, you can add the following markup:
<nav>
<t4 type="navigation" name=Top Nav content" id="363" />
</nav>
The navigation element’s source code looks like this on the published page:
<nav>
<ul>
<li><a href="/news/">News</a></li>
<li><span class=”currentbranch0”><a href="/blog/">Blog</a></span></li> <li><a href="/careers/">Careers</a></li>
<li><a href="/partners/">Partners</a></li>
<li><a href=" link.com">Community</a></li>
</ul>
</nav>
- you might notice the span with a class of “currentbranch0” that has been added to the “Blog” item. This would be added if we were currently on the “Blog” page. The span is applied to the list item of the current Section name at this level where X is the level of the menu (starting at 0 & incrementing 1 per level deep)
- it is also possible to put a class/id on the ul and li, but if using a class on the li then all li's need to use the same class (i.e. do not apply a different class to each li)
- the current branch does not have to be a link
Multilevel Menus (Two Levels)
When you have a multilevel menu in your Link Menu, the source code for your navigation will look like this:
>
Like the single-level menu, a span with a class of ‘currentbranch0’ is added to the current page’s item. In this example, however, there’s another level below ‘About Us’.
When we’re on the ‘Introduction’ page, a span with a class of ‘currentbranch1’ is added to that item. The nested unordered list has a class of ‘multilevel-linkul-0’ added to it.
Thanks to the addition of these classes, we know precisely where we are in the navigation tree.
This is especially useful when we have even more than two levels.
- Child Sections are output in a new UL LI
- this new level has a class multilevel-linkurl-X where X is the depth of the child sections (starting at 0 & incrementing 1 per level deep)
- no class can be applied to the second level of LI's
- the <span class="currentbranchX"> is applied to the current section at this level where X is the level of the menu (starting at 0 & incrementing 1 per level deep)
- making the current branch a link is optional
- the class on the main UL and first-level of LIs are optional (as per the single level menu above)
Multilevel Menus (Three Levels)
>
<ul class="multilevel-linkul-1" title=""> <li><span class="currentbranch2"><a href="/terminalfour/Aboutus/Introduction/Staff/">Staff</a></span></li> </ul>
<>
In this example, our current Branch is two levels below the top level. The list item is given a class of ‘
currentbranch2’. The second nested list has a class of ‘
multilevel-linkul-1’.
this new level has a class multilevel-linkurl
-X where X is the depth of the child sections (starting at 0 & incrementing 1 per level deep)
- adding a current class to the branch is optional
- the <span class="currentbranchX"> is applied to the current section at this level where X is the level of the menu (starting at 0 & incrementing 1 per level deep)
- making the current branch a link is optional
A-Z Navigation
The A to Z Navigation Object is a variation outputs a list of Sections in alphabetical order. The order of the list can either be ascending or descending.
For more on A-Z Navigation Navigation Object, check out the page in the documentation
Breadcrumbs
Breadcrumbs help orient a site user within the site. In general, any custom code can appear before and after the breadcrumbs, as well as between the breadcrumbs.
You cannot have separate classes for each part of the breadcrumb. Adding the class 'first' & 'last' is also not possible in TERMINALFOUR though you can target these with CSS.
When creating a Breadcrumb Navigation you can specify the separating HTML that you would like to appear between the breadcrumb items.
In this case, a right double-angled arrow and a non-breaking space ( » ) is specified as the separating HTML:
The following then appears between all links in the published breadcrumbs:
<a href="/ ">Home</a> »
<a href="/news">News</a> »
<a href="/internal">Internal</a> » <a href="/archive">Archive</a>
Breadcrumbs can also be added to a list (the UL can have any id/class):
<ul class="breadcrumb">
<li class="linkItem"><a href="/">Home</a></li>
<li class="linkItem"><a href="/news">News</a></li>
<li class="linkItem"><a href="/internal">Internal</a></li>
<li class="linkItem"><a href="/archive">Archive</a></li>
</ul>
- each link produced by the CMS will have the same code (and therefore class) applied before & after each link
- the current section can be a link or just text
- custom classes can be added with JQuery
For more on the Breadcrumbs Navigation Object, check out the page in the documentation.
Pagination
Pagination makes it easier for your site users to navigate large numbers of content items.
You can add your own HTML before, after and between the page number links:
In the set up you will be asked to specify the content type used for the pagination, how many can be displayed on the page, and how many overall can be shown through pagination.
<nav class="pagination"> <span class="currentpage">1</span> <a href="/alumni/news/2/">2</a>
<a href="/alumni/news/3/">3</a>
<a href="/alumni/news/2/">></a>
<a href="/alumni/news/3/">>></a>
</nav>
A span with a class of ‘currentpage’ is added to the current page number.
- the surrounding html (div, ul, li) are configurable
- ALL pages are shown in the pagination. If there will be a large number of pages, add JavaScript to replace some of the links with a ...
- each item produced by TERMINALFOUR will be the same, but with the updated link
For more on Pagination Navigation Object, check out the Pagination page in the documentation.
Form Builder
Form Builder is a WYSIWYG form editor that makes it easy to create, edit and publish forms on your site. When customizing the look of your form you can apply your own stylesheet but you won't be able to make changes to the markup that is generated. The form elements most commonly altered are radio buttons and checkboxes but bear in mind that the markup for these elements cannot be changed.
If you want to review Form Builder's generated markup there is a sample form featuring available form input elements.
If you want to change DOM elements with JavaScript bear in mind that there is already jQuery included for form functionality (even on unstyled forms) so conflicts may occur. See note on JavaScript below.
You can learn more about creating forms in the documentation.
TERMINALFOUR Modules
A note on Ajax
TERMINALFOUR's Sample Site contains examples of pre-made modules like the Events Calendar, Basic and Advanced Course Search. Many of these rely on Ajax to load content. If you want to use JavaScript or jQuery to manipulate module elements or their content it's worth being aware of how events are attached.
Here's a typical click event with jQuery:
$("#accordion").on("click", function() {...} );
In that case, we bind the click event to the element with an ID of 'accordion'. However, if the accordion element's content is loaded by Ajax, the click event won't work. That's because it only affects elements that are present on page load. Instead, when targetting Ajax loaded content we must bind the event to the body element or the closest static parent element:
$("body").on("click", "#accordion", function() {...} );
Have a look at this Stack Overflow page for more info.
A note of JavaScript files
When supplying JavaScript files to Professional Services please ensure that they are not minified and combined from multiple files. Ideally, a separate JavaScript file should be supplied for each module and/or page. This makes it easier for our developers to parse for potential conflicts.
Events Calendar
The TERMINALFOUR Events Calendar is a PHP application which displays event content in a calendar format.
- the events listing is within <div class="" id="calendar_events">. The code within this cannot be changed, but different CSS can be applied to style the events differently
- the details that are displayed for each event can be changed - on our sample site we have the time and venue. Up to 3 fields/elements can be displayed
- the calendar box, categories, search etc. can be moved to the left column, if preferred
- the detailed view of the event information does not need to be replicated. This is 100% customizable for each implementation | https://docs.terminalfour.com/articles/design-tips/ | 2019-02-15T21:25:14 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['/media/page-layout-usage-26271-multiple-pages.png',
'Diagram showing a single Page Layout serving multiple pages'],
dtype=object)
array(['/media/page-layout-edited.png',
'Diagram of Page Layout with edited and Header and Footer content'],
dtype=object)
array(['/media/content-type-publish-26557.png',
'Diagram of a single Content Type with multiple pieces of content generated from it'],
dtype=object)
array(['/media/content-type-publish-26557-copy.png',
'Diagram showing a piece of content being generated from a Content Type and published to two Channels'],
dtype=object)
array(['/media/site-navigation.png',
'Screenshot of a navigation bar with the Sections from the Site Structure highlighted'],
dtype=object)
array(['/media/Site_structure_TERMINALFOUR.png',
'Screenshot of the Site Structure with Child Sections expanded'],
dtype=object)
array(['/media/Calendar_-_TERMINALFOUR_University.png',
'Screenshot of breadcrumbs'], dtype=object)
array(['/media/Tips_for_TERMINALFOUR_CMS_-_Google_Docs.png',
'Screenshot of the separator HTML option for Navigation Objects'],
dtype=object)
array(['/media/News_-_TERMINALFOUR_University.png',
'Screenshot of published pagination Navigation Object'],
dtype=object) ] | docs.terminalfour.com |
This section covers the most common and important questions that come up when starting to work with iOSApple’s mobile operating system. More info
See in Glossary.
A: Download the SDK, get up and running on the Apple developer site, and set up your team, devices, and provisioning. We’ve provided a basic list of steps to get you started.A downloadable app designed to help with Android, iOS and tvOS development. The app connects with Unity while you are running your Project in Play Mode from the Unity Editor. More info
See in Glossary application. Then, when you are ready to test performance and optimize the game, you should publish to iOS devices.
A: In the scripting reference inside your Unity iOS installation, you will find classes that provide the hooks into the device functionality that you will need to build your apps. Consult the Input page for more information.
A: iOS has a relatively low fillrate. If your particles cover a large portion of the screen with multiple layers, it will kill iOS performance even with the simplest shaderA small script that contains the mathematical calculations and algorithms for calculating the Color of each pixel rendered, based on the lighting input and the Material configuration. More info
See in Glossary. We suggest baking your particle effects into a series of textures offline. Then, at run-time, you can use 1–2 particles to display them via animated textures. You can get ok results with a minimum amount of overdraw this way.
A: Physics can be expensive on iOS as it requires a lot of floating point number calculations. a scriptsA piece of code that allows you to create your own Components, trigger game events, modify Component properties over time and respond to user input in any way you like. More info
See in Glossary.
A: UnityGUI consumes more resources when more.
A: Try using GUILayout as little as possible. If you are not using GUILayout at all from one
OnGUI() call, you can disable all GUILayout renderingThe process of drawing graphics to the screen (or to a render texture). By default, the main camera in Unity renders its view to the screen. More info
See in Glossary using
MonoBehaviour.useGUILayout = false; This doubles GUI rendering performance. Finally, use as few GUI elements while rendering 3D scenesA Scene contains the environments and menus of your game. Think of each unique Scene file as a unique level. In each Scene, you place your environments, obstacles, and decorations, essentially designing and building your game in pieces. More info
See in Glossary as possible.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/Manual/iphone-basic.html | 2019-02-15T21:07:14 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.unity3d.com |
You can use the Generate Scripts Wizard to create scripts for transferring a database between SQL Server instances. You can generate scripts for a database on an instance of the Database Engine in your local network, or from SQL Database. The generated scripts can be run on another instance of the Database Engine or SQL Database. You can create scripts for an entire database, or limit it to specific objects.
In Database Explorer, expand the node for the instance containing the database to be scripted.
Point to Tasks, and then click Generate Scripts.
On the General page, in the Load a previously saved project section, click Open to select a previously saved *.backup file. If you do not have the project file, go to step 4.
In the Connection list box, select an instance of the SQL Server database engine.
In the Database list box, select a database.
Click “…“ next to the Path text box, to select a folder to store schema export.
In the Output file name text box, specify a name of the file.
Optionally, you can select the Append timestamp to the file name option to add date-time parameters to the file name.
Optionally, you can select the Use compression (ZIP) option to compress the script file.
On the Script content page, select what you want to generate. You can also include or exclude specific database objects.
On the Options page, specify how you want this wizard to generate scripts. Many different options are available.
On the Error handling page, specify errors processing behavior and logging options.
Click Generate. | https://docs.devart.com/fusion-for-sql-server/database-tasks/generating-scripts.html | 2019-02-15T20:58:31 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.devart.com |
Multiple Choice (with Radio boxes)
A multi-purpose field used to allow the user to “choose” one or more options. It can be rendered as radio boxes or as radio buttons.
On submit, this field will generate the following tokens: [<FieldName>] (which yields the value), [<FieldName>:Text], [<FieldName>:Value]. Note that if your options don’t have a value (there’s nothing after the pipe character in the item list), then all the tokens above will return same value.
Options:
Display Horizontally
- this option enables you to display radioboxes next to each other on the same line.
Radio Buttons
- this option transforms the circles into inline-buttons.
Radio Buttons CSS Classes
- beautifies the radio buttons using Bootstrap classes. It supports Bootstrap brand button classes (eg: btn-default, btn-primary, btn-success, btn-info , btn-warning, btn-danger, btn-link). Available only if “Radio Buttons” is checked.
Radio Buttons styles
- stylizes the radio buttons using CSS. It supports multiple css attributes separated by semicolon (eg: border:2px groove; border-radius:25px; color:#e42f43; font-family:Georgia). Available only if “Radio Buttons” is checked.
Word between buttons
- displays a text between the radio buttons. Available only if “Radio Buttons” is checked.
Word styles
- stylizes the word beetween buttons using CSS. It supports multiple CSS attributes separated by semicolon (eg: color:#e42f43; font-family:Georgia). Accepts the font-size only in px. Available only if “Radio Buttons” is checked.
Initially Checked
| https://docs.dnnsharp.com/action-form/form-fields/form-fields-types/multiple-choice/multiple-choice-with-radio-boxes.html | 2019-02-15T22:07:46 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['https://s3.amazonaws.com/static.dnnsharp.com/documentation/2017/07/chrome_2017-07-11_15-14-07.png',
None], dtype=object)
array(['https://s3.amazonaws.com/static.dnnsharp.com/documentation/2017/07/chrome_2017-07-11_15-14-57.png',
None], dtype=object) ] | docs.dnnsharp.com |
- Product Support
-
- 546
- 48 minutes ago
- Plugin and Theme ManagerIssues and suggestions related to the WP Ultimo: Plugin and Theme Manager Add-on.
- 9
- 1 week, 6 days ago
- Other Add-onsIf there is no specific forum for an add-on, this is the place to go.
- 28
- 3 days, 1 hour ago | https://docs.wpultimo.com/community/forum/product-support/ | 2019-02-15T21:04:28 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.wpultimo.com |
- Wendel Clark – Maple Leaf Legend
- Billy Smith – Hall of Famer & 4 Time Stanley Cup Champion
- Guy Carbonneau – Montreal Legend & 3 Time Stanley Cup Champion
- Shayne Corson – Montreal & Toronto Legend
- Chris Neil – Ottawa Legend
- Angela James – One of the first women inducted into the Hockey Hall of Fame in 2010
- Mike Krushelnyski – 3 Time Stanley Cup Champion
- Geraldine Heaney – Gold Medalist & 3rd woman inducted into the Hockey Hall of Fame in 2013
- Matt Barnaby – Former NHL tough guy – Guaranteed Fan favourite
- Al Iafrate – Maple Leaf great & former holder of the Hardest Slapshot record for 15 yrs!
Clark was drafted first overall by the Toronto Maple Leafs in the 1985 NHL Entry Draft. In 793 career NHL games, Clark recorded 564 points and 1,690 penalty minutes playing for Toronto, Quebec, the NY Islanders, Tampa Bay, Detroit & Chicago. Clark also represented Canada at the '85 World Juniors, winning gold. He currently resides with his family in Toronto and keeps busy as an ambassador with the Leafs and with various charitable causes, and has a restaurant chain and Meineke Car Care Centres.
A nasty opponent to anyone who came near his crease, Billy Smith was one of the greatest goalies of his era. Drafted by Los Angeles in 1970, he played only five games for them before the New York Islanders took him in the 1972 Expansion Draft. In his first season there, he broke the record for penalty minutes by a goalie and actually fought some of the league's enforcers! He and Chico Resch formed a tandem that backstopped a team on its way to greatness. When Resch was traded to Colorado in 1980, Smith was the undisputed number one and helped the Islanders win the first of four consecutive Stanley Cups. He won the Vezina Trophy in 1982 and the Conn Smythe in '83, and was inducted into the Hockey Hall of Fame in '93.
Drafted in the 3rd round by Montreal in 1979, Guy Carbonneau would spend another year in junior and two years in the minors before making the roster of the legendary Canadiens. Part of that team's rebuild after an unusual gap between Cups, Carbonneau earned 47 points and was plus-18 in his rookie season. His two Cups in 13 seasons with Montreal were a direct result of his leadership and his incredible two-way play. In fact, he single-handedly shut down Wayne Gretzky in the 1993 Stanley Cup finals. He won three Selke trophies before being traded to St. Louis. After only one season there, he moved to the Dallas Stars, where he won a third Cup in 1999. He spent five seasons in Texas before returning to Montreal, this time in the front office. After Bob Gainey became GM in 2003, he made sure that Carbonneau, his old wingman, was behind the bench. In 2006, Carbonneau became head coach of his beloved Canadiens, a position he held until 2009.
Canadian ice hockey player Guy Carbonneau of the Montreal Canadiens on the ice during a game, February 1983. (Photo by Bruce Bennett Studios/Getty Images)
Drafted 8th overall by the Montreal Canadiens in 1984, Shayne Corson would twice represent his country at the World Junior Championships, while playing in the OHL for the Hamilton Steelhawks, before turning pro with Montreal in 1986. He quickly established himself as a power forward and would spend parts of eight seasons with Montreal before being traded to the Edmonton Oilers. Shayne would spend the next three seasons with the Oilers before being dealt to the St. Louis Blues. He would spend just over a year with St. Louis before being dealt back to Montreal during the 1996-97 season. Shayne spent the next four seasons back with team that drafted. Throughout his NHL career, Corson played in 1156 regular season games, scoring 273 goals and adding 420 assists for 693 points along with 2357 penalty minutes.
Shayne Corson #27 of the Montreal Canadiens in action on January 18 1999 (Mandatory Credit: Robert Laberge /Allsport). always expected. Neil is also a fixture in the Ottawa community taking part in many charitable endeavours. Neil, along with his wife, serves as honorary co-chairs of the Rogers House, an Ottawa paediatric hospice.
Angela James is a legendary name among Canadian women’s hockey. The decision to leave her off the roster of the 1998 Olympic Team was as controversial as the decision to leave Mark Messier off the men’s team that year. James was a member of Canada’s gold medal teams at each of the previous four Women’s World Championships. She was Canada’s leading scorer with eleven goals at the 1990 Women’s World Championship and was an All-Star forward in 1992. James had also been a top Canadian scoring threat at the 1994 and 1997 World Championships and represented her country at the Pacific Rim Championship in 1996. Since her retirement, James has become a sport coordinator for Seneca College. Angela James became one of the first two woman to be inducted into the Hockey Hall of Fame alongside Cammi Granato in 2010.
Mike. He even played in the 1985 All-Star Game. Krushelnyski was part of the biggest trades in NHL history, On August 9, 1988, he was traded to the Los Angeles Kings with.
Mike Krushelnyski of the Toronto Maple Leafs skates up ice against on November 1, 1990 at Maple Leaf Gardens (Photo by Graig Abel Collection/Getty Images)
A pioneer of women’s hockey, Geraldine Heaney was a veteran defenseman for the Canadian National team. By the time that she retired in 2003, she had won an Olympic gold medal, an Olympic silver medal and seven World Championships. She played the most games all-time (125) with Canada’s National Women’s Hockey Team, and leads all defensemen with 27 goals, 66 assists and 93 points. She is also the 5th top scorer of all-time. Working at several youth and prospect camps, Heaney served as head coach of the University of Waterloo Warriors women’s hockey team from 2006 to 2012. Heaney was the third woman ever inducted into the Hockey Hall of Fame, when she was inducted in 2013.
Matthew was the fourth round, 84th overall selection of the Buffalo Sabres in the 1992 NHL Entry Draft, Barnaby is a graduate of the QMJHL. In his first full season in the NHL, with the Buffalo Sabres, Barnaby led the league with 335 minutes in penalties. Barnaby also played for Tampa Bay Lightning, New York Rangers, Colorado Avalanche, Chicago Blackhawks and the Dallas Stars. Everywhere he went Barnaby became a fan favorite with his feisty play and give all attitude. Unfortunately, his career was cut short due to concussion issues. He would finished with 834 games played and over 2500 penalty minutes.
Matthew Barnaby of the New York Rangers on March 30 2002. Credit: Eliot Schechter/Getty Images/NHLI
Al Iafrate was selected 4th overall by the Toronto Maple Leafs in the 1984 NHL Entry Draft. He spent almost 7 seasons in Toronto where he was known as one of the best skaters in the game. Iafrate played in four All-Star Games and at the ’93 game set the record for the hardest slapshot at 105.2 MPH and held this record for 16 years. He played 799 career NHL games over twelve NHL seasons, scoring 152 goals and 311 assists for 463 points. He also compiled 1301 penalty minutes while playing in Toronto, Washington, Boston, and San Jose. HE is also well known for having one of the best hockey hairdo’s The Mullet!
Al Iafrate holds the puck behind the net in February, 1991 at the Maple Leaf Gardens (Photo by B Bennett/Getty Images). | http://docsonice.ca/wp/docs-on-ice-history/past-tournaments/2019-lanark/legends-of-hockey/ | 2022-01-29T04:59:01 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Wendel-Headshot-2-791x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/billy-smith-867x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Guy-Carbonneau-5-678x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Shayne-Corson-8-672x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Chris-Neil-1024x576.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Angela-James-685x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Mike-Krushelnyski-1-679x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/geraldine-hi-res-861x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Matthew-Barnaby-2-685x1024.jpg',
None], dtype=object)
array(['http://docsonice.ca/wp/wp-content/uploads/2019/02/Al-Iafrate-4--678x1024.jpg',
None], dtype=object) ] | docsonice.ca |
Rainforest Concern
Working with Rainforest Concern to save the Cloud Rainforest
As an owner and provider of data centre and cloud computing services, we are acutely aware of our responsibilities towards preserving the earth’s natural resources.
We not only acknowledge our environmental responsibility, but we are also committed to operating our business in an environmentally sensitive manner.
We are committed to integrating realistic and energy efficient stewardship into every aspect of our data centre and cloud computing business for the benefit of our clients.
iomart pro actively seeks to reduce cost at every opportunity: the cost of power, cost to the environment and cost of services for its customers.
We are now witnessing a paradigm shift in our industry’s views on green IT and sustainability. The topic is no longer perceived as the domain of the ‘eco-warrior’ but rather a fundamental element in any organisation’s strategic business plan, and we recognise that the services we offer is playing a major role in this attitudinal shift.
Office 365 from iomart
By continuing and adapting our work with Rainforest Concern and teaming up with Microsoft, we are proud to be supplying forest rangers and university researchers with a fully functional Office 365 package and Microsoft Surface Pro 3 tablets.
This allows for instant and simultaneous communication of vital data to be synchronised between the forest and the people that work all over the world to help support such a vital cause.
In addition to this iomart is proud to be assisting with the purchase of land by Rainforest Concern on their Choco Andean Project by donating £3 for every Office 365 licence sold by iomart to the charity.
The ultimate goal of the Chocó Andean corridor project is to form a corridor of continuous protected forest from Mindo Reserve, close to the capital Quito in the south, to the Awa Reserve on the Colombian border in the north. The southern phase of this project is located between two of the Global Biodiversity hotspots: the Chocó-Darien and Tropical Andes, and will link the last vulnerable forests between the Cotacachi-Cayapas Reserve to the North, and the Maquipucuna, Mindo and Pululahua reserves to the South.
Our Environmental Policy
iomart is fully committed to the principles of the ISO 14001 Environmental Management System (EMS), as defined in the company’s Environmental Policy, Manual and supporting procedures/documentation.
We will endeavour to promote our environmental policy to suppliers, contractors, customers and any other interested party using our website or any other applicable communication media.
iomart provides its employees with education and knowledge via training and awareness programmes aimed at the identification and understanding of the environmental impact of their activities.
We recognise the environmental impacts arising from our business activities and are committed to reducing these through effective environmental management. We aim to achieve this by:
- Managing and, where possible, reducing energy use relating to our business premises.
- Increasing re-use and recycling of materials.
- Preventing pollution by disposing of waste in an environmentally responsible way.
- Respecting all existing, applicable environmental regulations and meeting all new regulations as soon as is reasonably practicable.
- Where appropriate, taking environmental factors into account when purchasing products and services.
- Complying with the ISO 14001:2015 standard for environmental management systems including setting and working towards measurable objectives and targets.
- Continually seeking ways to further improve our environment performance.
Need more information?
Our expert team are here to help with any questions you have regarding our products or services. Call or email today.
iomart works with some of the world's leading technology brands and service providers. | https://docs.iomart.com/about-iomart/corporate-responsibility/rainforest-concern/ | 2022-01-29T05:00:56 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://docs.iomart.com/wp-content/uploads/2016/04/vmware-icon.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/07/rainforest_concern_logo1.jpg',
None], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/07/glass-frog.png',
None], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2016/04/vmware-icon.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2016/04/microsoft-icon.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2015/04/AWS_Logo_PoweredBy_space.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2015/08/dell-emc-partner.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/08/cisco.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/08/openstack.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/09/asigra-footer.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2015/08/zerto-partner.png',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2015/08/arbor_networks-e1439290770528.jpg',
'iomart Technology partners'], dtype=object)
array(['https://docs.iomart.com/wp-content/uploads/2014/09/symantec.png',
'iomart Technology partners'], dtype=object) ] | docs.iomart.com |
BR Tool Overview
BR (Backup & Restore) is a command-line tool for distributed backup and restoration of the TiDB cluster data.
Compared with
dumpling, BR is more suitable for scenarios of huge data volume.
This document describes BR's implementation principles, recommended deployment configuration, usage restrictions and several methods to use BR.
Implementation principles
BR sends the backup or restoration commands to each TiKV node. After receiving these commands, TiKV performs the corresponding backup or restoration operations.
Each TiKV node has a path in which the backup files generated in the backup operation are stored and from which the stored backup files are read during the restoration.
Backup principle
When BR performs a backup operation, it first obtains the following information from PD:
- The current TS (timestamp) as the time of the backup snapshot
- The TiKV node information of the current cluster
According to these information, BR starts a TiDB instance internally to obtain the database or table information corresponding to the TS, and filters out the system databases (
information_schema,
performance_schema,
mysql) at the same time.
According to the backup sub-command, BR adopts the following two types of backup logic:
- Full backup: BR traverses all the tables and constructs the KV range to be backed up according to each table.
- Single table backup: BR constructs the KV range to be backed up according a single table.
Finally, BR collects the KV range to be backed up and sends the complete backup request to the TiKV node of the cluster.
The structure of the request:
BackupRequest{ ClusterId, // The cluster ID. StartKey, // The starting key of the backup (backed up). EndKey, // The ending key of the backup (not backed up). StartVersion, // The version of the last backup snapshot, used for the incremental backup. EndVersion, // The backup snapshot time. StorageBackend, // The path where backup files are stored. RateLimit, // Backup speed (MB/s). }
After receiving the backup request, the TiKV node traverses all Region leaders on the node to find the Regions that overlap with the KV ranges in this request. The TiKV node backs up some or all of the data within the range, and generates the corresponding SST file.
After finishing backing up the data of the corresponding Region, the TiKV node returns the metadata to BR. BR collects the metadata and stores it in the
backupmeta file which is used for restoration.
If
StartVersion is not
0, the backup is seen as an incremental backup. In addition to KVs, BR also collects DDLs between
[StartVersion, EndVersion). During data restoration, these DDLs are restored first.
If checksum is enabled when you execute the backup command, BR calculates the checksum of each backed up table for data check.
Types of backup files
Two types of backup files are generated in the path where backup files are stored:
- The SST file: stores the data that the TiKV node backed up.
- The
backupmetafile: stores the metadata of this backup operation, including the number, the key range, the size, and the Hash (sha256) value of the backup files.
- The
backup.lockfile: prevents multiple backup operations from storing data to the same directory.
The format of the SST file name
The SST file is named in the format of
storeID_regionID_regionEpoch_keyHash_cf, where
storeIDis the TiKV node ID;
regionIDis the Region ID;
regionEpochis the version number of the Region;
keyHashis the Hash (sha256) value of the startKey of a range, which ensures the uniqueness of a key;
cfindicates the Column Family of RocksDB (
defaultor
writeby default).
Restoration principle
During the data restoration process, BR performs the following tasks in order:
It parses the
backupmetafile in the backup path, and then starts a TiDB instance internally to create the corresponding databases and tables based on the parsed information.
It aggregates the parsed SST files according to the tables.
It pre-splits Regions according to the key range of the SST file so that every Region corresponds to at least one SST file.
It traverses each table to be restored and the SST file corresponding to each tables.
It finds the Region corresponding to the SST file and sends a request to the corresponding TiKV node for downloading the file. Then it sends a request for loading the file after the file is successfully downloaded.
After TiKV receives the request to load the SST file, TiKV uses the Raft mechanism to ensure the strong consistency of the SST data. After the downloaded SST file is loaded successfully, the file is deleted asynchronously.
After the restoration operation is completed, BR performs a checksum calculation on the restored data to compare the stored data with the backed up data.
Deploy and use BR
Recommended deployment configuration
- It is recommended that you deploy BR on the PD node.
- It is recommended that you mount a high-performance SSD to BR nodes and all TiKV nodes. A 10-gigabit network card is recommended. Otherwise, bandwidth is likely to be the performance bottleneck during the backup and restore process.
If you do not mount a network disk or use other shared storage, the data backed up by BR will be generated on each TiKV node. Because BR only backs up leader replicas, you should estimate the space reserved for each node based on the leader size.
Because TiDB uses leader count for load balancing by default, leaders can greatly differ in size. This might resulting in uneven distribution of backup data on each node.
Usage restrictions
The following are the limitations of using BR for backup and restoration:
- When BR restores data to the upstream cluster of TiCDC/Drainer, TiCDC/Drainer cannot replicate the restored data to the downstream.
- BR supports operations only between clusters with the same
new_collations_enabled_on_first_bootstrapvalue because BR only backs up KV data. If the cluster to be backed up and the cluster to be restored use different collations, the data validation fails. Therefore, before restoring a cluster, make sure that the switch value from the query result of the
select VARIABLE_VALUE from mysql.tidb where VARIABLE_NAME='new_collation_enabled';statement is consistent with that during the backup process.
Compatibility
The compatibility issues of BR and the TiDB cluster are divided into the following categories:
- Some versions of BR are not compatible with the interface of the TiDB cluster.
- The KV format might change when some features are enabled or disabled. If these features are not consistently enabled or disabled during backup and restore, compatibility issues might occur.
These features are as follows:
However, even after you have ensured that the above features are consistently enabled or disabled during backup and restore, compatibility issues might still occur due to the inconsistent internal versions or inconsistent interfaces between BR and TiKV/TiDB/PD. To avoid such cases, BR has the built-in version check.
Version check
Before performing backup and restore, BR compares and checks the TiDB cluster version and the BR version. If there is a major-version mismatch (for example, BR v4.x and TiDB v5.x), BR prompts a reminder to exit. To forcibly skip the version check, you can set
--check-requirements=false.
Note that skipping the version check might introduce incompatibility. The version compatibility information between BR and TiDB versions are as follows:
Minimum machine configuration required for running BR
The minimum machine configuration required for running BR is as follows:
In general scenarios (less than 1000 tables for backup and restore), the CPU consumption of BR at runtime does not exceed 200%, and the memory consumption does not exceed 4 GB. However, when backing up and restoring a large number of tables, BR might consume more than 4 GB of memory. In a test of backing up 24000 tables, BR consumes about 2.7 GB of memory, and the CPU consumption remains below 100%.
Best practices
The following are some recommended operations for using BR for backup and restoration:
- It is recommended that you perform the backup operation during off-peak hours to minimize the impact on applications.
- BR supports restore on clusters of different topologies. However, the online applications will be greatly impacted during the restore operation. It is recommended that you perform restore during the off-peak hours or use
rate-limitto limit the rate.
- It is recommended that you execute multiple backup operations serially. Running different backup operations in parallel reduces backup performance and also affects the online application.
- It is recommended that you execute multiple restore operations serially. Running different restore operations in parallel increases Region conflicts and also reduces restore performance.
- It is recommended that you mount a shared storage (for example, NFS) on the backup path specified by
-s, to make it easier to collect and manage backup files.
- It is recommended that you use a storage hardware with high throughput, because the throughput of a storage hardware limits the backup and restoration speed.
How to use BR
Currently, the following methods are supported to run the BR tool:
- Use SQL statements
- Use the command-line tool
- Use BR In the Kubernetes environment
Use SQL statements
TiDB supports both
BACKUP and
RESTORE SQL statements. The progress of these operations can be monitored with the statement
SHOW BACKUPS|RESTORES.
Use the command-line tool
The
br command-line utility is available as a separate download. For details, see Use BR Command-line for Backup and Restoration.
In the Kubernetes environment
In the Kubernetes environment, you can use the BR tool to back up TiDB cluster data to S3-compatible storage, Google Cloud Storage (GCS) and persistent volumes (PV), and restore them:
For Amazon S3 and Google Cloud Storage parameter descriptions, see the External Storages document.
- Back up Data to S3-Compatible Storage Using BR
- Restore Data from S3-Compatible Storage Using BR
- Back up Data to GCS Using BR
- Restore Data from GCS Using BR
- Back up Data to PV Using BR
- Restore Data from PV Using BR
Other documents about BR
- BR Tool Overview
- Implementation principles
- Deploy and use BR
- Other documents about BR | https://docs.pingcap.com/tidb/v5.0/backup-and-restore-tool/ | 2022-01-29T04:17:07 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://download.pingcap.com/images/docs/br-arch.png', 'br-arch'],
dtype=object) ] | docs.pingcap.com |
Method
GtkTextItercan_insert
Declaration [src]
gboolean gtk_text_iter_can_insert ( const GtkTextIter* iter, gboolean default_editability )
Description [src]
Considering the default editability of the buffer, and tags that
affect editability, determines whether text inserted at
iter would
be editable.
If text inserted at
iter would be editable then the
user should be allowed to insert text at
iter.
gtk_text_buffer_insert_interactive() uses this function
to decide whether insertions are allowed at a given position. | https://docs.gtk.org/gtk4/method.TextIter.can_insert.html | 2022-01-29T04:13:21 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.gtk.org |
Zero Knowledge
To persist your configuration online, we implemented Zero-Knowledge encryption to prevent access to your information. But how can you trust a company to keep all of your secrets secret? The answer lies in end-to-end encryption, which lays the groundwork for applications with Zero-Knowledge architectures.
Zero-knowledge refers to policies and architecture that eliminate the possibility for secret managers themselves to access your password.
Warning
This is implemented for saving your configuration online in the PRO and TEAM version of Leapp. You don't know about PRO and TEAM versions? Check our roadmap.
Users have key control
When users have complete control of the encryption key, they control access to the data, providing encrypted information to Leapp without Leapp having access to or knowledge of that data.
Info
To know more about this, you can find the whitepaper on which we based our implementation of Zero-Knowledge end-to-end encryption.
Criteria
During any phase of the registration and login process the client does not provide any password-related info to the server. - The server does not store any information that can be used to guess the password in a convenient way. In other words, the system must not be prone to brute force or dictionary attacks. - Any sensible data is encrypted client-side, the server will work with encrypted blocks only. - All the implementation is released as open-source**.
Technologies
- PBKDF2 for client hashing.
- AES 256 for symmetric cypher.
- RSA with 4096-bit keys for asymmetric cypher.
- BCrypt for server hashing. | https://docs.leapp.cloud/security/zero-knowledge/ | 2022-01-29T05:10:27 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.leapp.cloud |
tkinter --- Tcl/Tk的Python接口¶
源代码:.)
Tk类被初始化时无参数。此时会创建一个 Tk 顶级控件,通常是应用程序的主窗口。每个实例都有自己关联的 Tcl 解释器。
tkinter.
Tcl(screenName=None, baseName=None, className='Tk', useTk=0)¶
Tcl()函数是一个工厂函数,它创建的对象与
Tk类创建的对象非常相似,只是它不初始化 Tk 子系统。 在不想创建或无法创建(如没有 X Server 的 Unix/Linux 系统)额外的顶层窗口的环境中驱动 Tcl 解释器时,这一点非常有用。 由
Tcl()对象创建的对象可以通过调用其
loadtk()方法来创建顶层窗口(并初始化 Tk 子系统)。
提供Tk支持的其他模块包括:
tkinter.scrolledtext
Text widget with a vertical scroll bar built in.
tkinter.colorchooser
让用户选择颜色的对话框。
tkinter.commondialog
在此处列出的其他模块中定义的对话框的基类。
tkinter.filedialog
Common dialogs to allow the user to specify a file to open or save.
tkinter.font
Utilities to help work with fonts.
tkinter.messagebox
Access to standard Tk dialog boxes..
注释: 'abstract '-', like Unix shell command flags, and values are put in quotes if they are more than one word.
例如')).
示例:
>>> 和 ipady
A distance - designating internal padding on each side of the slave widget.
- padx 和.
例如".
- 位图 -- 回调 -- 序列.
例如) | https://docs.python.org/zh-cn/3.7/library/tkinter.html | 2022-01-29T03:40:45 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.python.org |
GLGraphicsItem¶
- class pyqtgraph.opengl.GLGraphicsItem.GLGraphicsItem(parentItem=None)[source]¶
- applyTransform(tr, local)[source]¶
Multiply this object’s transform by tr. If local is True, then tr is multiplied on the right of the current transform:
newTransform = transform * tr
If local is False, then tr is instead multiplied on the left:
newTransform = tr * transform
- initializeGL()[source]¶
Called after an item is added to a GLViewWidget. The widget’s GL context is made current before this method is called. (So this would be an appropriate time to generate lists, upload textures, etc.)
- paint()[source]¶
Called by the GLViewWidget to draw this item. It is the responsibility of the item to set up its own modelview matrix, but the caller will take care of pushing/popping.
- rotate(angle, x, y, z, local=False)[source]¶
Rotate the object around the axis specified by (x,y,z). angle is in degrees.
- scale(x, y, z, local=True)[source]¶
Scale the object by (dx, dy, dz) in its local coordinate system. If local is False, then scale takes place in the parent’s coordinates.
- setDepthValue(value)[source]¶
Sets the depth value of this item. Default is 0. This controls the order in which items are drawn–those with a greater depth value will be drawn later. Items with negative depth values are drawn before their parent. (This is analogous to QGraphicsItem.zValue) The depthValue does NOT affect the position of the item or the values it imparts to the GL depth buffer.
- setGLOptions(opts)[source]¶
Set the OpenGL state options to use immediately before drawing this item. (Note that subclasses must call setupGLState before painting for this to work)
The simplest way to invoke this method is to pass in the name of a predefined set of options (see the GLOptions variable):
It is also possible to specify any arbitrary settings as a dictionary. This may consist of {‘functionName’: (args…)} pairs where functionName must be a callable attribute of OpenGL.GL, or {GL_STATE_VAR: bool} pairs which will be interpreted as calls to glEnable or glDisable(GL_STATE_VAR).
For example:
{ GL_ALPHA_TEST: True, GL_CULL_FACE: False, 'glBlendFunc': (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), }
- setTransform(tr)[source]¶
Set the local transform for this object. Must be a
Transform3Dinstance. This transform determines how the local coordinate system of the item is mapped to the coordinate system of its parent.
- setupGLState()[source]¶
This method is responsible for preparing the GL state options needed to render this item (blending, depth testing, etc). The method is called immediately before painting the item.
- show()[source]¶
Make this item visible if it was previously hidden. This is equivalent to setVisible(True).
- translate(dx, dy, dz, local=False)[source]¶
Translate the object by (dx, dy, dz) in its parent’s coordinate system. If local is True, then translation takes place in local coordinates.
- update()[source]¶
Indicates that this item needs to be redrawn, and schedules an update with the view it is displayed in.
- updateGLOptions(opts)[source]¶
Modify the OpenGL state options to use immediately before drawing this item. opts must be a dictionary as specified by setGLOptions. Values may also be None, in which case the key will be ignored.
- viewTransform()[source]¶
Return the transform mapping this item’s local coordinate system to the view coordinate system. | https://pyqtgraph.readthedocs.io/en/latest/3dgraphics/glgraphicsitem.html | 2022-01-29T04:44:51 | CC-MAIN-2022-05 | 1642320299927.25 | [] | pyqtgraph.readthedocs.io |
class Pin – control I/O pins¶
A pin is the basic object to control I/O pins (also known as GPIO - general-purpose input/output). It has methods to set the mode of the pin (input, output, etc) and methods to get and set the digital logic level. For analog control of a pin, see the ADC class.
Usage Model:
Board
machine.
Pin(id, ...)¶
Create a new Pin object associated with the id. If additional arguments are given, they are used to initialise the pin. See
Pin.init().
Methods¶
Pin.
init(mode, pull, *, drive, alt)¶
Initialise the pin:
modecan be one of:
Pin.IN- input pin.
Pin.OUT- output pin in push-pull mode.
Pin.OPEN_DRAIN- output pin in open-drain mode.
Pin.ALT- pin mapped to an alternate function.
Pin.ALT_OPEN_DRAIN- pin mapped to an alternate function in open-drain mode.
pullcan be one of:
None- no pull up or down resistor.
Pin.PULL_UP- pull up resistor enabled.
Pin.PULL_DOWN- pull down resistor enabled.
drivecan be one of:
Pin.LOW_POWER- 2mA drive capability.
Pin.MED_POWER- 4mA drive capability.
Pin.HIGH_POWER- 6mA drive capability.
altis the number of the alternate function. Please refer to the pinout and alternate functions table. for the specific alternate functions that each pin supports..
__call__([value])¶
Pin objects are callable. The call method provides a (fast) shortcut to set and get the value of the pin. See
Pin.value()for more details.
Pin.
alt_list()¶
Returns a list of the alternate functions supported by the pin. List items are a tuple of the form:
('ALT_FUN_NAME', ALT_FUN_INDEX)
Availability: WiPy.
Pin.
irq(*, trigger, priority=1, handler=None, wake=None)¶
Create a callback to be triggered when the input level at the pin changes.
triggerconfigures the pin level which can generate an interrupt. Possible values are:
Pin.IRQ_FALLINGinterrupt on falling edge.
Pin.IRQ_RISINGinterrupt on rising edge.
Pin.IRQ_LOW_LEVELinterrupt on low level.
Pin.IRQ_HIGH_LEVELinterrupt on high level.
The values can be ORed together, for instance mode=Pin.IRQ_FALLING | Pin.IRQ_RISING
prioritylevel of the interrupt. Can take values in the range 1-7. Higher values represent higher priorities.
handleris an optional function to be called when new characters arrive.
wakesselects the power mode in which this interrupt can wake up the board. Please note:
-.
- Values can be ORed to make a pin generate interrupts in more than one power mode.
Returns a callback object.
Attributes¶
Constants¶
The following constants are used to configure the pin objects. Note that not all constants are available on all ports. | http://docs.micropython.org/en/v1.8.5/wipy/library/machine.Pin.html | 2022-01-29T05:24:28 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.micropython.org |
Time zones
The External settlement detail report shows details of transactions for which Adyen is not the acquirer. For example, if you have an agreement with AMEX or PayPal where they send funds to Adyen after acquiring funds from a transaction, then the details of those transactions are in this report.
This report is only available for company accounts, so you need to match the transactions shown in this report to ones in your merchant accounts.
You can generate and download this report manually or automatically.
Sample
Download a sample External settlement detail report for examples of the included data.
Based on your account and settings, a report you generate in your account can contain additional data or columns that are not included in the sample report.
Structure
Entries
Under the header line, each line in the report is a separate entry. A transaction shown in this report does not confirm that the external acquirer has paid you the funds.. | https://docs.adyen.com/reporting/settlement-reconciliation/transaction-level/external-settlement-detail-report | 2022-01-29T04:01:20 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adyen.com |
Updating#
The Engine is updated regularly. For optimal performance make sure you are running the newest version of the Engine. With the newest version, you’re always up-to-date! When an update is available, and you run the Engine using the commands explained in the previous section, it will notify you in the terminal with a message similar to this:
[WARNING] Update available v0.7.7 → v0.7.9
This means that you are running an outdated version of the Engine. You can simply update the Engine by re-using the commands you used when installing the Engine, as found in Section 1.1: Installation. | https://docs.dematrading.ai/installation_and_updating/updating/ | 2022-01-29T05:16:09 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.dematrading.ai |
GetSocial Integration with Adjust¶
Introduction¶
When you enable the integration with Adjust you will start seeing Smart Invites/Smart Links clicks, and GetSocial attributed installs in your Adjust dashboard, in addition to all the existing insights you are already sending them.
We will also send these Adjust parameters with GetSocial custom values:
Note about re-engagement events
At the moment, re-engagement events via Smart Invites/Smart Links won’t be attributed correctly to GetSocial due to Adjust own limitations.
Setup¶
To enable the integration, you will need to create a
GetSocial specific tracker:
- Navigate to your Adjust Dashboard
- Open App Settings → Tracker URLs
- Press New Tracker
- Name it
GetSocialand press Quick Create
- Now your GetSocial Tracker is created, and you will see its ID (a 6-character code) as well.
- Copy the GetSocial Tracker ID
- Navigate to your GetSocial Dashboard
- Click on Adjust
- Enter the GetSocial Tracker ID (6-character code)
- Press Enable
Note
You need to have a valid Adjust account, and the Adjust SDK (iOS, Android or Unity) needs to be integrated into your app.
Troubleshooting¶
I do not see any data for the GetSocial tracker in Adjust
There are a few reasons this could happen:
Make sure that the GetSocial tracker is correctly set up in the GetSocial Dashboard.
If you are just testing the integration, make sure that your testing device has not been already tracked by Adjust. You can inspect and reset this information in the Adjust Dashboard by following these steps. After resetting your device, just try again.
Adjust Analytics takes ~5 min to appear on their Dashboard.
The installs reported by Adjust are slightly different than the ones I see in the GetSocial Dashboard
Adjust and GetSocial have their own attribution algorithm, and although the device fingerprint to do the match might be the same for both, there are other parameters that could affect the result:
Attribution window: Adjust has a configurable window that ranges from hours to 30 days while we have a fixed window of 6 hours.
Other integrations: if you are using Adjust with other partners and your users click on an Ad after clicking on a Smart Invite or Smart Link, Adjust will attribute the install to the latest click seen, while we would attribute the install only to the Smart Invite or Smart Link click we tracked. | https://docs.getsocial.im/guides/exporting-data/integrations/adjust/ | 2022-01-29T05:43:11 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.getsocial.im |
Setup Invite Channels on Android¶
Prerequisite¶
- Finished Getting Started with GetSocial Android.
In this step, we will integrate GetSocial with the 3rd-party SDKs.
Setup Integration with Facebook SDK¶
Starting from the GetSocial SDK v6.20.7 we provide Facebook invite channel without the need to integrate Facebook SDK. The following guide shows how to integrate GetSocial with Facebook SDK for the older GetSocial SDK versions and to provide better invite flow UX.
The benefits of Facebook SDK integration:
- You can track
onSuccess/onError/onCancelevents.
- User is brought back to your application after an invite.
- Invite is done by Facebook application, or WebView inside your app if Facebook is not installed.
GetSocial is compatible with Facebook Android SDK v4.x, older versions are not supported. Integration does not require Facebook application installed to be able to send invitations.
Facebook is deprecating App Invites
Facebook is deprecating App Invites from February 5, 2018.
New GetSocial integration with Facebook will allow posting Smart Invite to the timeline, friend’s timeline or a group. To upgrade, replace
FacebookInvitePlugin with FacebookSharePlugin .
More:
- Integrate Facebook Android SDK into your app as described in the Official Guide.
- Copy implementation of the Facebook Share plugin from GetSocial GitHub repository into your project.
Register plugin with GetSocial:
CallbackManager facebookCallbackManager = CallbackManager.Factory.create(); Invites.registerPlugin( InviteChannelIds.FACEBOOK, new FacebookSharePlugin((Activity)this, facebookCallbackManager));
val facebookCallbackManager = CallbackManager.Factory.create() Invites.registerPlugin(InviteChannelIds.FACEBOOK, FacebookSharePlugin(this@MainActivity, facebookCallbackManager))
If you want to check Facebook’s referral data on your own, use the following property in your
getsocial.json file:
{ ... "disableFacebookReferralCheck": true }
Setup Integration with VK SDK¶
Integration does not require VK application installed to be able to send invitations.
- Integrate VK Android SDK into your app as described in the Official Guide.
- Copy implementation of the VK Invite plugin from GetSocial GitHub repository into your project.
Register plugin with GetSocial:
Invites.registerPlugin( InviteChannelIds.VK, new VKInvitePlugin((Activity)this));
Invites.registerPlugin( InviteChannelIds.VK, VKInvitePlugin(this@MainActivity))
Setup Integration with KakaoTalk SDK¶
To be able to send Smart Invites via KakaoTalk, GetSocial requires:
- Mobile KakaoTalk application installed.
- Integration with KakaoTalk SDK.
Following steps will guide you through integration with KakaoTalk SDK:
Integrate KakaoTalk Android SDK into your app as described in the Official Guide. Alternatively (if you are not good with the Korean language), follow the steps below.
1.1. In your project, open
build.gradle.
1.2. Add KakaoTalk to the list of repositories:
repositories { maven { url '' }` }
1.3. Add dependency to
kakaolinklibrary:
dependencies { compile 'com.kakao.sdk:kakaolink:1.0.52' }
1.4. Create the new app on Kakao Developers Dashboard.
1.5. Provide keystore sha1 fingerprint on the Dashboard. To get hash, use the command:
keytool -exportcert -alias [release_key_alias] -keystore [release_keystore_path] | openssl sha1 -binary | openssl base64
1.5. Add GetSocial Smart Link domain to the list of Site Domains on Kakao Developers Dashboard:
1.6. In your project, open
AndroidManifest.xml.
1.7. Add a meta-data referencing Kakao App Key to the
applicationelement:
<application android: ... </application>
When KakaoTalk SDK is added to the project, let’s integrate it with GetSocial.
- Copy implementation of the Kakao Invite plugin from GetSocial GitHub repository into your project.
Register plugin with GetSocial:
Invites.registerPlugin( InviteChannelIds.KAKAO, new KakaoInvitePlugin((Activity)this) );
Invites.registerPlugin( InviteChannelIds.KAKAO, KakaoInvitePlugin(this@MainActivity))
To validate integration, open the GetSocial Smart Invites view:
InvitesViewBuilder.create().show();
InvitesViewBuilder.create().show()
The list should contain Kakao:
Next Steps¶
- Send your first Smart Invite.
- Customize Smart Invite content.
- Secure Smart Invites with the Webhooks.
- Understand GetSocial Analytics for Smart Invites. | https://docs.getsocial.im/guides/smart-invites/channels/setup-android/ | 2022-01-29T05:16:31 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.getsocial.im |
Receive Smart Links on iOS iOS SDK guide.
Dashboard Configuration¶
When the app will be live in the App Store, copy the App Store ID to the Dashboard. We use this ID to redirect the user to the correct page from the Smart Invites Landing Page.
To set up deep linking:
- Open your app in the iTunes web.
- Login to the GetSocial Dashboard.
Ensure that Bundle ID, Team ID and App Store ID are filled and matches the app you are integrating:
Setup Deep Linking¶
iOS Installer Script configures everything automatically. If you’re not using the script read how to configure deep linking manually.
Retrieve Referral Data¶
Follow the Referral Data guide to see how to retrieve data attached to the invitation .
Troubleshooting¶
- If the app does not open after the clicks on GetSocial Smart Link, one of the following may be a problem:
Option 1. Some apps are blocking iOS Universal Links, e.g., Smart Links will not work in Telegram.
Option 2. Outdated provisioning profiles after enabling Associated Domains for your App ID. To solve it, just download the latest provisioning profiles from Xcode settings or developer.apple.com.
Option 3. Universal links do not work with wildcard app identifiers (e.g.,
im.getsocial.*). You have to create a new specific app identifier and new provisioning profile for your app.
Option. | https://docs.getsocial.im/guides/smart-links/receive-smart-links/ios/ | 2022-01-29T04:21:46 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.getsocial.im |
Back up Data to PV Using BR
This document describes how to back up the data of a TiDB cluster in Kubernetes to Persistent Volumes (PVs). BR is used to get the backup of the TiDB cluster, and then the backup data is sent to PVs.
PVs in this documentation can be any Kubernetes supported Persistent Volume types. This document uses NFS as an example PV type.
The backup method described in this document is supported starting from TiDB Operator v1.1.8.
Ad-hoc backup
Ad-hoc backup supports both full backup and incremental backup. It describes the backup by creating a
Backup Custom Resource (CR) object. TiDB Operator performs the specific backup operation based on this
Backup object. If an error occurs during the backup process, TiDB Operator does not retry, and you need to handle this error manually.
This document provides examples in which the data of the
demo1 TiDB cluster in the
test1 Kubernetes namespace is backed up to NFS.
Prerequisites for ad-hoc backup
If TiDB Operator >= v1.1.10 && TiDB >= v4.0.8, BR will automatically adjust
tikv_gc_life_time. You do not need to configure
spec.tikvGCLifeTime and
spec.from fields in the
Backup CR. In addition, you can skip the steps of creating the
backup-demo1-tidb-secret secret and configuring database account privileges.
Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the
test1namespace:
kubectl apply -f backup-rbac.yaml -n test1
Create the
backup-demo1-tidb-secretsecret which stores the root account and password needed to access the TiDB cluster:
kubectl create secret generic backup-demo1-tidb-secret --from-literal=password=<password> --namespace=test1
Ensure that the NFS server is accessible from your Kubernetes cluster, and TiKV is configured to mount the same NFS server directory to the same local path as in backup jobs. To mount NFS for TiKV, refer to the configuration below:
spec: tikv: additionalVolumes: # specify volume types that are supported by Kubernetes, Ref: - name: nfs nfs: server: 192.168.0.2 path: /nfs additionalVolumeMounts: # this must match `name` in `additionalVolumes` - name: nfs mountPath: /nfs
Required database account privileges
- The
SELECTand
UPDATEprivileges of the
mysql.tidbtable: Before and after the backup, the
BackupCR needs a database account with these privileges to adjust the GC time.
Process of ad-hoc backup
Create the
BackupCR, and back up cluster data to NFS as described below:
kubectl apply -f backup-nfs.yaml
The content of
backup-nfs.yamlis as follows:
--- apiVersion: pingcap.com/v1alpha1 kind: Backup metadata: name: demo1-backup-nfs namespace: test1 spec: # # backupType: full # # # options: # - --lastbackupts=420134118382108673 local: prefix: backup-nfs volume: name: nfs nfs: server: ${nfs_server_ip} path: /nfs volumeMount: name: nfs mountPath: /nfs
In the example above,
spec.localrefers to the configuration related to PVs. For more information about PV configuration, refer to Local storage fields.
In the example above, some parameters in
spec.brcan be ignored, such as
logLevel,
statusAddr,
concurrency,
rateLimit,
checksum, and
timeAgo. For more information about BR configuration, refer to BR fields.
Since TiDB Operator v1.1.6, if you want to back up data incrementally, you only need to specify the last backup timestamp
--lastbackuptsin
spec.br.options. For the limitations of incremental backup, refer to Use BR to Back up and Restore Data.
For more information about the
BackupCR fields, refer to Backup CR fields.
This example backs up all data in the TiDB cluster to NFS.
After creating the
BackupCR, use the following command to check the backup status:
kubectl get bk -n test1 -owide
Scheduled full backup
You can set a backup policy to perform scheduled backups of the TiDB cluster, and set a backup retention policy to avoid excessive backup items. A scheduled full backup is described by a custom
BackupSchedule CR object. A full backup is triggered at each backup time point. Its underlying implementation is the ad-hoc full backup.
Prerequisites for scheduled full backup
The prerequisites for the scheduled full backup is the same with the prerequisites for ad-hoc backup.
Process of scheduled full backup
Create the
BackupScheduleCR, and back up cluster data as described below:
kubectl apply -f backup-schedule-nfs.yaml
The content of
backup-schedule-nfs.yamlis as follows:
--- apiVersion: pingcap.com/v1alpha1 kind: BackupSchedule metadata: name: demo1-backup-schedule-nfs namespace: test1 spec: #maxBackups: 5 #pause: true maxReservedTime: "3h" schedule: "*/2 * * * *" backupTemplate: # local: prefix: backup-nfs volume: name: nfs nfs: server: ${nfs_server_ip} path: /nfs volumeMount: name: nfs mountPath: /nfs
After creating the scheduled full backup, use the following command to check the backup status:
kubectl get bks -n test1 -owide
Use the following command to check all the backup items:
kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-nfs -n test1
From the example above, you can see that the
backupSchedule configuration consists of two parts. One is the unique configuration of
backupSchedule, and the other is
backupTemplate.
backupTemplate specifies the configuration related to the cluster and remote storage, which is the same as the
spec configuration of the
Backup CR. For the unique configuration of
backupSchedule, refer to BackupSchedule CR fields.
Delete the backup CR
Refer to Delete the Backup CR.
Troubleshooting
If you encounter any problem during the backup process, refer to Common Deployment Failures. | https://docs.pingcap.com/tidb-in-kubernetes/v1.1/backup-to-pv-using-br/ | 2022-01-29T05:05:10 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.pingcap.com |
LDAP data source
An LDAP data source allows you to query an external LDAP directory within Matrix, returning LDAP groups and users as shadow assets within your system.
The LDAP information returned can then be accessed through keyword replacements for use on your site, such as listing LDAP user information on an asset listing page.
Once you have created your LDAP data source, you can configure the asset on its associated screens. Many of these screens are similar to a standard page. They are described in the Asset screens documentation.
Read the DB data source documentation for more information on the Record filter screen
This documentation will describe the Details, Search filter, and Dynamic inputs screens, which are different for an LDAP data source.
Details screen
The Details screen for an LDAP data source allows you to set up the connection details for the external LDAP database.
Read the Asset screens documentation for more information about the Status, Future status, Thumbnail, and Details sections of the Details screen.
LDAP bridge connection details
The LDAP bridge connection details section allows you to enter the settings for the LDAP directory to which you want to connect.
The fields in this section are similar to those on the Details screen of an LDAP bridge asset. Read the Details screen information in the LDAP bridge documentation
Use an LDAP bridge asset
The use of an LDAP bridge asset section allows you to select an existing LDAP directory connection within your system (through an LDAP bridge asset) rather than configuring the connection within the LDAP data source.
In the LDAP bridge asset field, select an LDAP bridge asset to connect to the external LDAP directory.
Search filter screen
The search filter screen is used to enter the LDAP query run on the LDAP database specified on the Details screen.
LDAP search filter
The LDAP search filter section allows you to enter the LDAP query to filter the results returned from the LDAP database.
Enter the search filter into the search filter query field and click Save.
Shadow assets will be displayed under the LDAP data source in the asset tree.
Read the Shadow assets documentation for more information on shadow assets. recognized as binary data. These attributes are specified as a comma-separated list in the same manner as the attributes to extract field.
Matrix will identify extracted data from the attributes specified in this field as binary. This information can then be reused within the system through the use of keyword replacements. Read the Available keywords section below for more information.
Record set asset names
The record set asset names section allows you to specify the shadow assets that appear under the LDAP data source in the asset tree.
In the record set asset names field, enter the name used for record sets exposed by the LDAP data source.
This name can either be a standard string or a combination of strings and keyword replacements.
For example, you can enter
%data_source_record_set_givenname% to display the given name of the LDAP user/group as the name of your shadow assets. to use dynamic parameters within the LDAP search filter query string.
Dynamic variables
This section allows you to add variable names for the parameters that you want to add.
Enter the variable name into the name field, enter the default value into the default value field and click Save. The variable will be added to the list.
Once you have added a variable, you can set it up within the data mappings section.
To delete a variable:
Click the Delete box.
Click Save.
To use the variable within the LDAP search filter query string, add double-percentage signs around the variable name.
For example, if the variable’s name is variable, add
%%variable%% within the search filter query field on the search filter screen.
Data mappings
This section allows you to set up the dynamic variables that have been added in the section above.
Select which variable to edit from the parameter list and select a source from the source list. Read the Asset listing documentation for more information about the options in the list. | https://docs.squiz.net/matrix/version/latest/features/data-sources/ldap-data-source.html | 2022-01-29T04:31:32 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['../_images/4-0-0_example-shadow-assets.png',
'Shadow assets in the asset tree'], dtype=object)
array(['../_images/4-0-0_renamed-shadow-assets_2.png',
'Renamed shadow assets'], dtype=object)
array(['../_images/4-0-0_example-dynamic-variable.png',
'An example dynamic variable'], dtype=object) ] | docs.squiz.net |
Document Note/Page Note Indicator
Documents in the Document List that have a Document Note or Page Note will have one of two icons as indicators:
- Is there is no annotation on the document (or page) will see this icon in the document list or page thumbnail:
- If there is an annotation on the document (or page), the same icon will be shaded: | https://docs.xdocm.com/6104/user/the-xdoc-document-viewer/document-notes-and-page-notes/document-note-page-note-indicator | 2022-01-29T03:49:26 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['document-note-page-note-indicator/document-list-note-indicator.png',
'Document List Note Indicator'], dtype=object) ] | docs.xdocm.com |
- Tutorial
- Kiln structure and file layout
- Components
- Running the webapp
- Using Kiln with the latest version of Solr
- Navigation
- URL matching
- Templating
- Searching and Browsing
- Schematron validation
- RDF / Linked Open Data
- Fedora Repository
- Multilingual sites
- Using Kiln as an independent backend
- Using web services
- Kiln’s admin and editorial area
- PDF and ePub generation
- Testing framework
- Error handling
- License
- Projects using Kiln | https://kiln.readthedocs.io/en/latest/?badge=latest | 2022-01-29T04:24:10 | CC-MAIN-2022-05 | 1642320299927.25 | [] | kiln.readthedocs.io |
On this page you'll find accepted formats and examples of data that account holders need to provide.
Individuals
When collecting information from individuals, make sure that:
- They do not provide PO boxes as these are not accepted as addresses.
- They provide a name that matches their photo ID. This is needed in case they have to submit a photo ID.
Individuals associated with organizations
Aside from checking the identity of account holders that are individuals, Adyen also requires information from certain individuals who are associated with businesses, nonprofits, partnerships, or public companies. The table below describes the criteria for identifying them, and the minimum required number for each.
Identification numbers
For some countries, account holders are required to provide their national identification number. For others, we highly recommend passing the ID number to to increase the likelihood of passing the checks. If the automatic verification fails, Adyen may require the account holder to provide an additional document.
For Australia
If an account holder is from Australia, they can provide an ID, passport, visa, or driver's license.
When providing a driver's license, we recommend that you provide the issuer state to increase the likelihood of passing the verification.
Organizations
The following data formats apply to data that you can get from businesses, nonprofits, partnerships, and public companies.
When collecting information from the account holder, make sure that:
- They do not provide PO boxes as these are not accepted as addresses.
Registration numbers
The following registration numbers are accepted for businesses and nonprofits. For nonprofits in some countries, you can upload documents if a nonprofit does not have a registration number. If the automatic verification fails, Adyen may require the account holder to provide an additional document.
Company Tax Identification Number
Account holders of business legal entity type that have processed payments at a higher tier must provide the company's Tax Identification Number (TIN). The following list contains TINs for each country. For countries that have two TINs available, provide either one. For countries that are not in the list, the tax identification number and business registration numbers are the same.
Supported stock exchanges
For businesses that are 100% subsidiaries of companies listed on public stock exchanges or other regulated markets, we require that they provide the stock exchange they are listed in. For example, for parent companies in the US, this means the companies are listed in the New York Stock Exchange, NYSE American, or NASDAQ.
Refer to the complete list of supported stock exchanges.
Bank accounts
Adyen verifies that a bank account exists and that the account holder owns it. We perform the check for every bank account that an account holder adds as a payout method.
When collecting information from the account holder, make sure that:
- The name of the owner of the bank account matches the name of the account holder.
- The country where the bank account is in matches the country where the account holder is operating.
Countries with IBAN
For countries where an IBAN is available, you can send either:
- The bank account's IBAN
- Or a combination of bank code, branch code, and account number
Branch code
In some countries, the bank's branch code may be required. The branch code corresponds to the following values: | https://docs.adyen.com/pt/platforms/verification-process/accepted-data-format | 2022-01-29T03:36:37 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adyen.com |
scc-control-planeand
scc-builder. These two components are responsible for administering the cluster of Atmo instances and your users' functions, respectively.
scc-builderis the component that builds your users' functions and provides the embedded code editor. It can compile various languages to WebAssembly, it powers the code editor, and it provides CI/CD functionality for your users' code.
scc-control-planeacts as a 'director' for Atmo, and controls things like autoscaling, collecting usage and error metrics, connecting to the Suborbital Telemetry service, and providing administrative APIs. It also manages the functions kept in your storage bucket., our authentication, billing, metadata, and telemetry service. An environment token is needed for the control plane to operate. | https://docs.suborbital.dev/concepts/data-plane-vs-control-plane | 2022-01-29T05:03:56 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.suborbital.dev |