fs.path¶
Useful functions for FS path manipulation.
This is broadly similar to the standard
os.path module but works with
paths in the canonical format expected by all FS objects (that is, separated
by forward slashes and with an optional leading slash).
- class
fs.path.
PathMap¶
iternames(root='/')¶
Iterate over all names beneath the given root path.
This is basically the equivalent of listdir() for a PathMap - it yields the next level of name components beneath the given path.
fs.path.
abspath(path)¶
Convert the given path to an absolute path.
Since FS objects have no concept of a ‘current directory’ this simply adds a leading ‘/’ character if the path doesn’t already have one.
fs.path.
basename(path)¶
Returns the basename of the resource referenced by a path.
This is always equivalent to the ‘tail’ component of the value returned by pathsplit(path).
>>> basename('foo/bar/baz') 'baz'
>>> basename('foo/bar') 'bar'
>>> basename('foo/bar/') ''
fs.path.
dirname(path)¶
Returns the parent directory of a path.
This is always equivalent to the ‘head’ component of the value returned by pathsplit(path).
>>> dirname('foo/bar/baz') 'foo/bar'
>>> dirname('/foo/bar') '/foo'
>>> dirname('/foo') '/'
fs.path.
forcedir(path)¶
Ensure the path ends with a trailing forward slash
>>> forcedir("foo/bar") 'foo/bar/'
>>> forcedir("foo/bar/") 'foo/bar/'
fs.path.
isdotfile(path)¶
Detects if a path references a dot file, i.e. a resource whose name starts with a ‘.’
>>> isdotfile('.baz') True
>>> isdotfile('foo/bar/.baz') True
>>> isdotfile('foo/bar.baz') False
fs.path.
isprefix(path1, path2)¶
Return true if path1 is a prefix of path2.
>>> isprefix("foo/bar", "foo/bar/spam.txt") True
>>> isprefix("foo/bar/", "foo/bar") True
>>> isprefix("foo/barry", "foo/baz/bar") False
>>> isprefix("foo/bar/baz/", "foo/baz/bar") False
fs.path.
issamedir(path1, path2)¶
Return true if two paths reference a resource in the same directory.
>>> issamedir("foo/bar/baz.txt", "foo/bar/spam.txt") True
>>> issamedir("foo/bar/baz/txt", "spam/eggs/spam.txt") False
fs.path.
iswildcard(path)¶
Check if a path ends with a wildcard
>>> iswildcard('foo/bar/baz.*') True
>>> iswildcard('foo/bar') False
fs.path.
join(*paths)¶
Joins any number of paths together, returning a new path string.
This is a simple alias for the
pathjoinfunction, allowing it to be used as
fs.path.joinin direct correspondence with
os.path.join.
fs.path.
normpath(path)¶
Normalizes a path to be in the format expected by FS objects.
This function removes trailing slashes, collapses duplicate slashes, and generally tries very hard to return a new path in the canonical FS format. If the path is invalid, ValueError will be raised.
>>> normpath("/foo//bar/frob/../baz") '/foo/bar/baz'
>>> normpath("foo/../../bar") Traceback (most recent call last) ... BackReferenceError: Too many backrefs in 'foo/../../bar'
fs.path.
pathcombine(path1, path2)¶
Joins two paths together.
This is faster than pathjoin, but only works when the second path is relative, and there are no backreferences in either path.
>>> pathcombine("foo/bar", "baz") 'foo/bar/baz'
fs.path.
pathjoin(*paths)¶
Joins any number of paths together, returning a new path string.
>>> pathjoin('foo', 'bar', 'baz') 'foo/bar/baz'
>>> pathjoin('foo/bar', '../baz') 'foo/baz'
>>> pathjoin('foo/bar', '/baz') '/baz'
fs.path.
pathsplit(path)¶
Splits a path into a (head, tail) pair.
The ‘head’ and ‘tail’ components are the values referenced by dirname() and basename() respectively.
>>> pathsplit('foo/bar/baz') ('foo/bar', 'baz')
fs.path.
recursepath(path, reverse=False)¶
Returns intermediate paths from the root to the given path
>>> recursepath('a/b/c') ['/', u'/a', u'/a/b', u'/a/b/c']
fs.path.
relativefrom(base, path)¶
Return a path relative from a given base path, i.e. insert backrefs as appropriate to reach the path from the base.
>>> relativefrom("foo/bar", "baz/index.html") '../../baz/index.html'
fs.path.
relpath(path)¶
Convert the given path to a relative path.
This is the inverse of abspath(), stripping a leading ‘/’ from the path if it is present.
>>> relpath('/a/b') 'a/b'
fs.path.
split(path)¶
Splits a path into (head, tail) pair.
This is a simple alias for the
pathsplitfunction, allowing it to be used as
fs.path.splitin direct correspondence with
os.path.split. | http://pyfilesystem.readthedocs.io/en/latest/path.html | 2017-11-17T19:26:09 | CC-MAIN-2017-47 | 1510934803906.12 | [] | pyfilesystem.readthedocs.io |
14. PostgreSQL Database¶
Contents
- PostgreSQL Database
- Postgres Database Setup and Operation
- Database Schema
- PostgreSQL Database Tables by Data Source
- PostgreSQL Database Table Descriptions
- Creating a New Matview
- Database Admin Function Reference
- MatView Functions
- Hourly Matview Update Functions
- Daily Matview Update Functions
- Other Matview Functions
- backfill_matviews
- backfill_reports_clean
- update_product_versions
- update_rank_compare, backfill_rank_compare
- reports_clean_done
- Schema Management Functions
- weekly_report_partitions
- try_lock_table
- create_table_if_not_exists
- add_column_if_not_exists
- drop_old_partitions
- Other Administrative Functions
- add_old_release
- add_new_release
- edit_featured_versions
- add_new_product
- truncate_partitions
- Custom Time-Date Functions
- Database Misc Function Reference
- Dumping Dump Tables
FIXME(willkg): This needs to be overhauled, updated, and reduced.
14.1. Postgres Database Setup and Operation¶
There are three major steps in creating a Socorro database for production:
- Run
socorro setupdb
- Create a new product (currently not automated)
- Run crontabber jobs to normalize incoming crash and product data
14.1.1. socorro setupdb¶
socorro setupdb is an application that will set up the Postgres database
schema for Socorro. It starts with an empty database and creates all the tables,
indexes, constraints, stored procedures and triggers needed to run a Socorro
instance.
You also have to set up a regular user for day-to-day operations. The regular user should not have the full set of superuser privileges, but it must be privileged enough to create tables within the database.
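A minimal sketch of that setup (the role name breakpad_rw matches the owner role used elsewhere in this guide; the database name and password are placeholders):

CREATE ROLE breakpad_rw WITH LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE mydatabasename TO breakpad_rw;
-- run this while connected to the Socorro database itself,
-- so the role can create tables in its default schema:
GRANT CREATE, USAGE ON SCHEMA public TO breakpad_rw;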
This tool also requires a user with superuser permissions if you want to run the script without first logging in and creating a suitable database yourself, or if you want to use the
--dropdb option.
This script also requires an
alembic configuration file for initializing our
database migration system. An example configuration file can be found in
config/alembic.ini-dist.
Run it like this:
socorro setupdb --database_name=mydatabasename --createdb
Common options listed below:
--fakedata -- populate the database with preset fixture data
--fakedata_days=2 -- the number of days worth of crash data to generate
--dropdb -- drop a database with the same name as --database_name
For more information about fakedata, see
socorro/external/postgresql/fakedata.py.
14.1.2. Creating a new product¶
As of 2015-02-04, there is work underway to automate adding a new product.
The minimal set of actions to enable products viewable in the Django webapp is:
SELECT add_new_product()in the Postgres database
SELECT add_new_release()in the Postgres database
- Insert channels into
product_release_channels
SELECT update_product_versions()in the Postgres database
service memcached restarton your memcached server
Details on the Postgres related operations are below.
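Putting these together, a minimal end-to-end sequence for a hypothetical product might look like this (each call is described in detail below; the throttle value simply mirrors the example later in this section):

SELECT add_new_product('MyNewProduct', '1.0');
SELECT add_new_release('MyNewProduct', '1.0', 'release', '201501010000', 'Linux');
INSERT INTO product_release_channels (product_name, release_channel, throttle)
    VALUES ('MyNewProduct', 'release', '0.1');
SELECT update_product_versions();

After update_product_versions() completes, restart memcached as noted above so the webapp picks up the new product.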
``SELECT add_new_product()``
This function adds rows to the
products,
product_build_types, and
product_productid_map tables.
Minimum required to call this function:
SELECT add_new_product('MyNewProduct', '1.0');
The first value is the product name, used in the webapp and other places for display. Currently we require that this have no spaces. We’d welcome a pull request to make whitespace in a product name possible in our Django webapp.
The second value is the product initial version. This should be the minimum version number that you’ll receive crashes for in dotted notation. This is currently a DOMAIN, and has some special type checking associated with it. In the future, we may change this to be NUMERIC type so as to make it easier to work with across ORMs and with Python.
Additional options include:
prodid TEXT -- a UUID surrounded by '{}' used by Mozilla for Firefox and other products
ftpname TEXT -- an alternate name used to match the product name as given by our release metadata server (ftp and otherwise)
release_throttle NUMERIC -- used by our collectors to only process a percentage of all crashes
rapid_beta_version NUMERIC -- documents the first release version that supports the 'rapid_beta' feature for Socorro
These options are not required and have suitable defaults for all installations.
``SELECT add_new_release()``
This function adds new rows to
releases_raw table, and optionally adds new
rows to
product_versions.
Minimum required to call this function:
SELECT add_new_release('MyNewProduct', '1.0', 'release', '201501010000', 'Linux');
The first value is product name and must match either the product name or
ftpname from the
add_new_product() function run (or whatever is in the
products table).
The second value is a version, and this must be numerically equal to or less
than the major version added during the
add_new_product() run. We support
several common alphanumeric versioning schemes. Examples of supported version
numbers:
1.0.1 1.0a1 1.0b10 1.0.6esr 1.0.10pre 1.0.3(beta)
The third value is a release_channel. Our supported release channel types currently include: release, nightly, aurora (aka alpha), beta, esr (extended support release). “pre” is mapped to “aurora”. Rules for support of “nightly” release channels are complicated.
If you need to support release channels in addition or with names different than
our defaults, you may need to modify the
build_type ENUM defined in the
database. There are a number of other dependencies out of scope for this
document. Recommendation at this time is to just use our release_channels.
The fourth value is a build identifier. Our builds are typically identified by a timestamp.
The fifth value is an operating system name. Supported operating systems are:
Windows, Mac OS X and Linux. There are a few caveats to Windows naming with the
tables
os_name,
os_versions and
os_name_matches playing important
roles in our materialized view generation.
Additional options include:
beta_number INTEGER -- a number derived from the version_string if passed in to help sort betas when displayed
repository TEXT -- a label indicating where the release came from, often the same name as an FTP repo
version_build TEXT -- a label to help identify specific builds associated with the version string
update_products (True/False) -- calls update_product_versions() for you
ignore_duplicates (True/False) -- catches UNIQUE violations
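For example, a hypothetical beta build could be registered and pushed into product_versions in a single call (values are illustrative; the argument order follows the add_new_release() signature in the admin function reference below):

SELECT add_new_release('MyNewProduct', '1.0', 'beta', '201502010000', 'Windows',
                       1,          -- beta_number
                       'release',  -- repository
                       true);      -- update_products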
Insert channels into ``product_release_channels``
Here is a SQL command to populate this table:
INSERT into product_release_channels (product_name, release_channel, throttle) VALUES ('MyNewProduct', 'release', '0.1');
The first value is product name and must match either the product name or
ftpname from the
add_new_product() function run (or whatever is in the
products table).
The second value is a release_channel. Our supported release channel types currently include: release, nightly, aurora (aka alpha), beta, esr (extended support release). “pre” is mapped to “aurora”. Rules for support of “nightly” release channels are complicated.
The third value is release_throttle and is a NUMERIC value indicating what percentage of crashes are processed.
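If the product ships on more than one channel, each channel needs its own row; for example (the throttle values here are illustrative):

INSERT INTO product_release_channels (product_name, release_channel, throttle)
VALUES ('MyNewProduct', 'nightly', '1.0'),
       ('MyNewProduct', 'aurora', '1.0'),
       ('MyNewProduct', 'beta', '1.0'),
       ('MyNewProduct', 'esr', '1.0');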
``SELECT update_product_versions()``
This function inserts rows into the
product_versions and
product_version_builds tables.
Minimum required to call this function:
SELECT update_product_versions();
No values need to be passed to the function by default.
Options include:
product_window INTEGER -- the number of days you'd like product versions to be inserted and updated for, default is 30 days
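For example, to limit the update to versions with build dates in the last two weeks (product_window is the only parameter; see the function reference later in this chapter):

SELECT update_product_versions(14);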
14.2. Database Schema¶
14.2.1. Introduction¶
Socorro operation is deeply connected to the PostgreSQL database: It makes use of a significant number of PostgreSQL and psycopg2 (python) features and extensions. Making a database-neutral API has been explored, and for now is not being pursued.
The tables can be divided into three major categories: crash data, aggregate reporting and process control.
14.2.2. Core crash data diagram¶
reports
This table participates in DatabasePartitioning
Holds a lot of data about each crash report:
Table "public.reports" Column | Type | Modifiers ---------------------+--------------------------+------------------------------------------------------ id | integer | not null default nextval('reports_id_seq'::regclass) client_crash_date | timestamp with time zone | date_processed | timestamp with time zone | uuid | character varying(50) | not null product | character varying(30) | version | character varying(16) | build | character varying(30) | signature | character varying(255) | url | character varying(255) | install_age | integer | last_crash | integer | uptime | integer | cpu_name | character varying(100) | cpu_info | character varying(100) | reason | character varying(255) | address | character varying(20) | os_name | character varying(100) | os_version | character varying(100) | email | character varying(100) | user_id | character varying(50) | started_datetime | timestamp with time zone | completed_datetime | timestamp with time zone | success | boolean | truncated | boolean | processor_notes | text | user_comments | character varying(1024) | app_notes | character varying(1024) | distributor | character varying(20) | distributor_version | character varying(20) | topmost_filenames | text | addons_checked | boolean | flash_version | text | hangid | text | process_type | text | release_channel | text | productid | text |
Indexes and FKs from a child table:
Indexes: "reports_20121015_pkey" PRIMARY KEY, btree (id) "reports_20121015_unique_uuid" UNIQUE, btree (uuid) "reports_20121015_build_key" btree (build) "reports_20121015_date_processed_key" btree (date_processed) "reports_20121015_hangid_idx" btree (hangid) "reports_20121015_product_version_key" btree (product, version) "reports_20121015_reason" btree (reason) "reports_20121015_signature_date_processed_build_key" btree (signature, date_processed, build) "reports_20121015_url_key" btree (url) "reports_20121015_uuid_key" btree (uuid) Check constraints: "reports_20121015_date_check" CHECK ('2012-10-15 00:00:00+00'::timestamp with time zone <= date_processed AND date_processed < '2012-10-22 00:00:00+00'::timestamp with time z one) Referenced by: TABLE "extensions_20121015" CONSTRAINT "extensions_20121015_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports_20121015(id) ON DELETE CASCADE TABLE "plugins_reports_20121015" CONSTRAINT "plugins_reports_20121015_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports_20121015(id) ON DELETE CASCADE Inherits: reports
extensions
This table participates in DatabasePartitioning.
Holds data about what extensions are associated with a given report:
Table "public.extensions" Column | Type | Modifiers -------------------+--------------------------+----------- report_id | integer | not null date_processed | timestamp with time zone | extension_key | integer | not null extension_id | text | not null extension_version | text |
Partitioned Child Table:
Indexes: "extensions_20121015_pkey" PRIMARY KEY, btree (report_id, extension_key) "extensions_20121015_extension_id_extension_version_idx" btree (extension_id, extension_version) "extensions_20121015_report_id_date_key" btree (report_id, date_processed, extension_key) Check constraints: "extensions_20121015_date_check" CHECK ('2012-10-15 00:00:00+00'::timestamp with time zone <= date_processed AND date_processed < '2012-10-22 00:00:00+00'::timestamp with time zone) Foreign-key constraints: "extensions_20121015_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports_20121015(id) ON DELETE CASCADE Inherits: extensions
14.2.4. Monitor, Processors and crontabber tables¶
Needs significant update (2015/02/04)
14.3. PostgreSQL Database Tables by Data Source¶
Last updated: 2012-10-22
This document breaks down the tables in the Socorro PostgreSQL database by where their data comes from, rather than by what the table contains. This is a prerequisite to populating a brand-new socorro database or creating synthetic testing workloads.
14.3.1. Manually Populated Tables¶
The following tables have no code to populate them automatically. Initial population and any updating need to be done by hand. Generally there’s no UI, either; use queries.
- crash_types
- os_name_matches
- os_names
- product_productid_map
- process_types
- product_release_channels
- products
- release_channel_matches
- release_channels
- report_partition_info
- uptime_levels
- windows_versions
14.3.2. Tables Receiving External Data¶
These tables actually get inserted into by various external utilities. This is most of our “incoming” data.
- bugs
- list of bugs, populated by socorro/cron/bugzilla.py
- bugs_associations
- bug to signature association, populated by socorro/cron/bugzilla.py
- extensions
- populated by processors
- plugins
- populated by processors based on crash data
- plugins_reports
- populated by processors
- raw_adi
- populated by daily batch job that selects ADI from Hive system backed by SEQ files from load balancers
- releases_raw
- populated by daily FTP-scraper
- reports
- populated by processors
14.3.3. Automatically Populated Reference Tables¶
Lookup lists and dimension tables, populated by cron jobs and/or processors based on the above tables. Most are annotated with the job or process which populates them. Where the populating process is marked with an @, that indicates a job which is due to be phased out.
- addresses
- cron job, by update_lookup_new_reports, part of update_reports_clean based on reports
- domains
- cron job, by update_lookup_new_reports, part of update_reports_clean based on reports
- flash_versions
- cron job, by update_lookup_new_reports, part of update_reports_clean based on reports
- os_versions
- cron job, update_os_versions_new_reports, based on reports@ cron job, update_reports_clean based on reports
- product_version_builds
- cron job, update_product_versions, based on releases_raw
- product_versions
- cron job, update_product_versions, based on releases_raw
- reasons
- cron job, update_reports_clean, based on reports
- reports_bad
- cron job, update_reports_clean, based on reports future cron job to delete data from this table
- signatures
- cron job, update_signatures, based on reports@ cron job, update_reports_clean, based on reports
14.3.4. Matviews¶
Reporting tables, designed to be called directly by the mware/UI/reports. Populated by cron job batch. Where populating functions are marked with a @, they are due to be replaced with new jobs.
- bug_associations
- not sure
- build_adu
- daily adu based on raw_adi for builds
- daily_hangs
- update_hang_report based on reports
- product_adu
- daily adu based on raw_adi for products
- reports_clean
- update_reports_clean based on reports
- reports_user_info
- update_reports_clean based on reports
- reports_duplicates
- find_reports_duplicates based on reports
- signature_products
- update_signatures based on reports@
- signature_products_rollup
- update_signatures based on reports@
- tcbs
- update_tcbs based on reports
14.3.5. Application Management Tables¶
These tables are used by various parts of the application to do other things than reporting. They are populated/managed by those applications.
- processor management tables
- processors
- transform_rules
- UI management tables
- sessions
- monitoring tables
- replication_test
- cronjob and database management
- cronjobs
- report_partition_info
14.4. PostgreSQL Database Table Descriptions¶
This document describes the various tables in PostgreSQL by their purpose and essentially what data each contains. This is intended as a reference for socorro developers and analytics users.
Tables which are in the database but not listed below are probably legacy tables which are slated for removal in future Socorro releases. Certainly if the tables are not described, they should not be used for new features or reports.
14.4.1. Raw Data Tables¶
These tables hold “raw” data as it comes in from external sources. As such, these tables are quite large and contain a lot of garbage and data which needs to be conditionally evaluated. This means that you should avoid using these tables for reports and interfaces unless the data you need isn’t available anywhere else – and even then, you should see about getting the data added to a matview or normalized fact table.
reports
The primary “raw data” table, reports contains the most used information about crashes, one row per crash report. Primary key is the UUID field.
The reports table is partitioned by date_processed into weekly partitions, so any query you run against it should include filter criteria (WHERE) on the date_processed column. Examples:
WHERE date_processed BETWEEN '2012-02-12 11:05:09+07' AND '2012-02-17 11:05:09+07'
WHERE date_processed >= DATE '2012-02-12' AND date_processed < DATE '2012-02-17'
WHERE utc_day_is(date_processed, '2012-02-15')
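For example, a hypothetical ad-hoc query counting one day's crashes by signature for a single product, constrained so only the relevant partition is scanned:

SELECT signature, count(*) AS report_count
FROM reports
WHERE product = 'Firefox'
  AND date_processed >= DATE '2012-02-15'
  AND date_processed < DATE '2012-02-16'
GROUP BY signature
ORDER BY report_count DESC
LIMIT 20;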
Data in this table comes from the processors.
extensions
Contains information on add-ons installed in the user’s application. This is used by correlations.
Data in this table comes from the processors.
plugins_reports
Contains information on some, but not all, installed modules implicated in the crash: the “most interesting” modules. Relates to dimension table plugins.
Data in this table comes from the processors.
bugs
Contains lists of bugs thought to be related to crash reports, for linking to crashes. Populated by a daily cronjob.
bug_associations
Links bugs from the bugs table to crash signatures. Populated by daily cronjob.
raw_adi
Contains counts of estimated Average Daily Users as calculated by the Metrics department, grouped by product, version, build, os, and UTC date. Populated by a daily cronjob.
releases_raw
Contains raw data about Mozilla releases, including product, version, platform and build information. Populated hourly via FTP-scraping.
reports_duplicates
Contains UUIDs of groups of crash reports thought to be duplicates according to the current automated duplicate-finding algorithm. Populated by hourly cronjob.
14.4.2. Normalized Fact Tables¶
reports_clean
Contains cleaned and normalized data from the reports table, including product-version, os, os version, signature, reason, and more. Partitioned by date into weekly partitions, so each query against this table should contain a predicate on date_processed:
WHERE date_processed BETWEEN '2012-02-12 11:05:09+07' AND '2012-02-17 11:05:09+07'
WHERE date_processed >= DATE '2012-02-12' AND date_processed < DATE '2012-02-17'
WHERE utc_day_is(date_processed, '2012-02-15')
Because reports_clean is much smaller than reports and is normalized into unequivocal relationships with dimension tables, it is much easier to use and faster to execute queries against (see the example query after the column list below). However, it excludes data in the reports table which doesn’t conform to normalized data, including:
- product versions before the first Rapid Release versions (e.g. Firefox 3.6)
- non-rapid release products
- corrupt reports, including ones which indicate a breakpad bug
Populated hourly, 3 hours behind the current time, from data in reports via cronjob. The UUID column is the primary key. There is one row per crash report, although some crash reports are suspected to be duplicates.
Updated by
update_reports_clean().
Columns:
- uuid
- artificial unique identifier assigned by the collectors to the crash at collection time. Contains the date collected plus a random string.
- date_processed
- timestamp (with time zone) at which the crash was received by the collectors. Also the partition key for partitioning reports_clean. Note that the time will be 7-8 hours off for crashes before February 2012 due to a shift from PST to UTC.
- client_crash_date
- timestamp with time zone at which the users’ crashing machine thought the crash was happening. Often inaccurate due to clock issues; it is primarily supplied as an anchor timestamp for uptime and install_age.
- product_version_id
- foreign key to the product_versions table.
- build
- numeric build identifier as supplied by the client. Might not match any real build in product_version_builds for a variety of reasons.
- signature_id
- foreign key to the signatures dimension table.
- install_age
- time interval between installation and crash, as reported by the client. To get the reported install date, do
( SELECT client_crash_date - install_age ).
- uptime
- time interval between program start and crash, as reported by the client.
- reason_id
- foreign key to the reasons table.
- address_id
- foreign key to the addresses table.
- os_name
- name of the OS of the crashing host, for OSes which match known OSes.
- os_version_id
- foreign key to the os_versions table.
- hang_id
- UUID assigned to the hang pair grouping for hang pairs. May not match anything if the hang pair was broken by sampling or lost crash reports.
- flash_version_id
- foreign key to the flash_versions table
- process_type
- Crashing process type, linked to process_types dimension.
- release_channel
- release channel from which the crashing product was obtained, unless altered by the user (this happens more than you’d think). Note that non-Mozilla builds are usually lumped into the “release” channel.
- duplicate_of
- UUID of the “leader” of the duplicate group if this crash is marked as a possible duplicate. If UUID and duplicate_of are the same, this crash is the “leader”. Selection of leader is arbitrary.
- domain_id
- foreign key to the domains dimension
- architecture
- CPU architecture of the client as reported (e.g. ‘x86’, ‘arm’).
- cores
- number of CPU cores on the client, as reported.
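Because the fact table stores surrogate keys, a typical query joins back to the dimension tables. A hypothetical sketch (the signature column name in the signatures dimension is an assumption):

SELECT pv.version_string, s.signature, count(*) AS report_count
FROM reports_clean rc
JOIN product_versions pv USING (product_version_id)
JOIN signatures s USING (signature_id)
WHERE rc.date_processed >= DATE '2012-02-15'
  AND rc.date_processed < DATE '2012-02-16'
GROUP BY pv.version_string, s.signature
ORDER BY report_count DESC
LIMIT 20;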
reports_user_info
Contains a handful of “optional” information from the reports table which is either security-sensitive or is not included in all reports and is large. This includes the full URL, user email address, comments, and app_notes. As such, access to this table in production may be restricted.
Partitioned by date into weekly partitions, so each query against this table should contain a predicate on date_processed. Relates to reports_clean via UUID, which is also its primary key.
Updated by update_reports_clean().
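A hypothetical sketch joining it back to reports_clean; the user_comments column name is assumed to match the reports table, and note that both partitioned tables get their own date_processed predicate:

SELECT uuid, ui.user_comments
FROM reports_clean rc
JOIN reports_user_info ui USING (uuid)
WHERE rc.date_processed >= DATE '2012-02-15'
  AND rc.date_processed < DATE '2012-02-16'
  AND ui.date_processed >= DATE '2012-02-15'
  AND ui.date_processed < DATE '2012-02-16'
  AND ui.user_comments IS NOT NULL
LIMIT 100;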
product_adu
The normalized version of raw_adi, contains summarized estimated counts of users for each product-version since Rapid Release began. Populated by daily cronjob.
Updated by update_adu().
14.4.3. Dimensions¶
These tables contain lookup lists and taxonomy for the fact tables in Socorro. Generally they are auto-populated based on encountering new values in the raw data, on an hourly basis. A few tables below are manually populated and change extremely seldom, if at all.
Dimensions which are lookup lists of short values join to the fact tables by natural key, although it is not actually necessary to reference them (e.g. os_name, release_channel). Dimension lists which have long values or are taxonomies or hierarchies join to the fact tables using a surrogate key (e.g. product_version_id, reason_id).
Some dimensions which come from raw crash data have a “first_seen” column which displays when that value was first encountered in a crash and added to the dimension table. Since the first_seen columns were added in September 2011, most of these will have the value ‘2011-01-01’ which is not meaningful. Only dates after 2011-09-15 actually indicate a first appearance.
addresses
Contains a list of crash location “addresses”, extracted hourly from the raw data. Surrogate key: address_id.
Updated by update_reports_clean().
crash_types
Intersects process_types and whether or not a crash is a hang to supply 5 distinct crash types. Used for the “Crashes By User” screen.
Updated manually.
domains
List of HTTP domains extracted from raw reports by applying a truncation regex to the crashing URL. These should contain no personal information. Contains a “first seen” column. Surrogate key: domain_id
Updated from update_reports_clean(), with function update_lookup_new_reports().
flash_versions
List of Adobe Flash version numbers harvested from crashes. Has a “first_seen” column. Surrogate key: flash_version_id.
Updated from update_reports_clean(), with function update_lookup_new_reports().
os_names
Canonical list of OS names used in Socorro. Natural key. Fixed list.
Updated manually.
os_versions
List of versions for each OS based on data harvested from crashes. Contains some garbage versions because we cannot validate. Surrogate key: os_version_id.
Updated from update_reports_clean(), with function update_os_versions_new_reports().
plugins
List of “interesting modules” harvested from raw crashes, populated by the processors. Surrogate key: ID. Links to plugins_reports.
process_types
Standing list of crashing process types (browser, plugin and hang). Natural key.
Updated manually.
products
List of supported products, along with the first version on rapid release. Natural key: product_name.
Updated manually.
product_versions
Contains a list of versions for each product, since the beginning of rapid release (i.e. since Firefox 5.0). Version numbers are available expressed several different ways, and there is a sort column for sorting versions. Also contains build_date/sunset_date visibility information and the featured_version flag. “build_type” means the same thing as “release_channel”. Surrogate key: product_version_id.
Updated by update_product_versions(), based on data from releases_raw.
Version columns include:
- version_string
- The canonical, complete version number for display to users
- release_version
- The version number as provided in crash reports (and usually the same as the
- one on the FTP server). Can be missing suffixes like “b2” or “esr”.
- major_version
- Just the first two numbers of the version number, e.g. “11.0”
- version_sort
- An alphanumeric string which allows you to sort version numbers in the correct order.
- beta_number
- The sequential beta release number if the product-version is a beta. For “final betas”, this number will be 99.
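For example, to list a product's versions in the correct display order (this assumes product_versions carries a product_name column):

SELECT version_string
FROM product_versions
WHERE product_name = 'Firefox'
ORDER BY version_sort;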
product_version_builds
Contains a list of builds for each product-version. Note that platform information is not at all normalized. Natural key: product_version_id, build_id.
Updated from update_os_versions_new_reports().
product_release_channels
Contains an intersection of products and release channels, mainly in order to store throttle values. Manually populated. Natural key: product_name, release_channel.
reasons
Contains a list of “crash reason” values harvested from raw crashes. Has a “first seen” column. Surrogate key: reason_id.
release_channels
Contains a list of available Release Channels. Manually populated. Natural key. See “note on release channel columns” below.
signatures
List of crash signatures harvested from incoming raw data. Populated by hourly cronjob. Has a first_seen column. Surrogate key: signature_id.
uptime_levels
Reference list of uptime “levels” for use in reports, primarily the Signature Summary. Manually populated.
windows_versions
Reference list of Window major/minor versions with their accompanying common names for reports. Manually populated.
14.4.4. Matviews¶
These data summaries are derived data from the fact tables and/or the raw data tables. They are populated by hourly or daily cronjobs, and are frequently regenerated if historical data needs to be corrected. If these matviews contain the data you need, you should use them first because they are smaller and more efficient than the fact tables or the raw tables.
build_adu
Totals ADU per product-version, OS, crash report date, and build date. Used primarily to feed data to crashes_by_user_build and home_page_build.
correlations
Summarizes crashes by product-version, os, reason and signature. Populated by daily cron job. Is the root for the other correlations reports. Correlation reports in the database will not be active/populated until 2.5.2 or later.
correlation_addons
Contains crash-count summaries of addons per correlation. Populated by daily cronjob.
correlation_cores
Contains crash-count summaries of crashes per architecture and number of cores. Populated by daily cronjob.
correlation_modules
Will contain crash-counts for modules per correlation. Will be populated daily by pull from S3.
crashes_by_user, crashes_by_user_view
Totals crashes, adu, and crash/adu ratio for each product-version, crash type and OS for each crash report date. Used to populate the “Crashed By User” interactive graph. crashes_by_user_view joins crashes_by_user to its various lookup list tables.
crashes_by_user_build, crashes_by_user_build_view
The same as crashes_by_user, but also summarizes by build_date, allowing you to do a sum() and see crashes by build date instead of by crash report date.
daily_hangs and hang_report
daily_hangs contains a correlation of hang crash reports with their related hang pair crashes, plus additional summary data. Duplicates contains an array of UUIDs of possible duplicates.
hang_report is a dynamic view which flattens daily_hangs and its related dimension tables.
home_page_graph, home_page_graph_view
Summary of non-browser-hang crashes by report date and product-version, including ADU and crashes-per-hundred-adu. As the name suggests, used to populate the home page graph. The _view joins the matview to its various lookup list tables.
home_page_graph_build, home_page_graph_build_view
Same as home_page_graph, but also includes build_date. Note that since it includes report_date as well as build_date, you need to do a SUM() of the counts in order to see data just by build date.
nightly_builds
contains summaries of crashes-by-age for Nightly and Aurora releases. Will be populated in Socorro 2.5.1.
product_crash_ratio
Dynamic VIEW which shows crashes, ADU, adjusted crashes, and the crash/100ADU ratio, for each product and versions. Recommended for backing graphs and similar.
product_os_crash_ratio
Dynamic VIEW which shows crashes, ADU, adjusted crashes, and the crash/100ADU ratio for each product, OS and version. Recommended for backing graphs and similar.
product_info
dynamic VIEW which supplies the most essential information about each product version for both old and new products.
signature_products and signature_products_rollup
Summary of which signatures appear in which product_version_ids, with first appearance dates.
The rollup contains an array-style summary of the signatures with lists of product-versions.
tcbs
Short for “Top Crashes By Signature”, tcbs contains counts of crashes per day, signature, product-version, and columns counting each OS.
tcbs_build
Same as TCBS, only with build_date as well. Note that you need to SUM() values, since report_date is included as well, in order to get values just by build date.
14.4.5. Note On Release Channel Columns¶
Due to a historical error, the column name for the Release Channel in various tables may be named “release_channel”, “build_type”, or “build_channel”. All three of these column names refer to exactly the same thing. While we regret the confusion, it has not been thought to be worth the refactoring effort to clean it up.
14.4.6. Application Support Tables¶
These tables are used by various parts of the application to do other things than reporting. They are populated/managed by those applications. Most are not accessible to the various reporting users, as they do not contain reportable data.
data processing control tables
These tables contain data which supports data processing by the processors and cronjobs.
- product_productid_map
- maps product names based on productIDs, in cases where the product name supplied by Breakpad is not correct (i.e. FennecAndroid).
- reports_bad
- contains the last day of rejected UUIDs for copying from reports to reports_clean. intended for auditing of the reports_clean code.
- os_name_matches
- contains regexs for matching commonly found OS names in crashes with canonical OS names.
- release_channel_matches
- contains LIKE match strings for release channels for channel names commonly found in crashes with canonical names.
- special_product_platforms
- contains mapping information for rewriting data from FTP-scraping to have the correct product and platform. Currently used only for Fennec.
- transform_rules
- contains rule data for rewriting crashes by the processors. May be used in the future for other rule-based rewriting by other components.
These tables support the application which emails crash reporters with follow-ups. As such, access to these tables will be restricted.
processor management tables
These tables are used to coordinate activities of the up-to-120 processors and the monitor.
- jobs
- The current main queue for crashes waiting to be processed.
- priorityjobs
- The queue for user-requested “priority” crash processing.
- processors
- The registration list for currently active processors.
UI management tables
- sessions
- contains session information for people logged into the administration interface for Socorro.
monitoring tables
- replication_test
- Contains a timestamp for ganglia to measure the speed of replication.
cronjob and database management
These tables support scheduled tasks which are run in Socorro.
- report_partition_info
- contains configuration information on how the partitioning cronjob needs to partition the various partitioned database tables.
- socorro_db_version
- contains the socorro version of the current database. updated by the upgrade scripts.
- socorro_db_version_history
- contains the history of version upgrades of the current database.
14.5. Creating a New Matview¶
A materialized view, or “matview”, is the result of a query stored as a table in the PostgreSQL database. Matviews make user interfaces much more responsive by eliminating searches over many GB of sparse data at request time. The majority of the time, new matviews will have the following characteristics:
- they will pull data from reports_clean and/or reports_user_info
- they will be updated once per day and store daily summary data
- they will be updated by a cron job calling a stored procedure
The rest of this guide assumes that all three conditions above are true. For matviews for which one or more conditions are not true, consult the PostgreSQL DBAs for your matview.
14.5.1. Do I Want a Matview?¶
Before proceeding to construct a new matview, test the responsiveness of simply running a query over reports_clean and/or reports_user_info. You may find that the query returns fast enough ( < 100ms ) without its own matview. Remember to test the extreme cases: Firefox release version on Windows, or Fennec aurora version.
Also, matviews are really only effective if they are smaller than 1/4 the size of the base data from which they are constructed. Otherwise, it’s generally better to simply look at adding new indexes to the base data. Try populating a couple days of the matview, ad-hoc, and checking its size (pg_total_relation_size()) compared to the base table from which it’s drawn. The new signature summaries were a good example of this; the matviews to meet the spec would have been 1/3 the size of reports_clean, so we added a couple new indexes to reports_clean instead.
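One way to make that size comparison, assuming the product_domain_counts example matview used later in this chapter and an illustrative reports_clean partition name:

SELECT pg_size_pretty(pg_total_relation_size('product_domain_counts')) AS matview_size,
       pg_size_pretty(pg_total_relation_size('reports_clean_20120101')) AS base_partition_size;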
14.5.2. Components of a Matview¶
In order to create a new matview, you will create or modify five or six things:
- a table to hold the matview data
- an update function to insert new matview data once per day
- a backfill function to backfill one day of the matview
- add a line in the general backfill_matviews function
- if the matview is to be backfilled from deployment, a script to do this
- a test that the matview is being populated correctly.
The final point is not yet addressed by a test framework for Socorro, so we’re skipping it currently.
For the rest of this doc, please refer to the template matview code sql/templates/general_matview_template.sql in the Socorro source code.
14.5.3. Creating the Matview Table¶
The matview table should be the basis for the report or screen you want. It’s important that it be able to cope with all of the different filter and grouping criteria which users are allowed to supply. On the other hand, most of the time it’s not helpful to try to have one matview support several different reports; the matview gets bloated and slow.
In general, each matview will have the following things:
- one or more grouping columns
- a report_date column
- one or more summary data columns
If they are available, all columns should use surrogate keys to lookup lists (i.e. use signature_id, not the full text of the signature). Generally the primary key of the matview will be the combination of all grouping columns plus the report date.
So, as an example, we’re going to create a simple matview for summarizing crashes per product and web domain. While it’s unlikely that such a matview would be useful in practice (we could just query reports_clean directly) it makes a good example. Here’s the model for the table:
table product_domain_counts
    product_version
    domain
    report_date
    report_count
    key product_version, domain, report_date
We actually use the custom procedure create_table_if_not_exists() to create this. This function handles idempotence, permissions, and secondary indexes for us, like so:
SELECT create_table_if_not_exists('product_domain_counts', $x$
CREATE TABLE product_domain_counts (
    product_version_id INT NOT NULL,
    domain_id INT NOT NULL,
    report_date DATE NOT NULL,
    report_count INT NOT NULL DEFAULT 0,
    constraint product_domain_counts_key primary key ( product_version_id, domain_id, report_date )
);
$x$, 'breakpad_rw', ARRAY['domain_id'] );
See DatabaseAdminFunctions in the docs for more information about the function.
You’ll notice that the resulting matview uses the surrogate keys of the corresponding lookup lists rather than the actual values. This is to keep matview sizes down and improve performance. You’ll also notice that there are no foreign keys to the various lookup list tables; this is partly a performance optimization, but mostly because, since matviews are populated by stored procedure, validating input is not critical. We also don’t expect to need cascading updates or deletes on the lookup lists.
14.5.4. Creating The Update Function¶
Once you have the table, you’ll need to write a function to be called by cron once per day in order to populate the matview with new data.
This function will:
- be named update_{name_of_matview}
- take two parameters, a date and a boolean
- return a boolean, with true = success and ERROR = failure
- check if data it depends on is available
- check if it’s already been run for the day
- pull its data from reports_clean, reports_user_info, and/or other matviews (_not_ reports or other raw data tables)
So, here’s our update function for the product_domains table:
CREATE OR REPLACE FUNCTION update_product_domain_counts (
    updateday DATE, checkdata BOOLEAN default TRUE )
RETURNS BOOLEAN
LANGUAGE plpgsql
SET work_mem = '512MB'
SET temp_buffers = '512MB'
SET client_min_messages = 'ERROR'
AS $f$
BEGIN
-- this function populates a daily matview
-- for crash counts by product and domain
-- depends on reports_clean

-- check if we've been run
IF checkdata THEN
    PERFORM 1 FROM product_domain_counts
    WHERE report_date = updateday
    LIMIT 1;
    IF FOUND THEN
        RAISE EXCEPTION 'product_domain_counts has already been run for %.',updateday;
    END IF;
END IF;

-- check if reports_clean is complete
IF NOT reports_clean_done(updateday) THEN
    IF checkdata THEN
        RAISE EXCEPTION 'Reports_clean has not been updated to the end of %',updateday;
    ELSE
        RETURN TRUE;
    END IF;
END IF;

-- now insert the new records
-- this should be some appropriate query, this simple group by
-- is just provided as an example
INSERT INTO product_domain_counts
    ( product_version_id, domain_id, report_date, report_count )
SELECT product_version_id, domain_id,
    updateday,
    count(*)
FROM reports_clean
WHERE domain_id IS NOT NULL
    AND date_processed >= updateday::timestamptz
    AND date_processed < ( updateday + 1 )::timestamptz
GROUP BY product_version_id, domain_id;

RETURN TRUE;
END; $f$;
Note that the update functions could be written in PL/python if you wish; however, there isn’t yet a template for that.
14.5.5. Creating The Backfill Function¶
The second function which needs to be created is one for backfilling data for specific dates, for when we need to backfill missing or corrected data. This function will also be used to fill in data when we first deploy the matview.
The backfill function will generally be very simple; it just calls a delete for the days data and then the update function, with the “checkdata” flag disabled:
CREATE OR REPLACE FUNCTION backfill_product_domain_counts(
    updateday DATE )
RETURNS BOOLEAN
LANGUAGE plpgsql AS
$f$
BEGIN

DELETE FROM product_domain_counts WHERE report_date = updateday;
PERFORM update_product_domain_counts(updateday, false);

RETURN TRUE;
END; $f$;
14.5.6. Adding The Function To The Omnibus Backfill¶
Usually when we backfill data we recreate all matview data for the period affected. This is accomplished by inserting it into the backfill_matviews table:
INSERT INTO backfill_matviews ( matview, function_name, frequency )
VALUES ( 'product_domain_counts', 'backfill_product_domain_counts', 'daily' );
NOTE: the above is not yet active. Until it is, send a request to Josh Berkus to add your new backfill to the omnibus backfill function.
14.5.7. Filling in Initial Data¶
Generally when creating a new matview, we want to fill in two weeks or so of data. This can be done with either a Python or a PL/pgSQL script. A PL/pgSQL script would be created as a SQL file and look like this:
DO $f$
DECLARE
    thisday DATE := '2012-01-14';
    lastday DATE;
BEGIN
    -- set backfill to the last day we have ADI for
    SELECT max("date") INTO lastday FROM raw_adi;

    WHILE thisday <= lastday LOOP
        RAISE INFO 'backfilling %', thisday;
        PERFORM backfill_product_domain_counts(thisday);
        thisday := thisday + 1;
    END LOOP;
END;$f$;
This script would then be checked into the set of upgrade scripts for that version of the database.
14.6. Database Admin Function Reference¶
What follows is a listing of custom functions written for Socorro in the PostgreSQL database which are intended for database administration, particularly scheduled tasks. Many of these functions depend on other, internal functions which are not documented.
All functions below return BOOLEAN, with TRUE meaning completion, and throw an ERROR if they fail, unless otherwise noted.
14.6.1. MatView Functions¶
These functions manage the population of the many Materialized Views in Socorro. In general, for each matview there are two functions which maintain it. In the cases where these functions are not listed below, assume that they fit this pattern.
update_{matview_name} (
    updateday DATE optional default yesterday,
    checkdata BOOLEAN optional default true,
    check_period INTERVAL optional default '1 hour'
)

fills in one day of the matview for the first time
will error if data is already present, or source data is missing

backfill_{matview_name} (
    updateday DATE optional default yesterday,
    checkdata BOOLEAN optional default true,
    check_period INTERVAL optional default '1 hour'
)

deletes one day of data for the matview and recreates it.
will warn, but not error, if source data is missing
safe for use without downtime
More detail on the parameters:
- updateday
- UTC day to run the update/backfill for. Also the UTC day to check for conflicting or missing dependent data.
- checkdata
- Whether or not to check for conflicting data (i.e. has this already been run?), and for missing upstream data needed to run the fill. If checkdata=false, function will just emit NOTICEs and return FALSE if upstream data is not present.
- check_period
- For functions which depend on reports_clean, the window of reports_clean to check for data being present. This is because at Mozilla we check to see that the last hour of reports_clean is filled in, but open source users need a larger window.
Matview functions return a BOOLEAN which will have one of three results: TRUE, FALSE, or ERROR. What these mean generally depend on whether or not checkdata=on. It also returns an error string which gives more information about what it did.
If checkdata=TRUE (default):
- TRUE
- matview function ran and filled in data.
- FALSE
- matview update has already been run for the relevant period. no changes to data made, and warning returned.
- ERROR
- underlying data is missing (i.e. no crashes, no raw_adi, etc.) or some unexpected error condition
IF checkdata=FALSE:
- TRUE
- matview function ran and filled in data.
- FALSE
- matview update has already been run for the relevant period, or source data (crashes, adu, etc.) is missing. no changes to data made, and no warning made.
- ERROR
- some unexpected error condition.
Or, as a grid of results (where * indicates that a warning message is returned as well):

result | checkdata=TRUE | checkdata=FALSE
TRUE | data filled in | data filled in
FALSE | already run* | already run, or source data missing
ERROR | source data missing, or unexpected error | unexpected error
Exceptions to the above are generally for procedures which need to run hourly or more frequently (e.g. update_reports_clean, reports_duplicates). Also, some functions have shortcut names where they don’t use the full name of the matview (e.g. update_adu).
Note that the various matviews can take radically different amounts of time to update or backfill ... from a couple of seconds to 10 minutes for one day.
In addition, there are several procedures which are designed to update or backfill multiple matviews for a range of days. These are designed for when there has been some kind of widespread issue in crash processing and a bunch of crashes have been reprocessed and need to be re-aggregated.
These mass-backfill functions generally give a lot of command-line feedback on their progress, and should be run in a screen session, as they may take hours to complete. These functions, as the most generally used, are listed first. If you are doing a mass-backfill, you probably want to limit the backfill to a week at a time in order to prevent it from running too long before committing.
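For example, a month of re-aggregation might be split into week-long chunks like this (dates are illustrative; see backfill_matviews below for the parameters):

SELECT backfill_matviews('2012-06-01', '2012-06-07');
SELECT backfill_matviews('2012-06-08', '2012-06-14');
SELECT backfill_matviews('2012-06-15', '2012-06-21');
SELECT backfill_matviews('2012-06-22', '2012-06-30');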
14.6.2. Hourly Matview Update Functions¶
These need to be run every hour, for each hour. None of them take the standard parameters.
Since update_product_versions is cumulative, it needs to only be run once.
14.6.3. Daily Matview Update Functions¶
These daily functions generally accept the parameters given above. Unless otherwise noted, all of them depend on all of the hourly functions having completed for the day.
Functions marked “last day only” do not accumulate data, but display it only for the last day they were run. As such, there is no need to fill them in for each day.
14.6.4. Other Matview Functions¶
Matview functions which don’t fit the parameters above include:
14.6.5. backfill_matviews¶
Purpose: backfills data for all matviews for a specific range of dates. For use when data is either missing or needs to be retroactively corrected.
Called By: manually by admin as needed
backfill_matviews (
    startdate DATE,
    optional enddate DATE default current_date,
    optional reportsclean BOOLEAN default true
)

SELECT backfill_matviews( '2011-11-01', '2011-11-27', false );
SELECT backfill_matviews( '2011-11-01' );
- startdate
- the first date to backfill
- enddate
- the last date to backfill. defaults to the current UTC date.
- reportsclean
- whether or not to backfill reports_clean as well. defaults to true. Supplied because the backfill of reports_clean takes a lot of time.
14.6.6. backfill_reports_clean¶
Purpose: backfill only the reports_clean normalized fact table.
Called By: admin as needed
backfill_reports_clean (
    starttime TIMESTAMPTZ,
    endtime TIMESTAMPTZ
)

SELECT backfill_reports_clean ( '2011-11-17', '2011-11-29 14:00:00' );
- starttime
- timestamp to start backfill
- endtime
- timestamp to halt backfill at
Note: if backfilling less than 1 day, will backfill in 1-hour increments. If backfilling more than one day, will backfill in 6-hour increments. Can take a long time to backfill more than a couple of days.
14.6.7. update_product_versions¶
Purpose: updates the list of product_versions and product_version_builds based on the contents of releases_raw, products, release_repositories, special_product_platforms, and for B2G: update_channel_map, raw_update_channels.
Called By: daily cron job
update_product_versions (
    product_window INTEGER Default 30
)

SELECT update_product_versions ( );
Notes: requires no parameters by default, as the product update is always cumulative and by default is run daily. As of 2.3.5, only looks at product_versions with build dates in the last 30 days. There is no backfill function because it is always a cumulative update.
This function is complex. If implementing this outside of Mozilla, a user may wish to create a simpler function that just inserts data into products and product_versions.
14.6.8. update_rank_compare, backfill_rank_compare¶
Purpose: updates “rank_compare” based on the contents of the reports_clean table
Called By: daily cron job
Note: this matview is not historical, but contains only one day of data. As such, running either the update or backfill function replaces all existing data. Since it needs an exclusive lock on the matview, it is possible (though unlikely) for it to fail to obtain the lock and error out.
14.6.9. reports_clean_done¶
Purpose: supports other admin functions by checking if reports_clean is complete to the end of the day.
Called By: other update functions
reports_clean_done (
    updateday DATE,
    check_period INTERVAL optional default '1 hour'
)

SELECT reports_clean_done('2012-06-12');
SELECT reports_clean_done('2012-06-12','12 hours');
14.6.10. Schema Management Functions¶
These functions support partitioning, upgrades, and other management of tables and views.
14.6.11. weekly_report_partitions¶
Purpose: to create new partitions for the reports table and its child tables every week.
Called By: weekly cron job
weekly_report_partitions (
    optional numweeks integer default 2,
    optional targetdate date default current_date
)

SELECT weekly_report_partitions();
SELECT weekly_report_partitions(3,'2011-11-09');
- numweeks
- number of weeks ahead to create partitions
- targetdate
- date for the starting week, if not today
14.6.12. try_lock_table¶
Purpose: attempt to get a lock on a table, looping with sleeps until the lock is obtained.
Called by: various functions internally
try_lock_table (
    tabname TEXT,
    mode TEXT optional default 'EXCLUSIVE',
    attempts INT optional default 20
) returns BOOLEAN

IF NOT try_lock_table('rank_compare', 'ACCESS EXCLUSIVE') THEN
    RAISE EXCEPTION 'unable to lock the rank_compare table for update.';
END IF;
- tabname
- the table name to lock
- mode
- the lock mode per PostgreSQL docs. Defaults to ‘EXCLUSIVE’.
- attempts
- the number of attempts to make, with 3 second sleeps between each. optional, defaults to 20.
Returns TRUE for table locked, FALSE for unable to lock.
14.6.13. create_table_if_not_exists¶
Purpose: creates a new table, skipping if the table is found to already exist.
Called By: upgrade scripts
create_table_if_not_exists (
    tablename TEXT,
    declaration TEXT,
    tableowner TEXT optional default 'breakpad_rw',
    indexes TEXT ARRAY default empty list
)

SELECT create_table_if_not_exists ( 'rank_compare', $q$
create table rank_compare (
    product_version_id int not null,
    signature_id int not null,
    rank_days int not null,
    report_count int,
    total_reports bigint,
    rank_report_count int,
    percent_of_total numeric,
    constraint rank_compare_key primary key ( product_version_id, signature_id, rank_days )
);$q$, 'breakpad_rw',
ARRAY [ 'product_version_id,rank_report_count', 'signature_id' ]);
- tablename
- name of the new table to create
- declaration
- full CREATE TABLE sql statement, plus whatever other SQL statements you only want to run on table creation such as priming it with a few records and creating the primary key. If running more than one SQL statement, separate them with semicolons.
- tableowner
- the ROLE which owns the table. usually ‘breakpad_rw’. optional.
- indexes
- an array of sets of columns to create regular btree indexes on. use the array declaration as demonstrated above. default is to create no indexes.
Note: this is the best way to create new tables in migration scripts, since it allows you to rerun the script multiple times without erroring out. However, be aware that it only checks for the existence of the table, not its definition, so if you modify the table definition you'll need to manually drop and recreate it.
14.6.14. add_column_if_not_exists¶
Purpose: allow idempotent addition of new columns to existing tables.
Called by: upgrade scripts
add_column_if_not_exists ( tablename text, columnname text, datatype text, nonnull boolean default false, defaultval text default '', constrainttext text default '' ) returns boolean

SELECT add_column_if_not_exists ( 'product_version_builds','repository','citext' );
- tablename
- name of the existing table to which to add the column
- columnname
- name of the new column to add
- datatype
- data type of the new column to add
- nonnull
- is the column NOT NULL? defaults to false. a defaultval must be supplied if nonnull is true.
- defaultval
- default value for the column. this will cause the table to be rewritten if set; beware of using on large tables.
- constrainttext
- any constraint, including foreign keys, to be added to the column, written as a table constraint. will cause the whole table to be checked; beware of adding to large tables.
Note: this function only checks whether the table and column exist, and does nothing if they do. It does not check whether the data type, constraints, and defaults match.
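A fuller, hedged example exercising the optional arguments; the column name here is made up purely for illustration.

SELECT add_column_if_not_exists (
    'product_versions',      -- tablename
    'example_flag',          -- columnname (hypothetical)
    'boolean',               -- datatype
    true,                    -- nonnull
    'false',                 -- defaultval; required because nonnull is true
    ''                       -- constrainttext (none)
);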
14.6.15. drop_old_partitions¶
Purpose: to purge old raw data quarterly per data expiration policy.
Called By: manually by DBA.
drop_old_partitions ( mastername text, cutoffdate date ) returns BOOLEAN

SELECT drop_old_partitions ( 'reports', '2011-11-01' );
- mastername
- name of the partition master, e.g. ‘reports’, ‘extensions’, etc.
- cutoffdate
- earliest date of data to retain.
Notes: drop_old_partitions assumes a table_YYYYMMDD naming format. It requires a lock on the partitioned tables, which generally means shutting down the processors.
14.6.17. add_old_release¶
Obsolete; Removed.
14.6.18. add_new_release¶
Purpose: allows admin users to manually add a release to the releases_raw table.
Called By: admin interface
add_new_release ( product citext, version citext, release_channel citext, build_id numeric, platform citext, beta_number integer default NULL, repository text default 'release', update_products boolean default false, ignore_duplicates boolean default false ) returns BOOLEAN

SELECT add_new_release('WaterWolf','5.0','release',201206271111,'osx');
SELECT add_new_release('WaterWolf','6.0','beta',201206271198,'osx',2, 'waterwolf-beta',true);
Notes: validates the contents of the required fields. If update_products=true, will run the update_products hourly job to process the new release into product_versions etc. If ignore_duplicates = true, will simply ignore duplicates instead of erroring on them.
14.6.19. edit_featured_versions¶
Purpose: let admin users change the featured versions for a specific product.
Called By: admin interface
edit_featured_versions ( product citext, featured_versions LIST of text ) returns BOOLEAN

SELECT edit_featured_versions ( 'Firefox', '15.0a1','14.0a2','13.0b2','12.0' );
SELECT edit_featured_versions ( 'SeaMonkey', '2.9.' );
Notes: completely replaces the list of currently featured versions. Will check that versions featured have not expired. Does not validate product names or version numbers, though.
14.6.20. add_new_product¶
Purpose: allows adding new products to the database.
Called By: DBA on new product request.
add_new_product ( prodname text, initversion major_version, prodid text default null, ftpname text default null, release_throttle numeric default 1.0 ) returns BOOLEAN
- prodname
- product name, properly cased for display
- initversion
- first major version number of the product which should appear
- prodid
- “Product ID” for the product, if available
- ftpname
- Product name in the FTP release repo, if different from display name
- release_throttle
- If throttling back the number of release crashes processed, set here
Notes: add_new_product will return FALSE rather than erroring if the product already exists.
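A hedged example call, reusing the WaterWolf sample product used elsewhere in this chapter; the optional argument values are made up for illustration.

SELECT add_new_product('WaterWolf', '1.0');
SELECT add_new_product('WaterWolf', '1.0', null, 'waterwolf', 0.1);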
14.6.21. truncate_partitions¶
Purpose: Truncates crash report partitions for raw_crashes and processed_crashes
Called By: crontabber job TruncatePartitionsCronApp on a weekly basis
truncate_partitions ( weeks_to_keep INTEGER ) RETURNS BOOLEAN
- weeks_to_keep
- Number of weeks of data to preserve
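A hedged example call; the retention period shown is illustrative, not a recommendation.

-- Keep roughly one year (52 weeks) of raw and processed crash data.
SELECT truncate_partitions(52);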
14.7. Custom Time-Date Functions¶
The present Socorro database needs to do a lot of time, date and timezone manipulation. This is partly a natural consequence of the application, and the need to use both DATE and TIMESTAMPTZ values. The greater need is legacy timestamp conversion, however; currently the processors save crash reporting timestamps as TIMESTAMP WITHOUT TIME ZONE in Pacific time, whereas the rest of the database is TIMESTAMP WITH TIME ZONE in UTC. This necessitates a lot of tricky time zone conversions.
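As an illustration of the conversion involved (this is plain PostgreSQL, not one of the functions below), a naive Pacific-time timestamp can be reinterpreted as a timestamp with time zone like this:

SELECT '2011-11-25 15:23:11'::timestamp AT TIME ZONE 'America/Los_Angeles';
-- yields 2011-11-25 23:23:11+00 when the session time zone is UTC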
The functions below are meant to make it easier to write queries which return correct results based on dates and timestamps.
14.7.1. tstz_between¶
tstz_between ( tstz TIMESTAMPTZ, bdate DATE, fdate DATE ) RETURNS BOOLEAN

SELECT tstz_between ( '2011-11-25 15:23:11-08', '2011-11-25', '2011-11-26' );
Checks whether a timestamp with time zone is between two UTC dates, inclusive of the entire ending day.
14.7.2. utc_day_is¶
utc_day_is ( TIMESTAMPTZ, TIMESTAMP or DATE ) RETURNS BOOLEAN

SELECT utc_day_is ( '2011-11-26 15:23:11-08', '2011-11-28' );
Checks whether the provided timestamp with time zone is within the provided UTC day, expressed as either a timestamp without time zone or a date.
14.7.3. utc_day_near¶
utc_day_near ( TIMESTAMPTZ, TIMESTAMP or DATE ) RETURNS BOOLEAN

SELECT utc_day_near ( '2011-11-26 15:23:11-08', '2011-11-28' );
Checks whether the provided timestamp with time zone is within an hour of the provided UTC day, expressed as either a timestamp without time zone or a date. Used for matching when related records may cross over midnight.
14.7.4. week_begins_utc¶
week_begins_utc ( TIMESTAMP or DATE ) RETURNS timestamptz

SELECT week_begins_utc ( '2011-11-25' );
Given a timestamp or date, returns the timestamp with time zone corresponding to the beginning of the week in UTC time. Used for partitioning data by week.
14.7.5. week_ends_utc¶
week_ends_utc ( TIMESTAMP or DATE ) RETURNS timestamptz

SELECT week_ends_utc ( '2011-11-25' );
Given a timestamp or date, returns the timestamp with time zone corresponding to the end of the week in UTC time. Used for partitioning data by week.
14.7.6. week_begins_partition¶
week_begins_partition ( partname TEXT ) RETURNS timestamptz

SELECT week_begins_partition ( 'reports_20111219' );
Given a partition table name, returns a timestamptz of the date and time that weekly partition starts.
14.7.7. week_ends_partition¶
week_ends_partition ( partname TEXT ) RETURNS timestamptz

SELECT week_ends_partition ( 'reports_20111219' );
Given a partition table name, returns a timestamptz of the date and time that weekly partition ends.
14.7.8. week_begins_partition_string¶
week_begins_partition_string ( partname TEXT ) RETURNS text

SELECT week_begins_partition_string ( 'reports_20111219' );
Given a partition table name, returns a string of the date and time that weekly partition starts in the format ‘YYYY-MM-DD HR:MI:SS UTC’.
14.7.9. week_ends_partition_string¶
week_ends_partition_string ( partname TEXT ) RETURNS text

SELECT week_ends_partition_string ( 'reports_20111219' );
Given a partition table name, returns a string of the date and time that weekly partition ends in the format ‘YYYY-MM-DD HR:MI:SS UTC’.
14.8. Database Misc Function Reference¶
What follows is a listing of custom functions written for Socorro in the PostgreSQL database which are useful for application development, but do not fit in the “Admin” or “Datetime” categories.
14.8.2. build_numeric¶
build_numeric ( build TEXT ) RETURNS NUMERIC

SELECT build_numeric ( '20110811165603' );
Converts a build ID string, as supplied by the processors/breakpad, into a numeric value on which we can do computations and derive a date. Returns NULL if the build string is a non-numeric value and thus corrupted.
14.8.3. build_date¶
build_date ( buildid NUMERIC ) RETURNS DATE

SELECT build_date ( 20110811165603 );
Takes a numeric build_id and returns the date of the build.
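The two functions compose naturally when starting from the raw build string:

SELECT build_date(build_numeric('20110811165603'));  -- 2011-08-11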
14.8.4. API Functions¶
These functions support the middleware, making it easier to look up certain things in the database.
14.8.5. get_product_version_ids¶
get_product_version_ids ( product CITEXT, versions VARIADIC CITEXT )

SELECT get_product_version_ids ( 'Firefox','11.0a1' );
SELECT get_product_version_ids ( 'Firefox','11.0a1','11.0a2','11.0b1');
Takes a product name and a list of version_strings, and returns an array (list) of surrogate keys (product_version_ids) which can then be used in queries like:
SELECT *
FROM reports_clean
WHERE date_processed BETWEEN '2012-03-21' AND '2012-03-28'
  AND product_version_id = ANY ( $list );
14.8.6. Mathematical Functions¶
These functions do math operations which we need to do repeatedly, saving some typing.
14.8.7. crash_hadu¶
crash_hadu ( crashes INT8, adu INT8, throttle NUMERIC default 1.0 ) returns NUMERIC (12,3)
Returns the “crashes per hundred ADU”, by this formula:
( crashes / throttle ) * 100 / adu
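For example, 300 crashes against 150,000 ADU with a 10% throttle works out to 2 crashes per hundred ADU:

SELECT crash_hadu(300, 150000, 0.1);  -- ( 300 / 0.1 ) * 100 / 150000 = 2.000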
14.8.8. Internal Functions¶
These functions are designed to be called by other functions, so are sparsely documented.
14.8.9. nonzero_string¶
nonzero_string ( TEXT or CITEXT ) returns boolean
Returns FALSE if the string is empty, contains only spaces, or is NULL; TRUE otherwise.
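Per that documented behavior:

SELECT nonzero_string('');       -- FALSE
SELECT nonzero_string('   ');    -- FALSE
SELECT nonzero_string(NULL);     -- FALSE (per the documented behavior)
SELECT nonzero_string('abc');    -- TRUE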
14.8.10. validate_lookup¶
validate_lookup (
    ltable TEXT,     -- lookup table name
    lcol TEXT,       -- lookup column name
    lval TEXT,       -- value to look up
    lmessage TEXT    -- name of the entity in error messages
) returns boolean
Returns TRUE if the value is present in the named lookup table. Raises a custom ERROR if it’s not present.
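A hedged example; the lookup table and column names here are assumptions for illustration only.

SELECT validate_lookup('release_channels', 'release_channel', 'release', 'release channel');
-- returns TRUE if 'release' exists in release_channels.release_channel,
-- otherwise raises an ERROR naming the 'release channel' entity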
14.9. Dumping Dump Tables¶
A work item that came out of the Socorro Postgres work week is to dump the dump tables and store cooked dumps as gzipped files.
- Drop the dumps table
- Convert each dumps table row to a compressed file on disk
14.9.2. Library support¶
'done' as of 2009-05-07 in socorro.lib.dmpStorage (coding and testing are done; integration testing is done; 'go live' is today)
Socorro UI
/report/index/{uuid}
- Will stop using the dumps table.
- Will start using gzipped files.
- Will use the report uuid to locate the dump on a file system.
Will use apache mod-rewrite to serve the actual file. The rewrite rule is based on the uuid, and is ‘simple’: AABBCCDDEEFFGGHHIIJJKKLLM2090308.jsonz => AA/BB/AABBCCDDEEFFGGHHIIJJKKLLM2090308.jsonz
report/index will include a link to JSON dump
<link rel="alternate" type="application/json" href="/reporter/dumps/cdaa07ae-475b-11dd-8dfa-001cc45a2ce4.jsonz"/>
14.9.3. Dump file format¶
- Will be gzip compressed JSON encoded cooked dump files
- Partial JSON file
- Full JSONZ file
14.9.4. On Disk Location¶
application.conf dumpPath. Example for kahn:

$config['dumpPath'] = '/mnt/socorro_dumps/named';
In the dumps directory we will have an .htaccess file:
AddType "application/json; charset=UTF-8" jsonz
AddEncoding gzip jsonz
Webhead will serve these files as:
Content-Type: application/json; charset=utf-8
Content-Encoding: gzip
Note: You’d expect the dump files to be named json.gz, but this is broken in Safari. By setting HTTP headers and naming the file jsonz, an unknown file extension, this works across browsers.
14.9.5. Socorro UI¶
- Existing URL won’t change.
- A second JSON request back to the server will load the jsonz file
Example:
mod rewrite rules will match /dump/*.jsonz and change them to access a file share.
14.9.6. Future Enhancement¶
A future enhancement, if we find that the webheads are CPU-bound, would be to move populating the report/index page to the client side.
14.9.7. Test Page¶
Uses the browser to decompress a gzip compressed JSON file during an AJAX request, pulls it apart and appends it to the page.
Test file made with gzip dump.json | http://socorro.readthedocs.io/en/latest/services/postgres.html | 2017-11-17T19:09:54 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../_images/core-socorro.png', '../_images/core-socorro.png'],
dtype=object)
array(['../_images/helper-socorro.png', '../_images/helper-socorro.png'],
dtype=object) ] | socorro.readthedocs.io |
Clustering sequences into clonal groups.
Determining a clustering threshold¶
Before running DefineClones, it is important to determine an
appropriate threshold for trimming the hierarchical clustering into B cell
clones. The distToNearest
function in the SHazaM R package calculates
the distance between each sequence in the data and its nearest neighbor. The
resulting distribution.
Using the length normalization parameter ensures that mutations are weighted equally
regardless of junction sequence length. The distance to nearest neighbor distribution
for the example data is shown below. The threshold is
0.16 - indicated
by the red dotted line.
Download the R Script to generate
the distance to nearest neighbor distribution.
See also
For additional details see the distToNearest documentation.
Assigning clones¶
There are several parameter choices when grouping Ig sequences into B cell
clones. The argument
--act set
accounts for ambiguous V-gene and J-gene calls when grouping similar sequences. The
distance metric
--model ham
is nucleotide Hamming distance. Because
the
ham distance model is symmetric,
the
--sym min argument can be left as default.
Because the threshold was generated using length normalized distances, the
--norm len argument is selected with the
resultant threshold
--dist 0.16:
DefineClones.py bygroup -d S43_db-pass_parse-select.tab --act set --model ham \ --sym min --norm len --dist 0.16 | http://changeo.readthedocs.io/en/version-0.3.7---igblast-1.7-fix/examples/cloning.html | 2017-11-17T19:19:25 | CC-MAIN-2017-47 | 1510934803906.12 | [] | changeo.readthedocs.io |
A new synapse mediator ). This mediator can be used to perform a distributed transaction. The synapse configuration has been extended to add explicit transaction markers. This means that you can use the synapse configuration language to define the start, end etc. of your transaction. It is the responsibility of the user to define when to start, commit or rollback the transaction. For example, you can mark the start of a transaction at the start of a database commit, end of the transaction at the end of the database commit and you can mark rollback transaction if a failure occurs.
<transaction action="new|use-existing-or-new|fault-if-no-tx|commit|rollback|suspend|resume"/>
The
action attribute has the following meanings:
Note
To use the Transaction Mediator, you need to have a JTA provider in your environment, for example, JBoss.
Use the following scenario to show how the Transaction Mediator works. Assume we have a record in one database and we want to delete that record from the first database and add it to the second database (these two databases can be run on the same server or they can be in two remote servers). The database tables are defined in such a way that the same entry cannot be added twice. So, in the successful scenario, the record will be deleted from the first table (of the first database) and will be added to the second table (of the second database). In a failure scenario (the record is already in the second database), no record will be deleted from first table and no record will be added into the second database.
Note
This scenario applies to version 3.0. In versions after, there is a Transaction Manager in Carbon itself, so you don't have to put the ESB in an application server to get it working.
Since the Transaction Mediator is implemented using JTA, you need to have a JTA provider. Here we used JBoss J2EE application server (which implements the transaction support through Arjuna TS) as the JTA provider, so it is necessary to deploy the WSO2 ESB in JBoss Application server (AS). Apache Derby as the database server.
JBoss server and the Derby database server have the characteristics mentioned above.
1. Unzip the WSO2 ESB distribution to a place of your choice. And then remove the
geronimo-jta_1.1_spec-1.1.0.wso2v1.jar ( This JAR file can be found in
$ESB_HOME/repository/components/plugins).
Tip
The reason is that the implementation of
javax.transaction.UserTransaction of JTA provider (here JBoss) is used there and if they both are in class path, there is a classloading issue which causes the transaction mediator to fail.
2. Deploy the WSO2 ESB on JBoss AS. The JBOSS installation path will be referred to as
$JBOSS_HOME and the WSO2 ESB repo location as
$CARBON_HOME.
3. Drop the derby client JARs (
derby.jar,
derbynet.jar and
derbyclient.jar) into
$CARBON_HOME/repository/components/lib folder and also into
$JBOSS_HOME/server/default/lib (here is used the default JBoss configuration) folder.
4. We use here a sample similar to #361, and the full Synapse configuration is shown below. (you can directly paste the following configuration into synapse configuration in
$ESB_HOME/repository/conf/synapse-config/synapse.xml). In the "In" sequence, we will send a message to the service and in the "Out" sequence we will delete an entry from the first database and update the second database with that entry. If we try to add an entry, which is already there in the second database, the whole transaction will rollback.
<definitions xmlns=""> >java:jdbc/XADerbyDS</dsName> <icClass>org.jnp.interfaces.NamingContextFactory</icClass> <url>localhost:1099</url> <user>esb</user> <password>esb<>java:jdbc/XADerbyDS1</dsName> <icClass>org.jnp.interfaces.NamingContextFactory</icClass> <url>localhost:1099</url> <user>esb</user> <password>esb</password> </pool> </connection> <statement> <sql>INSERT into company values (?,'c4',?)</sql> <parameter expression="//m0:return/m1:symbol/child::text()" xmlns: <parameter expression="//m0:return/m1:last/child::text()" xmlns: </statement> </dbreport> <transaction action="commit"/> <send/> </out> </sequence> </definitions>
5. To run the sample, you need two distributed Derby databases ("esbdb" and "esbdb1"). Refer here for details to set up the databases. The database table was created using the following SQL query.
Note
In the table schema we cannot have the same entry twice.
CREATE table company(name varchar(10) primary key, id varchar(10), price double);
Add few records to the two tables:
Database1:
INSERT into company values ('IBM','c1',0.0); INSERT into company values ('SUN','c2',0.0);
Database2:
INSERT into company values ('SUN','c2',0.0); INSERT into company values ('MSFT','c3',0.0);
Note
The order of the record matters.
6. Create two data source declarations for JBoss AS for the two distributed databases.
Datasource1:esb-derby-xa-ds.xml
<?xml version="1.0" encoding="UTF-8"?> <datasources> <xa-datasource> <jndi-name>jdbc/XADerbyDS<</xa-datasource-property> <xa-datasource-propertyesb</xa-datasource-property> <xa-datasource-propertyesb</xa-datasource-property> <metadata> <type-mapping>Derby</type-mapping> </metadata> </xa-datasource> </datasources>
Datasource2:esb-derby1-xa-ds.xml
<?xml version="1.0" encoding="UTF-8"?> <datasources> <xa-datasource> <jndi-name>jdbc/XADerbyDS1<1</xa-datasource-property> <xa-datasource-propertyesb</xa-datasource-property> <xa-datasource-propertyesb</xa-datasource-property> <metadata> <type-mapping>Derby</type-mapping> </metadata> </xa-datasource> </datasources>
Note
The two datasource file names should be
*-xa-ds.xml.
Drop the two datasource declarations above into
$JBOSS_HOME/server/default/deploy folder. Map the above
jndi names: drop the following
jboss-web.xml configuration into
$JBOSS_HOME/serer/default/deploy/esb.war/WEB-INF/.
<!DOCTYPE jboss-web PUBLIC "-//JBoss//DTD Web Application 5.0//EN" ""> <jboss-web> <resource-ref> <res-ref-name>jdbc/XADerbyDS</res-ref-name> <jndi-name>java:/XADerbyDS</jndi-name> </resource-ref> <resource-ref> <res-ref-name>jdbc/XADerbyDS1</res-ref-name> <jndi-name>java:/XADerbyDS1</jndi-name> </resource-ref> </jboss-web>
7. Go into
$JBOSS_HOME/bin and start the server. Run the
run.sh (run.bat) script.
Note
You need to set the
CARBON_HOME environment variable pointing to the Carbon repository location.
8. Try the samples. Refer to sample set up guide to know how you can set up the server. Deploy the
SimpleStockQuote service which comes with the WSO2 ESB samples.
Successful Scenario
1. To remove the IBM record from the first database and add it to the second database, run the sample with the following options.
ant stockquote -Daddurl= -Dtrpurl= -Dsymbol=IBM
2. Check both databases to see how the record is deleted from the first database and added to the second database.
Failure Scenario
1. Try to add an entry which is already there in the second database. This time use
Symbol SUN.
ant stockquote -Daddurl= -Dtrpurl= -Dsymbol=SUN
2. You will see how the fault sequence is executed and the whole transaction rollback. Check both databases again; there is no record deleted from the first database and no record added into the second database. | https://docs.wso2.com/display/ESB450/Transaction+Mediator+Example | 2017-11-17T19:30:34 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['/download/attachments/9371749/logo-esb.png?version=1&modificationDate=1346236987000&api=v2',
None], dtype=object) ] | docs.wso2.com |
Support
How does link creation work?¶
Three stages of a link¶
Redirect behavior and tracking¶
When your customer clicks the click tracking link in an email, the browser will generally open. Once in the browser, the click tracking redirect will happen, followed by an instant redirect to the Branch link. At this point, Branch will either stay in the browser, and load the original URL (if the app is not installed, or the customer is on a desktop device), or Branch will open the app and deep link to content. Branch uses the information from the original URL to deep link to the correct in-app content.
Universal links and click tracking¶
Apple introduced Universal Links starting with iOS 9. Apple introduced Universal Links starting with iOS 9. You must configure your app and your links in a specific way to enable Universal Link functionality. Branch guides developers through this process so that Branch links function as Universal Links.
For Universal Links to work, Apple requires that a file called an “Apple-App-Site-Association” (AASA) file must be hosted on the domain of the link in question. When the link is clicked, Apple will check for the presence of this file to decide whether or not to open the app. All Branch links are Universal Links, because we will host this file securely on your Branch link domain.
When you click a Branch link directly from an email inside the Mail app on iOS 9+, it functions as a Universal Link - it redirects directly into the desired app. However, if you put a Branch Universal Link behind a click tracking URL, it won’t deep link into the app. This is because generally, a click tracking URL is not a Universal Link. If you’re not hosting that AASA file on the click tracking URL’s domain, you aren’t going to get Universal Link behavior for that link.
Solution
To solve this, Branch will host the AASA file on your click tracking domain. We’ll help you get set up with this.
Deep linking setup messages¶
In the Set up Deep Linking step of the email onboarding flow, you will see a result indicating the mapping between your web content and your app content.
We think you use your web URL for deep linking¶
If your webpage, for instance at the URL, has a tag like this:
<meta name="al:ios:url" content="shop://" />
or this:
<meta name="al:android:url" content="shop://shoes/brown-loafers" />
Your deep linking setup for email will use all or part of your web URL as a deep link value. It can use either the full URL including the protocol (), the full URL without the protocol (
shop.com/shoes/brown-loafers), or the path of the URL (
shoes/brown-loafers).
We think you host your deep link data on your website¶
If instead, your webpage has a tag like this:
<meta name="branch:deeplink:product_id" content="123456" />
or this:
<meta name="al:ios:url" content="shop://id/123456" />
Your deep linking setup for email will use the hosted deep link data method. This means that no mapping can be made to the URL, and meta tags that can be used for deep linking will be retrieved from your webpage on an ongoing basis.
We couldn't determine your deep linking setup from your web URL¶
If there are no meta tags for deep linking on your webpage, or you indicate that the mapping is incorrect, you can try a Branch link instead.
Here, you will want to enter a Branch link that opens to a page within your app (not the home screen).
When you click Submit, the link's values for
$canonical_url,
$desktop_url, and
$fallback_url will be compared against other values in the link. If there is a mapping between values for the full URL or the path of the URL, your deep linking setup for email will use those methods.
Test your link¶
When you submit a web URL or Branch link, you will be prompted with a test link. Click this link on iOS and Android devices, and verify that it will open your app to the right place.
Once you click Yes, your deep linking will be set up for email. When a user clicks a link in your emails, we will embed the full web URL, path of the web URL, or retrieved deep link data from the webpage into a Branch version of that link and pass it to your app, so that it will open to the right place. | https://docs.branch.io/pages/emails/support/ | 2017-11-17T19:17:49 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../../../img/pages/email/responsys/deep-linked-email-creation-flow.png',
'image'], dtype=object)
array(['../../../img/pages/email/responsys/deep-linked-email-post-click.png',
'image'], dtype=object)
array(['../../../img/pages/email/responsys/deep-linked-email-universal-links.png',
'image'], dtype=object)
array(['../../../img/pages/email/responsys/web-url-result.png', 'image'],
dtype=object)
array(['../../../img/pages/email/responsys/hosted-data-result.png',
'image'], dtype=object)
array(['../../../img/pages/email/responsys/enter-branch-link.png',
'image'], dtype=object)
array(['../../../img/pages/email/responsys/test-link.png', 'image'],
dtype=object) ] | docs.branch.io |
User manual¶
Below you’ll find an outline of the chapters in the user manual, and a set of links to each chapter’s main sections.
Back to main Binder documentation page
- What is Binder?
- Current project status
- Project goals: Binder’s long-term vision
- Technical overview
- Binder at MoMA: an example use case | http://binder.readthedocs.io/en/latest/user-manual/index.html | 2017-11-17T19:01:20 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../_images/binder_logo.png', 'Binder logo'], dtype=object)] | binder.readthedocs.io |
Firebase App Indexing
Overview¶
Google's App Indexing is a project that attempts to expose app results in Google searches performed on mobile devices. This project is formally called Firebase App Indexing.
At a high level, App Indexing has two themes to consider.
- Results, ranking and relevancy are based upon web scraping. App Indexing does not improve relevancy in results.
- App Indexing makes it so that web results also open up your app.
When enabling App Indexing, you will likely want to make these changes to your website, as well:
- Make your existing website support Apple's Universal Links and Android's App Links. After this, all of your links will correctly open the app and you're done.
- Add the undocumented header
<link rel="alternate" ..tags to your website for when Google crawls the page. Branch can assist with this using
autoAppIndex(), documented below.
If Google knows your website opens the app, when it shows up in a search result, and the user has the app installed, the app will open instead of the website, therefore achieveing App Indexing results in organic search portals.
Branch's App Indexing integration is designed for businesses that don't have a website, and want Branch to host their site for them. If you have a website, Branch can dynamically inject App Indexing tags through the WebSDK function
autoAppIndex() described here.
Note that in order for you to get traffic from this feature, your Branch link will need to appear in search results. We've just now supercharged our app indexing feature with AMP tech to leverage Google's new prioritization of these pages.
Setup¶
Define Your Content¶
The first step to listing your app content in Google is to tell Branch what the content is and how it should appear in search. Assuming you followed our get started guide, you have already indexed your content by creating Branch Universal Objects. You can create these objects using the native SDKs, where", "");
Enable App Indexing¶
Enable automatic sitemap generation on the Organic Search page of the Branch Dashboard. Check the
Automatic sitemap generation checkbox.
Once you enable this, your app will be included in our nightly job to automatically generate sitemaps. These sitemaps can be scraped by Google, and all of the included links can then be indexed.
After you've enabled App Indexing, this page will showcase the following data:
- The date the sitemap files were last generated (and included at least one of your links)
- The total number of links to unique pieces content that Branch has included in sitemaps
- The date Google last scraped your links
- The total number of times that Google has scraped links to your content
Both the sitemap itself and statistics about Google scraping your links are updated via nightly map-reduce jobs.
Advanced¶
Configure existing website for App Indexing¶
If you already have your own website, we recommend that you configure your own site for App Indexing rather than use Branch's hosted App Indexing. You want your main website, with your domain and SEO juice to appear in Google rather than try to push your
app.link domain into search results. Therefore, we recommend you go through a few steps to configure your site for App Indexing.
App Indexing, despite the confusing amount of literature out there, simply opens up your app when installed and falls back to your website when not. You actually don't need to use any of Google's tools (Firebase App Indexing) to accomplish this. Merely configuring your domain for Universal Links on iOS and App Links on Android will do the trick. Here are more details:
Recommended: Add Universal Link and App Link support to your domain¶
This is by far the easiest way to take advantage of Google App Indexing, and the recommended way per conversations that we've had with their team. All you need to do is configure Universal Links and Android App Links on your domain and your corresponding apps.
We've put together some handy guides on our blogs: - Enable Universal Links on your domain - Enable Android App Links on your domain
Feel free to drop us a line if you need help with this stuff.
Alternative: Have the WebSDK inject App Indexing tags into your Webpage¶
If you don’t want to implement Universal or App Links then you can allow the WebSDK to inject App Indexing meta tags between the head section of your webpage. These tags allow Google's web crawling bots to index your app content by launching your app through URI schemes.
This requires:
Branch to be integrated for URI based deep linking. Please ensure that steps 1, 2, 3 and 4 (iOS only) of the following guides are completed:
A call to
autoAppIndex()(a WebSDK function) to be made with the appropriate parameters (see below).
Ensure that you've placed the snippet from here somewhere between the
<head></head> tags of your webpage. Then position
branch.autoAppIndex({..}) below
branch.init() and with the optional parameters below:
branch.autoAppIndex({ iosAppId:'123456789', iosURL:'example/home/cupertino/12345', androidPackageName:'com.somecompany.app', androidURL:'example/home/cupertino/12345', data:{"walkScore":65, "transitScore":50} }, function(err) { console.log(err); });
After the WebSDK has initialized, the function will inject Firebase App Indexing tags between the head section of your webpage with the following format:
<html> <head> ... <link rel="alternate" href="android-app://{androidPackageName}/{androidURL}?{branch_tracking_params_and_additional_deep_link_data}"/> <link rel="alternate" href="ios-app://{iosAppId}/{iosURL}?{branch_tracking_params_and_additional_deep_link_data}"/> ... </head> <body> … </body>
Note: If optional parameters from above are not specified, Branch will try to build Firebase App Indexing tags using your page's App Links tags.
Alternatively, if optional parameters are specified but Firebase App Indexing tags already exist then this function will append Branch tracking params to the end of those tags and ignore what is passed into
.autoAppIndex().
For debugging purposes, you can check that the method is correctly inserting these tags by right clicking anywhere on your webpage in Chrome then clicking on inspect. After that, toggle the head section of your page's HTML and you should see the dynamically generated Firebase App Indexing tags.
Analytics related to Google's attempts to index your App's content via these tags can be found from Source Analytics in Dashboard where
channel is
Firebase App Indexing and
feature is
Auto App Indexing.
Testing with webmaster tools
We have read on Google's official blog that Googlebot renders javascript before it indexes webpages however, there are times where it may choose not to. The reasons why are unclear to us. Therefore, dynamically generated App Indexing meta tags created as part of this function may or may not appear in your tests with Webmaster Tools when you try to fetch and render as Googlebot.
Attribute app traffic to organic search¶
Curious as to how well your content is performing -- how many clicks and installs it is driving?
We automatically tag clicks on these links as coming from Google App Indexing. In the Click Flow section of our Dashboard's Summary page, you can filter for these clicks. Just select either
channel: google_search or
feature: google_app_index.
Hiding content from the index¶
Not all content is public, and not all content should be publicly indexed. If you want to enable Branch's automatic sitemap generation but exclude certain pieces of content, you can mark that content as private. You should set the content indexing mode for the individual Branch Universal Object. This property is called contentIndexMode.
iOS - Objective C
BranchUniversalObject *branchUniversalObject = [[BranchUniversalObject alloc] initWithCanonicalIdentifier:@"item/12345"]; branchUniversalObject.contentIndexMode = ContentIndexModePrivate;
iOS - Swift
let branchUniversalObject: BranchUniversalObject = BranchUniversalObject(canonicalIdentifier: "item/12345") branchUniversalObject.contentIndexMode = ContentIndexModePrivate
Android - Java
BranchUniversalObject branchUniversalObject = new BranchUniversalObject() .setCanonicalIdentifier("item/12345") .setContentIndexingMode(BranchUniversalObject.CONTENT_INDEX_MODE.PRIVATE);
You can see other platform coding examples of this on the respective sections of the integration docs. | https://docs.branch.io/pages/organic-search/firebase/ | 2017-11-17T19:07:55 | CC-MAIN-2017-47 | 1510934803906.12 | [array(['../../../img/pages/organic-search/firebase/db-settings.png',
'image'], dtype=object)
array(['../../../img/pages/organic-search/firebase/db-summary.png',
'image'], dtype=object) ] | docs.branch.io |
About This Task
Create a custom processor using the SDK, and package it into a jar file with all of its dependencies.
Steps
Create a new maven project using this maven pom file as an example..
Example
The PhoenixEnrichmentProcessor is a good example of a new custom processor implementation. | https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_streaming-analytics-manager-user-guide/content/create-custom-processor.html | 2017-11-17T19:10:52 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.hortonworks.com |
What is Logic?¶
Logic enables you to extend your API with business logic hosted on your own infrastructure. Different apps have different workflows and logic lets you customize Scaphold to fit virtually any need. You can host your logic on-prem or in the cloud as long as its accessible over the internet and you can even authenticate requests with custom headers. If your looking for the quickest way to start, we recommend AWS Lambda, Azure Functions, or Webtask.io.
Our logic system is based on the concept of function composition. Composition lets us combine small microservices to create extremely powerful workflows. Let's take a look at how they work. | https://docs.scaphold.io/custom-logic/ | 2017-11-17T19:26:42 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.scaphold.io |
Mitsubishi Trio¶
Vehicle Type: MI
This should be used to support the Mitsubishi i-Miev (Citroen C-Zero, Peugeot iOn) vehicles.
Trio specific metrics¶
NB! Not all metrics are correct or tested properly. This is a work in progress.
Note that some metrics are polled at different rates than others and some metrics are not available when car is off. This means that after a restart of the OVMS, some metrics will be missing until the car is turned on and maybe driven for few minutes.
Custom Commands¶
Trio Regen Brake Light Hack¶
Parts required¶
- 1x DA26 / Sub-D 26HD Plug & housing * Note: housings for Sub-D 15 fit for 26HD * e.g. Assmann-WSW-A-HDS-26-LL-Z * Encitech-DPPK15-BK-K-D-SUB-Gehaeuse
- 1x 12V Universal Car Relay + Socket * e.g. Song-Chuan-896H-1CH-C1-12V-DC-Kfz-Relais * GoodSky-Relaissockel-1-St.-GRL-CS3770
- 1x 1 Channel 12V Relay Module With Optocoupler Isolation * 12V 1 channel relay module with Optocoupler Isolation
Car wire-tap connectors, car crimp connectors, 0.5 mm² wires, zipties, shrink-on tube, tools
Note: if you already use the switched 12V output of the OVMS for something different, you can use one of the free EGPIO outputs. That requires additionally routing an EGPIO line to the DA26 connector at the expansion slot (e.g. using a jumper) and using a relay module (2/b: relay shield) with separate power input instead of the standard car relay.
I use 2/b (relay shield) variant: Be aware the MAX71317 outputs are open drain, so you need a pull up resistor to e.g. +3.3. According to the data sheet, the current should stay below 6 mA.
Inside OVMS Box: Connect JP1 Pin10 (GEP7) to Pin12 (EGPIO_8) with jumper
In DA26 connector:
pin 24(+3.3) ----- [ 680 Ohms ] ---+--- [ Relay board IN ] | pin 21 (EGPIO_8) pin 9 ----- [Relay board DC+] pin 8 ----- [Relay board DC-] [Relay board COM] ----- Brake pedal switch one side [Relay board NO] ----- Brake pedal switch other side
Configuration¶
See OVMS web user interface, menu Trio → Brake Light:
Set the port as necessary and the checkbox to enable the brakelight.
For monitoring and fine tuning, use the „regenmon“ web plugin:
| https://docs.openvehicles.com/en/latest/components/vehicle_mitsubishi/docs/ | 2020-09-18T14:33:46 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['../../../_images/trio1.png', '../../../_images/trio1.png'],
dtype=object)
array(['../../../_images/trio2.png', '../../../_images/trio2.png'],
dtype=object) ] | docs.openvehicles.com |
GoQuorum Wizard
GoQuorum Wizard is a command line tool that allows users to set up a development GoQuorum network on their local machine in less than 2 minutes.
Installation
quorum-wizard is written in Javascript and designed to be installed as a global NPM module and run
from the command line. Make sure you have Node.js/NPM installed.
Using npm:
npm install -g quorum-wizard
yarn global add quorum-wizard
Using GoQuorum Wizard
Once the global module is installed, run:
quorum-wizard
The wizard walks you through setting up a network, either using our quickstart settings (a simple 3-node GoQuorum network using Raft consensus), or customizing the options to fit your needs.
Options
You can also provide these flags when running quorum-wizard:
-q,
--quickstartcreate 3 node raft network with Tessera and cakeshop (no user-input required)
-v,
--verboseTurn on additional logs for debugging
--versionShow version number
-h,
--helpShow help
Note:
npx is also way to run npm modules without the need to actually install the module. Due to
quorum-wizard needing to download and cache the quorum binaries during network setup, using
npx quorum-wizard will not work at this time.
Interacting with the Network
To explore the features of GoQuorum and deploy a private contract, follow the instructions on Interacting with the Network
Troubleshooting
EACCES error when doing global npm install:
- Sometimes npm is installed in a location where the user doesn’t have write permissions. On Mac, installing via Homebrew usually works better than the standalone installer.
- Here is the recommended solution from NPM
Developing
Clone this repo to your local machine.
yarn install to get all the dependencies.
GoQuorum Wizard]. | https://docs.goquorum.consensys.net/en/latest/HowTo/GetStarted/Wizard/GettingStarted/ | 2020-09-18T13:03:13 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['../../../../images/quorum-wizard.gif', None], dtype=object)] | docs.goquorum.consensys.net |
Compiler implementation of the D programming language.
Dumps the full contents of module
m to
buf.
Pick off one of the storage classes from stc, and return a string representation of it. stc is reduced by the one picked.
trust, which is the token
trustcorresponds to
kind
Write out argument types to buf.
Pretty print function parameters.
Pretty print function parameter.
© 1999–2019 The D Language Foundation
Licensed under the Boost License 1.0. | https://docs.w3cub.com/d/dmd_hdrgen/ | 2020-09-18T14:53:46 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.w3cub.com |
CodeScan is based on SonarQube, an open source reporting platform for coding languages. The Background Tasks that occur when an analysis report is run have been added by SonarQube to allow administrators to view technical details about why the processes fail.
To learn more about background tasks, please see the SonarQube documentation at the link below.
SonarQube Documentation - Background Tasks | https://docs.codescan.io/hc/en-us/articles/360028098052-Background-Tasks | 2020-09-18T14:13:19 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.codescan.io |
SainSmart Ender 3 V2 3D Printer - Quick Start Guide
- Introduction
- Assembly
- Controlling the Printer
- Level the bed
- Installing a Slicer on your PC
- Printing
Introduction.
Assembly
This guide contains a quick summary of the assembly steps. Full and detailed assembly instructions is available here.
Install Z-axis Limit Switch Assembly and Z-axis Profiles
- Slide the Z axis limit switch onto the left Z-Axis profile (2 holes at the bottom) and use the M5x45 screws to secure the profiles to the base.
- Tighten the limit switch at it’s lowest position.
Install Z-axis Motor Assembly and Lead Screw
- Attach the Z axis motor to the left hand Z profile with 2 M4x18 screws into the bottom holes on the profile.
- Insert the Lead screw into the motor coupler and tighten.
Assemble the Nozzle and X axis gantry
- Fit the pneumatic joint into the Extruder.
- Bolt the XE-axis Assembly onto the X axis profile using 2 M4x16 screws.
- Wrap the drive belt around the pulley in the XE-axis Assembly.
- Slide the Nozzle assembly onto the X axis profile over the drive belt.
- Screw the Z axis Roller bracket assembly onto the X axis profile using 2 M4x16 screws.
- Thread the drive belt through the tensioner, Attach the tensioner to the X axis Gantry and attach the ends of the belt into the slots on the Nozzle assembly.
Install the Gantry assembly and Screen
- Slide the V pulleys into the vertical profiles along with threading the lead screw into the Brass nut on the Extruder assembly.
- Adjust the tension of the drive belts, not too tight but taught.
- Install the top gantry profile using 4 M5x25 screws.
- Attach the display screen to the right side of the frame, connecting the cable.
Install Filament holder, covers, Bowden tube and Extruder knob
- Assemble the filament holder and attach it to the top profile with 2 M5x8 screws and T Nuts (flat side to the back).
- Push the collar on the Extruder in, slide the Bowden tube fully in, release the collar and secure with the clip on spacer.
- Push the covers onto the ends of the top gantry profile.
- Slide the extruder knob onto the top of the extruder stepper motor shaft.
Electrical Connections
- CAUTION Select the correct input voltage on the power supply, If not set correctly this can damage your printer!
- Connect the wiring to the stepper motors and limit switches according to the labels on the wires.
Controlling the Printer
The printer is controlled by using the rotary knob at the bottom of the display to select the menu options, or to increase or decrease a selected number. The knob is also a push switch which is used to select the highlighted item or confirm a change. (This is referred to as Tap or ↓).
Level the bed
The bed must be levelled correctly before using the printer. Full instructions on how to level the bed can be found here.
The bed and Nozzle must be preheated to a working temperature before levelling the bed to allow for heat expansion of the materials of the build plate, adjusters, hot end and nozzle!
To level the bed for the first time:
- Make sure the glass plate is clipped on, after removing any protective film.
- Turn the bed height adjusters to lower the bed.
- Heat the build plate and nozzle to a working temperature by Prepare ↓ Preheat PLA ↓ or Prepare ↓ Preheat ABS ↓
Then (or for any subsequent times the bed needs re-levelling):
- Place a sheet of thin paper on top of the bed.
- From the printers control screen select Prepare ↓ Auto Home ↓
- To move the nozzle across the bed use the Prepare ↓ Move↓ Move X/Y ↓ commands, then rotate the control knob to set the position and Tap to execute the move. Coordinates are always absolute offsets from the home position.
- The approximate positions of the levelling points are as X/Y coordinates 30/30, 30/205, 205/205, 205/30
- Move the nozzle to the first point.
- Use that corners bed height adjuster to raise (or lower) the build plate height so that you feel resistance when moving the paper between the nozzle and the build plate.
- Repeat, positioning the nozzle to each levelling point in turn and adjusting the height at that corner.
- Select Prepare ↓ Auto Home ↓ again and repeat the adjustment at each corner performing any fine tuning of the height which may be necessary.
- Select Prepare ↓ Cooldown ↓ unless you will be printing immediately.
Installing a Slicer on your PC
SainSmart recommends the Ultimaker Cura Slicer, others are available with varying features and while a lot are open source or free to use some are not. Cura is fully featured, well supported and is free to use.
To download the latest version of Ultimaker Cura, go to A version may be on the SD card but the latest version will be available on the website. There are also tutorials and documentation available there. The version used in this guide is 4.6.2
Configuring your machine
You have to tell Cura (or any other slicer) what 3D Printer you are using. They vary in size, speeds, nozzle sizes etc.
From the Cura top menu select Settings / Printer / Manage Printers/Add. And select Add a non-networked printer, scroll down and click Creality 3D. recommended settings for the SainSmart Ender 3 V2 are:
In the Printer settings Start G-code and End G-code boxes copy the lines below into the relevant box replacing ALL text.
And click Next.
The Printer will now appear in the Local Printers list. Click Activate to make this the current printer. And close the window.
Selecting or adding a Filament
To add or customize a filament select Preferences/Configure Cura and then click Materials. Cura as supplied contains the settings for a lot of different filament types and brands. For the sample print I am using SainSmart PRO-3 White PLA which has the following properties:
- Diameter 1.75mm (±0.02mm)
- Print Temp 180˚C to 215˚C (200˚C is a good starting place)
Either create an entry to match your filament or select a generic type (The sample filament which comes with your printer is likely to be PLA).
The recommended starting settings for SainSmart PRO-3 PLA White are:
Save the filament settings.
Download the sample model (sainsmart keychain) here.
Go to File/ Open File(s) and select the model file to print, these are normally *.stl files but other types are supported. So select SSKeyFob.stl [File location needs specifying! Is it on the SD card or downloaded? If downloaded I would add a downloading the sample files above this section.]
Cura will centre the model on the build plate and show the file in a 3D representation, the colour of the model will be taken from the filament properties.
Make sure in the top bar that the printer is shown as SainSmart Ender-3 V2 and the filament is PRO-3 PLA (or Generic PLA).
Selecting and modifying the Print Profile
Various ‘standard’ print profiles are available in Cura. You can create and save your own but open the print settings by clicking on the right hand side of the top icon bar. Select the Super Quality in the Profile dropdown for this test.
The higher the quality, the longer it will take. This is because the higher quality uses smaller lines which means more of them, slower print speeds etc. All of these settings can be altered individually if required, also Cura by default only shows the more important settings in the print settings dialogue. Settings can be made visible or hidden by the Preferences / Configure Cura / Settings option, but there are well over 400 settings that can possibly be shown!
The print profile draws on the current machine and filament settings for some of the values.
For this model SuperQuality is suggested to produce clean detail on the lettering and smooth surfaces.
Slicing
Cura now has the model and the settings to be used set so click the Slice button. Cura will now process the model according to the profile settings and generate the Gcode needed to print the model.
The dialogue box will show an estimated print time and amount of filament required to print the model. In this case just over ½ a metre of filament will be used. To see the details of the slicing operation in more detail click the Preview button. The slider to the right of the screen shows the number of layers the model has been sliced into and the current layer being shown. The bottom slider shows and can play the process for each layer.
HINT: Use the scroll button on your mouse to zoom in and out and hold the right mouse button to rotate the model.
Save the .gcode file either to the root directory of SD card directly, or to a file on your PC and then copy the .gcode file to the root directory of the SD Card.
Printing
Insert the SD card into the printer and turn the printer on.
Make sure the correct filament is loaded, the same type as you used in the print profile, and that the build plate is level.
To load a filament
- Preheat the nozzle to a temperature above the melting point of the filament [From the controller select
- Hold in the lever on the extruder drive assembly and slide the filament all the way into the Bowden tube and into the nozzle as far as it will go.
- NOTE: it can be a bit tricky to get the end of the filament into the Bowden tube as it can get caught up on the internal changes in diameter. If this happens try twisting the filament a bit and if necessary pull out the filament and re-trim the end to round it off.
- If changing a filament sufficient material needs to be extruded until any remnants of the previous filament have been expelled.
- To remove any previous filament heat the nozzle above the melting point of the filament as above, hold in the lever and gently pull the old filament out.
- Release the lever and then on the printer control screen select Prepare ↓ Move ↓ Extruder ↓ and dial up a +5mm movement to push the filament through the nozzle until a steady clean stream is extruded. Increase the extrusion distance if necessary.
- Remove any extruded filament from the nozzle and build plate.
Select Print ↓ from the main menu and then select the file to print. Tap to start printing.
- The printer will first heat the bed to the set temperature, the only indication of this is the bed temperature on the display will slowly rise.
- Next the Nozzle will be heated to the set temperature, the only indication of this is the Nozzle temperature on the display will slowly rise.
- Once everything is up to temperature the printer will go to the home position.
- A priming line will be printed to make sure the filament is flowing correctly, this is an up and down line at the left of the bed.
Then the print will start, first laying down the skirt and then the model itself.
Once the print is finished:
- The nozzle will go to the back left corner of the build plate presenting the model for removal.
- Three beeps will sound indicating the print is ready.
- The bed and nozzle will cool down, the hot end cooling fan will still be running, do not turn the printer off until the nozzle temperature is below 50˚C.
Removing the Printed Model
- If necessary use the scraper to gently prise the job from the build plate.
- Only use gentle force to avoid damage to the machine.
- Also remove the priming lines from the build plate.
Take the key fob and after inspecting it place it on your key ring! | https://docs.sainsmart.com/article/7gpj5mg3z7-sain-smart-ender-3-v-2-3-d-printer-quick-start-guide-v-1-0 | 2020-09-18T14:23:03 | CC-MAIN-2020-40 | 1600400187899.11 | [array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597327808228/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597327922517/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597328115842/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597328412442/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597328421386/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597328681895/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597329445981/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597329672288/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597329798700/image.png',
None], dtype=object)
array(['https://files.helpdocs.io/hj2i3yt73y/articles/7gpj5mg3z7/1597329806232/image.png',
None], dtype=object) ] | docs.sainsmart.com |
Appear in any type specifier, including decl-specifier-seq of declaration grammar, to specify constness or volatility of the object being declared or of the type being named.
const- defines that the type is constant.
volatile- defines that the type is volatile.
For any type
T (including incomplete types), other than function type or reference type, there are three more distinct types in the C++ type system: const-qualified
T, volatile-qualified
T, and const-volatile-qualified
T. Note: array types are considered to have the same cv-qualification as their element types.
When an object is first created, the cv-qualifiers used (which could be part of decl-specifier-seq or part of a declarator in a declaration, or part of type-id in a new-expression) determine the constness or volatility of the object, as follows:
std::memory_order). Any attempt to refer to a volatile object through a non-volatile glvalue (e.g. through a reference or pointer to non-volatile type) results in undefined behavior.
mutablespecifier
mutable- permits modification of the class member declared mutable even if the containing object is declared const.
May appear in the declaration of a non-static class members of non-reference non-const type:
class X { mutable const int* p; // OK mutable int* const q; // ill-formed };
Mutable is used to specify that the member does not affect the externally visible state of the class (as often used for mutexes, memo caches, lazy evaluation, and access instrumentation).; } };
There is partial ordering of cv-qualifiers by the order of increasing restrictions. The type can be said more or less cv-qualified then:
const
volatile
const volatile
const<
const volatile
volatile<
const volatile
References and pointers to cv-qualified types may be implicitly converted to references and pointers to more cv-qualified types. In particular, the following conversions are allowed:
const
volatile
const volatile
consttype can be converted to reference/pointer to
const volatile
volatiletype can be converted to reference/pointer to
const volatile
To convert a reference or a pointer to a cv-qualified type to a reference or pointer to a less cv-qualified type, const_cast must be used.
const,
volatile,
mutable.
The
const qualifier used on a declaration of a non-local non-volatile non-template (since C++14)non-inline (since C++17) variable that is not declared
extern gives it internal linkage. This is different from C where const file scope variables have external linkage.
The C++ language grammar treats
mutable as a storage-class-specifier, rather than a type qualifier, but it does not affect storage class or linkage.
int main() { int n1 = 0; // non-const object const int n2 = 0; // const object int const n3 = 0; // const object (same as n2) volatile int n4 = 0; // volatile object const struct { int n1; mutable int n2; } x = {0, 0}; // const object with mutable member n1 = 1; // ok, modifiable object // n2 = 2; // error: non-modifiable object n4 = 3; // ok, treated as a side-effect // x.n1 = 4; // error: member of a const object is const x.n2 = 4; // ok, mutable member of a const object isn't const const int& r1 = n1; // reference to const bound to non-const object // r1 = 2; // error: attempt to modify through reference to const const_cast<int&>(r1) = 2; // ok, modifies non-const object n1 const int& r2 = n2; // reference to const bound to const object // r2 = 2; // error: attempt to modify through reference to const // const_cast<int&>(r2) = 2; // undefined behavior: attempt to modify const object n2 }
Output:
# typical machine code produced on an x86_64 platform # (only the code that contributes to observable side-effects is emitted) main: movl $0, -4(%rsp) # volatile int n4 = 0; movl $3, -4(%rsp) # n4 = 3; xorl %eax, %eax # return 0 (implicit) ret
© cppreference.com
Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0. | https://docs.w3cub.com/cpp/language/cv/ | 2020-09-18T13:27:26 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.w3cub.com |
nodetool help
Displays nodetool command help.
Provides a synopsis and brief description of each nodetool command.
Synopsis
nodetool [connection_options] help
- command_name
- Name of nodetool command.
Examples
Print list and brief description of all nodetool commands
nodetool help
The most commonly used nodetool commands are: abortrebuild Abort a currently running rebuild operation. Currently active streams will finish but no new streams will be started. assassinate Forcefully remove a dead node without re-replicating any data. Use as a last resort if you cannot removenode bootstrap Monitor/manage node's bootstrap process cleanup Triggers the immediate cleanup of keys no longer belonging to a node. By default, clean all keyspaces clearsnapshot Remove the snapshot with the given name from the given keyspaces. If no snapshotName is specified we will remove all snapshots compact Force a (major) compaction on one or more tables or user-defined compaction on given SSTables compactionhistory Print history of compaction compactionstats Print statistics on compactions decommission Decommission the *node I am connecting to* describecluster Print the name, snitch, partitioner and schema version of a cluster describering Shows the token ranges info of a given keyspace disableautocompaction Disable autocompaction for the given keyspace and table disablebackup Disable incremental backup disablebinary Disable native transport (binary protocol) disablegossip Disable gossip (effectively marking the node down) disablehandoff Disable storing hinted handoffs disablehintsfordc Disable hints for a data center drain Drain the node (stop accepting writes and flush all tables) enableautocompaction Enable autocompaction for the given keyspace and table enablebackup Enable incremental backup enablebinary Reenable native transport (binary protocol) enablegossip Reenable gossip enablehandoff Reenable future hints storing on the current node enablehintsfordc Enable hints for a data center that was previsouly disabled failuredetector Shows the failure detector information for the cluster flush Flush one or more tables garbagecollect Remove deleted data from one or more tables gcstats Print GC Statistics getbatchlogreplaythrottle Print batchlog replay throttle in KB/s. This is reduced proportionally to the number of nodes in the cluster. getcompactionthreshold Print min and max compaction thresholds for a given table getcompactionthroughput Print the MB/s throughput cap for compaction in the system getconcurrentcompactors Get the number of concurrent compactors in the system. 
getconcurrentviewbuilders Get the number of concurrent view builders in the system getendpoints Print the end points that owns the key getinterdcstreamthroughput Print the Mb/s throughput cap for inter-datacenter streaming in the system getlogginglevels Get the runtime logging levels getmaxhintwindow Print the max hint window in ms getseeds Get the currently in use seed node IP list excluding the node IP getsstables Print the sstable filenames that own the key getstreamthroughput Print the Mb/s throughput cap for streaming in the system gettimeout Print the timeout of the given type in ms gettraceprobability Print the current trace probability value gossipinfo Shows the gossip information for the cluster handoffwindow Print current hinted handoff window help Display help information pausehandoff Pause hints delivery process proxyhistograms Print statistic histograms for network operations rangekeysample Shows the sampled keys held across all keyspaces rebuild Rebuild data by streaming from other nodes (similarly to bootstrap) rebuild_index A full rebuild of native secondary indexes for a given table refresh Load newly placed SSTables to the system without restart refreshsizeestimates Refresh system.size_estimates reloadlocalschema Reload local node schema from system tables reloadseeds Reload the seed node list from the seed node provider reloadtriggers Reload trigger classes relocatesstables Relocates sstables to the correct disk removenode Show status of current node removal, force completion of pending removal or remove provided ID repair Repair one or more tables repair_admin list and fail incremental repair sessions replaybatchlog Kick off batchlog replay and wait for finish resetlocalschema Reset node's local schema and resync resumehandoff Resume hints delivery process ring Print information about the token ring scrub Scrub (rebuild sstables for) one or more tables sequence Run multiple nodetool commands from a file, resource or stdin in sequence. Common options (host, port, username, password) are passed to child commands. setbatchlogreplaythrottle Set batchlog replay throttle in KB per second, or 0 to disable throttling. This will be reduced proportionally to the number of nodes in the cluster. setcachecapacity Set global key, row, and counter cache capacities (in MB units) setcachekeystosave Set number of keys saved by each cache for faster post-restart warmup. 0 to disable setcompactionthreshold Set min and max compaction thresholds for a given table setcompactionthroughput Set the MB/s throughput cap for compaction in the system, or 0 to disable throttling setconcurrentcompactors Set number of concurrent compactors in the system. setconcurrentviewbuilders Set the number of concurrent view builders in the system sethintedhandoffthrottlekb Set hinted handoff throttle in kb per second, per delivery thread. setinterdcstreamthroughput Set the Mb/s throughput cap for inter-datacenter streaming in the system, or 0 to disable throttling setlogginglevel Set the log level threshold for a given component or class. Will reset to the initial configuration if called with no parameters. setmaxhintwindow Set the specified max hint window in ms setstreamthroughput Set the Mb/s throughput cap for streaming in the system, or 0 to disable throttling settimeout Set the specified timeout in ms, or 0 to disable timeout settraceprobability Sets the probability for tracing any given request to value. 0 disables, 1 enables for all requests, 0 is the default sjk Run commands of 'Swiss Java Knife'. 
Run 'nodetool sjk --help' for more information. snapshot Take a snapshot of specified keyspaces or a snapshot of the specified table status Print cluster information (state, load, IDs, ...) statusautocompaction status of autocompaction of the given keyspace and table statusbackup Status of incremental backup statusbinary Status of native transport (binary protocol) statusgossip Status of gossip statushandoff Status of storing future hints on the current node stop Stop compaction stopdaemon Stop DSE daemon tablehistograms Print statistic histograms for a given table tablestats Print statistics on tables toppartitions Sample and print the most active partitions for a given column family tpstats Print usage statistics of thread pools truncatehints Truncate all hints on the local node, or truncate hints for the endpoint(s) specified. upgradesstables Rewrite sstables (for the requested tables) that are not on the current version (thus upgrading them to said current version) verify Verify (check data checksum for) one or more tables version Print DSE DB version viewbuildstatus Show progress of a materialized view build
Get synopsis and brief description of nodetool netstats
nodetool help netstats
NAME nodetool netstats - Print network information on provided host (connecting node by default) SYNOPSIS nodetool [(-h <host> | --host <host>)] [(-p <port> | --port <port>)] [(-pw <password> | --password <password>)] [(-u <username> | --username <username>)] netstats OPTIONS -h <host>, --host <host> Node hostname or ip address -p <port>, --port <port> Remote jmx agent port number -pw <password>, --password <password> Remote jmx agent password -u <username>, --username <username> Remote jmx agent username | https://docs.datastax.com/en/dse/6.0/dse-dev/datastax_enterprise/tools/nodetool/toolsHelp.html | 2020-09-18T15:01:25 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.datastax.com |
Frequently Asked Questions
- What's the difference between Postmatic Basic and Replyable?
- How to View User Subscriptions
- What happens if a post gets a gazillion comments? Do I get a gazillion emails?
- Does Replyable create user accounts for my subscribers?
- What happens when I close comments on a post?
- What about privacy? Do you read the incoming comments?
- How does Replyable deal with spam comments?
- What about email signatures and vacation replies? Will those be published?
- Do I have to modify my DNS or make other changes to my site setup?
- What happens if a post is deleted? | https://docs.replyable.com/category/271-frequently-asked-questions | 2020-09-18T12:45:10 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.replyable.com |
logging.yaml¶
The
logging.yaml file defines all custom log file formats, filters,
and processing options.
Important
This configuration file replaces the XML based logs_xml.config, as well as the Lua based logging.config from past Traffic Server releases. If you are upgrading from a Traffic Server release which used either the XML or the Lua configuration file format, and you have created custom log formats, filters, and destinations, you will need to update those settings to this format.
Log Definitions¶
Custom logs are configured by the combination of three key elements: a format, an optional filter, and a log destination.
A format defines how log lines will appear (as well as whether the logs using the format will be event logs or summary logs).
A filter defines what events do, and what events don’t, make it into the logs employing the filter.
A log defines where the record of events or summaries ends up.
Formats¶
Custom logging formats may be provided directly to a log definition, or they
may be defined as a reusable variable in your
logging.yaml for ease
of reference, particularly when you may have more than one log using the same
format. Which approach you use is entirely up to you, though it’s strongly
recommended to create an explicit format object if you intend to reuse the same
format for multiple log files.
Custom formats are defined by choosing a
name to identify the given logging
format, and a
format string, which defines the output format string for
every event. An optional
interval attribute can be specified to define the
aggregation interval for summary logs.
# A one-line-per-event format that just prints event timestamps. formats: - name: myformat format: '%<cqtq>' # An aggregation/summary format that prints the last event timestamp from # the interval along with the total count of events in the same interval. # (Doing so every 30 seconds.) formats: - name: mysummaryformat format: '%<LAST(cqtq)> %<COUNT(*)>' interval: 30
You may define as many and as varied a collection of format objects as you desire.
Format Specification¶
The format specification provided as the required
format attribute of the
objects listed in
formats is a simple string, containing whatever mixture
of logging field variables and literal characters meet your needs. Logging
fields are discussed in great detail in the Log Fields
section.
Flexible enough to not only emulate the logging formats of most other proxy and
HTTP servers, but also to provide even finer detail than many of them, the
logging fields are very easy to use. Within the format string, logging fields
are indicated by enclosing their name within angle brackets (
< and
>),
preceded by a percent symbol (
%). For example, returning to the altogether
too simple format shown earlier, the following format string:
'%<cqtq>'
Defines a format in which nothing but the value of the logging field cqtq is interpolated for each event’s entry in the log. We could include some literal characters in the log output by updating the format specification as so:
'Event received at %<cqtq>'
Because the string “Event received at ” (including the trailing space) is just
a bunch of characters, not enclosed in
%<...>, it is repeated verbatim in
the logging output.
Multiple logging fields may of course be used:
'%<cqtq> %<chi> %<cqhm> %<cqtx>'
Each logging field is separately enclosed in its own percent-brace set.
There are a small number of logging fields which extend this simple format, primarily those dealing with request and response headers. Instead of defining a separate logging field name for every single possible HTTP header (an impossible task, given that arbitrary vendor/application headers may be present in both requests and responses), there are instead single logging fields for each of the major stages of an event lifecycle that permit access to named headers, such as:
'%<{User-Agent}cqh>'
Which emits to the log the value of the client request’s
User-Agent HTTP
header. Other stages of the event lifecycle have similar logging fields:
pqh (proxy requests),
ssh (origin server responses), and
psh
(proxy responses).
You will find a complete listing of the available fields in Log Fields.
Aggregation Interval¶
Every format may be given an optional
interval value, specified as the
number of seconds over which events destined for a log using the format are
aggregated and summarized. Logs which use formats containing an aggregation
interval do not behave like regular logs, with a single line for every event.
Instead, they emit a single line only every interval-seconds.
These types of logs are described in more detail in Summary Logs.
Formats have no interval by default, and will generate event-based logs unless given one.
Filters¶,
reject or
wipe_field_value).
Accept,
reject or
wipe_field_value filters require
a
condition against which to match all events. The
condition fields must
be in the following format:
<field> <operator> <value>
For example, the following snippet defines a filter that matches all POST requests:
filters: - name: postfilter action: accept condition: cqhm MATCH POST
Filter Fields¶
The log fields have already been discussed in the Formats section above. For a reference to the available log field names, see Log Fields. Unlike with the log format specification, you do not wrap the log field names in any additional markup.
Filter Operators¶
The operators describe how to perform the matching in the filter rule, and may be any one of the following:
MATCH
True if the values of
fieldand
valueare identical. Case-sensitive.
CASE_INSENSITIVE_MATCH
True if the values of
fieldand
valueare identical. Case-insensitive.
CONTAIN
True if the value of
fieldcontains
value(i.e.
valueis a substring of the contents of
field). Case-sensitive.
CASE_INSENSITIVE_CONTAIN
True if the value of
fieldcontains
value(i.e.
valueis a substring of the contents of
field). Case-insensitive.
Filter Values¶
The final component of a filter string specifies the value against which the name field will be compared.
For integer matches, all of the operators are effectively equivalent and require the field to be equal to the given integer. If you wish to match multiple integers, provide a comma separated list like this:
<field> <operator> 4,5,6,7
String matches work similarly to integer matches. Multiple matches are also supported via a comma separated list. For example:
<field> <operator> e1host,host2,hostz
For IP addresses, ranges may be specified by separating the first address and
the last of the range with a single
- dash, as
10.0.0.0-10.255.255.255
which gives the ranges for the 10/8 network. Other network notations are not
supported at this time.
Note
It may be tempting to attach multiple Filters to a log object reject multiple log fields (in lieu of providing a single comma separated list to a single Filter). Avoid this temptation and use a comma separated list of reject objects instead. Remember that you may not have multiple accept filter objects. Attaching multiple filters does the opposite of what you’d expect. If, for example, we had 2 accept log filters, each disjoint from the other, nothing will ever get logged on the given log object.
Logs¶
Up to this point, we’ve only described what events should be logged and what they should look like in the logging output. Now we define where those logs should be sent.
Three options currently exist for the type of logging output:
ascii,
binary, and
ascii_pipe. Which type of logging output you choose
depends largely on how you intend to process the logs with other tools, and a
discussion of the merits of each is covered elsewhere, in
Deciding Between ASCII or Binary Output.
The following subsections cover the attributes you should specify when creating
your logging object. Only
filename and
format are required.
Enabling log rolling may be done globally in
records.config, or on a
per-log basis by passing appropriate values for the
rolling_enabled key. The
latter method may also be used to effect different rolling settings for
individual logs. The numeric values that may be passed are the same as used by
proxy.config.log.rolling_enabled. For convenience and readability,
the following predefined variables may also be used in
logging.yaml:
- log.roll.none
Disable log rolling.
- log.roll.time
Roll at a certain time frequency, specified by RollingIntervalSec and RollingOffsetHr.
- log.roll.size
Roll when the size exceeds RollingSizeMb.
- log.roll.both
Roll when either the specified rolling time is reached or the specified file size is reached.
- log.roll.any
Roll the log file when the specified rolling time is reached if the size of the file equals or exceeds the specified size.
Examples¶
The following is an example of a format that collects information using three common fields:
formats: - name: minimalfmt format: '%<chi> , %<cqu> , %<pssc>'
The following is an example of a format that uses aggregate operators to produce a summary log:
formats: - name: summaryfmt format: '%<LAST(cqts)>:%<COUNT(*)>:%<SUM(psql)>' interval: 10
The following is an example of a filter that will cause only REFRESH_HIT events to be logged:
filters: - name: refreshhitfilter action: accept condition: pssc MATCH REFRESH_HIT
The following is an example of a log specification that creates a local log
file for the minimal format defined earlier. The log filename will be
minimal.log because we select the ASCII logging format.
logs: - mode: ascii filename: minimal format: minimalfmt
The following is an example of a log specification that creates a local log file using the summary format from earlier, and only includes events that matched the REFRESH_HIT filter we created.
logs: - mode: ascii filename: refreshhit_summary format: summaryfmt filters: - refreshhitfilter | https://docs.trafficserver.apache.org/en/latest/admin-guide/files/logging.yaml.en.html | 2020-09-18T14:22:42 | CC-MAIN-2020-40 | 1600400187899.11 | [] | docs.trafficserver.apache.org |
Creating a Program Increment - Cloud
A Program Increment is a collection of Sprints.
You can choose existing Sprints from your team's board, or where there are no existing Sprints, Easy Agile Programs will automatically create them for you.
An Increment has the following properties:
- Increment Name
- Description
- Start Date
- Sprint Length
- Sprint Count
- Feature Roadmap? | https://docs.arijea.com/easy-agile-programs/getting-started-with-easy-agile-programs/cloud/creating-a-program-increment-cloud/ | 2020-08-03T15:11:55 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.arijea.com |
Modifying join form properties
After you create a join form, you can modify properties that determine the characteristics of how that join form looks and performs during operations performed in a browser.
You can "swap" which form is primary and which is secondary. You can also change the type of join — inner or outer. Depending on whether you are working with an inner join or outer join, swapping forms can result in completely different criteria. For example, if the primary form (A) has three fields (1, 2, 3) and the secondary form (B) has three fields (3, 4, 5), an inner join retrieves the field that the two forms have in common (field 3), and an outer join retrieves this field and the remaining primary form fields, that is, fields 1, 2, and 3. If you swap forms so that form B becomes the primary form and form A becomes the secondary form, an inner join yields the same results (field 3), but an outer join now retrieves the fields 3, 4, and 5. For more information about inner and outer joins, see Inner and outer joins.
The Join Information panel in the Definitions tab of BMC Remedy Developer Studio allows you to modify options specific to join forms.
Definitions tab — Join Information panel
(Click the image to expand it.) | https://docs.bmc.com/docs/ars91/en/modifying-join-form-properties-609074208.html | 2020-08-03T14:43:41 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.bmc.com |
Creating and managing views
Views are layouts that comprise various types of charts (widgets) for your events and metrics. Create copies of default and public views or create new views for quick access to metrics and events you want to view frequently.
The following types of views are available:
- System default views- Views available out-of-the-box that can be copied and modified.
- Custom views - Create new views from scratch.
- Public views- Views created by other users belonging to the same tenant that are available for copying and modifying.
To copy a default system (out-of-the-box) or public view (created by another user)
System default views are out-of-the-box views that cannot be modified. Use this procedure to create a copy of this view and customize as required.
- Click Views > Manage Views and select one of the following:
- An out-of-the-box system default view
- Click the Public Views tab and select a view created by another user (OR click Make a Copy column.) from the
- From the top-right of the screen, click Make a Copy.
- Type a new name for the view, select the folder (optional) where you want to create the view, and click Save.
- From the top-right of the screen, click Edit view to customize the view.
To create a view
- Click Views > Create View OR Click Views > Manage Views > Create View.
Clickto view the available filter options. The following filter options are displayed and the button changes to .
- App: Select one or more apps.
- Event Type: Select one or more event types.
- Search and select fields using which you want to filter the data and specify values for the fields.
Tip
Select Exact Field Values or use the Contains Keywords section to enter (one or more) keywords to filter the data. Type a substring of the value of any field to filter and display data that contain the substring.
After you set the filters, the button changes to. Click to hide the filter options and click to display the filter options.
- Add metrics to the view.
- From the toolbox on the left, enter the name of the metric in the search box under the ADD METRICS panel.
Use additional symbols and words while searching for metrics in a View to get more precise results.Click here to expand...
A valid expression syntax format is !/NOT ( expression ) bool( expression ) where
- Valid expressions need to use the following Regex Pattern → [a-zA-Z0-9_+.*?/@%-] ( [a-zA-Z0-9_.,*?/#@%-] +)? and
- Valid bools are
- AND or &&
- OR or ||
- Clickto add specific metrics to the view.
- After adding the metric, clickto use the various features and customization available for the metric.
- Add widgets to the view.
- Click Save to save the view.
- Click Finished to exit the edit view.
- (optional) Click Views > Manage Views and select the corresponding checkbox under the Public View column to share the view with other users from your organization who are using the same account.
Widgets
Select a widget to visualize your data, consider the data type of the metric, the volume of data, and what you are trying to learn by visualizing the data. From the toolbox on the left, select one of the following available widgets:
To manage views
- Click Views > Manage Views to manage the views.
- Click Add Folder to add a folder and organize your views.
Tip
Drag-and-drop entries in the table to arrange the views in your preferred order. | https://docs.bmc.com/docs/intelligence/creating-and-managing-views-726637292.html | 2020-08-03T15:48:50 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.bmc.com |
Units and currencies (for units) and here (for currencies). Example 1: Units Let us create a basic question with units: We need to enable the units and prefixes we want to use in Input options > Input syntax. You can check only those units and prefixes you are interested in or use the All tab. The algorithm is very simple: Note that since the corresponding units and prefixes are selected, the units are highlighted in blue. s and l are also highlighted because there exist the units second and liter, but since they are preceded by #, they are interpreted as a variable name. Finally, you can also select the Match unit of measure validation option to force the student answer to be expressed in meters. Example 2: Currencies Let us create a basic question with currencies: As you have seen in the example involving units, you need to enable the currencies in Input options > Input syntax. Notice all currencies can be selected with only one box. The algorithm is very simple. Notice that if you want to make an algorithm for a question with currencies, you must use the corresponding CalcMe symbol for them under Units of measure tab. To conclude, currencies are similar to physical units and we have to take into account: You can do basic arithmetic with currencies ... except you can not convert() between them. Students can write the currency symbol at left ($12) or at right (34€). Both styles will be graded as correct. Students must type the currency symbol using their keyboard. The toolbar for the answer doesn't have any currency symbols. So please ensure that students have the symbol on their keyboard! Alternatively, provide it in the question wording for them to copy & paste. Currencies See below a list of the currently accepted currencies: Symbol Name BTC Bitcoin $ Dollar € Euro Fr Franc kr Koruna/Krona/Krone £ Pound ₽ Ruble ₹ Rupee ₩ Won ¥ Yen/Yuan Table of Contents Example 1: Units Units Metric prefixes Example 2: Currencies Currencies | https://docs.wiris.com/en/quizzes/user_interface/validation/units_currencies | 2020-08-03T15:17:57 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.wiris.com |
Having a spam trap on your email list will negatively affect your email deliverability, meaning that email you send to subscribers will start going to their spam folders and in some cases, not being delivered at all. This even applies to subscribers you opted into your email marketing the proper way and have been engaging with your emails in the past.
As this is something that should be avoided at all costs, it’s important to understand what spam traps are, how they can end up in your email list, how you can avoid them, and what to do if you end up with a spam trap on your list.
What are Spam Traps?
Spam traps are simply email addresses that inbox service providers such as Gmail, Hotmail, Yahoo, etc. use to identify spammers.
There are three types of spam traps:
- Email addresses that were once active but have not been used for a very long time. That is, the owner has not logged into their account to check emails or use their email account to sign up for other lists.
- Misspelt email addresses such as [email protected] instead of [email protected].
- Pristine spam traps. These are email addresses that have been deliberately created with the purpose of catching spammers. Nobody will ever signup to receive emails using these addresses and they most commonly appear on bought email lists.
If you are sending to any of these types of addresses (especially pristine traps), there’s a high chance that inbox service providers will flag you as a spammer.
How Spam Traps can Appear on your Email List
While spam traps frequently appear on the email lists of spammers, occasionally they can appear on the email lists of legitimate senders too.
There are several ways this can occur:
- Not obtaining proper permission to send marketing emails
- Migrating to a new email service provider (such as SmartrMail) and not unsubscribing those who unsubscribed with your previous email service provider(s)
- Sending to unengaged subscribers
- Not honoring unsubscribe requests
- Misspellings when adding email addresses
- Purchasing email addresses
How to Avoid Spam Traps
To minimize the risk of spam traps appearing on your email list, it’s important to only email subscribers who have properly opted into receiving emails.
It is also important to follow our email list requirements.
Buying lists and adding people to your list who have not agreed to receive marketing emails are not only against laws including CAN-SPAM and the GDPR, but also drastically increase the chances of spam traps appearing on your list.
However, even when you follow all the rules, spam traps can still appear occasionally. Especially as your list grows and ages.
This is why it’s important to regularly clean your email list especially if you notice your open rates starting to drop.
To be extra safe, you might want to require double opt-in when people sign up to your email list. This will prevent issues with misspelt email addresses and ensure everyone on your list actively opted in.
What if a Spam Trap Appears on your List?
SmartrMail actively keeps track of the deliverability of all our users, so if we detect an issue with your list that’s due to a spam trap, we will let you know and give you instructions on what to do next.
This will typically involve having to clean your list so that you’re only sending to engaged subscribers.
If you suspect you have a spam trap after cleaning your list, contact our support team and we can guide you through what needs to be done. | http://docs.smartrmail.com/en/articles/3957528-what-is-a-spam-trap-and-how-to-avoid-them | 2020-08-03T14:31:12 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.smartrmail.com |
SUMO co-simulation
CARLA has developed a co-simulation feature with SUMO. This allows to distribute the tasks at will, and exploit the capabilities of each simulation in favour of the user.
- Requisites
- Run a custom co-simulation
- Spawn NPCs controlled by SUMO
Requisites
First and foremost, it is necessary to install SUMO to run the co-simulation. Building from source is recommended over a simple installation, as there are new features and fixes that will improve the co-simulation.
Once that is done, set the SUMO environment variable.
echo "export SUMO_HOME=/usr/share/sumo" >> ~/.bashrc && source ~/.bashrc
SUMO is ready to run the co-simulations. There are some examples in
Co-Simulation/Sumo/examples for Town01, Town04, and Town05. These
.sumocfg files describe the configuration of the simulation (e.g., net, routes, vehicle types...). Use one of these to test the co-simulation. The script has different options that are detailed below. For the time being, let's run a simple example for Town04.
Run a CARLA simulation with Town04.
cd ~/carla ./CarlaUE4.sh cd PythonAPI/util python config.py --map Town04
Then, run the SUMO co-simulation example.
cd ~/carla/Co-Simulation/Sumo python run_synchronization.py examples/Town04.sumocfg --sumo-gui
Important
Run a custom co-simulation
Create carla vtypes
With the script
Co-Simulation/Sumo/util/create_sumo_vtypes.py the user can create sumo vtypes, the equivalent to CARLA blueprints, based on the CARLA blueprint library.
--carla-host(default: 127.0.0.1) — IP of the carla host server.
--carla-port(default: 2000) — TCP port to listen to.
--output-file(default: carlavtypes.rou.xml) — The generated file containing the vtypes.
This script uses the information stored in
data/vtypes.json to create the SUMO vtypes. These can be modified by editing said file.
Warning
A CARLA simulation must be running to execute the script.
Create the SUMO net
The recommended way to create a SUMO net that synchronizes with CARLA is using the script
Co-Simulation/Sumo/util/netconvert_carla.py. This will draw on the netconvert tool provided by SUMO. In order to run the script, some arguments are needed.
xodr_file— OpenDRIVE file
.xodr.
--output'(default:
net.net.xml) — output file
.net.xml.
--guess-tls(default:false) — SUMO can set traffic lights only for specific lanes in a road, but CARLA can't. If set to True, SUMO will not differenciate traffic lights for specific lanes, and these will be in sync with CARLA.
The output of the script will be a
.net.xml that can be edited using NETEDIT. Use it to edit the routes, add demand, and eventually, prepare a simulation that can be saved as
.sumocfg.
The examples provided may be helpful during this process. Take a look at
Co-Simulation/Sumo/examples. For every
example.sumocfg there are several related files under the same name. All of them comprise a co-simulation example.
Run the synchronization
Once a simulation is ready and saved as a
.sumocfg, it is ready to run. There are some optional parameters to change the settings of the co-simulation.
sumo_cfg_file— The SUMO configuration file.
--carla-host(default: 127.0.0.1) — IP of the carla host server
--carla-port(default: 2000) — TCP port to listen to
--sumo-host(default: 127.0.0.1) — IP of the SUMO host server.
--sumo-port(default: 8813) — TCP port to listen to.
--sumo-gui— Open a window to visualize the gui version of SUMO.
--step-length(default: 0.05s) — Set fixed delta seconds for the simulation time-step.
--sync-vehicle-lights(default: False) — Synchronize vehicle lights.
--sync-vehicle-color(default: False) — Synchronize vehicle color.
--sync-vehicle-all(default: False) — Synchronize all vehicle properties.
--tls-manager(default: none) — Choose which simulator should manage the traffic lights. The other will update those accordingly. The options are
carla,
sumo, and
none. If
noneis chosen, traffic lights will not be synchronized. Each vehicle would only obey the traffic lights in the simulator that spawn it.
python run_synchronization.py <SUMOCFG FILE> --tls-manager carla --sumo-gui
Warning
To stop the co-simulation, press
Ctrl+C in the terminal that run the script.
Spawn NPCs controlled by SUMO
The co-simulation with SUMO makes for an additional feature. Vehicles can be spawned in CARLA through SUMO, and managed by the later as the Traffi Manager would do.
The script
spawn_npc_sumo.py is almost equivalent to the already-known
spawn_npc.py..
--host(default: 127.0.0.1) — IP of the host server.
--port(default: 2000) — TCP port to listen to.
-n,--number-of-vehicles(default: 10) — Number of vehicles spawned.
--safe— Avoid spawning vehicles prone to accidents.
--filterv(default: "vehicle.")* — Filter the blueprint of the vehicles spawned.
--sumo-gui— Open a window to visualize SUMO.
--step-length(default: 0.05s) — Set fixed delta seconds for the simulation time-step.
--sync-vehicle-lights(default: False) — Synchronize vehicle lights state.
--sync-vehicle-color(default: False) — Synchronize vehicle color.
--sync-vehicle-all(default: False) — Synchronize all vehicle properties.
--tls-manager(default: none) — Choose which simulator will change the traffic lights' state. The other will update them accordingly. If
none, traffic lights will not be synchronized.
# Spawn 10 vehicles, that will be managed by SUMO instead of Traffic Manager. # CARLA in charge of traffic lights. # Open a window for SUMO visualization. python spawn_sumo_npc.py -n 10 --tls-manager carla --sumo-gui
That is all there is so far, regarding for the SUMO co-simulation with CARLA.
Open CARLA and mess around for a while. If there are any doubts, feel free to post these in the forum. | https://carla.readthedocs.io/en/latest/adv_sumo/ | 2020-08-03T14:18:48 | CC-MAIN-2020-34 | 1596439735812.88 | [] | carla.readthedocs.io |
TOPICS×
Deploy this on your server
The following are required to get this running on your system AEM Forms(version 6.3 or above) MYSQL Database
To test this capability on your AEM Forms instance, please follow the following steps
- Download and unzip the tutorial assets on to your local system
- Deploy and start the techmarketingdemos.jar and mysqldriver.jar bundles using Felix web console
- Import the aemformstutorial.sql using MYSQL Workbench. This will create the necessary schema and tables in your database for this tutorial to work.
- Import StoreAndRetrieve.zip using AEM package manager. This package contains the Adaptive Form template, page component client lib, and sample adaptive form and data source configuration.
- Login to configMgr. Search for "Apache Sling Connection Pooled DataSource. Open the data source entry associated with aemformstutorial and enter the username and password specific to your database instance.
- Open the Adaptive Form
- Fill in some details and click on the "Save And Continue Later" button
- You should get back a URL with a GUID in it.
- Copy the URL and paste it in a new browser tab. Make sure there are no empty space at the end of the URL
- Adaptive Form should get populated with the data from the previous step | https://docs.adobe.com/content/help/en/experience-manager-learn/forms/storing-and-retrieving-form-data/part6.html | 2020-08-03T16:25:56 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.adobe.com |
Connecting to vCenter VMs
This topic explains how to configure vCenter VM connections, and how to open them.
In this topic:
Note: When troubleshooting console connection issues, always try the console connection in vCenter. If you're unable to establish a console connection from vCenter, you have to troubleshoot the issue outside Commander.
Console connection methods
VMware supports both the WebMKS and VMRC (VMware Remote Console) methods for opening a console session. Each of these methods has its own set of prerequisites and limitations.
- WebMKS console connection method
The WebMKS method requires no additional plug-in or application to be installed, and is the default method for console connections. Commander automatically uses the WebMKS connection method for all supported versions of vCenter.
- Secure WebSockets (wss://) must be enabled on the vCenter cloud account.
- The vCenter cloud account must have a valid SSL certificate issued by a certificate authority, or from your domain.
- When using the VM Access Proxy with WebMKS, the VM Access Proxy must have a valid SSL certificate issued by a certificate authority, or from your domain.
If you're using WebMKS for direct console connections with ESXi 6.0 or newer hosted by vCenter 6.0 or newer and your ESXi hosts don't have CA signed certificates installed, you must add a certificate exception in your browser for Commander to open a direct console connection. See the Configuring Internet Explorer 11 for Direct Console Connections knowledge base article for setup information.
- VMRC console connection method
The VMRC method requires VMware Remote Console 7.0 to be installed. VMRC is a standalone Windows-only application supported on vCenter.
Note: The VM Access Proxy doesn't support the VMRC application.
Additional prerequisites for opening console sessions on vCenter
In addition to the requirements listed above that are specific to the WebMKS and VMRC console connection methods, note the following prerequisites:
- For direct (non-proxied) console connections, there must be a route between the initiating user's computer and vCenter.
- Depending on the web browser used, you may have to enable compatibility mode for connections to vCenter. Note that you can't use compatibility mode.
Configuring console credentials for connections to vCenter VMs
Console credentials are configured at the cloud account level. These credentials are used for both direct (non-proxied) and secure (proxied) connections.
By default, when users open a console to a vCenter VM, they're automatically signed in to vSphere using the credentials of the cloud account. Allowing users to open a console on a VM means that they can carry out any command on the VM that's allowed by the cloud account credentials.
Optionally, you can change the default sign in process used so that:
- users are prompted for credentials. With this option, users are presented with a credentials dialog within Commander or the Service Portal.
- users are automatically signed in to vSphere using a set of credentials that you specify. This option allows you to enhance security in a VM console session. Controlling the user's credentials allows you fine-grained control over what actions they can perform on the VM.
To configure prompting users for credentials:
- From the Inventory tree, select a cloud account.
- On the Summary page, select Actions > Configure Console Credentials.
- In the Configure Console Credentials dialog, select Prompt for credentials for Commander and/or Service Portal users.
- Click OK.
To configure automatic sign in with specific credentials:
- Set up one or two reduced-privilege accounts in the cloud account. You can specify separate accounts to be used by Commander users and Service Portal users.
- From the Inventory tree, select the cloud account.
- On the Summary page, select Actions > Configure Console Credentials.
- In the Configure Console Credentials dialog, select Use these credentials for Commander users and/or Service Portal users.
- Enter the user name and password for the account(s) you set up in the cloud account.
- Click Test Credentials to ensure that these credentials can access the cloud account.
- Click OK.
Note: If you don't have Administrator access rights for the cloud account, you can view console credentials by right-clicking a cloud account and select View Console Credentials.
Opening connections to vCenter VMs
To open a connection to a vCenter VM:
- From the Inventory tree, select a cloud account.
Note: You can also select a VM from the list on the Virtual Machines tab.
- On the Summary page, select Actions > Open Connection, then choose the connection type:
- Open Console: Open a VM console. See Additional prerequisites for opening console sessions on vCenter above.
- Open Secure Console: Open a VM console using the VM Access Proxy, in your browser.
Note:).
- Open RDP Session: Opens an.
Viewing VM consoles with screenshots
For vCenter, you can take a look, through a screenshot, at what's happening on a VM console without having to RDP into the VM.
To view a VM console through a screenshot:
- From the Inventory tree, select a powered-on VM.
- Select a powered-on VM in the tree.
- On the Summary page, select Actions > Configuration Management > View Console Screenshot.
- If a screenshot was taken previously, that screenshot is displayed on the screen.
- If a screenshot hasn't been taken yet, or if you want to update the screenshot, click Update Screenshot.
- Click Close to exit the screenshot. | https://docs.embotics.com/commander/open_vm_console.htm | 2020-08-03T14:45:47 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.embotics.com |
TOPICS×
Create and Assign Enablement Resources
Add an Enablement Resource
To add an enablement resource to the new community site:
- On the author instance
- For example,
- Select the community site to which enablement resources are being added
- Select Enablement Tutorial
- From the menu, select Create
- Select Resource
Basic Info
- Select Next
Add Content).
- select Next
Settings
-.
- Select Next
Assignments
-_4<<
- Select Create
.
Publish the Resource
Before Enrollees are able to see the assigned Resourse, it must be published:
- Select the world Publish icon
Activation is confirmed with a success message:
Add a Second Enablement Resource
Repeat the steps above to create and publish a second related enablement resource from which a learning path will be created.
Publish the second Resource.
Return to the Enablement Tutorial listing of it's Resources.
Hint: if both Resources are not visible, refresh the page.
Add a Learning PathNote: Only published Resources will be selectable.
You can only select the resources available at the same level as the learning path. For example, for a learning path created in a group only the group level resources are available; for a learning path created in a community site the resources in that site are available for adding to the learning path.
- Select Submit .
- Select Next
-. | https://docs.adobe.com/content/help/en/experience-manager-64/communities/introduction/resource.html | 2020-08-03T15:59:08 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-201.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-203.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-204.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-205.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-207.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-208.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-209.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-210.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-211.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-212.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-213.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-214.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-215.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-216.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-64.en/help/communities/assets/chlimage_1-217.png',
None], dtype=object) ] | docs.adobe.com |
DetectStackSetDrift
Detect returns the
OperationId of the stack set
drift detection operation. Use this operation id with
DescribeStackSetOperation
to monitor the progress of the drift
detection operation. The drift detection operation may take some time, depending on
the
number of stack instances included in the stack set, as well as the number of resources
included in each stack.
Once the operation has completed, use the following actions to return drift information:
Use
DescribeStackSetto return detailed informaiton about the stack set, including detailed information about the last completed drift operation performed on the stack set. (Information about drift operations that are in progress is not included.)
Use
ListStackInstancesto return a list of stack instances belonging to the stack set, including the drift status and last drift time checked of each instance.
Use
DescribeStackInstanceto return detailed information about a specific stack instance, including its drift status and last drift time checked.
For more information on performing a drift detection operation on a stack set, see Detecting Unmanaged Changes in Stack Sets.
You can only run a single drift detection operation on a given stack set at one time.
To stop a drift detection stack set operation, use
StopStackSetOperation
.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- OperationId
The ID of the stack set operation.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern:
[a-zA-Z0-9][-a-zA-Z0-9]*
Required: No
- OperationPreferences
The user-specified preferences for how AWS CloudFormation performs a stack set operation.
For more information on maximum concurrent accounts and failure tolerance, see Stack set operation options.
Type: StackSetOperationPreferences object
Required: No
- StackSetName
The name of the stack set on which to perform the drift detection operation.
Type: String
Pattern:
[a-zA-Z][-a-zA-Z0-9]*(?::[a-zA-Z0-9]{8}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{12})?
Required: Yes
Response Elements
The following element is returned by the service.
- OperationId
The ID of the drift detection stack set operation.
you can use this operation id with
DescribeStackSetOperationto monitor the progress of the drift detectionInProgress
Another operation is currently in progress for this stack set. Only one operation can be performed for a stack set at a given time.
HTTP Status Code: 409
- StackSetNotFound
The specified stack set doesn't exist.
HTTP Status Code: 404
Example
DetectStackSetDrift
Sample Request ?Action=DetectStackSetDrift &Version=2010-05-15 &StackSetName=stack-set-example &OperationId=9cc082fa-df4c-45cd-b9a8-7e56example &X-Amz-Algorithm=AWS4-HMAC-SHA256 &X-Amz-Credential=[Access key ID and scope] &X-Amz-Date=20191203T195756Z &X-Amz-SignedHeaders=content-type;host &X-Amz-Signature=[Signature]
Sample Response
<DetectStackSetDriftResponse xmlns=""> <DetectStackSetDriftResult> <OperationId>9cc082fa-df4c-45cd-b9a8-7e56example</OperationId> </DetectStackSetDriftResult> <ResponseMetadata> <RequestId>38309f0a-d5f5-4330-b6ca-8eb1example</RequestId> </ResponseMetadata> </DetectStackSetDriftResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DetectStackSetDrift.html | 2020-08-03T15:19:26 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.aws.amazon.com |
Expression field
- Enter test data into the Test field
- The contents of the Test field will be color coded to indicate any matches with the pattern you have specified. The Results list will also display all matches found, selecting a match will highlight the matching text in the Test URL's button automatically load all URL's in the current link map into the test data field | https://docs.cyotek.com/cyowcopy/1.6/regularexpressioneditor.html | 2020-08-03T15:18:31 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['images/regexedit_thumb.png', None], dtype=object)] | docs.cyotek.com |
Using the panning tool
The panning tool is useful if your preview item is so large that an entire page cannot fit in the viewer.
Select the panning icon
at the bottom of the page. Your cursor changes to crosshairs.
Select and hold the crosshairs on the item in preview, and drag the item up or down to scroll though it.
To scroll through entire pages, use the page through tool at the bottom
, next the panning tool. | https://docs.imanage.com/work-web-help/10.2.5/en-US/Using_the_panning_tool.html | 2020-08-03T14:58:47 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imanage.com |
Working with documents in Share
You can download documents from a Share folder or delete them.
Highlight a document and select the kebab menu
to perform the following tasks:
Details: Opens a preview of the document.
Download
Tags: Add a descriptive label to the document.
New Version
Share Link: Generates an HTML link which you can copy and paste elsewhere.
Rename
Move to Trash
View Activity: View activity on the document, such as uploads, downloads, views, and so on.
Save to iManage Work: Save the document into iManage as a new version or a new document. | https://docs.imanage.com/work-web-help/10.2.5/en-US/Working_with_documents_in_Share.html | 2020-08-03T15:37:05 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imanage.com |
Set a sound to use for a hitpeds condition or the insidetrigger / outsidetrigger conditions.
Context
This command should be called between calls to AddCondition and CloseCondition.
Syntax
hitpeds Condition
SetCondSound( sound );
Game.SetCondSound( sound )
- sound: The name of the daSoundResourceData to play.
insidetrigger / outsidetrigger Conditions
SetCondSound( sound, event, [time_between, start_delay] );
Game.SetCondSound( sound, event, [time_between, start_delay] )
- sound: The name of the daSoundResourceData to play.
- event: The event to play the sound on.
- enter_trigger: Play the sound when entering the trigger.
- inside_trigger: Play the sound when inside the trigger
- Only for "insidetrigger" conditions.
- outside_trigger: Play the sound when outside the trigger
- Only for "outsidetrigger" conditions.
- exit_trigger: Play the sound when exiting the trigger.
- time_between: The time between each time the sound plays in second
- Only for "inside_trigger" or "outside_trigger" events.
- start_delay: The delay before the sound starts playing after entering/exiting the trigger in seconds
- Only for "inside_trigger" or "outside_trigger" events.
Examples
hitpeds Condition
AddCondition("hitpeds"); AddObjSound("generic_car_explode"); CloseCondition();
Game.AddCondition("hitpeds") Game.AddObjSound("generic_car_explode") Game.CloseCondition()
insidetrigger / outsidetrigger Conditions
AddCondition("insidetrigger"); SetCondTrigger("z2phone1"); SetCondSound("enter_trigger","gag_alm2"); SetCondSound("inside_trigger","countdown_beeps",1,5); SetCondSound("exit_trigger","P_HitByC_Mrg_01"); CloseCondition();
Game.AddCondition("insidetrigger") Game.SetCondTrigger("z2phone1") Game.SetCondSound("enter_trigger","gag_alm2") Game.SetCondSound("inside_trigger","countdown_beeps",1,5) Game.SetCondSound("exit_trigger","P_HitByC_Mrg_01") Game.CloseCondition()
Notes
No additional notes.
Version History
1.18
Added this command. | http://docs.donutteam.com/docs/lucasmodlauncher/hacks/asf/mfk-commands/setcondsound | 2020-08-03T15:50:20 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.donutteam.com |
Customize your profile
You can customize your Insight Platform profile to suit your needs.
Establish a Personalized Session Timeout
If a Platform Administrator at your company enables Personalized Session Timeout, you can establish your own idle session timeout by selecting one of the available options in the Idle Session Timeout section of your Profile Settings.
Select a timeout option to override the Default Session Timeout. If you don’t make a new selection, the default still applies.
Change your user display name
- Log in to the Insight Platform.
- On the top right, click the User Profile menu to display the options.
- Click the Profile Settings link.
- Change your first and last name as desired.
- Click the Save button to save your updated name.
Set a landing page
If you have multiple organizations, products, or want quick access to the Customer Portal, you can set a preferred landing page to see when logging into the Insight platform.
To set your preferred landing page:
- Log in to the Insight Platform.
- On the top right, click User Profile menu to display the options.
- Click on the Profile Settings link.
- Click the Default Landing Page to see all available pages.
- Select your desired landing page.
- Click the Save button.
When you next log in to the Insight Platform, it will redirect to your set landing page.
Did this page help you? | https://docs.rapid7.com/insight/profile-settings/ | 2020-08-03T15:58:40 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['/areas/docs/_repos//product-documentation__master/219cac3b8113f35fe0fea66a9c8d532502b33c46//insight/images/insightplatform_profilesettings_idlesessiontimeout.png',
'Idle Session Timeout options on the Profile Settings page'],
dtype=object) ] | docs.rapid7.com |
Add a new map
Users can create their own maps, and run CARLA using these. The creation of the map object is quite independent from CARLA. Nonetheless, the process to ingest it has been refined to be automatic. Thus, the new map can be used in CARLA almost out-of-the-box.
- Introduction
- Create a map with RoadRunner
- Map ingestion in a CARLA package
- Map ingestion in a build from source
- Deprecated ways to import a map
Introduction
RoadRunner is the recommended software to create a map due to its simplicity. Some basic steps on how to do it are provided in the next section. The resulting map should consist of a
.fbx and a
.xodr with the mesh and road network informtion respectively.
The process of the map ingestion has been simplified to minimize the users' intervention. For said reason, there are certains steps have been automatized.
- Package
.jsonfile and folder structure. Normally packages need a certain folder structure and a
.jsonfile describing them to be imported. However, as regards the map ingestion, this can be created automatically during the process.
- Traffic signs and traffic lights. The simulator will generate the traffic lights, stops, and yields automatically when running. These will be creatd according to their
.xodrdefinition. The rest of landmarks present in the road map will not be physically on scene, but they can be queried using the API.
- Pedestrian navigation. The ingestion will generate a
.binfile describing the pedestrian navigation. It is based on the sidewalks and crosswalks that appear in the OpenDRIVE map. This can only be modified if working in a build from source.
Important
If a map contains additional elements besides the
.fbx and
.xodr, the package has to be prepared manually.
The map ingestion process differs, depending if the package is destined to be in a CARLA package (e.g., 0.9.9) or a build from source.
There are other ways to import a map into CARLA, which are now deprecated. They require the user to manually set the map ready. Nonetheless, as they may be useful for specific cases when the user wants to customize a specific setting, they are listed in the last section of this tutorial.
Create a map with RoadRunner
RoadRunner is an accessible and powerful software from Vector Zero to create 3D scenes. There is a trial version available at their site, and an installation guide.
The process is quite straightforward, but there are some things to take into account.
- Center the map in (0,0).
- Create the map definition. Take a look at the official tutorials.
- Check the map validation. Take a close look at all connections and geometries.
Once the map is ready, click on the
OpenDRIVE Preview Tool button to visualize the OpenDRIVE road network. Give one last check to everything. Once the map is exported, it cannot be modified.
Note
OpenDrive Preview Tool makes it easier to test the integrity of the map. If there is any error with junctions, click on
Maneuver Tool, and
Rebuild Maneuver Roads.
Export from RoadRunner
1. Export the scene using the CARLA option.
File/Export/CARLA(.fbx+.xml+.xodr)
2. Leave
Export individual Tiles unchecked. This will generate only one .fbx with all the pieces. It makes easier to keep track of the map.
3. Click
Export.
This will generate a
mapname.fbx and
mapname.xodr files within others. There is more detailed information about how to export to CARLA in VectorZero's documentation.
Warning
Make sure that the .xodr and the .fbx files have the same name.
Map ingestion in a CARLA package
This is the recommended method to import a map into a CARLA package. It will run a Docker image of Unreal Engine to import the files, and export them as a standalone package. The Docker image takes 4h and 400GB to be built. However, this is only needed the first time.
1. Build a Docker image of Unreal Engine. Follow these instructions to build the image.
2. Change permissions on the input folder. If no
.json file is provided, the Docker will try to create it on the input folder. To be successful, said folder must have all permissions enabled for others.
#Go to the parent folder, where the input folder is contained chmod 777 input_folder
Note
This is not necessary if the package is prepared manually, and contains a
.json file.
2. Run the script to cook the map. In the folder
~/carla/Util/Docker there is a script that connects with the Docker image previously created, and makes the ingestion automatically. It only needs the path for the input and output files, and the name of the package to be ingested. If no
.json is provided, the name must be
map_package.
python docker_tools.py --input ~/path_to_input_folder --output ~/path_to_output_folder --packages map_package
Warning
If the argument
--package <package_name> is not provided, the Docker will make a package of CARLA.
3. Locate the package. The Docker should have generated the package
map_package
5. Change the name of the package folder. Two packages cannot have the same name in CARLA. Go to
Content and find the package. Change the name if necessary, to use one that identifies it.
Map ingestion in a build from source
This is method is meant to be used if working with the source version of CARLA. Place the maps to be imported in the
Import folder. The script will make the ingestion, but the pedestrian navigation will have to be generated after that. Make sure that the name of the
.xodr and
.fbx files are the same for each of the maps being imported. Otherwise, the script will not recognize them as a map.
There are two parameters to be set.
- Name of the package. By default, the script ingest the map or maps in a package named
map_package. This could lead to error the second time an ingestion is made, as two packages cannot have the same name. It is highly recommended to change the name of the package.
ARGS="--package package_name"
- Usage of CARLA materials. By default, the maps imported will use CARLA materials, but this can be changed using a flag.
ARGS="--no-carla-materials"
Check that there is an
.fbx and a
.xodr for each map in the
Import folder, and make the ingestion.
make import ARGS="--package package_name --no-carla-materials"
After the ingestion, only the pedestrian navigation is yet to be generated. However there is an optional step that can be done before that.
- Create new spawning points. Place them a over the road, around 0.5/1m so the wheels do not collide with the ground. These will be used in scripts such as
spawn_npc.py.
Generate pedestrian navigation
The pedestrian navigation is managed using a
.bin. However, before generating it, there are two things to be done.
- Add crosswalk meshes. Crosswalks defined inside the
.xodrremain in the logic of the map, but are not visible. For each of them, create a plane mesh that extends a bit over both sidewalks connected. Place it overlapping the ground, and disable its physics and rendering.
Note
To generate new crosswalks, change the name of the mesh to
Road_Crosswalk. Avoid doing so if the crosswalk is in the
.xodr. Otherwise, it will be duplicated.
- Customize the map. In is common to modify the map after the ingestion. Props such as trees, streetlights or grass zones are added, probably interfering with the pedestrian navigation. Make sure to have the desired result before generating the pedestrian navigation. Otherwise, it will have to be generated again.
Now that the version of the map is final, it is time to generate the pedestrian navigation file.
1. Select the Skybox object and add a tag
NoExport to it. Otherwise, the map will not be exported, as the size would be too big.
2. Check the name of the meshes. By default, pedestrians will be able to walk over sidewalks, crosswalks, and grass (with minor influence over the rest).
- Sidewalk =
Road_Sidewalk.
- Crosswalk =
Road_Crosswalk.
- Grass =
Road_Grass.
3. Name these planes following the common format
Road_Crosswalk_mapname.
4. Press
G to deselect everything, and export the map.
File > Export CARLA.... A
map_file.obj file will be created in
Unreal/CarlaUE4/Saved.
5. Move the
map_file.obj and the
map_file.xodr to
Util/DockerUtils/dist.
6. Run the following command to generate the navigation file.
- Windows
build.bat map_file # map_file has no extension
- Linux
./build.sh map_file # map_file has no extension
7. Move the
.bin into the
Nav folder of the package that contains the map.
Deprecated ways to import a map
There are other ways to import a map used in previous CARLA releases. These required to manually cook the map and prepare everything, so they are now deprecated. However, they are explained below in case they are needed.
Prepare the package manually
A package needs to follow a certain folder structure and contain a
.json file describing it. This steps can be saved under certains circumstances, but doing it manually will always work.
Read how to prepare the folder structure and .json file
Create the folder structure
1. Create a folder inside
carla/Import. The name of the folder is not relevant.
2. Create different subfolders for each map to import.
3. Move the files of each map to the corresponding subfolder. A subfolder will contain a specific set of elements.
- The mesh of the map in a
.fbx.
- The OpenDRIVE definition in a
.xodr.
- Optionally, the textures required by the asset.
For instance, an
Import folder with one package containing two maps should have a structure similar to the one below.
Import │ └── Package01 ├── Package01.json ├── Map01 │ ├── Asphalt1_Diff.jpg │ ├── Asphalt1_Norm.jpg │ ├── Asphalt1_Spec.jpg │ ├── Grass1_Diff.jpg │ ├── Grass1_Norm.jpg │ ├── Grass1_Spec.jpg │ ├── LaneMarking1_Diff.jpg │ ├── LaneMarking1_Norm.jpg │ ├── LaneMarking1_Spec.jpg │ ├── Map01.fbx │ └── Map01.xodr └── Map02 └── Map need the following parameters.
- name of the map. This must be the same as the
.fbxand
.xodrfiles.
- source path to the
.fbx.
- use_carla_materials. If True, the map will use CARLA materials. Otherwise, it will use RoadRunner materials.
- xodr Path to the
.xodr.
Props are not part of this tutorial. The field will be left empty. There is another tutorial on how to add new props.
In the end, the
.json should look similar to the one below.
{ "maps": [ { "name": "Map01", "source": "./Map01/Map01.fbx", "use_carla_materials": true, "xodr": "./Map01/Map01.xodr" }, { "name": "Map02", "source": "./Map02/Map02.fbx", "use_carla_materials": false, "xodr": "./Map02/Map02.xodr" } ], "props": [ ] }
RoadRunner plugin import
This software provides specific plugins for CARLA. Get those and follow some simple steps to get the map.
Read RoadRunner plugin import guide
Warning
These importing tutorials are deprecated. There are new ways to ingest a map to simplify the process.
Plugin installation
These plugins will set everything ready to be used in CARLA. It makes the import process more simple.
1. Locate the plugins in RoadRunner's installation folder
/usr/bin/VectorZero/Tools/Unreal/Plugins.
2. Copy those folders to the CarlaUE4 plugins directory
/carla/Unreal/CarlaUE4/Plugins/.
3. Rebuild the plugin following the instructions below.
a) Rebuild on Windows.
- Right-click the
.uprojectfile and
Generate Visual Studio project files.
- Open the project and build the plugins.
b) Rebuild on Linux.
- Run the following command.
> UE4_ROOT/GenerateProjectFiles.sh -project="carla/Unreal/CarlaUE4/CarlaUE4.uproject" -game -engine
4. Restart Unreal Engine. Make sure the checkbox is on for both plugins
Edit > Plugins.
Import map
1. Import the mapname.fbx file to a new folder under
/Content/Carla/Maps with the
Import button.
2. Set
Scene > Hierarchy Type to Create One Blueprint Asset (selected by default).
3. Set
Static Meshes > Normal Import Method to Import Normals.
4. Click
Import.
5. Save the current level
File > Save Current As... > mapname.
The new map should now appear next to the others in the Unreal Engine Content Browser.
Note
The tags for semantic segmentation will be assigned by the name of the asset. And the asset moved to the corresponding folder in
Content/Carla/PackageName/Static. To change these, move them manually after imported.
Manual import
This process requires to go through all the process manually. From importing .fbx and .xodr to setting the static meshes.
Read manual import guide
Warning
These importing tutorials are deprecated. There are new ways to ingest a map to simplify the process.
This is the generic way to import maps into Unreal Engine using any .fbx and .xodr files. As there is no plugin to ease the process, there are many settings to be done before the map is available in CARLA.
1. Create a new level with the Map name in Unreal
Add New > Level under
Content/Carla/Maps.
2. Copy the illumination folder and its content from the BaseMap
Content/Carla/Maps/BaseMap, and paste it in the new level. Otherwise, the map will be in the dark.
Import binaries
1. Import the mapname.fbx file to a new folder under
/Content/Carla/Maps with the
Import button. Make sure the following options are unchecked.
- Auto Generate Collision
- Combine Meshes
- Force Front xAxis
- Normal Import Method - To import normals
2. Check the following options.
- Convert Scene Unit
- To import materials and textures.
- Material Import Method - To create new materials
- Import Textures
3. Check that the static meshes have appeared in the chosen folder.
4. Drag the meshes into the level.
5. Center the meshes at point (0,0,0) when Unreal finishes loading.
6. Generate collisions. Otherwise, pedestrians and vehicles will fall into the abyss.
- Select the meshes meant to have colliders.
- Right-click
Asset Actions > Bulk Edit via Property Matrix....
- Change
Collision complexityfrom
Project Defaultto
Use Complex Collision As Simple.
- Go to
File > Save All.
7. Move the static meshes from
Content/Carla/Maps/mapfolder to the corresponding
Carla/Static subsequent folder. This will be meaningful for the semantic segmentation ground truth. ├── RoadLines | └── mapname | └── Static Meshes └── Sidewalks └── mapname └── Static Meshes
Import OpenDRIVE files
1. Copy the
.xodr file inside the
Content/Carla/Maps/OpenDrive folder.
2. Open the Unreal level. Drag the Open Drive Actor inside the level. It will read the level's name. Search the Opendrive file with the same name and load it.
Set traffic and pedestrian behaviour
This software provides specific plugins for CARLA. Get those and follow some simple steps to get the map.
Read traffic and pedestrian setting guide
Warning
These importing tutorials are deprecated. There are new ways to ingest a map to simplify the process.
Set traffic behavior
Once everything is loaded into the level, it is time to create traffic behavior.
1. Click on the Open Drive Actor.
2. Check the following boxes in the same order.
- Add Spawners.
- (Optional for more spawn points) On Intersections.
- Generate Routes.
This will generate a series of RoutePlanner and VehicleSpawnPoint actors. These are used for vehicle spawning and navigation.
Traffic lights and signs
Traffic lights and signs must be placed all over the map.
1. Drag traffic light/sign actors into the level and place them.
2. Adjust the
trigger volume for each of them. This will determine their area of influence.
3. In junctions, drag a traffic light group actor into the level. Assign to it all the traffic lights involved and configure their timing. Make sure to understand how do traffic lights work.
4. Test traffic light timing and traffic trigger volumes. This may need trial and error to fit perfectly.
Example: Traffic Signs, Traffic lights and Turn based stop.
Add pedestrian navigation
In order to prepare the map for pedestrian navigation, there are some settings to be done before exporting it.
1. Select the Skybox object and add a tag
NoExport to it. Otherwise, the map will not be exported, as the size would be too big. Any geometry that is not involved or interfering in the pedestrian navigation can be tagged also as
NoExport.
2. Check the name of the meshes. By default, pedestrians will be able to walk over sidewalks, crosswalks, and grass (with minor influence over the rest).
3. Crosswalks have to be manually created. For each of them, create a plane mesh that extends a bit over both sidewalks connected. Place it overlapping the ground, and disable its physics and rendering.
4. Name these planes following the common format
Road_Crosswalk_mapname.
5. Press
G to deselect everything, and export the map.
File > Export CARLA....
6. Run RecastDemo
./RecastDemo.
Solo Meshfrom the
Sampleparameter's box.
- Select the mapname.obj file from the
Input Meshparameter's box.
7. Click on the
Build button.
8. Once the build has finished, click on the
Save button.
9. Change the filename of the binary file generated at
RecastDemo/Bin to
mapname.bin.
10. Drag the mapname.bin file into the
Nav folder under
Content/Carla/Maps.
That comprises the process to create and import a new map into CARLA. If during the process any doubts arise, feel free to post these in the forum. | https://carla.readthedocs.io/en/latest/tuto_A_add_map/ | 2020-08-03T14:35:26 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['../img/check_geometry.jpg', 'CheckGeometry'], dtype=object)
array(['../img/check_open.jpg', 'checkopen'], dtype=object)
array(['../img/ue_crosswalks.jpg', 'ue_crosswalks'], dtype=object)
array(['../img/ue_noexport.jpg', 'ue_skybox_no_export'], dtype=object)
array(['../img/ue_meshes.jpg', 'ue_meshes'], dtype=object)
array(['../img/rr-ue4_plugins.jpg', 'rr_ue_plugins'], dtype=object)
array(['../img/ue_import_mapname.jpg', 'ue_import'], dtype=object)
array(['../img/ue_import_options.jpg', 'ue_import_options'], dtype=object)
array(['../img/ue_level_content.jpg', 'ue_level_content'], dtype=object)
array(['../img/ue_illumination.jpg', 'ue_illumination'], dtype=object)
array(['../img/ue_import_file.jpg', 'ue_import_file'], dtype=object)
array(['../img/ue_drag_meshes.jpg', 'ue_meshes'], dtype=object)
array(['../img/transform.jpg', 'Transform_Map'], dtype=object)
array(['../img/ue_selectmesh_collision.jpg', 'ue_selectmesh_collision'],
dtype=object)
array(['../img/ue_collision_complexity.jpg', 'ue_collision_complexity'],
dtype=object)
array(['../img/ue_ssgt.jpg', 'ue__semantic_segmentation'], dtype=object)
array(['../img/ue_opendrive_actor.jpg', 'ue_opendrive_actor'],
dtype=object)
array(['../img/ue_trafficlight.jpg', 'ue_trafficlight'], dtype=object)
array(['../img/ue_tl_group.jpg', 'ue_tl_group'], dtype=object)
array(['../img/ue_tlsigns_example.jpg', 'ue_tlsigns_example'],
dtype=object)
array(['../img/ue_noexport.jpg', 'ue_skybox_no_export'], dtype=object)
array(['../img/ue_meshes.jpg', 'ue_meshes'], dtype=object)
array(['../img/ue_crosswalks.jpg', 'ue_crosswalks'], dtype=object)] | carla.readthedocs.io |
Auction Settings - Bid Sniping
What is Bid Sniping?
Kindly go through initial paragraph of this article.
How to prevent it?
Since user who snipes at the last moment of auction getting expired, our plugin provides a configuration to admin where at certain time before the auction expires, auction gets extended by a defined time interval specified by admin. What this does is that it then gives opportunity to all the bidders to place their bids again if they want to.
Plugin provides two set of text fields:
- Mention time left for auction to close: HOURS MINUTES
As the parameter name says, this is the time which is left for auction to close.
- Extend auction by following time: HOURS MINUTES
| https://docs.auctionplugin.net/article/83-auction-settings-bid-sniping | 2020-08-03T15:21:24 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/58806643dd8c8e484b24e64f/images/5cee277f0428632d9eec0aa4/file-r2jAnTV7k5.png',
None], dtype=object) ] | docs.auctionplugin.net |
Release notes and notices
This section provides information about what is new or changed in this space, including documentation updates.
This section provides information about what is new or changed with the deployment of components that make up the TrueSight IT Data Analytics product, including urgent issues, documentation updates, feature packs, service packs and fix packs.
Known and corrected issues
Planning
Installing
Upgrading
License entitlements
Tip
To stay informed of changes to this space, place a watch on this page.
Ready-made PDFs are available on the PDFs | https://docs.bmc.com/docs/ITDA/113/release-notes-and-notices-766663724.html | 2020-08-03T15:47:57 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.bmc.com |
To send an iManage Work document as an email attachment:
Ensure that your edits are saved to iManage Work before emailing the link.
On the iManage tab, select Send Link. Alternatively, select Send Link in the iManage group on the Home tab.
This launches a new email in Outlook with a link to the document attached as:
NRL: this opens the document directly in the relevant Office application.
URL: this opens the document in iManage Work.
Smart document check
Documents sent through iManage Work by using either the Attach File option in Outlook or File Share from Office applications include intelligent information. When the document is returned to the recipient, the recipient is prompted to save it as a New Version in its original location. | https://docs.imanage.com/work-help/10.2.6/en/Emailing_a_document.html | 2020-08-03T14:51:24 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.imanage.com |
Tree View as Coming Soon
This release of
Figure: Tree view
The tree view is available in the following views:
iManage Work Web in desktop view.
iManage Work Panel in Microsoft Outlook when the panel is docked and undocked.
In iManage Work Panel in Microsoft Outlook, when the panel is docked but the width is insufficient, the tree is contained within the Browse navigation tab.
Figure: Tree view in iManage Work panel
The tree view is not enabled by default. To enable the tree view:
Navigate to the Coming Soon icon adjacent to the user menu.
Select the coming soon icon, the following dialog box appears.
Figure: Enabling tree view
Select Enable.
In the tree view, users can:
Access their workspaces under Recent Matters and My Matters in a single flat list that is sorted alphabetically
Access the Recent matters and My matters of another user's Matter list they are subscribed to
Dive deeper into a workspace in the same view to see the full folder hierarchy at a glance
Expand the tree horizontally to increase the width of the panel to fully view folders several levels deep, or hover over a workspace to see a tooltip displaying its full name
View up to 50 subfolders upon initial expansion of the parent folder. Select View All at the bottom of the tree to view the entire set of subfolders in the center panel
Select an item on the tree to see its content in the center panel. The focus also shifts automatically to select the first item in the list or grid view in the container in the center panel
Receive an indication if a folder or a workspace has sub-folders through the presence of the chevron
against the container (Under My Matters a chevron is always shown at the workspace level)
Access newly created subfolders under a folder or workspace, by right-clicking on the item and select refresh from the context menu
If iManage Share is enabled, select a blue Share folder in the tree view to open up the Share folder within the same tab
Use the following keyboard shortcuts to quickly navigate within the tree and back to it from the center panel:
After the tree view is enabled, the filter panel is relocated to the header below the Filter icon and the tree view is positioned in the left panel where the filters were available earlier.
| https://docs.imanage.com/work-web-help/10.2.5/en-US/Tree_View_as_Coming_Soon.html | 2020-08-03T15:29:56 | CC-MAIN-2020-34 | 1596439735812.88 | [array(['images/download/attachments/65989090/image2019-12-5_15-22-30.png',
'images/download/attachments/65989090/image2019-12-5_15-22-30.png'],
dtype=object)
array(['images/download/attachments/65989090/image2019-12-5_15-20-3.png',
'images/download/attachments/65989090/image2019-12-5_15-20-3.png'],
dtype=object)
array(['images/download/attachments/65989090/tree_view_filters.png',
'images/download/attachments/65989090/tree_view_filters.png'],
dtype=object) ] | docs.imanage.com |
A “tag” is a label that can be added to a CADF Event Record to qualify or categorize an event.
Tags provide a powerful mechanism for adding domain-specific identifiers and classifications to CADF Event Records that can be referenced by the CADF Query Interface. This allows customers to construct custom reports or views on the event data held by a provider for a specific domain of interest. A CADF Event Record can have multiple tags that enable cross-domain analysis. | https://docs.openstack.org/pycadf/latest/specification/tags.html | 2020-08-03T15:30:37 | CC-MAIN-2020-34 | 1596439735812.88 | [] | docs.openstack.org |
.
In addition, this guide provides an overview of the global navigation and information about the controls, fields, and options available throughout the Explore Admin UI.
After you have deployed your Explore appliance, see the Explore the ExtraHop Explore ExtraHop appliance. For more information, see the Log in and log out of the Admin UI of the Admin UI
The Admin UI on the Explore.
Health
The Health page provides a collection of metrics that enable you check the operation of the Explore appliance. If issues occur with the Explore appliance, the metrics on the Health page help you to troubleshoot the problem and determine why the appliance is not performing as expected.
The following information is collected on the Health page.
- System
- Reports the following information about the system CPU usage and disk drives.
- CPU User
- Specifies the percentage of CPU usage associated with the Explore appliance user
- CPU System
- Specifies the percentage of CPU usage associated with the Explore appliance.
- CPU Idle
- Identifies the CPU idle percentage associated with the Explore appliance.
- CPU IO
- Specifies the percentage of CPU usage associated with the Explore appliance IO functions.
- Service Status
- Reports the status of Explore appliance system services
- exadmin
- Specifies the amount of time the Explore appliance web portal service has been running.
- exconfig
- Specifies the amount of time the Explore appliance config service has been running
- exreceiver
- Specifies the amount of time the Explore appliance receiver service has been running.
- exsearch
- Specifies that amount of time that the Explore appliance search service has been running.
- Interfaces
- Reports the status of Explore appliance network interfaces.
- RX packets
- Specifies the number of packets received by the Explore appliance on the specified interface.
- RX Errors
- Specifies the number of received packet errors on the specified interface.
- RX Drops
- Specifies the number of received packets dropped on the specified interface.
- TX Packets
- Specifies the number of packets transmitted by the Explore appliance on the specified interface.
- TX Errors
- Specifies the number of transmitted packet errors on the specified interface.
- TX Drops
- Specifies the number of transmitted packets dropped on the specified interface.
- RX Bytes
- Specifies the number of bytes received by the Explore appliance on the specified interface.
- TX Bytes
- Specifies the number of bytes transmitted by the Explore appliance on the specified interface.
- Partitions
- Reports the status and usage of Explore appliance components. The configuration settings for these components are stored on disk and retained even when the power to the appliance is turned off.
- Name
- Specifies the Explore appliance settings that are stored on disk.
- Options
- Specifies the read-write options for the settings stored on disk.
- Size
- Specifies the size in gigabytes for the identified component.
- Utilization
- Specifies the amount of memory usage for each of the components as a quantity and as percentage of total disk space.
-
- Displays the number of bytes received from the Discover appliance.
- Record Bytes Saved
- Displays the number of bytes successfully saved to the Explore appliance.
- Records Saved
- Displays the number of records successfully saved to the Explore appliance.
- Record Errors
- Displays the number of individual record transfers that resulted in an error. This value indicates the number of records that did not transfer successfully from the exreceiver process.
- TXN Errors
- Displays the number of bulk record transactions that resulted in an error. Errors in this field might indicate missing records.
- TXN Drops
- Displays the number of bulk records transactions that did not complete successfully. All records in the transaction are missing. Explore appliance. When joining a new Explore node or pairing a new publisher or client with the Explore cluster through this node, make sure that the fingerprint displayed is exactly the same as the fingerprint shown on the join or pairing page.
Explore Cluster Status
The Explore Cluster Status page provides details on the health of the Explore appliance.
Cluster
- Status
- The following status names can appear:
- Ready
- The node is available to join an Explore cluster.
- Green
- All data is replicated across the cluster.
- Yellow
- The primary shard is allocated but replica shards are not.
- Red
- One or more shards from the index are missing.
Cluster Nodes
- Nickname
- Displays the nickname of the Explore node when configured on thepage.
- Host
- Displays the IP address or hostname of the Explore node.
Indices
- Date (UTC)
- Displays the date the index was created.
- ID
- Displays the ID of the index. An ID other than 0 means that an index with the same date, but from a different source exists on the cluster.
- Source
- Displays the hostname or IP address of the Discover appliance where the record data originated.
- Records
- Displays the total number of records sent to the Explore appliance.
- Size
- Displays the size of the index.
- Status
- Displays the replication status of data on the cluster.
- Shards
- Displays the number of shards in the index.
- Unassigned Shards
- Displays the number of shards that have not been assigned to a node. Unassigned shards are typically replica shards that need to be kept on a different node than the node with the corresponding primary shard, but there are not enough nodes in the cluster. For example, a cluster with just one member will not have a place to store the replica shards, so with the default replication level of 1, the index will always have unassigned shards and have a yellow status.
- Relocating Shards
- Displays the number of shards that are moving from one node to another. Relocating shards typically occurs when an Explore node in the cluster fails.
Delete records
In certain circumstances, such as moving an Explore cluster from one network to another, you might want to delete records from the cluster.
You can delete records by index. An index is a collection of records that were created on the same day. Indexes are named according to the following pattern:
..
Network settings
The Network Settings section includes the following configurable network connectivity settings.
- Connectivity
- Configure network connections.
- SSL Certificate
- Generate and upload a self-signed certificate.
- Notifications
- Set up alert notifications through email and SNMP traps.
The Explore appliance has four 10/100/1000baseT network ports and two 10GbE SFP+ network ports. By default, the Gb1 port is configured as the management port and requires an IP address. The Gb2, Gb3 and Gb4 ports are disabled and not configurable.
You can configure either of the 10GbE networks ports as the management port, but you can only have one management port enabled at a time.
Before you begin configuring the network settings on an Explore appliance, verify that a network patch cable connects the Gb1 port on the Explore appliance to the management network. For more information about installing an Explore appliance, refer to the Explore appliance deployment guide or contact ExtraHop Support for assistance.
For specifications, installation guides, and more information about your appliance, refer physical Explore appliance displays the following information about the current interface connections:
- Blue Ethernet Port
- Identifies the management.
Change interface 1
- Go to the Network Settings section and click Connectivity.
- In the Interfaces section, click Interface 1.The Network Settings for Interface 1 page appears with the following editable fields:
- Interface Mode
- The Interface Mode is set to Management Port by default. All management, data and intra-node communications are transmitted through the management port.
- Enable DHCPv4
- DHCP is enabled by default. When you turn on the system, interface 1 attempts to acquire an IP address using DHCP. After the DHCP server assigns an IP address to a physical appliance, the IP address apears on the LCD on to the Admin UI again.
If you are changing from a static IP address to a DHCP-acquired IP address, the changes occur immediately after clicking Save, which results in a loss of connection to the Admin UI web page. After the system acquires an IP address, log on to the Admin UI again.
- IPv4 Address
The Explore appliance provides configuration settings to acquire an IP address automatically or to configure a static IP address manually. The Explore appliance displays the assigned IP address on the LCD at the front of the appliance. If your network does not support DHCP, you can configure a static IP address using the Explore Admin UI.
To configure the IP Address network setting manually, disable DHCP, enter a static IP address, and click Save.
- Netmask
Devices on a local network have unique IP addresses, but this unique address can be thought of as having two parts: The shared network part that is common to all devices on the network, and a unique host part. Both the shared and unique parts of the IP address are used by the TCP/IP stack for routing.
The shared network parts of the address and host parts are determined by the netmask, which looks like this: 255.255.0.0. In this example, the masked part of the network is represented by 255.255, and the unmasked host part is represented by 0.0, where the number of unique device addresses that can be supported on the network is approximately 65,000.
- Gateway
- The network's gateway address is the IP address of the device that is used by other devices on the network to access another network or a public network like the Internet. The address for the gateway is often a router with a public IP.
Change the remaining interfaces
- In the Network Settings section, click Connectivity.
- For each interface that you want to change, click the name for that interface.In the Network Settings page for the interface, select one of the following interface mode options:
- Disabled
- The interface is disabled.
- Management Port
- All management, data, and cluster communications are transmitted through the Management Port.
- Change the settings as needed and virtual disk is in a degraded state.
- A physical disk is in a degraded state.
- A physical disk has an increasing error count.
- A registered Explore node is missing from the cluster. The node might have failed, or it is powered off. password settings
- In the Access Settings section, click Change Password.
- Select the user from the drop-down list.
- Type the new password In the New password field.
- Retype the new password in the Confirm password field.
- Click Save.. Cluster Settings section, click Join Cluster.
- In the Host text box, type the host name or IP address of a node in the Explore cluster and then click Continue.
- Verify the fingerprint that appears matches the fingerprint of the Explore node that you are joining.
- In the Setup Password field, type the password for the setup user.
- Click Join.
Cluster Members
The Explore Cluster Members page connected client.
Data Management
You can configure the replication level of data on the Explore cluster. Additionally, you can enable and disable shard reallocation. You must connect a Discover appliance to the Explore cluster before you can configure replication level and shard reallocation settings.
Replication
You can change the replication level to specify the number of copies of the collected data stored on the cluster. A higher number of copies improves fault tolerance if a node fails and also improves the speed of query results. However, a higher number of copies takes up more disk space and might slow the indexing of the data.
- In the Cluster Settings section, click Data Management.
- Select one of the following replication levels from the Replication Level drop-down list:
- Click Update Replication Level.
Shard reallocation
Data in an Explore cluster is split up into manageable chunks called shards. Shards might need to be created or moved from one node to another, as in the case of a node failure.
Shard reallocation is enabled by default. Prior to updating the firmware or taking the node offline for maintenance (for example, replacing disks, power cycling the appliance, or removing network connectivity between Explore nodes), you should disable shard reallocation by doing the following:
- upgrade firmware on the Explore appliance.
The Explore appliance connects to the Command appliance through a tunneled connection. Tunneled connections are required in network environments where a direct connection from the Command appliance is not possible because of firewalls or other network restrictions.
-.
When you restore the cluster state, the Explore cluster is updated with the latest stored information about the Explore nodes in the cluster and all other connected appliances (Discover and Command appliances).
- In the Explore Cluster Settings section, click Restore Cluster State.
- On the Restore Cluster State page, click Restore Cluster State.
- Click Restore Cluster to confirm. Explore Admin UI provides an interface to halt, shutdown, and restart the Explore appliance components.
- System
- Restart or shut down the Explore appliance.
- Admin
- Restart the Explore appliance administrator component.
- Receiver
- Restart the Explore receiver component.
- Search
- Restart the Explore search service.
For each Explore appliance component, the table includes a time stamp to show the start time..
Disks
The Disks page provides information about the configuration and status of the disks in your Explore appliance. The information displayed on this page varies based on whether you have a physical or virtual appliance.
The following information displays on the page:
- Drive Map
- (Physical only) Provides a visual representation of the front of the Explore appliance.
- RAID Disk Details
- Provides access to detailed information about all the disks in the node.
- Firmware
- Displays information about disks reserved for the Explore appliance firmware.
- Utility (Var)
- Displays information about disks reserved for system files.
- Search
- Displays information about disks reserved for data storage.
- Direct Connected Disks
- Displays information about virtual disks on virtual machine deployments, or USB media in physical appliances.
Thank you for your feedback. Can we contact you to ask follow up questions? | https://docs.extrahop.com/7.0/exa-admin-ui-guide/ | 2018-01-16T13:14:02 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.extrahop.com |
Introduction
What is an Agent App Widget?
Agent App Widgets are web applications loaded inside the LiveChat Agent App. All agents can interact with the widget during chats with customers. The widget itself is displayed in the Agent’s App sidebar:
Sample use cases
There are number of ways you can utilize the widgets:
- embed and display static content, e.g. knowledge base articles, conversation prompts or context information,
- embed your service or web app as a part of agents’ workspace,
- query external services with visitor email or LiveChat group ID (CRM, marketing automation, etc.),
- query LiveChat REST API to do basically anything with the visitor, agent or chat.
Getting started in 5 minutes
- Go to the LiveChat Developers Console.
- Create a new app and follow the app wizard.
- Set up app name, descriptions and icon in Display settings.
- Configure Agent App Widget in the Features tab. If you don’t have a working app at hand, feel free to start with the sample ones:
- iFrame loader, so you can embed any website,
- Visitor preview widget, which displays currently selected visitor data.
- Go to Distribution settings and install the app at your license. You’ll see it in the LiveChat Agent App.
Advanced use
Sample widgets
We’ve prepared two example repositories for your convenience. Both examples show how to receive data from Events and display them within the sidebar.
You can take it from there and use the visitor’s email to query your own service or provide contextual help for the agent based on visitor details.
PHP and Silex
Set up the environment
git clone cd agent-app-sample-extension composer install
Configure your local web server to serve content over HTTPS and turn on the extension.
A basic backend application example written with the use of Silex.
Webpack (JS)
Set up the environment
git clone cd agent-app-sample-extension-webpack npm install
Run the webpack server
npm start
A basic static application example served from Webpack Server.
The content of the extension should be available at.
You can now turn on the extension.
Hosting the widget
You can host your widget locally or on a dedicated server. The hosted content has to be served over HTTPS Protocol. You can use a self-signed certificate for
localhost or upload your widget to an SSL-enabled host. If you go for the Webpack Example, you’ll get the setup out of-the-box.
Developing your own widget
If you want to build your own widget, be sure to include both the LiveChat Boilerplate and JavaScript Widget API:
Place this tag within the
<head></head>section:
<link rel="stylesheet" href="//cdn.livechatinc.com/boilerplate/1.1.css"> <script src="//cdn.livechatinc.com/boilerplate/1.1.js"></script>
After your widget content is loaded, fire the
LiveChat.init() method. It will let the Agent App know when to hide the spinning loader.
Fire
LiveChat.init()method after body is loaded (e.g. using jQuery):
// If you authorize using "Basic authorization flow": $(document).ready(function () { LiveChat.init(); }); // If you authorize using "Sign in with LiveChat": $(document).ready(function () { LiveChat.init({ authorize: false }); });
Layout and Styling
We ship a LiveChat Boilerplate – it’s a lightweight CSS stylesheet to help you lift off with creating the widget interface.
Authorization
If you want to interact with agents data, you have two options.
This way you can leverage safe OAuth2.0 authorization flow. Head to Sign in with LiveChat docs for more information.
Basic authorization flow (deprecated)
1. First, the extension content is requested by the Agent App. A basic HTTPS GET request is sent.
2. Within the body of your extension, you should call the
LiveChat.init();method once the extension is loaded. This will tell the Agent App to start the initialization and hide the spinning loader.
3. In return, the Agent App sends a HTTPS POST request to. Note that this path is non-configurable. Within the body of the post, you’ll find two keys:
{ "login": [email protected], // current LiveChat user email address "api_key": <agent_api_key> // current LiveChat API key }
4. You can now create a custom authorization logic (e.g. request external services, define scopes for your user, etc.)
5. In order to complete the flow, you should respond with a JSON response:
{ "session_id": 12345 // any string or value }
When the basic authorization flow is completed, you can use the
LiveChat.on("event", ... ) method to catch the incoming events.
After a successful initialization, the Agent App should remove the spinner, display the content of your extension and push an
authorizeevent via postMessage.
You can listen to
authorizeand
authorize_errorto catch the result of the authorization flow and, for instance, to display adequate information.
You should now receive events from the Agent App. Check out the JavaScript API events.
JavaScript API
To use the JavaScript API you have to attach the core functionality script.
Initialize the communication
// If you authorize using "Basic authorization flow": LiveChat.init(); // If you authorize using "Sign in with LiveChat": LiveChat.init({ authorize: false });
Let the Agent App know the extension is ready. Once called, the Agent App removes the loader screen from the extension and sends a request to. This mechanism allows you to introduce an authorization flow for your service.
Get the ID of the session
Returns the ID of the current extension session.
LiveChat.getSessionId();
Refresh the session ID
Deletes the ID of the previous session and calls of a new one.
LiveChat.refreshSessionId();
Events
Events allow you react to the actions in the Agent App. Use this method as a listener for certain events.
LiveChat.on("<event_name>", function( data ) { // ... })
Events
customer_profile and
customer_profile_hidden return an object width additional properties.
Customer profile displayed
Sample
dataobject for
customer_profileevent
{ "id": "S126126161.O136OJPO1", "name": "Mary Brown", "email": "[email protected]", "chat": { "id": "NY0U96PIT4", "groupID": "42" }, "source": "chats" }
Customer profile hidden
Sample
dataobject for
customer_profile_hiddenevent
{ "id": "S126126161.O136OJPO1" }
Troubleshooting
There are errors in the console
Check out your browser’s console to see if there are any of the errors listed below.
The loader never stops spinning
Make sure you followed the initiallization flow mentioned in Developing your own extension. If the
LiveChat.init() method is fired correctly, the spinner disappears and the extension becomes visible. | https://docs.livechatinc.com/agent-app-widgets/ | 2018-01-16T13:19:16 | CC-MAIN-2018-05 | 1516084886436.25 | [array(['https://d33wubrfki0l68.cloudfront.net/5f40bf3a3027bf504e932c0e1dd0d81c3c9b2e27/db535/assets/images/agent-app-extension.png',
None], dtype=object)
array(['https://d33wubrfki0l68.cloudfront.net/778f55df1487964bd0b12b92f02a9596099feb11/bc7bf/assets/images/agent-app-sample-extension.png',
None], dtype=object) ] | docs.livechatinc.com |
Defining new commands¶
A pyqi Command is a class that accepts inputs, does some work, and produces outputs. A Command is designed to be interface agnostic, so ideally should not be tied to a filesystem (i.e., it shouldn’t do I/O or take filepaths) though there are some exceptions. Your Command class ultimately defines an API for your Command that can then easily be wrapped in other interface types (for example, a command line interface and/or a web interface) which handle input and output in an interface-specific way. This strategy also facilitates unit testing of your Command (by separating core functionality, which is essential to test, from interfaces, which can be very difficult to test in an automated fashion), parallel processing with your Command, and constructing workflows that chain multiple Commands together. In general, your Command should take structured input (for example, a list of tuples or a numpy array), not a file that needs to be parsed.
This document describes how to create your first pyqi Command.
Stubbing a new command¶
After installing pyqi, you can easily stub (i.e., create templates for) new commands using pyqi make-command. You can get usage information by calling:
pyqi make-command -h
To create our sequence collection summarizer, we can start by stubbing a SequenceCollectionSummarizer class:
pyqi make-command -n SequenceCollectionSummarizer --credits "Greg Caporaso" -o sequence_collection_summarizer.py
If you run this command locally, substituting your own name where applicable, you’ll have a new file called sequence_collection_summarizer.py, which will look roughly like the following:
#!/usr/bin/env python from __future__ import division __credits__ = ["Greg Caporaso"] from pyqi.core.command import (Command, CommandIn, CommandOut, ParameterCollection) class SequenceCollectionSummarizer(Command): BriefDescription = "FILL IN A 1 SENTENCE DESCRIPTION" LongDescription = "GO INTO MORE DETAIL" CommandIns = ParameterCollection([ CommandIn(Name='foo', DataType=str, Description='some required parameter', Required=True), CommandIn(Name='bar', DataType=int, Description='some optional parameter', Required=False, Default=1) ]) CommandOuts = ParameterCollection([ CommandOut(Name="result_1", DataType=str, Description="xyz"), CommandOut(Name="result_2", DataType=str, Description="123"), ]) def run(self, **kwargs): # EXAMPLE: # return {'result_1': kwargs['foo'] * kwargs['bar'], # 'result_2': "Some output bits"} raise NotImplementedError("You must define this method") CommandConstructor = SequenceCollectionSummarizer
Defining a command¶
There are several values that you’ll need to fill in to define your command based on the stub that is created by make-command. The first, which are the easiest, are BriefDescription and LongDescription. BriefDescription should be a one sentence description of your command, and LongDescription should be a more detailed explanation (usually 2-3 sentences). These are used in auto-generated documentation.
Next, you’ll need to define the parameters that your new command can take as input. Each of these parameters will be an instance of the pyqi.core.command.CommandIn class.
Our SequenceCollectionSummarizer command will take one required parameter and one optional parameter. The required parameter will be called seqs, and will be a list (or some other iterable type) of tuples of (sequence identifier, sequence) pairs. For example:
[('sequence1','ACCGTGGACCAA'),('sequence2','TGTGGA'), ...]
We’ll also need to provide a description of this parameter (used in documentation), its type, and indicate that it is required. The final CommandIn definition should look like this:
CommandIn(Name='seqs', DataType=list, Description='sequences to be summarized', Required=True)
The optional parameter will be called suppress_length_summary, and if passed will indicate that we don’t want information on sequence lengths included in our output summary. The Parameter definition in this case should look like this:
CommandIn(Name='suppress_length_summary', DataType=bool, Description='do not generate summary information on the sequence lengths', Required=False, Default=False)
The only additional parameter that is passed here, relative to our seqs parameter, is Default. Because this parameter isn’t required, it’s necessary to give it a default value here. All of the CommandIns should be included in a pyqi.core.command.ParameterCollection object (as in the stubbed file).
Note
There are a few restrictions on what Name can be set to for a Parameter (e.g., a CommandIn or a CommandOut). It must be a valid python identifier (e.g., it cannot contain - characters or begin with a number) so the Command can be called with named options instead of passing a dict. Parameter names also must be unique for a Command.
Next, you’ll need to define the results that the Command generates as output. In this example, our Command will generate three results: the number of sequences, the minimum sequence length, and the maximum sequence length. Each of these results will be an instance of the pyqi.core.command.CommandOut class. We define the name of the result, its type, and a description. The final CommandOuts should look like this:
CommandOut(Name='num_seqs', DataType=int, Description='number of sequences'), CommandOut(Name='min_length', DataType=int, Description='minimum sequence length'), CommandOut(Name='max_length', DataType=int, Description='maximum sequence length')
All of the CommandOuts should be included in a pyqi.core.command.ParameterCollection object (as in the stubbed file).
Next, we’ll need to define what our Command will actually do. This is done in the run method, and all results are returned in a dictionary. The run method for our SequenceCollectionSummarizer object would look like the following:}
In practice, if your Command is more complex than our SequenceCollectionSummarizer (which it probably is), you can define other methods that are called by run. These should likely be private methods.
Note
kwargs is validated prior to run being called, so that any required kwargs that are missing will raise an error, and any optional kwargs that are missing will have their default values filled in. To customize the validation that is performed on kwargs for your Command you should override _validate_kwargs in your Command.
A complete example Command¶
The following illustrates a complete python file defining a new pyqi Command:
#!/usr/bin/env python from __future__ import division __credits__ = ["Greg Caporaso"] from pyqi.core.command import (Command, CommandIn, CommandOut, ParameterCollection) class SequenceCollectionSummarizer(Command): BriefDescription = "Generate summary statistics on a collection of sequences." LongDescription = "Provide the number of sequences, the minimum sequence length, and the maximum sequence length given a collection of sequences. Sequences should be provided as a list (or other iterable object) of tuples of (sequence id, sequence) pairs." CommandIns = ParameterCollection([ CommandIn(Name='seqs', DataType=list, Description='sequences to be summarized', Required=True), CommandIn(Name='suppress_length_summary', DataType=bool, Description='do not generate summary information on the sequence lengths', Required=False, Default=False) ]) CommandOuts = ParameterCollection([ CommandOut(Name='num_seqs', DataType=int, Description='number of sequences'), CommandOut(Name='min_length', DataType=int, Description='minimum sequence length'), CommandOut(Name='max_length', DataType=int, Description='maximum sequence length') ])} CommandConstructor = SequenceCollectionSummarizer
At this stage you have defined a new command and its API. To access the API in the python terminal, you could do the following:
# Import your new class >>> from sequence_collection_summarizer import SequenceCollectionSummarizer # Instantiate it >>> s = SequenceCollectionSummarizer() # Call the command, passing a list of (seq id, sequence) tuples as input. # Note that because the parameters are provided as kwargs, you need to # pass the parameter with a keyword. >>> r = s(seqs=[('sequence1','ACCGTGGACCAA'),('sequence2','TGTGGA')]) # You can now see the full output of the command by inspecting the # result dictionary. >>> r {'max_length': 12, 'min_length': 6, 'num_seqs': 2} # Alternatively, you can access each value independently, as with any dictionary. >>> print r['num_seqs'] 2 >>> print r['min_length'] 6 >>> print r['max_length'] 12 # You can call this command again with different input. # For example, we can call the command again passing the # suppress_length_summary parameter. >>> r = s(seqs=[('sequence1','ACCGTGGACCAA'),('sequence2','TGTGGA')],suppress_length_summary=True) >>> r {'max_length': None, 'min_length': None, 'num_seqs': 2} | http://pyqi.readthedocs.io/en/latest/tutorials/defining_new_commands.html | 2018-01-16T12:56:41 | CC-MAIN-2018-05 | 1516084886436.25 | [] | pyqi.readthedocs.io |
Below are the 1 most popular labels used in Graphene 1.
The bigger the text, the more popular the label. Click on a label to see its associated content.
See also: global popular labels.
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/labels/listlabels-heatmap.action?key=ARQGRA | 2018-01-16T14:39:11 | CC-MAIN-2018-05 | 1516084886436.25 | [] | docs.jboss.org |
It's that time of the year here in the United States when we give thanks for the bountiful harvest and stuff ourselves with turkey while enjoying the pleasure of the company of our friends and family. Please note that in order to allow our teams to enjoy the holiday with their families, we will be providing email-based support on Thursday and Friday of this week. Our European team will continue to provide service in line with normal levels.
Last week, we ran ThousandEyes Connect in San Francisco, and had great sessions by presenters QuantCast and ServiceNow. In 2 weeks we'll be running ThousandEyes Connect at LMHQ in New York City - you can register for the event here.
Ok, now on with the good stuff!
We've implemented the System for Cross-domain Identity Management (SCIM) protocol for use with organizations that have Single Sign-On (SSO) configured. What does this mean for you? It means that you can now configure ThousandEyes in conjunction with an SSO provider to automatically add new users into ThousandEyes when they're added or removed from your provider's system.
You can find more information feature by reviewing this article: ThousandEyes Support for SCIM
ThousandEyes implements the SCIM specification, however in practice it appears that many identity provider implementations of this protocol vary slightly. Our initial implementation will be updated as soon as we have certification from Okta, Microsoft Azure Active Directory (Premium), and OneLogin. If your provider isn't listed here but does support SCIM 2.0, please let us know and we'll be happy to look into the implementation. If you're using one of these Identity Providers and would like to be an early SCIM adopter with ThousandEyes, let us know and we'll be in touch as soon as we have certification from those providers.
In addition to the SCIM implementation, we've added support for ways to configure your SAML 2.0 compliant provider, using explicit, imported XML or dynamic configuration. Check out this article for more information.
We've long supported live sharing of tests, which allows a test to be shared both with other account groups inside your organization, and to other organizations altogether. With this new feature, rather than sharing the test, you can move the test into another account group. This can be useful for keeping your tests organized, attributing cloud usage to a specific account group, or reorganizing your company's ThousandEyes organization to keep it in line with business needs.
Find out more about moving tests between accounts by reviewing the Knowledge Base article Changing ownership of a test.
We've introduced our reporting API. This capability will allow experienced API users to access the aggregate metrics shown in our reporting interface. This feature is currently in beta, and is only exposed via version 6 of the API. For more information on the new reports API, along with other changes, see our developer reference site.
We've moved a few things around on the Test Settings interface. We now have a More Actions menu for each test on the test settings page. You can now duplicate, delete or transfer ownership for a test. We've removed buttons for duplicating and deleting tests. We've also changed how the test settings panels are rendered.
In addition, we've enabled live sharing of DNS+ and BGP tests.
We've recently added another 3 locations to our Cloud Agent list. These new agents can be found in Mississauga, Canada, Markham, Canada and Tunis, Tunisia. Look for more exciting locations as we continue to build out our cloud agent network. The full list can be found here.
We've updated our click-through subscription agreements, found at, and moved the Support Services and Security Policy into a separate click-through agreement, found at.
...and last but not least, we've corrected a number of bugs.
Corrected a problem that caused the Agent to crash in certain circumstances when installed on a RedHat/CentOS 6 server when ipTables is enabled.
We've taken steps to ensure that we are auditing deletion of external emails (non-ThousandEyes users). Prior to this update, deletion of an external email address was not being added to the Activity Log.
Corrected a problem that caused the BrowserBot component to crash while executing certain transaction steps.
Fixed the link shown in the "Views enabled for this test" information panel for the test settings interface with Agent to Agent tests
Fixed an issue that caused snapshot shares to not show any data in the Page Load view.
We've corrected an issue that caused ICMP network tests to send traffic to port -1
We've corrected an issue that warns about problems with a test configuration before any data is entered.
Fixed an issue that caused the dateStart, metricsAtStart, metricsAtEnd fields in the API from being shown when making calls to the /alerts endpoint.
Corrected the permalink returned in the /alerts endpoint to include the account ID of the test.
Corrected the behavior of test settings changes for DNS Server tests with the recursive bit enabled.
Corrected the behavior of the /usage endpoint that caused the cloudUnitsProjected field to fluctuate between hourly tabulations of actual usage.
Fixed the card labels in the built-in Page Load test report.
Fixed a problem with Transaction tests created via the API: tests were not defaulting to a 30-second timeout.
Added support for advanced fields (dnsOverride, desiredStatusCode, clientCertificate, userAgent, pingPayloadSize) via API write endpoints for tests. | https://docs.thousandeyes.com/release-notes/2016/2016-11-23-release-notes | 2021-01-16T04:47:42 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.thousandeyes.com |
Importing Keyboard Shortcuts
You can import a keyboard shortcut configuration file exported from Harmony.
NOTES
- To export keyboard shortcuts into a file, see Exporting Keyboard Shortcuts .
- When you import a keyboard shortcuts file, it is added to the list of keyboard shortcut sets in the Keyboard Shortcuts drop-down menu of the Keyboard Shortcuts dialog.
Do one of the following to open the Keyboard Shortcuts dialog:
- Windows
or GNU/Linux: In the top menu, select Edit > Keyboard Shortcuts.
- macOS: In the top menu, select Harmony [Edition] > Keyboard Shortcuts.
At the right of the Keyboard Shortcuts: drop-down, click on the Load... button.
An open dialog appears.
Browse to the directory where your keyboard shortcut file is located.
- Select the keyboard shortcut file you want to import.
Click on Open.
The configuration selected keyboard shortcut file is loaded into the Keyboard Shortcuts dialog, and is added as a keyboard shortcut set in the Keyboard Shortcuts: drop-down menu. | https://docs.toonboom.com/help/harmony-17/paint/keyboard-shortcuts/import-keyboard-shortcut-set.html | 2021-01-16T05:53:58 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
You.
AD group membership changes do not immediately take effect for logged in users using RDSH Identity Firewall rules, this includes enabling and disenabling users, and deleting users. For changes to take effect, users must log off and then log back on. We recommend AD administrators force a log off when group membership is modified. This behavior is a limitation of Active Directory. | https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/com.vmware.nsx.admin.doc/GUID-E064B271-ED38-4B13-A871-DE5E72CF7DD2.html | 2021-01-16T06:41:39 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.vmware.com |
Adding Widgets
With a click on the
button in Sidebar the Widget panel opens, showing the list of the available widget types that can be added to the dashboard:
In particular, it is possible to choose between:
Chart
Text
Table
Counter
Map
Creating Chart, Text, Table and Counter widgets the procedure is almost the same as that described for create widgets in maps. The only minor differences are the following:
In dashboards as soon as the user select the widget type, a panel appears in order to select the layer from which the widget will be created
In dashboards the possibility to connect/disconnect widgets to the map is replaced with the possibility to connect/disconnect the Map widgets together or with other widget types (this point will be better explained in Connecting Widgets section)
Creating Map type widgets, otherwise, is a functionality present only in dashboards.
Map Widget
In dashboards, selecting the Map type widget, the following panel appears:
Here the user can:
Go back
to widget type selection
Select a map (mandatory in order to move forward)
Move forward to the next step
The next panel is similar to the following (in this case an empty map was selected):
Now the user is allowed to add layers to the map through the
button:
Once a layer is added to the map widget, it will be displayed in preview and in layers list:
It's now possible to toggle the layer visibility, and set layers transparency (more information in Display options section). Furthermore, by selecting it, new buttons are added to the toolbar allowing to:
Zoom to layers
Access Layer Settings
Remove layers
Note
Adding layers is not mandatory, it is possible to create a widget map using an empty map.
Once the
button is clicked, the last step of the process is displayed like the following:
Here the user has the possibility to insert a Title and a Description for the widget (optional fields) and to complete its creation by clicking on the
button. After that, the widget is added to the viewer space:
Legend widget
When at least one Map widget is created and added to the dashboard, there's the possibility to add also the Legend widget, available in the widget types list:
Selecting the Legend widget, the user can choose the Map widget to which the legend will be connected (when only a Map widget is present in the dashboard this step is skipped):
Once a Map widget is connected, the preview panel is similar to the following:
Here the user can go back
to the widget types section, connect
or disconnect
the legend to a map and move forward
to widget options.
If the last option is selected, a configuration panel similar to the Map widgets one gives the possibility, before save, to set the Title and the Description for the Legend widget.
An example of a Map widgets and a Legend widget is the following:
| https://mapstore.readthedocs.io/en/latest/user-guide/adding-widgets/ | 2021-01-16T06:35:15 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../img/adding-widgets/widgets-panel.jpg', None], dtype=object)
array(['../img/adding-widgets/wid-select-map.jpg', None], dtype=object)
array(['../img/adding-widgets/wid-map-options.jpg', None], dtype=object)
array(['../img/adding-widgets/wid-add-layer.gif', None], dtype=object)
array(['../img/adding-widgets/wid-layers-list.jpg', None], dtype=object)
array(['../img/adding-widgets/map-wid-info.jpg', None], dtype=object)
array(['../img/adding-widgets/viewer-map.jpg', None], dtype=object)
array(['../img/adding-widgets/list-legend.jpg', None], dtype=object)
array(['../img/adding-widgets/select-map-connection.jpg', None],
dtype=object)
array(['../img/adding-widgets/legend-preview.jpg', None], dtype=object)
array(['../img/adding-widgets/legend-ex.jpg', None], dtype=object)] | mapstore.readthedocs.io |
Custom Attributes¶
Attributes in PynamoDB are classes that are serialized to and from DynamoDB attributes. PynamoDB provides attribute classes
for all DynamoDB data types, as defined in the DynamoDB documentation.
Higher level attribute types (internally stored as a DynamoDB data types) can be defined with PynamoDB. Two such types
are included with PynamoDB for convenience:
JSONAttribute and
UTCDateTimeAttribute.
Attribute Methods¶
All
Attribute classes must define three methods,
serialize,
deserialize and
get_value. The
serialize method takes a Python
value and converts it into a format that can be stored into DynamoDB. The
get_value method reads the serialized value out of the DynamoDB record.
This raw value is then passed to the
deserialize method. The
deserialize method then converts it back into its value in Python.
Additionally, a class attribute called
attr_type is required for PynamoDB to know which DynamoDB data type the attribute is stored as.
The
get_value method is provided to help when migrating from one attribute type to another, specifically with the
BooleanAttribute type.
If you’re writing your own attribute and the
attr_type has not changed you can simply use the base
Attribute implementation of
get_value.
Writing your own attribute¶
You can write your own attribute class which defines the necessary methods like this:
from pynamodb.attributes import Attribute from pynamodb.constants import BINARY class CustomAttribute(Attribute): """ A custom model attribute """ # This tells PynamoDB that the attribute is stored in DynamoDB as a binary # attribute attr_type = BINARY def serialize(self, value): # convert the value to binary and return it def deserialize(self, value): # convert the value from binary back into whatever type you require
Custom Attribute Example¶
The example below shows how to write a custom attribute that will pickle a customized class. The attribute itself is stored
in DynamoDB as a binary attribute. The
pickle module is used to serialize and deserialize the attribute. In this example,
it is not necessary to define
attr_type because the
PickleAttribute class is inheriting from
BinaryAttribute which has
already defined it.
import pickle from pynamodb.attributes import BinaryAttribute, UnicodeAttribute from pynamodb.models import Model class Color(object): """ This class is used to demonstrate the PickleAttribute below """ def __init__(self, name): self.name = name def __str__(self): return "<Color: {}>".format(self.name) class PickleAttribute(BinaryAttribute): """ This class will serializer/deserialize any picklable Python object. The value will be stored as a binary attribute in DynamoDB. """ def serialize(self, value): """ The super class takes the binary string returned from pickle.dumps and encodes it for storage in DynamoDB """ return super(PickleAttribute, self).serialize(pickle.dumps(value)) def deserialize(self, value): return pickle.loads(super(PickleAttribute, self).deserialize(value)) class CustomAttributeModel(Model): """ A model with a custom attribute """ class Meta: host = '' table_name = 'custom_attr' read_capacity_units = 1 write_capacity_units = 1 id = UnicodeAttribute(hash_key=True) obj = PickleAttribute()
Now we can use our custom attribute to round trip any object that can be pickled.
>>>instance = CustomAttributeModel() >>>instance.obj = Color('red') >>>instance.>>instance.save() >>>instance = CustomAttributeModel.get('red') >>>print(instance.obj) <Color: red>
List Attributes¶
DynamoDB list attributes are simply lists of other attributes. DynamoDB asserts no requirements about the types embedded within the list. Creating an untyped list is done like so:
from pynamodb.attributes import ListAttribute, NumberAttribute, UnicodeAttribute class GroceryList(Model): class Meta: table_name = 'GroceryListModel' store_name = UnicodeAttribute(hash_key=True) groceries = ListAttribute() # Example usage: GroceryList(store_name='Haight Street Market', groceries=['bread', 1, 'butter', 6, 'milk', 1])
PynamoDB can provide type safety if it is required. Currently PynamoDB does not allow type checks on anything other than subclasses of
Attribute. We’re working on adding more generic type checking in a future version.
When defining your model use the
of= kwarg and pass in a class. PynamoDB will check that all items in the list are of the type you require.
from pynamodb.attributes import ListAttribute, NumberAttribute class OfficeEmployeeMap(MapAttribute): office_employee_id = NumberAttribute() person = UnicodeAttribute() class Office(Model): class Meta: table_name = 'OfficeModel' office_id = NumberAttribute(hash_key=True) employees = ListAttribute(of=OfficeEmployeeMap) # Example usage: emp1 = OfficeEmployeeMap( office_employee_id=123, person='justin' ) emp2 = OfficeEmployeeMap( office_employee_id=125, person='lita' ) emp4 = OfficeEmployeeMap( office_employee_id=126, person='garrett' ) Office( office_id=3, employees=[emp1, emp2, emp3] ).save() # persists Office( office_id=3, employees=['justin', 'lita', 'garrett'] ).save() # raises ValueError
Map Attributes¶
DynamoDB map attributes are objects embedded inside of top level models. See the examples here.
When implementing your own MapAttribute you can simply extend
MapAttribute and ignore writing serialization code.
These attributes can then be used inside of Model classes just like any other attribute.
from pynamodb.attributes import MapAttribute, UnicodeAttribute class CarInfoMap(MapAttribute): make = UnicodeAttribute(null=False) model = UnicodeAttribute(null=True)
As with a model and its top-level attributes, a PynamoDB MapAttribute will ignore sub-attributes it does not know about during deserialization. As a result, if the item in DynamoDB contains sub-attributes not declared as properties of the corresponding MapAttribute, save() will cause those sub-attributes to be deleted. | https://pynamodb.readthedocs.io/en/latest/attributes.html | 2021-01-16T04:49:12 | CC-MAIN-2021-04 | 1610703500028.5 | [] | pynamodb.readthedocs.io |
Manage Blob
Add Blob in Presentation
Aspose.Slides for .NET provides a facility to add large files (video file in that case) and prevent a high memory consumption. An example is given below that shows how to add Blob in presentations.
Export Blob from Presentation
Aspose.Slides for .NET provides a facility to Export large files (audio and video file in that case). We want to extract these files from the presentation and do not want to load this presentation into memory to keep our memory consumption low. Here is an example is given below how we can export blob from presentations.
Add Image as BLOB in Presentation
Aspose.Slides for .NET added a new method to IImageCollection interface and ImageCollection class to support adding a large image as streams to treat them as BLOBs.
This example demonstrates how to include the large BLOB (image) and prevent high memory consumption. | https://docs.aspose.com/slides/net/manage-blob/ | 2021-01-16T06:09:42 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.aspose.com |
StartCanary
Use this operation to run a canary that has already been created.
The frequency of the canary runs is determined by the value of the canary's
Schedule. To see a canary's schedule,
use GetCanary.
Request Syntax
POST /canary/
name/start HTTP/1.1
URI Request Parameters
The request uses the following URI parameters.
- name
The name of the canary that you want to run. To find canary names, use DescribeCanaries.
Length Constraints: Minimum length of 1. Maximum length of 21.
Pattern:
^[0-9a.
-: | https://docs.aws.amazon.com/AmazonSynthetics/latest/APIReference/API_StartCanary.html | 2021-01-16T07:04:20 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.aws.amazon.com |
DAML: Defining Contract Models Compactly¶
As described in preceeding sections, both the integrity and privacy notions depend on a contract model, and such a model must specify:
- a set of allowed actions on the contracts, and
- the signatories, observers, and
- an optional agreement text associated with each contract, and
- the optional key associated with each contract and its maintainers.
The sets of allowed actions can in general be infinite. For instance, the actions in the IOU contract model considered earlier can be instantiated for an arbitrary obligor and an arbitrary owner. As enumerating all possible actions from an infinite set is infeasible, a more compact way of representing models is needed.
DAML provides exactly that: a compact representation of a contract model. Intuitively, the allowed actions are:
Create actions on all instances of DAML templates such that the template arguments satisfy the ensure clause of the template
Exercise actions on a contract instance corresponding to DAML choices on that template, with given choice arguments, such that:
- The actors match the controllers of the choice. That is, the DAML controllers define the required authorizers of the choice.
- The exercise kind matches.
- All assertions in the update block hold for the given choice arguments.
- Create, exercise, fetch and key statements in the DAML update block are represented as create, exercise and fetch actions and key assertions in the consequences of the exercise action.
Fetch actions on a contract instance corresponding to a fetch of that instance inside of an update block. The actors must be a non-empty subset of the contract stakeholders. The actors are determined dynamically as follows: if the fetch appears in an update block of a choice ch on a contract c1, and the fetched contract ID resolves to a contract c2, then the actors are defined as the intersection of (1) the signatories of c1 union the controllers of ch with (2) the stakeholders of c2.
A fetchByKey statement also produces a Fetch action with the actors determined in the same way. A lookupByKey statement that finds a contract also translates into a Fetch action, but all maintainers of the key are the actors.
NoSuchKey assertions corresponding to a lookupByKey update statement for the given key that does not find a contract.
An instance of a DAML template, that is, a DAML contract or contract instance, is a triple of:
- a contract identifier
- the template identifier
- the template arguments
The signatories of a DAML contract are derived from the template arguments and the explicit signatory annotations on the contract template. The observers are also derived from the template arguments and include:
- the observers as explicitly annotated on the template
- all controllers c of every choice defined using the syntax
controller c can...(as opposed to the syntax
choice ... controller c)
For example, the following DAML template exactly describes the contract model of a simple IOU with a unit amount, shown earlier.
template MustPay with obligor : Party owner : Party where signatory obligor, owner agreement show obligor <> " must pay " <> show owner <> " one unit of value" template Iou with obligor : Party owner : Party where signatory obligor controller owner can Transfer : ContractId Iou with newOwner : Party do create Iou with obligor; owner = newOwner controller owner can Settle : ContractId MustPay do create MustPay with obligor; owner
In this example, the owner is automatically made an observer on the contract, as the
Transfer and
Settle choices use the
controller owner can syntax.
The template identifiers of DAML contracts are created through a content-addressing scheme. This means every DAML contract is self-describing in a sense: it constrains its stakeholder annotations and all DAML-conformant actions on itself. As a consequence, one can talk about “the” DAML contract model, as a single contract model encoding all possible instances of all possible DAML templates. This model is subaction-closed; all exercise and create actions done within an update block are also always permissible as top-level actions. | https://docs.daml.com/1.4.0-snapshot.20200729.4851.0.224ab362/concepts/ledger-model/ledger-daml.html | 2021-01-16T05:29:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.daml.com |
Module core.backends.orchestrator.local.zenml_local_orchestrator¶
Classes¶
ZenMLLocalDagRunner(config: Union[tfx.orchestration.config.pipeline_config.PipelineConfig, NoneType] = None)
: This is the almost the same as the super class from tfx:
tfx.orchestration.local.local_dag_runner.LocalDagRunner with the exception
being that the pipeline_run is not overridden. Full credit to Google LLC
for the original source code found at:
Initializes local TFX orchestrator. Args: config: Optional pipeline config for customizing the launching of each component. Defaults to pipeline config that supports InProcessComponentLauncher and DockerComponentLauncher. ### Ancestors (in MRO) * tfx.orchestration.local.local_dag_runner.LocalDagRunner * tfx.orchestration.tfx_runner.TfxRunner ### Methods `run(self, tfx_pipeline: tfx.orchestration.pipeline.Pipeline) ‑> NoneType` : Runs given logical pipeline locally. Args: tfx_pipeline: Logical pipeline containing pipeline args and components. | https://docs.zenml.io/reference/core/backends/orchestrator/local/zenml_local_orchestrator.html | 2021-01-16T04:59:37 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.zenml.io |
Local Data Stores
- 2 minutes to read
Sometimes.
Creating and Using Local Data Stores
The PivotGridControl.SavePivotGridToFile method saves Pivot Grid Control's current data to a file for later use (if required, you can save the data to a stream using resulting file, but may increase the time required for data saving/loading.
To load.
NOTE
A PivotFileDataSource object can be created using the constructor that takes a Stream object as a parameter. In this instance, do not dispose of the specified stream until the PivotFileDataSource object is assigned to the PivotGridControl.DataSource property.
NOTE
Typically, you save and restore data in the same control. However, it is possible to restore saved data in another PivotGridControl. In this instance, custom logic implemented in the first Pivot Grid Control in code will not be available in the second control. For example, unbound fields that are populated using the PivotGridControl.CustomUnboundFieldData event will not provide any data in the second PivotGridControl. | https://docs.devexpress.com/WPF/115662/controls-and-libraries/pivot-grid/binding-to-data/local-data-stores | 2020-09-18T10:56:35 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.devexpress.com |
Design Time
The Smart Tag of RadCaptcha lets you easily enable the httpHandler for your control or quickly get help. You can display the Smart Tag by right clicking on a RadCaptcha:
Enable RadCaptcha Http Handler
Enables the RadCaptcha httpHandler. Click OK to close the confirmation dialog for the RadCaptcha handler.
Ajax Resources
Add RadAjaxManager... adds a RadAjaxManager component to your Web page, and displays the r.a.d.aj.
Learning Center
Links navigate you directly to examples, help, and code library.
You can navigate directly to the Telerik Support Center. | https://docs.telerik.com/devtools/aspnet-ajax/controls/captcha/design-time | 2020-09-18T11:13:05 | CC-MAIN-2020-40 | 1600400187390.18 | [array(['images/captcha-smart-tag.png', 'captcha-smart-tag'], dtype=object)] | docs.telerik.com |
- CheckBox control
- CommandButton control
- DatePicker control
- DropDownListBox control
- DropDownPictureListBox control
- EditMask control
- GroupBox control
- HProgressBar control
- HScrollBar control
- HTrackBar control
- InkPicture control
- Line control
- ListBox control
- ListView control
- MonthCalendar control
- MultiLineEdit control
- OLEControl control
- OLECustomControl control
- Oval control
- Picture control
- PictureButton control
- PictureHyperLink control
- PictureListBox control
- RadioButton control
- Rectangle control
- RichTextEdit control
- RoundRectangle control
- SingleLineEdit control
- StaticHyperLink control
- StaticText control
- Tab control
- TreeView control
- VProgressBar control
- VScrollBar control
- VTrackBar control
- Window control
Difference
Left mouse clicking on the DatePicker control will trigger the Clicked, GetFocused events in sequence.
Important Requirements
In PowerBuilder, if a DropDownListBox has no item, an empty row will display in the ListBox portion when the user clicks the down arrow. However, on the Web application, no empty row will display.
Property added by PowerServer
Recognitiontimer - Specifies the time period in milliseconds between the last ink stroke and the start of text recognition (the Appeon_recognition event). The default is 2000 (two seconds).
Event added by PowerServer
Appeon_recognition - Occurs when the last ink stroke has finished (that is, Stroke event is ended) for the period of time specified in the Recognitiontimer property. This event provides a way for the developer to write PowerScript to, for example, save user strokes as images or blob data to the database.
Important Requirements
In the ListView control, selecting multiple items at one time is unsupported. | https://docs.appeon.com/ps2020/features_help_for_appeon_web/ch05s01s01.html | 2020-09-18T10:06:46 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.appeon.com |
Sharing New Posts
To share new posts to your connected social media profiles you must first activate the module. You can do this by navigating to the General Settings tab from the plugin’s Settings page and checking the “Share when publishing a new Post” option.
Posts will be shared only when you publish them. If you save the post as draft the post will not be shared. If you schedule the post, then it will be shared when the post gets published.
You can further filter the sharing of new posts by custom post type. Navigating to the On Post Publish tab of the plugin’s Settings page will give you the option to choose which custom post types should have the option to share on post publish available and also which social media profiles to be by default selected.
If the post type does not have the option to share on post publish available the share box will be closed by default in the add new post screen.
If the post type does have the option to share on post publish available the share box will be opened and have the social media profiles selected by default. You can also add a custom message to be shared, instead of the default one in the Settings page.
| https://docs.devpups.com/skyepress/sharing-new-posts/ | 2020-09-18T11:41:36 | CC-MAIN-2020-40 | 1600400187390.18 | [array(['https://docs.devpups.com/wp-content/uploads/2016/12/share-new-post-1.png',
'share-new-post-1'], dtype=object)
array(['https://docs.devpups.com/wp-content/uploads/2016/12/share-new-post-2.png',
'share-new-post-2'], dtype=object)
array(['https://docs.devpups.com/wp-content/uploads/2016/12/share-new-post-3.png',
'share-new-post-3'], dtype=object)
array(['https://docs.devpups.com/wp-content/uploads/2016/12/share-new-post-4.png',
'share-new-post-4'], dtype=object) ] | docs.devpups.com |
Build Loyalty
Magento Commerce for B2B only
The content on this page is for Magento Commerce for B2B only. Learn more
- Purchase Orders
- Use purchase orders and approval rules to allow purchasing according to your company’s purchasing policies.
- Friction-Free Purchasing
- Magento’s self-service model makes it easy to build loyalty for fast, friction-free purchasing.
- Fast Reordering
- Customers can create new orders based on previous orders from the convenience of their customer account.
- Order by SKU
- Customers can add individual products to their cart by SKU and quantity or import a list of products from a file.
- Request a Quote
- Authorized company buyers can initiate a price negotiation by requesting a quote from the shopping cart.
- Punch Out Solutions
- Establish new customers with third-party solutions, such as Punch Out Catalogs and PunchOut2Go. You can find these solutions on the Magento Marketplace. | https://docs.magento.com/user-guide/quick-tour/build-loyalty.html | 2020-09-18T10:22:48 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.magento.com |
Article sections
To show share counters with your buttons on site you need at first to set the feature to show counters. This happens inside the style settings (global for site or for each personalized settings). Learn everything about style settings and activation of share counter to selected design here. All settings related to social share counters you can find in Social Sharing -> Share Counters Setup.
Counter Update
Counter update section holds all options about share counter update and a few for controlling the counter display.
Counter update interval
Choose how your counters will update. Real-time share counter will update on each page load and usage on production site can produce extreme load over admin-ajax WordPress component – use it with caution. We strongly recommend using updated on interval counters. They will update once on chosen interval and ensure your site will work fast and smooth (if you use cache plugin and they do not update frequently you can activate in advanced options cache compatible update mode). For most sites recommended update counter value is 3 or 6 hours.
Avoid Negative Social Proof
The avoid negative social proof allows if active to hide the share counter unless a specific value is set. If you wish to use it set the option to Yes and you will have the chance to setup value for total counter and/or single button counter.
Advanced Counter Update Options
- Cache server/plugin update mode – Activation of this option will start a second check after each page load to check if share counters should update or not. This will be done on background and it will ensure that share values will be up-to-date when site cache is updated (usually once/twice per day for most cache solutions). Warning! This option may produce slow down if you use too frequent counter update or when you have high traffic on site (and if site is not fully optimized for that)
- Speed up process of counter update – This option will activate asynchronous counter update mode which is up to 5 times faster than regular update. Option requires to have PHP 5.4 or newer. Warning! This option may not be supported on all hosts (or not active by default). If you see an issue immediately deactivate it.
- Increase update period for older posts – Use this option to increase progressive update counter interval for older posts of your site. This will make less calls to social APIs and make counters update faster. Recommended for usage on sites
- Force save new shares – Plugin comes with share counter protection. This protection will avoid saving share counter values if they are lower than the past update. That is made to protect the social privacy. Very rear it may be required to avoid that protection. Set the option to Yes temporary to save any of the saved share values.
- Client side Facebook counter update – Use client side Facebook counter update to eliminate Facebook rate policy for number of connection you can send. The client side update will ensure your counters will frequently update. Option is compatible with share recovery. It is recommended for usage on sites to ensure a proper Facebook share counter update. The option will executed the share counter update inside client browser (not on server level) – see note at the end of the article.
- Client side Pinterest counter update – Pinterest apply restrictions when you are using few hosts that avoid Pinterest counter extraction. In such case please activate this option to avoid missing Pinterest counters. Due to Pinterest rate limitations this option cannot work with share recovery.
Share Counter Recovery
Share counter recovery allows you restore back shares once you make a permalink change (including installing a SSL certificate). Share recovery will show back shares only if they are present for both versions of URL (before and after change). We have a detailed article explaining work of share recovery that you can pay attention here.
Single Button Counter Settings
The single button share counter section holds additional options for individual share counters. Those options are set globally and you does not need to change them regularly unless a social API change is presented.
- Twitter share counter – Twitter does not have official share counter. As of this inside plugin you can choose between showing internal counter, external service counter or leave the button like the network standard – without share counter. Please be aware that if you select external service you should visit its site and complete all the requirements they have (otherwise counter value will not appear).
- Facebook access token key – To avoid missing Facebook share counter due to rate limits we recommend to fill access token key. Access token generation of counter can work only when you do not use real time counters. To generate your access token key please visit and follow instructions to generate application based token
- Facebook counter update API – the API endpoint represent a different approach of reading the share counters. Default is the #1 end point. The setting for endpoint and token are not required to make selection if you will use a client side Facebook update option
- LinkedIn share counter – LinkedIn recenly announced that they are removing share counters from their button and API. The API may still return data for your site but it is adviced to switch to internal counters.
- Google+ share counter – Google+ recenly announced that they are removing share counters from their button and API. When that become globally you can switch to internal counter.
- Activate internal counters for all networks that do not support API count – Not all networks has official share counter. With share counter activation those networks that does not support will appear without value. If you wish to show share counter value for them too you can activate Internal Share Counter feature. This will generate a share counter based on click over the selected button (and value will increase with each button click)
- Deactivate counters for Mail & Print – Enable this option if you wish to deactivate internal counters for mail & print buttons. That buttons are in the list of default social networks that support counters. Deactivating them will lower down request to internal WordPress AJAX event.
- Fully deactivate internal share counter tracking – Even when you do not display share counters on site at this moment plugin tracks internal counter with each button click. This is made to provide a share counter value when you decide to show or use share counters. Activation of this option will completely remove the execution and work of code for all internal tracked share counters – if you have any existing internal counter values they will stop increase and for all others it will not add a value. Hint: Rember that major networks like LinkedIn and Google+ removed share counters and there is no alternative of counter value at this time rather than internal counter.
- Share counter format – Choose how you wish to present your share counter value – short number of full number. This option will not work if you use real time share counters – in this mode you will always see short number format.
- Animate Numbers – Enable this option to apply nice animation of counters on appear.
Total Counter Settings
The total counter section contains additional options about total counter display.
- Change total text – This option allows you to change text Total that appear when left/right position of total counter is selected.
- Append text to total counter when big number styles are active – This option allows you to add custom text below counter when big number styles are active. For example you can add text shares.
- Change total counter text when before/after styles are active – Customize the text that is displayed in before/after share buttons display method. To display the total share number use the string {TOTAL} in text. Example: {TOTAL} users share us
- Total counter icon – Choose icon displayed on total counter when position with such is selected
- Total counter format – Choose how you wish to present your share counter value – short number of full number. This option will not work if you use real time share counters – in this mode you will always see short number format
- Always generate total counter based on all social networks – Enable this option if you wish to see total counter generated based on all installed in plugin social networks no matter of ones you have active. Default plugin setup is made to show total counter based on active for display social networks only and using different social networks on different locations may cause to have difference in total counter. Use this option to make it always be the same.
- Animate Numbers – Enable this option to apply nice animation of counters on appear.
Note. The client site update methods are executed with a call inside visitor browser to a related service (not on server side). The services imposes a connection rate limit that each site can use per hour. With the help of this option you can eliminate those rates. Another possible method that you can use is to increase the update period of your share counters and keep them running on the server side only. Use these option on your risk. By setting this option to Yes you understand that a counter update will be executed on the front of your site using a javascript functional call (instead on server level).
Troubleshooting Share Counter Problems
- How To Recover Shares When You Switch To HTTPS (activate SSL)?
- My Share Counts Are Not Showing, Not Updating Or Not Accurate | https://docs.socialsharingplugin.com/knowledgebase/social-sharing-counters-setup/ | 2020-09-18T10:21:22 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.socialsharingplugin.com |
on the type of flow item it is processing. Flow items with a product type of 1 will be assigned a faster process time than flow items with a product type of 2. You'll also assign different colors to the flow items based on their product type. You'll create this logic by adding a Decide activity in the sub flow that will send tokens to one of two Finish activities.
For more in-depth explanations of the concepts covered in these tutorials see: | https://docs.flexsim.com/en/19.1/Tutorials/ProcessFlow/Tutorial3SubProcessFlows/SubProcessFlowsOverview/SubProcessFlowsOverview.html | 2020-09-18T09:50:20 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.flexsim.com |
Answers
From Carleton Moodle Docs
This page is about Answers used in a Course Lesson activity question or in a Course Quiz activity or a Question bank question. Typically students are presented with a question, they select or create an answer, which may be scored and bring additional information to the student or present a new set of information to consider.
Links to answers
- Answers used in Lesson Page
- Types of questions in a Lesson that have answers
- Quiz activity page in MoodleDocs
- Question types available in a Quiz that have answers
- The Quiz Topic Index Quiz index is also an excellent source of information about different types of questions and their types of answers.
- Lesson activity questions types are fewer in number and function differently from Quiz Question types.
Links to places to find answers
- Moodle Documentation article- do not click your mouse, you are here. Use the search field on the top right
- Seek out your favorite spiritual adviser for additional help, or perhaps
- Go to one of the Using Moodle forums | https://docs.moodle.carleton.edu/index.php?title=Answers&oldid=16125 | 2020-09-18T11:33:27 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.moodle.carleton.edu |
# Getting Started
Updated: 9/18/2020, 6:16:41 AM
Created: 9/18/2020, 6:16:41 AM
Last Updated By: ronaldojr9
Read Time: 1 minute(s)
This getting started guide walks you through the process of setting up your Zumasys Customer Portal account, assigning your organization's AccuTerm licenses to your account, adding users and assigning AccuTerm licenses to those users. Additional instructions can be found on how to create roles and profiles for AccuTerm Web and how to install AccuTerm IO on your MultiValue server.
# Licensing
- Create your Zumasys Customer Portal login (Customer Portal Quick Start)
- Upon successful creation, log into the portal using your newly created credentials ()
- Create your users and assign licenses (Creating Users)
- Allow access to Accuterm 8 Web (admin will need to provide the user with login credentials)
- Generate license keys for Accuterm 8 Desktop (user will receive a welcome email with license key)
# AccuTerm Desktop
Install and activate AccuTerm Desktop (License Activation)
# AccuTerm Web
- Create roles and profiles
- Roles allow access to certain profiles (Creating Roles)
- Profiles are Accuterm Configuration files (Creating Profiles)
- Install Accuterm Web server package (Installing AccuTerm IO Server) | https://docs.zumasys.com/accuterm/getting-started/ | 2020-09-18T09:59:38 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.zumasys.com |
UTM parameters are customized tags that can be added to URLs. They don't change how the link works, they only provide you with useful information so you can analyze where your customers come from. They can be used both in Google Analytics and Exponea. Exponea automatically recognizes some UTM parameters from links as well as adds some into all the emails sent.
How UTM parameters work in Exponea
UTM for the session_start events
When a visitor comes to your website with UTM parameters present in the URL, Exponea JS SDK automatically parses (transforms) the following into attributes of the
session_start event:
- utm_campaign
- utm_source
- utm_medium
- utm_content
- utm_term
- campaign_id
- gclid
More information on these attributes can be found in the System events article.
Other URL parameters are not parsed but you can add more event attributes like these into
session_start by following our guide to tracking events.
Automatic UTMs for emails and push notifications
When working with emails and push notifications in Exponea, the following UTM tags are added to hyperlinks automatically:
- utm_campaign
- utm_source
- utm_medium
For example, if you send an email with an email action node named "April newsletter" with a link to, the final link will look like this:
Modifying automatic UTM parameters
You can define your own values for the UTM parameters in campaign settings, under "Link transformation".
You can also change the default value of
utm_source in Project settings > Campaigns > General.
Adding other UTM parameters
If you want to add other UTM parameters, which are not added by Exponea by default, you can do so by directly modifying the links within your campaign. For example, to add
utm_id you can write your link like this:.
The automatic UTM parameters will then be added to this link automatically, keeping your custom UTM tag as well.
Best Practices
Using UTM parameters alone is not enough to be able to analyze them successfully. Here are a few tips that can help you use them effectively:
Create a naming convention
- Be consistent in naming (have a guideline to be shared within the team) - do not use different terms in different campaigns to prevent duplicates (e.g. fb, facebook, etc...)
- Use dashes instead of underscores, percentage and plus scores - This helps different browser and search engines recognize the words
- Use only lowercase - when analyzing, the terms are case-sensitive, and it prevents having different duplicates
- Use simple, easy-to-read naming - don't use internal numbering system or other terms that are not obvious
- Each parameter should provide different, but useful information -e.g. when you already use NL03052017 as a campaign name, it is not necessary to put newsletter into utm_source.
Choose the correct parameter:
- Campaign Source – The platform (or vendor) where the traffic originates, like Facebook, Google, etc..
- Campaign Medium – You can use this to identify the medium like Cost Per Click (CPC), social media, email, SMS, affiliate or QR code. This can be used to e.g. differentiate links from paid traffic, profile info links, posts on wall, etc.
- Campaign Term – You’ll use this mainly for tracking your keywords during a paid Ads campaign.
- Campaign Content – If you’re A/B testing ads, then this is a useful metric that passes details about your ad. You can also use it to differentiate links that point to the same URL, e.g. from a picture, from text link, from a button, etc. If there are multiple links with the same URL, they should each have a different UTM parameter.
- Campaign Name – This is just to identify your campaign. Like your website or specific product promotion, date of the newsletter, etc.
You can use this table as a guide when creating your own UTM tags.
Updated 2 months ago | https://docs.exponea.com/docs/utm-parameters | 2020-09-18T09:59:47 | CC-MAIN-2020-40 | 1600400187390.18 | [array(['https://files.readme.io/a19a7c1-UTM_in_campaign_settings.png',
'UTM in campaign settings.png'], dtype=object)
array(['https://files.readme.io/a19a7c1-UTM_in_campaign_settings.png',
'Click to close...'], dtype=object) ] | docs.exponea.com |
Installing Plesk in Virtuozzo Containers
There are two ways to install Plesk on Virtuozzo:
- Using the Plesk installer script – the same way as in the case with physical servers
- Using Virtuozzo Application Templates – the native application management mechanism
Note: We recommend that you use the Plesk installer script for this purpose.
The process of deploying multiple Plesk servers on Virtuozzo in this case follows this workflow:
Create a Virtuozzo container
For instructions on creating Virtuozzo containers, see Creating Virtual machines and Containers in the Virtuozzo User’s Guide.
Install Plesk in the container using the installer script
You can choose any of the ways described in this guide.
Prepare the Plesk instance installed in the container for cloning
For instructions on how to prepare a Plesk instance for cloning, see Deploying Plesk Servers by Cloning.
Clone the container as many times as necessary
For instructions on cloning Virtuozzo containers, see Copying Virtual Machines and Containers within Server in the Virtuozzo User’s Guide.
Perform post-install configuration
The post-installation setup for Plesk in a Virtuozzo container is absolutely the same as for other types of installation. It includes Plesk initialization, installation of a license key, and so on. You can either perform it manually or automate the process using the Plesk API.
For instructions on performing manual post-install configuration, see Post-install Configuration on a Single Server.
For instructions on how to initialize Plesk programmatically, see Post-install Configuration on Multiple Servers. | https://docs.plesk.com/en-US/obsidian/deployment-guide/plesk-installation-and-upgrade-on-multiple-servers/installing-plesk-in-virtuozzo-containers.76513/ | 2020-09-18T10:08:58 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.plesk.com |
Follow Surface Modifier
Summary
This modifier causes particles to flow over the surface of an object.
Note: due to the way this modifier works, you cannot render a single frame to the picture viewer. If you need to render a frame, you must render the entire sequence up to and including that frame. The best solution to this is to cache the scene; then you can render one frame, and this may well be much faster than rendering a frame sequence (depending on your scene, of course).
If you experience problems with axis flipping when particles move over a surface, see the emitter's extended data tab 'Up Vector' rotation mode.
Interface
This modifier has the following sections:
For the 'Groups Affected', 'Mapping', and 'Falloff' tabs, and for the buttons at the bottom of the interface, please see the 'Common interface elements' page.
Note that particles outside the falloff zone will not be affected by the modifier. Once within the falloff zone, the 'Pull', 'Distance' and 'Friction' parameters are all affected by the fall.
Objects
Drag the objects the particles are to move over into this list.
Pull (and Variation)
The Pull is the strength with which the particle is pulled to the surface. A high pull will cause the particles to snap to the surface when they are within the value in the 'Distance' setting from the target. A low Pull will attract the particles very softly. You can vary this with the 'Variation' setting.
Offset (and Variation)
The particles will be offset from the surface by the value in this setting, which is useful to reduce interpenetration by particle geometry. You can vary this with the 'Variation' setting.
Note that the particle radius also affects the offset from the surface. If 'Offset' is zero and the particle radius is 5, the particle will be offset by 5 screen units from the surface. An offset and radius of zero will cause the particle to be located exactly on the surface.
Distance
The particle's distance from the surface must be equal to or less than this value before it is affected by the modifier.
Friction
This setting will reduce the particle speed over the surface and eventually bring it to a halt.
Accurate
Turning this switch on may in some cases improve the accuracy of movement over a complex surface, but at the expense of increased computation time.
Actions quicktab
Actions on Capture
Actions dragged into this list will be executed when a particle is captured by the modifier and pulled to the object's surface.
Actions on Escape
Actions dragged into this list will be executed when a particle is escapes from the modifier's field or if it moves too far from the object's surface to be affected by the modifier.
Add Action (two buttons)
Clicking either button will add an action to the scene and drop it into the appropriate Actions list. | http://docs.x-particles.net/html/oversurfacemod.php | 2021-07-23T21:20:17 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['../images/followsurfacemod_1.jpg', None], dtype=object)
array(['../images/followsurfacemod_2.jpg', None], dtype=object)] | docs.x-particles.net |
Business Metadata overview
Atlas allows you to define your own attributes as key-value pairs to add to the technical metadata Atlas stores for data assets. These attributes are grouped in Business Metadata collections so you can control which users have access to create, update, and assign these attributes.
There's a fixed set of attributes that Atlas collects for each entity type from services on the cluster. You can augment these attributes by updating the entity model, but this requires changing the code in the plugin that manages metadata collection for the service. Defining business metadata attributes allows you to add to the technical metadata for an entity type, but also to control access to those attributes in groups. You can use business metadata groupings or collections in Ranger policies to determine who can view business metadata attributes, who can set their values, and who can create additional attributes. For example, you could create a business metadata collection called "Operations," that included attributes such as IT-owner, Operational Phase, Processing Strategy, and Processed Date. In a Ranger policy, you could expose these attributes only to the IT staff or expose them to all users but only allow the IT staff to set their values. You could define attributes that had meaning for a specific entity type or applied to any entity types.
In Atlas, you define a business metadata collection—simply the label for a group of attributes. You can than define attributes in that business metadata collection. After you've defined business metadata attributes, you can add values through the Atlas UI or apply them in bulk to entities using a bulk import process. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/atlas-leveraging-business-metadata/topics/atlas-business-metadata-overview.html | 2021-07-23T21:20:01 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.cloudera.com |
FIF make sure that Amazon SQS preserves the order in which messages are sent and received, each producer should use
FIFO queues allow the producer or consumer to attempt multiple retries:.
If the consumer detects a failed
ReceiveMessageaction, it can retry as many times as necessary, using the same receive request attempt ID. Assuming that the consumer receives at least one acknowledgement before the visibility timeout expires, multiple retries don't affect the ordering of messages.
When you receive a message with a message group ID, no more messages for the same message group ID are returned unless you delete the message or it becomes visible. | https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-understanding-logic.html | 2021-07-23T21:14:55 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.aws.amazon.com |
IP reputation is a tool that identifies IP addresses that send unwanted requests. Using the IP reputation list you can reject requests that are coming from an IP address with a bad reputation. Optimize Web Application Firewall performance by filtering requests that you do not want to process. You can reset or drop a request, or even configure a responder policy to take a specific responder action.
Following are some attacks that you can prevent by using IP reputation:

- Botnet. Attackers have gained popularity for stealing passwords, because it doesn't take long when hundreds of computers work together to crack your password. It is easy to launch botnet attacks to figure out passwords that use commonly used dictionary words.
- Compromised web-server. Attacks are not as common because awareness and server security have increased, so hackers and spammers look for easier targets. There are still web servers and online forms that hackers can compromise and use to send spam (such as viruses and porn). Such activity is easier to detect and quickly shut down, or block with a reputation list such as SpamRats.
- Windows Exploits (such as active IP addresses offering or distributing malware, shell code, rootkits, worms, or viruses).

Note the following connectivity requirements:

- The Citrix ADC appliance must be able to connect to api.bcti.brightcloud.com on port 443 to download the reputation data. Each node in the HA or cluster deployment gets the database from Webroot and must be able to access this Fully Qualified Domain Name (FQDN).
- Webroot hosts its reputation database in AWS currently. Therefore, Citrix ADC must be able to resolve AWS domains for downloading the reputation database. Also, the firewall must be open for AWS domains.
Note:
Each packet engine requires at least 4 GB to function properly when the IP Reputation feature is enabled.
Advanced policy Expressions. Configure the IP Reputation feature by using advanced policy expressions (default syntax expressions) in the policies bound to supported modules, such as Web, TOR_PROXY.
Note:
The IP reputation feature checks both source and destination IP addresses. It blocklist or allowlist of IPs using a policy data set. You can maintain an allow list to allow access to specific IP addresses that are block. When configuring a policy for comparing a string in a packet, use an appropriate operator and pass the name of the pattern set or data set as an argument.
To create an allow list of addresses to treat as exceptions during IP reputation evaluation:
- Configure the policy so that the PI expression evaluates to False even if an address in the allow list. Premiumfile to show that the request was processed as specified in the profile.
Configure the IP reputation feature using the CLI
At the command prompt, type: block list or an allow the
X-Forwarded-For header1 ipv4
> bind policy dataset Allow_list1 10.217.25.17 -index 1
> bind policy dataset Allow_list1 10.217.25.18 -index 2
Example 4:
The following example shows how to add the customized list to flag specified IP addresses as malicious:
> add policy dataset Block_list1 ipv4
> bind policy dataset Block_list1 10.217.31.48 -index 1
> bind policy dataset Block_list1 10.217.25.19 -index 2
Example 5:
The following example shows a policy expression to block the client IP in the following conditions:
- It matches an IP address configured in the customized Block_list1 (example 4)
- It matches an IP address listed in the Webroot database unless relaxed by inclusion in the Allow_list1 (example 3).
> add appfw policy "Ip_Rep_Policy" "((CLIENT.IP.SRC.IPREP_IS_MALICIOUS || CLIENT.IP.SRC.TYPECAST_TEXT_T.CONTAINS_ANY(\"Block_list1\")) && ! (CLIENT.IP.SRC.TYPECAST_TEXT_T.CONTAINS_ANY(\"Allow_list1\")))" APPFW_BLOCK
Using Proxy server:
If the Citrix ADC appliance does not have direct access to the internet and is connected to a proxy, configure the IP Reputation client to send requests to the proxy.
At the command prompt, type:).
Configure IP reputation by using Citrix ADC GUI
- Navigate to the System > Settings. In the Modes and Features section, click the link to access the Configure Advanced Features pane and enable the Reputation check box.
- Click OK.
To configure a proxy server by using the Citrix ADC GUI
-<<
Create an allow list and a block list of client IP addresses using the GUI
- On the Configuration tab, navigate to AppExpert > Data Sets.
- Click Add.
- In the Create Data Set (or Configure Data set) pane, provide a meaningful name for the list of the IP addresses. The name must reflect the purpose of the list.
-.
Configure an application firewall policy by using the Citrix ADC GUI
- Reputation blocklisted in the Webroot database. You can also create your own customized block list to designate specific IPs as malicious.
- The iprep.db file is created in the
/var/nslog/iprepfolder. Once created, it is not deleted even if the feature is disabled.
- When the reputation feature is enabled, the Citrix ADC Reputation process takes about five minutes to start after you enable the reputation feature. The IP reputation feature might not work for that duration.
- Database download: If the IP DB data download is failing after enabling the IP Reputation feature, the following error is seen in. | https://docs.citrix.com/en-us/citrix-adc/current-release/reputation/ip-reputation.html | 2021-07-23T22:36:49 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['/en-us/citrix-adc/media/enable_advfeature_reputation.png',
'Enable IP reputation'], dtype=object)
array(['/en-us/citrix-adc/media/changereputationsettings.png',
'Reputation settings'], dtype=object)
array(['/en-us/citrix-adc/media/add_new_policy_dataset.png',
'Configure dataset'], dtype=object)
array(['/en-us/citrix-adc/media/insert_dataset_list-copy.png',
'Insert dataset'], dtype=object) ] | docs.citrix.com |
Behavior Rules are very flexible in structure to cover most use cases that you will come across.
Behavior Rules are clustered in
Groups.
Behavior Rules are executed sequential within each
Group. As soon as one
Behavior Rule succeeds, all remaining
Behavior Rules in this
Group will be skipped.
{"behaviorGroups": [{"name": "GroupName","behaviorRules": [{"name": "RuleName","actions": ["action-to-be-triggered"],"conditions": [<CONDITIONS>]},{"name": "DifferentRule","actions": ["another-action-to-be-triggered"],"conditions": [<CONDITIONS>]},<MORE_RULES>]}]}
Each
Behavior Rule has a list of
conditions, that, depending on the
condition , might have a list of
sub-conditions.
If all conditions are true, then the Behavior Rule is successful and it will trigger predefined actions.
conditions are always children of either a
Behavior Rule or another
condition. It will always follows that same structure.
The
inputmatcher is used to match user inputs. Not directly the real input of the user, but the meaning of it, represented by
expressions that are resolved from by the
parser.
If the user would type "hello", and the parser resolves this as expressions "
greeting(hello)" [assuming it has been defined in one of the dictionaries], then a
condition could look as following in order to match this user input meaning:
(...)"conditions": [{"type": "inputmatcher","configs": {"expressions": "greeting(*)","occurrence": "currentStep"}}](...)
This
inputmatcher
condition will match any
expression of type greeting, may that be "
greeting(hello)", "
greeting(hi)" or anything else. Of course, if you would want to match
greeting(hello) explicitly, you would put "
greeting(hello)" as value for the "
expressions" field.
The
contextmatcher is used to match
context data that has been handed over to EDDI alongside the user input. This is great to check certain
conditions that come from another system, such as the day time or to check the existence of user data.
(...)"conditions": [{"type": "contextmatcher","configs": {"contextType": "expressions","contextKey": "someContextName","expressions": "contextDataExpression(*)"}}](...)(...)"conditions": [{"type": "contextmatcher","configs": {"contextType": "object","contextKey": "userInfo","objectKeyPath": "profile.username","objectValue": "John"}}](...)(...)"conditions": [{"type": "contextmatcher","configs": {"contextType": "string","contextKey": "daytime","string": "night"}}](...)
The
connector is there to all logical
OR conditions within rules. By default all conditions are
AND
conditions, but in some cases it might be suitable to connect conditions with a logical
OR.
(...)"conditions": [{"type": "connector","configs": {"operator": "OR"},"conditions": [<any other conditions>]}](...)
Inverts the overall outcome of the children conditions
In some cases it is more relevant if a
condition is
false than if it is
true, this is where the
negation
condition comes into play. The logical result of all children together (
AND connected), will be inverted.
Child 1 - trueChild 2 - true→ Negation = falseChild 1 - falseChild 2 - true→ Negation = true(...)"conditions": [{"type": "negation","conditions": [<any other conditions>]}](...)
Defines the occurrence/frequency of an action in a
Behavior Rule.
(...){"type": "occurrence","configs": {"maxTimesOccurred": "0","minTimesOccurred": "0","behaviorRuleName": "Welcome"}}(...)
Check if another
Behavior Rule has met it's condition or not in the same
conversationStep. Sometimes you need to know if a rule has succeeded ,
dependency will take that rule that hasn't been executed yet in a sandbox environment as a
reference for an other behavior rule.
(...){"type": "dependency","configs": {"reference": "<name-of-another-behavior-rule>"}}(...)
As
inputMatcher doesn't look at expressions but it looks for actions instead, imagine a
Behavior Rule has been triggered and you want to check if that action has been triggered before.
(...){"type": "actionmatcher","configs": {"actions": "show_available_products","occurrence": "lastStep"}}(...)
This will allow you to compile a condition based on any http request/properties or any sort of variables available in EDDI's context.
(...){"type": "dynamicvaluematcher","configs": {"valuePath": "memory.current.httpCalls.someObj.errors","contains": "partly matching","equals": "needs to be equals"}}(...)
The API Endpoints below will allow you to manage the
Behavior Rules in your EDDI instance.
The
{id} is a path parameters that indicate which behavior rule you want to alter.
We will demonstrate here the creation of a
BehaviorSet
Request URL
Request Body
{"behaviorGroups": [{"name": "Smalltalk","behaviorRules": [{"name": "Welcome","actions": ["welcome"],"conditions": [{"type": "negation","conditions": [{"type": "occurrence","configs": {"maxTimesOccurred": "1","}}]}]}]}
Response Body
no content
Response Code
201
Response Headers
{"access-control-allow-origin": "*","date": "Thu, 21 Jun 2018 01:00:02 GMT","access-control-allow-headers": "authorization, Content-Type","content-length": "0","location": "eddi://ai.labs.behavior/behaviorstore/behaviorsets/5b2af892ee5ee72440ee1b4b?version=1","access-control-allow-methods": "GET, PUT, POST, DELETE, PATCH, OPTIONS","access-control-expose-headers": "location","content-type": null} | https://docs.labs.ai/behavior-rules | 2021-07-23T23:18:53 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.labs.ai |
Facet types - toggle between categories
Description
Sometimes also referred to as a radio facet.
Allows a user to filter the result set on a single category. The set of categories remain present after the filter is applied allowing a user to toggle through the different category values.
Example
In this example the industry facet is defined as a toggle between categories facet:
Selecting advertising & media from the industry facet filters the result set to only results categorised as advertising & media. Observe that the other categories remain available and the counts are unchanged.
Selecting banking & financial services switches (or toggles) to filter the result set to only include results categorised as banking & financial services. Once again the other categories remain available and unchanged.
Clicking on the next category shows the same behaviour.
Add or edit a toggle between categories facet
The attributes that must be defined for a toggle between categories facet are:
Name: Unique name identifying the facet. This name is presented as the heading for the facet.
Default Option: This includes an extra value that represents all results; equivalent to unselecting this facet.
Category values sourced from: This defines the source of the category values. See: category sources
Category sort: Defines how the categories are sorted. See: sorting facet category values
Toggle between categories facet properties
The following properties define a toggle between categories facet, and the information can be used when converting a facet to a toggle between categories facet.
Category values: includes an all documents category
Selection type: one value at a time
Category matching logic: all selected values
Scope: original query and other facets
Data model definition
A toggle between categories facet has the following data model properties:
selectionType:
SINGLE
constraintJoin:
AND
facetValues:
FROM_SCOPED_QUERY_WITH_FACET_UNSELECTED
guessedDisplayType:
RADIO_BUTTON | https://docs.squiz.net/funnelback/docs/latest/build/results-pages/faceted-navigation/facet-types-radio.html | 2021-07-23T23:08:15 | CC-MAIN-2021-31 | 1627046150067.51 | [array(['../../../_images/facet-type-tog1.png', 'facet-type-tog1.png'],
dtype=object)
array(['../../../_images/facet-type-tog2.png', 'facet-type-tog2.png'],
dtype=object)
array(['../../../_images/facet-type-tog3.png', 'facet-type-tog3.png'],
dtype=object)
array(['../../../_images/facet-type-tog4.png', 'facet-type-tog4.png'],
dtype=object)
array(['../../../_images/facet-type-tog-edit-1.png',
'facet-type-tog-edit-1.png'], dtype=object) ] | docs.squiz.net |
How to enable notifications in web apps
WebCatalog supports notifications out of the box. But for some web apps, to receive notifications, you will need to manually configure additional web app settings. Here is a list of how you can fully enable notifications in some popular web apps.
Gmail
Learn more at.
Google Drive
Learn more at.
Messenger
- On your computer, open Messenger.
- In the top right, click ⚙️ and then Settings.
- Check the “Desktop notifications enabled” box.
Outlook
Outlook uses an in-house solution to display notifications, instead of standard HTML5 Notification API. Thus, WebCatalog/Singlebox won’t be able to integrate Outlook’s notifications with your system. Instead, notifications will still show up in the right corner of Outlook’s window. For details, see. | https://docs.webcatalog.io/article/17-how-to-enable-notifications-in-web-apps | 2021-07-23T23:23:38 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.webcatalog.io |
This topic guides you through consuming an OpenID connect basic client profile that is based on authorization code authorization server sends the end-user back to the client with an authorization code.
- The client requests a response using the authorization code at the token endpoint.
- The client receives a response that contains an ID token and an access token in the response body.
- The client validates the ID token and retrieves the end-user's subject identifier.
The following parameters are mandatory and have to be included in the authorization request in order to execute this flow..
- See the Basic Client Profile with Playground topic to try out this flow with the playground sample for OAuth in WSO2 Identity Server. | https://docs.wso2.com/display/IS540/OpenID+Connect+Basic+Client+Profile | 2021-07-23T22:24:19 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.wso2.com |
Date: Fri, 23 Jul 2021 16:05:45 -0700 (PDT) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_75584_1824225565.1627081545563" ------=_Part_75584_1824225565.1627081545563 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
All WSO2 product servers are based on Carbon. Therefore, most of= the server administration functions in WSO2 products are common to the Car= bon platform. From Carbon 4.4.6 onwards, we have a WSO2 Administratio= n Guide - Carbon 4.4.x, which will provide instructions on how to setup= and configure WSO2 servers that are based on Carbon 4.4.x versions.
Given below is a list of administration tasks that you will find in the = administration guide. | https://docs.wso2.com/exportword?pageId=52527285 | 2021-07-23T23:05:45 | CC-MAIN-2021-31 | 1627046150067.51 | [] | docs.wso2.com |
Logical Operations - XOR¶
Join two images using the bitwise XOR operator (difference between the two images). Images must be the same size. This is a wrapper for the Opencv Function bitwise_xor.
logical_xor(bin_img1, bin_img2)
returns ='xor' image
- Parameters:
- bin_img1 - Binary image data to be compared to bin_img2.
- bin_img2 - Binary image data to be compared to bin_img1.
- Context:
- Used to combine to images. Very useful when combining image channels that have been thresholded seperately.
- Example use:
Input binary image 1
Input binary image 2
from plantcv import plantcv as pcv # Set global debug behavior to None (default), "print" (to file), # or "plot" (Jupyter Notebooks or X11) pcv.params.debug = "print" # Combine two images that have had different thresholds applied to them. # For logical 'and' operation object pixel must be in both images # to be included in 'and' image. xor_image = pcv.logical_xor(s_threshold, b_threshold)
Combined image
| https://plantcv.readthedocs.io/en/stable/logical_xor/ | 2021-02-25T07:25:34 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['../img/documentation_images/logical_xor/19_binary_threshold120_inv.png',
'Screenshot'], dtype=object)
array(['../img/documentation_images/logical_xor/20_binary_threshold50.png',
'Screenshot'], dtype=object)
array(['../img/documentation_images/logical_xor/21_xor_joined.png',
'Screenshot'], dtype=object) ] | plantcv.readthedocs.io |
Data Dump¶
Data dump is a dump of all fields of all segments returned by a search. they may be run in parallel and the buffered nature of datadump spreads them out.
Buffered Response¶
Datadump uses an iterable class which uses threads to buffer data. The buffering threads use a paginator to split a result set into pieces. This allows downloading to start almost immediately after the button is clicked, rather than waiting for the entire dump to be written to memory.
This is not a perfect solution. Python uses green threads so threads are not truly concurrent. Threads will often be starved and may simply alternate between filling the buffer and emptying it. This is mainly an issue with the threads fighting over the lock. There may be a better solution using double-buffering. | https://protein-geometry-database.readthedocs.io/en/develop/data_dump.html | 2021-02-25T07:36:32 | CC-MAIN-2021-10 | 1614178350846.9 | [] | protein-geometry-database.readthedocs.io |
Air gap archive (AER 2.31)¶
This section contains information about where to get the air gap archives and their contents.
The Air Gap archives are generated monthly, generally on the 1st of each month. Monthly archives are hosted at organized in folders by date.
Archive Contents¶
Installers Archive¶
All the installers and the latest Miniconda and Anaconda installers for all platforms are in the archive titled:
anaconda-enterprise-`date +%Y-%m-%d`.tar
It is about 14GB. It contains everything to install Anaconda Repository, Anaconda Enterprise Notebooks, Anaconda Adam and Anaconda Scale.
It contains:
Mirrors archives¶
In addition, the anaconda-server-sync-conda subdirectory contains mirror archives. These are platform specific conda packages that must be mirrored after AE-Repo is installed. If you only need packages for a subset of platforms, download the platform based installers as they will be much smaller in size.
Each component has an md5 file and a list file which are both small and included for convenience.
Note
Currently, the archives contain packages for channels: anaconda, R, adam, wakari. The anaconda-nb-extensions packages are in the wakari channel. | https://docs.anaconda.com/anaconda-repository/2.31/admin/airgap-archive/ | 2021-02-25T08:12:42 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.anaconda.com |
Set up Access to Role-Specific Scenarios#
ADONIS NP is organised into a number of so-called application scenarios that accommodate different stakeholders' needs. In order to set up access to these scenarios, you need to prepare the appropriate system roles. This involves the following steps:
Example Scenario: The ADO Money Bank
As the ADONIS NP administrator of the ADO Money Bank you have already set up a modelling environment for your company. Now you want to set up access to the application scenarios
"Design & Document"
"Control & Release"
"Read & Explore"
for your colleagues. The following training tasks will familiarize you with the typical activities in this context.
note
Available application scenarios vary depending on the application library and the licence. All examples and descriptions in this tutorial refer exclusively to the ADONIS BPMS Application Library.
note
Access to the Organisation Portal and to the release workflows is also role-specific and therefore set up in a similar way. Please refer to the sections Set Up Access to the Organisation Portal, Set up Access to Model Release Workflow and Set up Access to Document Release Workflow for details.
Create System Roles and Assign Users#
Similar to rights, system roles define and limit the possibilities of users within ADONIS NP. In particular, system roles grant or deny access to certain ADONIS NP web client features and metamodel elements.
Example Scenario: What needs to be done here?
Your next task is to set up a system role structure that contains users with access to different application scenarios. In order to do this you have to create the following system roles:
System role "Read & Explore Members":
Unique name: R&E
Members: User group "Process Responsible"
Interface texts: Read & Explore Members
Description: Members of this system role will have access to the "Read & Explore" scenario. The purpose of the "Read & Explore" scenario is to let users read processes, explore working instructions and process handbooks.
System role "Design & Document Members":
Unique name: D&D
Members: User group "Process Contributor"
Interface texts: Design & Document Members
Description: Members of this system role will have access to both the "Read & Explore" scenario and the "Design & Document" scenario. Via the "Design & Document" scenario they will be able to model and create transparency in a structured way.
System role "Control & Release Members":
Unique name: C&R
Members: User group "Process Responsible"
Interface texts: Control & Release Members
Description: Members of this system role will have access to the "Control & Release" scenario. The purpose of the "Control & Release" scenario is to let users review and release processes with a single click.
Create System Role#
In order to create a system role:
Open the User Management component via the component selection (1).
In the User Management tab, click the Manage system roles button (2).
Click the Create new system role... button (3).
In the Unique name box, type a name for the system role. This language-independent name uniquely identifies the system role (4).
Click the Add members... button and add system role members from the user catalogue (5). Click OK.
note
You can assign system roles to user groups or individual users.
In the Interface texts area, type a name for every language ADONIS NP supports. Interface Texts are texts that are visible on the user interface (6).
In the Description box, type a description of the system role (7).
Click OK (8). The new system role is added to the System roles catalogue.
Optionally you can also:
Select the check box Default role so that this system role is applied to all users on login.
Select the check box Relevant for metamodel rights so that you can edit the metamodel rights of the system role.
Assign Web Modules to System Roles#
You can assign web modules (plug-ins) to system roles to grant permissions for functionalities in the web client. This allows you to define different scenarios for different user groups or users.
Example Scenario: What needs to be done here?
Assign the following web modules to the system role "Read & Explore Members" by ticking the appropriate check boxes:
- Read & Explore and all its dependent web modules
Do not change the other settings.
Next, assign the following web modules to the system role "Design & Document Members":
All web modules assigned to the system role "Read & Explore Members"
Design & Document
Do not change the other settings.
Next, assign the following web modules to the system role "Control & Release Members":
- Control & Release
Do not change the other settings.
To check if the newly assigned users have access to the right scenarios:
Start the web client and log in as a user of the group "Process Responsible". Do you have access to the "Read & Explore" and "Control & Release" scenarios?
Log out and log in as a user of the group "Process Contributor". Do you have access to the "Read & Explore" and "Design & Document" scenarios?
With this step you have accomplished setting up access to the standard application scenarios in ADONIS NP for your users.
Assign Web Modules#
In order to assign web modules to a system role:
Open the Library Management component (1) and switch to the tab Component Settings (2).
Double-click the appropriate library in the Component Settings catalogue to open the list of components available for configuration (3).
Double-click the entry called “Web Modules” in the node Web Client (4). The settings open in the Library Management tab (5).
Select the Business Modules tab (6). This tab contains all web modules that can be assigned to users. In contrast, the System Modules are necessary for the operation of the web client and always activated for every user.
Select a web module from the list of web modules (7).
Activate or deactivate a web module for all users or for users with specific system roles by ticking the appropriate check boxes (8).
Confirm with OK (9).
important
The ADONIS NP application server has to be restarted if these settings are changed. Otherwise the changes will not become effective. | https://docs.boc-group.com/adonis/en/docs/11.0/administration_manual/ssc-000000/ | 2021-02-25T07:59:28 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['/adonis/en/assets/images/1e94cb389838418da4fafeb79abf22b238cf63eb-0d0e8947122d6555272b91ea2ea4c493.png',
'Create System Role'], dtype=object)
array(['/adonis/en/assets/images/b11e0286fdac6d1e8e5ea0f73c420d2b67a5c201-823a8f29129cc55f97cde6bdbc995882.png',
'Assign Web Modules'], dtype=object) ] | docs.boc-group.com |
Google Cloud Google Cloud Google Cloud-functions:latest
You can install a specific version by replacing
latest with a version number. For example:
confluent-hub install confluentinc/kafka-connect-gcp Functions Sink Connector Configuration Properties.
Quick Start¶
This quick start uses the.name": "<insert function name here>", "project.id": "<insert project id here>", "region": "<insert region here>", } }. | https://docs.confluent.io/5.5.0/connect/kafka-connect-gcp-functions/index.html | 2021-02-25T08:23:12 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.confluent.io |
Sheet Monkey was designed to make it as simple as possible to send data from your website into a Google Sheet without any backend code and minimal configuration.
All these guides require that you have a free Sheet Monkey account. If you haven't registered your account, please do that before continuing.
Every Sheet Monkey forms starts with a unique url (or a form action) where you can post your data. This url is available with every form you create in the Sheet Monkey dashboard. This form action is your connection to the Google Sheet.
Once you have your form action then you can follow three rules to send your data into the linked Google Sheet.
When building your HTML form the input names must match a column name in the linked Google Sheet.
<input type="text" name="Column 1" /><!-- In this example, anything submitted with this fieldwill be inserted beneath "Column 1" in the linked sheet -->
Each time the form is submitted, Sheet Monkey will append that submission to the bottom of the form as a new row.
When you submit a field with a name that Sheet Monkey doesn't recognize, it will automatically add it to the end of the spreadsheet.
<input type="text" name="Column 1" /><!-- Here's a field that doesn't exist in the spreadsheet --><input type="text" name="New Column" />
With this method, you can link your forms to a blank sheet and Sheet Monkey will automatically build the header fields for you the first time you submit the form.
Sheet Monkey was designed to give you as much control over appearance and validation as you want. Because of this all of our HTML is merely a suggestion. You can design your forms as creatively as you want and Sheet Monkey will accept the data and organize it in a sheet for you.
The best way to get familiar with how to use Sheet Monkey is to create a free account and build your first form. | https://docs.sheetmonkey.io/ | 2021-02-25T07:00:01 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.sheetmonkey.io |
Installation (AEN 4.2.1)¶
- Installation requirements (AEN 4.2.1)
- Preparing for installation (AEN 4.2.1)
- Installing the AEN server (AEN 4.2.1)
- Installing the AEN gateway (AEN 4.2.1)
- Installing the AEN compute node(s) (AEN 4.2.1)
- Configuring conda to use your local on-site AEN repository (AEN 4.2.1)
- Optional configuration (AEN 4.2.1)
- Upgrading AEN (AEN 4.2.1)
- Uninstalling AEN (AEN 4.2.1). | https://docs.anaconda.com/ae-notebooks/4.2.1/admin-guide/install/ | 2021-02-25T08:40:39 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.anaconda.com |
Smallest Free Space % Available in Temporary Tablespaces (TempTSLeftPct)
Warning
This feature is no longer supported by the KM version 9.7.11.03. For more information see, 9.7.11.03: Fix Pack 3 for BMC PATROL for Oracle Database.
This parameter displays the percentage of space left in the temporary tablespaces (both DMTS and LMTS). The calculations include both extensible and non-extensible datafiles and temp files.
TempTSLeftPct will go into warning when the percentage of space left in the TEMPORARY tablespace has reached 10%, and will go into alarm when the percentage of space left reaches 5%.
This parameter considers tablespace, user and object exclusion in its calculations for warnings and alarms.
This parameter is supported if the instance uses ASM storage.
Recommendations
If this parameter goes into alarm, consider adding another datafile.
BMC PATROL properties
BMC ProactiveNet Performance Management properties | https://docs.bmc.com/docs/PATROL4Oracle/97/smallest-free-space-available-in-temporary-tablespaces-temptsleftpct-603828644.html | 2021-02-25T08:46:43 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.bmc.com |
Docstring guide¶
“Code is more often read than written.”
—Guido van Rossum
In Jina, we are aware that documentation is an important part of sofware, here you should document the constructor. Use the parameters to document the constructor parameters under _() """ def __init__(self, param1: int, param2: str): """ Specify what the contructor does :param param1: This is an example of a param1 :type param1: int :param param2: This is an example of a param2 :type param2: str """ | https://docs.jina.ai/v0.9.33/chapters/docstring/docstring.html | 2021-02-25T08:23:39 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.jina.ai |
Traditionally, Kong Gateway has always required a database, either PostgreSQL or Cassandra, to store entities such as Routes, Services, and Plugins during runtime. The database settings are typically stored in a configuration file called
kong.conf.
Kong Gateway 1.1 added the ability to run the Gateway without a database. This ability is called DB-less mode. Since not having a database would preclude using the Admin API to create entities, the entities must be declared in another configuration file (YAML or JSON). This file is known as the declarative configuration, or declarative config for short.
DB-less mode and declarative configuration bring a number of benefits over using a database:
- Reduced lookup latency: all data is local to the cluster’s node.
- Reduced dependencies: only the Kong Gateway nodes and configuration are required.
- Single source of truth: entities are stored in a single file which can be source-controlled.
- New deployment models: Kong Gateway can now easily serve as a lightweight node.
Now that you’re familiar with the value that DB-less mode and declarative configuration can provide, let’s walk through how Kong Studio’s Insomnia Designer can help you go from an OpenAPI spec to Kong Declarative Configuration.
Prerequisites
Step 1: Create or import a spec into Insomnia Designer
Copy the spec you’d like to convert to declarative config to your clipboard. If you don’t have a spec on hand, you can test it out using the Petstore specification.
Navigate to the Documents Listing View and click Create, then select Blank Document from the dropdown.
- Name the document and click Create.
- Inside the Editor field, paste the specification you copied in Step 1. You should now see the specification in the Editor.
Step 2: Generate a config
Now that you’ve added a specification to Insomnia Designer, you can generate a Kong Declarative Configuration.
In the upper right-hand corner of the editor, click Generate Config.
A modal window appears displaying the generated configuration as YAML. At the bottom of the modal, click the button to Copy to Clipboard.
| https://docs.konghq.com/enterprise/2.3.x/studio/dec-conf-studio/ | 2021-02-25T07:14:46 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['https://s3.amazonaws.com/helpscout.net/docs/assets/59e383122c7d3a40f0ed78e2/images/5ea7f7292c7d3a7e9aebbe3f/file-jTMVWOdyOR.gif',
'Declarative config'], dtype=object)
array(['https://s3.amazonaws.com/helpscout.net/docs/assets/59e383122c7d3a40f0ed78e2/images/5ea7f8e82c7d3a7e9aebbe60/file-wQRSDB15e3.png',
'Generate config'], dtype=object) ] | docs.konghq.com |
2.2.5.10.3 ModLinkAtt Request Type Failure Response Body
The ModLinkAtt request type failure response body contains the following fields.
StatusCode (4 bytes): An unsigned integer that specifies the status of the request. This field MUST NOT be set to 0x00000000.
AuxiliaryBufferSize (4 bytes): An unsigned integer that specifies the size, in bytes, of the AuxiliaryBuffer field.
AuxiliaryBuffer (variable): An array of bytes that constitute the auxiliary payload data returned from the server. The size of this field, in bytes, is specified by the AuxiliaryBufferSize field. For details about extended buffers and auxiliary payloads, see [MS-OXCRPC] section 3.1.4.2.1 and section 3.1.4.2.2. | https://docs.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxcmapihttp/8a5f497a-57ae-471f-9ac4-6caf44cf55dd | 2021-02-25T07:55:45 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.microsoft.com |
2.2.4.9.2 Server Response Extensions
A successful response takes the following format. If the server receives more than one SMB_COM_NT_CREATE_ANDX request from a client before it sends back any response, then the server can respond to these requests in any order.
When a client requests extended information, then the response takes the form described below. Aside from the WordCount, ResourceType, NMPipeStatus_or_FileStatusFlags, FileId, VolumeGUID, FileId, MaximalAccessRights, and GuestMaximalAccessRights fields, all other fields are as specified in [MS-CIFS] section 2.2.4.64.2.
SMB_Parameters { UCHAR WordCount; Words { UCHAR AndXCommand; UCHAR AndXReserved; USHORT AndXOffset; UCHAR OplockLevel; USHORT FID; ULONG CreateDisposition; FILETIME CreateTime; FILETIME LastAccessTime; FILETIME LastWriteTime; FILETIME LastChangeTime; SMB_EXT_FILE_ATTR ExtFileAttributes; LARGE_INTERGER AllocationSize; LARGE_INTERGER EndOfFile; USHORT ResourceType; USHORT NMPipeStatus_or_FileStatusFlags; UCHAR Directory; GUID VolumeGUID; ULONGLONG FileId; ACCESS_MASK MaximalAccessRights; ACCESS_MASK GuestMaximalAccessRights; } } SMB_Data { USHORT ByteCount; }
SMB_Parameters:
WordCount (1 bytes): This field SHOULD<50> be 0x2A.
ResourceType (2 bytes): The file type. This field MUST be interpreted as follows:
NMPipeStatus_or_FileStatusFlags (2 bytes): A union between the NMPipeStatus field and the new FileStatusFlags field. If the ResourceType field is a named pipe (FileTypeByteModePipe or FileTypeMessageModePipe), then this field MUST be the NMPipeStatus field:
NMPipeStatus (2 bytes): A 16-bit field that shows the status of the opened named pipe. This field is formatted as an SMB_NMPIPE_STATUS ([MS-CIFS] section 2.2.1.3).
If the ResourceType field is FileTypeDisk, then this field MUST be the FileStatusFlags field:
FileStatusFlags (2 bytes): A 16-bit field that shows extra information about the opened file or directory. Any combination of the following flags is valid. Unused bit fields SHOULD be set to zero by the server and MUST be ignored by the client.
-
For all other values of ResourceType, this field SHOULD be set to zero by the server when sending a response and MUST be ignored when received by the client.
VolumeGUID (16 bytes): This field MUST be a GUID value that uniquely identifies the volume on which the file resides. This field MUST zero if the underlying file system does not support volume GUIDs.<51>
FileId (8 bytes): This field MUST be a 64-bit opaque value that uniquely identifies this file on a volume. This field MUST be set to zero if the underlying file system does not support unique FileId numbers on a volume. If the underlying file system does support unique FileId numbers, then this value SHOULD<52> be set to the unique FileId for this file.
MaximalAccessRights (4 bytes): The maximum access rights that the user opening the file has been granted for this file open. This field MUST be encoded in an ACCESS_MASK format, as specified in section 2.2.1.4.
GuestMaximalAccessRights (4 bytes): The maximum access rights that the guest account has when opening this file. This field MUST be encoded in an ACCESS_MASK format, as specified in section 2.2.1.4. Note that the notion of a guest account is implementation-specific<53>. Implementations that do not support the notion of a guest account MUST set this field to zero.
SMB_Data: | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-smb/9e7d1874-92bd-4409-8089-ffc1a4a4d94e | 2021-02-25T07:24:54 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.microsoft.com |
Date: Thu, 25 Feb 2021 08:42:28 +0000 (GMT) Message-ID: <1182232338.92738.1614242548760@df68ed866f50> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_92737_1633525575.1614242548760" ------=_Part_92737_1633525575.1614242548760 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Contents:=20
Input can be specified as a column reference or a string literal, = although string literal usage is rare.
RIGHTFINDfunction useful for filtering da= ta before it has been completely un-nested into tabular data., se= e Wrangle Language.
Column reference example:=20
rightfind(MyName,'find this',true,0)
Ou=
tput: Searches the
MyName=
code> column value for the last instance of the string
find this from the end of the value, ignoring cas=
e. If a match is found, the index value from the beginning of the string is=
returned.
String literal example:=20
rightfind('Hello, World','lo',false,2)
Ou=
tput: Searches the string
H=
ello, World for the string
lo=
, in a case-sensitive search from the third-to-last character of the=
string. Since the match is found at the fourth character from the left, th=
e value
3 is returned.
If example:=20
if(rightfind(SearchPool,'FindIt') >=3D = 0, 'found it', '')
Ou=
tput: Searches the
SearchPo=
ol column value for the string
FindIt from the end of the value (default). Default behavior is to=
not ignore case. If the string is found, the value
found it is returned. Otherwise, the value=
is empty.
rightfind(column_string,string_pattern,[ig= nore_case], [start_index])
For more information on syntax standards, see Language Documentation Syntax Notes<= /a>.
Name of the item to be searched. Valid values can be:
'Hello, World' ).
Missing values generate the start-index parameter value.
Usage Notes:=20
String literal or pattern to find. This value can be a string lite= ral, a Pattern , or a regular expression.
'Hello, World').
Usage Notes:
If
true, the
RIGHTFIND function ignores case w=
hen trying to match the string literal or pattern value.
Default value is
false, which means that case-sensitive mat=
ching is performed by default.
Usage Notes:
The index of the character in the column or string literal value at whic=
h to begin the search, from the end of the string. For example, a value of =
2 instructs the
RIGHTFIND function to begin searc=
hing from the third character in the column or string value.
NOTE: Index values begin at
0. If not spec=
ified, the default value is
0, which searches the entire strin=
g from the end of the string.
Value must be a non-negative integer value.
Usage Notes:
Tip: For additional examples, see Common Tasks.
In this example, you must extract filenames from a column of URL values.= Some rows do not have filenames, and there is some variation in the struct= ure of the URLs.
Source:
Transformation:
To preserve the original column, you can use the following to create a w= orking version of the source:
You can use the following to standardize the formatting of the working c= olumn:
Tip: You may need to modify the above to use a Pattern to also remove
https://.
The next two steps calculate where in the
filename values t=
he forward slash and dot values are located, if at all:
If either of the above values is
0, then there is no filena=
me present:
Results:
After removing the intermediate columns, you should end up with somethin= g like the following:
=20 | https://docs.trifacta.com/exportword?pageId=109906342 | 2021-02-25T08:42:28 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.trifacta.com |
Composing forms
After you create a new form, you can compose it on the form's Form builder tab. The Form builder tab offers an intuitive graphical interface intended for content creators and marketers.
To open the Form builder tab:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
The Form builder interface is divided into two parts:
- The designer window, located in the middle, used to add, reorder, or remove individual form fields and set the form's layout.
- The properties panel, located on the right, enabling further configuration and customization of each selected form field.
Adding new fields
To add new form fields to an existing form:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click on the Add Field () button.
- Click on the field you wish to add from the list of available form components.
You have added the selected field to the form.
Tip: After creating a new field, we recommend setting an appropriate Name property based on the field's purpose. By default, the Name is generated based on the type of the selected form component and changing the value later once the form starts collecting data can be problematic. See Configuring field properties for details.
Moving and reordering fields
To change the order of existing fields or move them between different zones in the form layout:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click and drag the field by the drag handle () or the header that appears when you select or hover your mouse over a field.
You have changed the position of the field within the form.
Removing fields
To remove a field from a form:
Removing fields from forms with existing records
Removing a field from forms with existing records also deletes all data gathered using that field.
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click on the field you wish to remove.
- Click on the delete () icon in the upper right.
- Confirm the removal on the popup dialog.
You have removed the field from the form.
Configuring field properties
When you add new form fields or select existing ones, the panel on the right side of the form builder interface is replaced with the selected field's properties panel. The properties panel allows you to configure the field's label, tooltip, default value, and other properties.
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click on the field whose properties you wish to configure.
- Specify the field's properties in the panel on the right
- Click Apply to save any changes of the field properties.
You have configured the selected form field.
Tip: You can configure fields to behave as "smart fields", which means they are only displayed on repeated views of a form, as a replacement for other fields that were already filled in by the given visitor. For more information, see Using smart fields in forms.
Editing form layout
The overall layout of a form is composed of elements called sections. Each section contains one or more zones, to which you can add fields.
The system provides a Default section which organizes fields in a basic single-column layout. Your developers may prepare additional types of sections that allow you to create more advanced form layouts.
To edit the layout of a form:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Add or adjust sections using the gray UI elements:
- Add section – click a gray plus button located on the left to insert a new section. The list of available section types depends on the implementation of your website.
- Move section – hover over a section and drag it by the drag handle among the section buttons on the right.
- Change section type – you can change the type of a section to adjust the layout of the form. Hover over the section you want to modify, click the Change section type button, and select the section type you want to use.
- Delete section – hover over the section you want to remove and click the delete button on the right. This also removes all form fields in the section.
Changes made to sections are saved automatically. After you create the required form layout through sections, you can move fields between the resulting zones.
Adding field validation
Validation rules allow you to define constraints on user input. Each field can contain multiple validation rules.
If a form field with validation rules is submitted unfilled, the system skips the evaluation of all associated validation rules. If you wish to enforce user input for a particular field, mark it as Required.
To add a field validation rule:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click on the field whose validation rules you wish to configure.
- In the properties panel on the right, switch to the Validation tab.
- Click Add validation rule.
- Select a rule from the drop-down list.
- Configure the rule according to your requirements and specify its Error message.
- Click Apply.
You have added the validation rule for the selected form field.
Adding field visibility conditions
Field visibility conditions allow you to specify rules based on which certain form fields can be hidden or displayed to users. By default, you can specify field comparison validation rules that set the visibility of fields based on values provided elsewhere in the form.
Your developers can extend this functionality with other conditions by utilizing contextual information provided by the system. For example, based on whether the user is in a certain persona, belongs to a specific contact group, is subscribed to an email feed, and so on.
Each form field can only have one visibility condition specified. Moreover, certain visibility conditions are only available if fields of a matching type precede the field that you are configuring (i.e., are placed "above" the field in the form builder).
To configure a field's visibility:
- Open the Forms application.
- Edit () a form.
- Switch to the Form builder tab.
- Click on the field whose visibility condition you wish to configure.
- On the properties panel, switch to the Visibility tab.
- Select from the available options:
- Always – the field is always visible. This is the default state for all fields.
- Never – the field is always hidden. Useful if you need to remove a field from the form but want to keep its associated data stored in the system.
- Condition – the field is displayed based on a set condition.
- Click Apply to save your changes.
You have added the visibility condition to the selected field.
Was this page helpful? | https://docs.xperience.io/k12sp/managing-website-content/forms/composing-forms | 2021-02-25T08:23:14 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.xperience.io |
Content personalization
Content personalization is an on‑line marketing feature that can significantly increase the flexibility of your website. Personalization allows you to create pages that display different content depending on the circumstances in which they are viewed. For example, you can custom-build pages that offer special content for different types of visitors or dynamically change for each user according to the actions they performed on the website.
How is personalization applied
Personalization is applied through the basic components that form the content of pages, which includes:
- Web parts
- Entire web part zones
- Widgets added into page editor zones
Before you start personalizing content
Consider which data you want to use to personalize content. This is important to achieve effective and well-targeted content personalization. You can use the data provided by other on‑line marketing features, such as:
To start personalizing content
- Enable content personalization
- Prepare the personas, groups, activities, scores and/or campaigns that you want to personalize content with
- Define personalization variants
- (Personas) Recommend documents to personas
Features described on this page require the Kentico EMS license. | https://docs.xperience.io/k8/on-line-marketing-features/content-personalization | 2021-02-25T08:23:35 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.xperience.io |
Data Access Pattern¶
Tip
This section just serves as a very concise overview of the available functionality that is provided by MLDataPattern.jl. Take a look at the full documentation for a far more detailed treatment.
If there is one requirement that almost all machine learning experiments have in common, it is that they have to interact with "data" in one way or the other. After all, the goal is for a program to learn from the implicit information contained in that data. Consequently, it is of no surprise that over time a number of particularly useful patterns emerged for how to utilize this data effectively. For instance, we learned that we should leave a subset of the available data out of the training process in order to spot and subsequently prevent over-fitting.
Terms and Definitions¶
In the context of this package we differentiate between two categories of data sources based on some useful properties. A “data source”, by the way, is simply any Julia type that can provide data. We need not be more precise with this definition, since it is of little practical consequence. The definitions that matter are for the two sub-categories of data sources that this package can actually interact with: Data Containers and Data Iterators. These abstractions will allow us to interact with many different types of data using a coherent and non-invasive interface.
- Data Container
For a data source to belong in this category it needs to be able to provide two things:
- The total number of observations \(N\) that the data source contains.
- A way to query a specific observation or sequence of observations. This must be done using indices, where every observation has a unique index \(i \in I\) assigned from the set of indices \(I = \{1, 2, ..., N\}\).
- Data Iterator
To belong to this group, a data source must implement Julia’s iterator interface. The data source may or may not know the total amount of observations it can provide, which means that knowing \(N\) is not necessary.
The key requirement for an iteration-based data source is that every iteration consistently returns either a single observation or a batch of observations.
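To make the distinction concrete, the following minimal sketch contrasts how the two categories are typically consumed. It assumes some feature matrix X as the data source and uses eachbatch, one of the iteration helpers provided by MLDataPattern for the second category (the exact keyword names can vary between package versions):

# Index-based access: an Array is a data container
for i in 1:nobs(X)
    obs = getobs(X, i)       # explicitly request observation i
end

# Iteration-based access: eachbatch wraps the container into a data iterator
for batch in eachbatch(X, size = 2)
    # each iteration lazily yields a batch of 2 observations
end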
The more flexible of the two categories are what we call data containers. A good example for such a type is a plain Julia Array or a DataFrame. Well, almost. To be considered a data container, the type has to implement the required interface. In particular, a data container has to implement the functions getobs() and nobs(). For convenience both of those implementations are already provided for Array and DataFrame out of the box. Thus on package import each of these types becomes a data container type. For more details on the required interface take a look at the section on Data Container in the MLDataPattern documentation.
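As a rough sketch of what that interface amounts to, a custom type could opt in by defining the two functions for itself. The snippet below is only an illustration with a hypothetical type; depending on the package version, nobs and getobs may need to be extended from a different home module (for example StatsBase instead of LearnBase):

using LearnBase

# Hypothetical type that stores its observations as a vector of feature vectors
struct MyData
    samples::Vector{Vector{Float64}}
end

# Total number of observations contained in the type
LearnBase.nobs(data::MyData) = length(data.samples)

# Return the observation(s) addressed by a single index or a vector of indices
LearnBase.getobs(data::MyData, idx) = data.samples[idx]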
Working with Data Container¶
Consider the following toy feature matrix X, which has 2 rows and 6 columns. We can use nobs() to query the number of observations it contains, and getobs() to query one or more specific observation(s).

julia> X = rand(2, 6)
2×6 Array{Float64,2}:
 0.226582  0.933372  0.505208   0.0443222  0.812814  0.11202
 0.504629  0.522172  0.0997825  0.722906   0.245457  0.000341996

julia> nobs(X)
6

julia> getobs(X, 2) # query the second observation
2-element Array{Float64,1}:
 0.933372
 0.522172

julia> getobs(X, [4, 1]) # create a batch with observation 4 and 1
2×2 Array{Float64,2}:
 0.0443222  0.226582
 0.722906   0.504629
As you may have noticed, the two functions make a pretty strong assumption about how to interpret the shape of X. In particular, they assume that each column denotes a single observation. This may not be what we want. Given that X has two dimensions that we could assign meaning to, we should have the opportunity to choose which dimension enumerates the observations. After all, we can think of X as a data container that has 6 observations with 2 features each, or as a data container that has 2 observations with 6 features each. To allow for that choice, all relevant functions accept the optional parameter obsdim. For more information take a look at the section on Observation Dimension.
julia> nobs(X, obsdim = 1)
2

julia> getobs(X, 2, obsdim = 1)
6-element Array{Float64,1}:
 0.504629
 0.522172
 0.0997825
 0.722906
 0.245457
 0.000341996
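Note that obsdim is not limited to nobs() and getobs(). The higher-level functions discussed below accept it as well, so a row-major interpretation can be carried through an entire pipeline. A short sketch without outputs (assuming the same matrix X):

Xshuffled = shuffleobs(X, obsdim = 1)            # permute the rows instead of the columns
train, test = splitobs(X, at = 0.7, obsdim = 1)  # split along the first dimension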
While arrays are very useful to work with, they are not the only type of data container that is supported by this package. Consider the following toy DataFrame.
julia> df = DataFrame(x1 = rand(4), x2 = rand(4))
4×2 DataFrames.DataFrame
│ Row │ x1       │ x2        │
├─────┼──────────┼───────────┤
│ 1   │ 0.226582 │ 0.505208  │
│ 2   │ 0.504629 │ 0.0997825 │
│ 3   │ 0.933372 │ 0.0443222 │
│ 4   │ 0.522172 │ 0.722906  │

julia> nobs(df)
4

julia> getobs(df, 2)
1×2 DataFrames.DataFrame
│ Row │ x1       │ x2        │
├─────┼──────────┼───────────┤
│ 1   │ 0.504629 │ 0.0997825 │
Subsetting and Shuffling¶
Every data container can be subsetted manually using the low-level function datasubset(). Its signature is identical to getobs(), but instead of copying the data it returns a lazy subset. A lot of the higher-level functions use datasubset() internally to provide their functionality. This allows for delaying the actual data access until the data is actually needed. For arrays the returned subset is in the form of a SubArray. For more information take a look at the section on Data Subsets.
julia> datasubset(X, 2)
2-element SubArray{Float64,1,Array{Float64,2},Tuple{Colon,Int64},true}:
 0.933372
 0.522172

julia> datasubset(X, [4, 1])
2×2 SubArray{Float64,2,Array{Float64,2},Tuple{Colon,Array{Int64,1}},false}:
 0.0443222  0.226582
 0.722906   0.504629

julia> datasubset(X, 2, obsdim = 1)
6-element SubArray{Float64,1,Array{Float64,2},Tuple{Int64,Colon},true}:
 0.504629
 0.522172
 0.0997825
 0.722906
 0.245457
 0.000341996
This is of course also true for any DataFrame, in which case the function returns a SubDataFrame.
julia> datasubset(df, 2)
1×2 DataFrames.SubDataFrame{Array{Int64,1}}
│ Row │ x1       │ x2        │
├─────┼──────────┼───────────┤
│ 1   │ 0.504629 │ 0.0997825 │

julia> datasubset(df, [4, 1])
2×2 DataFrames.SubDataFrame{Array{Int64,1}}
│ Row │ x1       │ x2       │
├─────┼──────────┼──────────┤
│ 1   │ 0.522172 │ 0.722906 │
│ 2   │ 0.226582 │ 0.505208 │
Note that a data subset doesn't strictly have to be a true "subset" of the data set. For example, the function shuffleobs() returns a lazy data subset, which contains exactly the same observations, but in a randomly permuted order.
julia> shuffleobs(X)
2×6 SubArray{Float64,2,Array{Float64,2},Tuple{Colon,Array{Int64,1}},false}:
 0.0443222  0.812814  0.226582  0.11202      0.505208   0.933372
 0.722906   0.245457  0.504629  0.000341996  0.0997825  0.522172

julia> shuffleobs(df)
4×2 DataFrames.SubDataFrame{Array{Int64,1}}
│ Row │ x1       │ x2        │
├─────┼──────────┼───────────┤
│ 1   │ 0.226582 │ 0.505208  │
│ 2   │ 0.933372 │ 0.0443222 │
│ 3   │ 0.522172 │ 0.722906  │
│ 4   │ 0.504629 │ 0.0997825 │
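Because the returned object is only a lazy view (a SubArray or SubDataFrame), the shuffle itself does not copy any observations. If an actual copy is needed, for example to pass the shuffled data to code that expects a plain Array, it can be materialized with getobs():

Xshuffled = getobs(shuffleobs(X))   # copies the lazily shuffled data into a new Array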
Since this function is non-deterministic, it raises the question of what to do when our data set is made up of multiple variables. It is not uncommon, for example, that the targets of a labeled data set are stored in a separate Vector. To support such a scenario, all relevant functions also accept a Tuple as the data argument. If that is the case, then all elements of the given tuple will be processed in the exact same manner. The return value will then again be a tuple with the individual results. As you can see in the following code snippet, the observation-link between x and y is preserved after the shuffling. For more information about grouping data containers in a Tuple, take a look at the section on Tuples and Labeled Data.
julia> x = collect(1:6);

julia> y = [:a, :b, :c, :d, :e, :f];

julia> xs, ys = shuffleobs((x, y))
([6,1,4,5,3,2],Symbol[:f,:a,:d,:e,:c,:b])
Splitting into Train / Test¶
A common requirement in a machine learning experiment is to split the data set into a training and a test portion. While we could already do this manually using datasubset(), this package also provides a high-level convenience function splitobs().
julia> y1, y2 = splitobs(y, at = 0.6)
(Symbol[:a,:b,:c,:d],Symbol[:e,:f])

julia> train, test = splitobs(df)
(3×2 DataFrames.SubDataFrame{UnitRange{Int64}},
 1×2 DataFrames.SubDataFrame{UnitRange{Int64}}
 │ Row │ x1       │ x2       │
 ├─────┼──────────┼──────────┤
 │ 1   │ 0.522172 │ 0.722906 │)
As we can see in the example above, the function
splitobs()
performs a static “split” of the given data at the relative
position
at, and returns the result in the form of two data
subsets. It is also possible to specify multiple fractions, which
will cause the function to perform additional splits.
julia> y1, y2, y3 = splitobs(y, at = (0.5, 0.3)) (Symbol[:a,:b,:c],Symbol[:d,:e],Symbol[:f])
Of course, a simple static split isn’t always what we want. In
most situations we would rather partition the data set into two
disjoint subsets using random assignment. We can do this by
combining
splitobs() with
shuffleobs(). Since neither of these functions copies the actual data, we do not pay any significant performance penalty for nesting such "subsetting" functions.
julia> y1, y2 = splitobs(shuffleobs(y), at = 0.6) (Symbol[:c,:e,:f,:a],Symbol[:b,:d]) julia> y1, y2, y3 = splitobs(shuffleobs(y), at = (0.5, 0.3)) (Symbol[:b,:f,:e],Symbol[:d,:a],Symbol[:c])
It is also possible to call
splitobs() with two data
containers grouped in a
Tuple. While this is especially
useful for working with labeled data, neither implies the other.
That means that one can use tuples to group together unlabeled
data, or have a labeled data container that is not a tuple (see
Labeled Data Container for some examples). For instance, since
the function
splitobs() performs a static split, it doesn’t
actually care if the given
Tuple describes a labeled data
set. In fact, it makes no difference. julia> y = ["a", "a", "b", "b", "b", "b"] 6-element Array{String,1}: "a" "a" "b" "b" "b" "b" julia> (X1, y1), (X2, y2) = splitobs((X, y), at = 0.5); julia> y1, y2 (String["a","a","b"],String["b","b","b"])
Stratified Sampling¶
Usually it is a good idea to actively preserve the class distribution in every data subset, so that the subsets are similar in structure and more likely to be representative of the full data set. The function stratifiedobs() does exactly this: it assigns observations to the subsets at random (similar in spirit to combining shuffleobs() and splitobs()), while trying to preserve the relative class frequencies in each subset.
julia> (X1, y1), (X2, y2) = stratifiedobs((X, y), p = 0.5); julia> y1, y2 (String["b","a","b"],String["b","b","a"])
Note how both y1 and y2 contain twice as many "b" as "a", just like y does. For more information on stratified sampling, take a look at Stratified Sampling.
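The keyword p controls which fraction of each class ends up in the first subset, analogous to at for splitobs(). It also works when just the targets are passed as the data; the value below is only illustrative and the output is omitted:

julia> train_y, test_y = stratifiedobs(y, p = 0.7);   # roughly 70% of each class ends up in train_y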
Over- and Undersampling¶
On the other hand, some functions require the presence of targets
to perform their respective tasks. In such a case, it is always
assumed that the last tuple element contains the targets. Two
such functions are
undersample() and
oversample(),
which can be used to re-sample a labeled data container in such a
way that the resulting class distribution is uniform.
julia> undersample(y) 4-element SubArray{String,1,Array{String,1},Tuple{Array{Int64,1}},false}: "a" "b" "b" "a" julia> Xnew, ynew = undersample((X, y), shuffle = false) ([0.226582 0.933372 0.812814 0.11202; 0.504629 0.522172 0.245457 0.000341996], String["a","b","b","a"]) julia> Xnew, ynew = oversample((X, y), shuffle = true) ([0.11202 0.933372 … 0.505208 0.0443222; 0.000341996 0.522172 … 0.0997825 0.722906], String["a","b","a","a","b","a","b","b"])
If need be, all functions that require a labeled data container
accept a target-extraction-function as an optional first
parameter. If such a function is provided, it will be applied to
each observation individually. In the following example the
function
indmax will be applied to each column slice of
Y
in order to derive a class label, which is then used for
down-sampling. For more information take a look at the section on
Labeled Data Container.
julia> Y = [1. 0. 0. 0. 0. 1.; 0. 1. 1. 1. 1. 0.] 2×6 Array{Float64,2}: 1.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0 1.0 0.0 julia> Xnew, Ynew = undersample(indmax, (X, Y)); julia> Ynew 2×4 SubArray{Float64,2,Array{Float64,2},Tuple{Colon,Array{Int64,1}},false}: 1.0 0.0 0.0 1.0 0.0 1.0 1.0 0.0
Special support is provided for
DataFrame where the first
parameter can also be a
Symbol that denotes which column
contains the targets.
julia> df = DataFrame(x1 = rand(5), x2 = rand(5), y = [:a,:a,:b,:a,:b]) 5×3 DataFrames.DataFrame │ Row │ x1 │ x2 │ y │ ├─────┼──────────┼───────────┼───┤ │ 1 │ 0.226582 │ 0.0997825 │ a │ │ 2 │ 0.504629 │ 0.0443222 │ a │ │ 3 │ 0.933372 │ 0.722906 │ b │ │ 4 │ 0.522172 │ 0.812814 │ a │ │ 5 │ 0.505208 │ 0.245457 │ b │ julia> undersample(:y, df) 4×3 DataFrames.SubDataFrame{Array{Int64,1}} │ Row │ x1 │ x2 │ y │ ├─────┼──────────┼───────────┼───┤ │ 1 │ 0.226582 │ 0.0997825 │ a │ │ 2 │ 0.933372 │ 0.722906 │ b │ │ 3 │ 0.522172 │ 0.812814 │ a │ │ 4 │ 0.505208 │ 0.245457 │ b │
K-Folds Repartitioning¶
This package also provides functions to perform re-partitioning strategies. These result in vector-like views that can be iterated over, in which each element is a different partition of the original data. Note again that all partitions are just lazy subsets, which means that no data is copied. For more information take a look at Repartitioning Strategies.
julia> x = collect(1:10); julia> folds = kfolds(x, k = 5) 5-fold MLDataPattern.FoldsView of 10 observations: data: 10-element Array{Int64,1} training: 8 observations/fold validation: 2 observations/fold obsdim: :last julia> train, val = folds[1] # access first fold ([3,4,5,6,7,8,9,10],[1,2])
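Because the folds object behaves like a vector of train/validation pairs, the usual cross-validation pattern is to iterate over it directly; the loop body below is only a placeholder:

julia> for (train, val) in folds
           # fit a model using `train`, then evaluate it on `val`
           println(length(train), " / ", length(val))
       end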
Data Views and Iterators¶
Such “views” also exist for other purposes. For example, the
function
obsview() will create a decorator around some data
container, that makes the given data container appear as a vector
of individual observations. This “vector” can then be indexed
into or iterated over. julia> ov = obsview(X) 6-element obsview(::Array{Float64,2}, ObsDim.Last()) with element type SubArray{...}: [0.226582,0.504629] [0.933372,0.522172] [0.505208,0.0997825] [0.0443222,0.722906] [0.812814,0.245457] [0.11202,0.000341996]
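The resulting view supports ordinary indexing and iteration, so individual observations can be pulled out or processed one by one (illustrative, output omitted):

julia> ov[3];                       # view of the third observation

julia> [sum(obs) for obs in ov];    # iterate over all observations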
Similarly, the function
batchview() creates a decorator
that makes the given data container appear as a vector of equally
sized mini-batches.
julia> bv = batchview(X, size = 2) 3-element batchview(::Array{Float64,2}, 2, 3, ObsDim.Last()) with element type SubArray{...} [0.226582 0.933372; 0.504629 0.522172] [0.505208 0.0443222; 0.0997825 0.722906] [0.812814 0.11202; 0.245457 0.000341996]
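A batch view can be used the same way; iterating over it yields one mini-batch at a time, and each element behaves like a regular data container (illustrative):

julia> for batch in bv
           @assert nobs(batch) == 2   # every element is a lazy batch of two observations
       end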
A third but conceptually different kind of view is provided by
slidingwindow(). This function is particularly useful for
preparing sequence data for various training tasks. For more
information take a look at the section on Data Views.
julia> data = split("The quick brown fox jumps over the lazy dog") 9-element Array{SubString{String},1}: "The" "quick" "brown" "fox" "jumps" "over" "the" "lazy" "dog" julia> A = slidingwindow(i->i+2, data, 2, stride=1) 7-element slidingwindow(::##9#10, ::Array{SubString{String},1}, 2, stride = 1) with element type Tuple{...}: (["The", "quick"], "brown") (["quick", "brown"], "fox") (["brown", "fox"], "jumps") (["fox", "jumps"], "over") (["jumps", "over"], "the") (["over", "the"], "lazy") (["the", "lazy"], "dog") julia> A = slidingwindow(i->[i-2:i-1; i+1:i+2], data, 1) 5-element slidingwindow(::##11#12, ::Array{SubString{String},1}, 1) with element type Tuple{...}: (["brown"], ["The", "quick", "fox", "jumps"]) (["fox"], ["quick", "brown", "jumps", "over"]) (["jumps"], ["brown", "fox", "over", "the"]) (["over"], ["fox", "jumps", "the", "lazy"]) (["the"], ["jumps", "over", "lazy", "dog"])
Aside from data containers, there is also another sub-category of data sources, called data iterators, that cannot be indexed into. For example, the following code creates an object that, when iterated over, continuously and indefinitely samples a random observation (with replacement) from the given data container.
julia> iter = RandomObs(X) RandomObs(::Array{Float64,2}, ObsDim.Last()) Iterator providing Inf observations
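Because such an iterator is infinite, it is typically limited explicitly, for example with take from Base, or consumed inside a loop with its own stopping condition (illustrative):

julia> using Base.Iterators: take

julia> samples = collect(take(iter, 5));   # draw 5 random observations and stop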
To give a second example for a data iterator, the type
RandomBatches generates randomly sampled mini-batches
of a fixed size. For more information on that topic, take a look
at the section on Data Iterators.
julia> iter = RandomBatches(X, size = 10) RandomBatches(::Array{Float64,2}, 10, ObsDim.Last()) Iterator providing Inf batches of size 10 julia> iter = RandomBatches(X, count = 50, size = 10) RandomBatches(::Array{Float64,2}, 10, 50, ObsDim.Last()) Iterator providing 50 batches of size 10
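When a count is given the iterator is finite, so it can be consumed with an ordinary for loop, which is a common pattern for stochastic training; the loop body below is only a placeholder:

julia> for batch in RandomBatches(X, count = 50, size = 10)
           # `batch` holds 10 randomly sampled observations; update a model with it here
       end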
Putting it all together¶
Let us round out this introduction by taking a look at a “hello world” example (with little explanation) to get a feeling for how to combine the various functions of this package in a typical ML scenario.
# X is a matrix; Y is a vector
X, Y = rand(4, 150), rand(150)
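What follows is a sketch of how the functions introduced above are typically combined into such a pipeline; the variable names and parameter values are illustrative rather than prescriptive.

# shuffle once so that the train/test assignment is random
Xs, Ys = shuffleobs((X, Y))

# hold out 15% of the observations for testing
(train_X, train_Y), (test_X, test_Y) = splitobs((Xs, Ys), at = 0.85)

# 10-fold cross-validation on the remaining data
for ((fold_X, fold_Y), (val_X, val_Y)) in kfolds((train_X, train_Y), k = 10)
    # iterate over the training fold in mini-batches of 5 observations
    for (batch_X, batch_Y) in eachbatch((fold_X, fold_Y), size = 5)
        # ... fit the model on (batch_X, batch_Y) and evaluate on (val_X, val_Y) ...
    end
end

The function eachbatch() used here iterates by re-using a single pre-allocated buffer for the batch it is currently yielding, which avoids allocating a new array on every step.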
Note that one is not required to work with buffers like this, as stateful iterators can have undesired side-effects when used
without care. For example
collect(eachbatch(X)) would result
in an array that has the exact same batch in each position.
Oftentimes, though, reusing buffers is preferable. This package
provides different alternatives for different use-cases. | https://mldatautilsjl.readthedocs.io/en/dev/data/pattern.html | 2021-02-25T07:23:10 | CC-MAIN-2021-10 | 1614178350846.9 | [] | mldatautilsjl.readthedocs.io |
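One such alternative is the mutating accessor getobs!, which writes an observation into a buffer provided by the caller; a minimal sketch, assuming the X from the earlier examples:

julia> buffer = getobs(X, 1);        # allocate a buffer once, using the first observation

julia> for i in 1:nobs(X)
           getobs!(buffer, X, i)     # overwrite the buffer with observation i
           # ... use `buffer` before the next iteration overwrites it ...
       end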
Uploading a package¶
To upload a package to Repository, using the Client CLI, run the upload command:
anaconda login
anaconda upload PACKAGE
NOTE: Replace
PACKAGE with the name of the desired package.
Repository automatically detects whether you are uploading a package or a notebook, along with its type and version.
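If the automatic detection does not match what you intended, the Client can usually be told explicitly; the flags below come from anaconda-client and may differ between versions, so treat them as illustrative and check anaconda upload --help:

anaconda upload --package-type conda PACKAGE
anaconda upload --label dev PACKAGE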
Your package is now available at:
https://<your-anaconda-repo>/USERNAME/PACKAGE
NOTE:
<your-anaconda-repo> is the name of your local
Repository,
USERNAME is your username and
PACKAGE is the
package name.
Anyone can download your package by using Client:
anaconda download USERNAME/PACKAGE
NOTE:
USERNAME is their username, and
PACKAGE is your
package name.
If you want to restrict access to your package, see Controlling access to packages. | https://docs.anaconda.com/anaconda-repository/user-guide/tasks/pkgs/upload-pkg/ | 2021-02-25T08:42:52 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.anaconda.com |
Unable to view Snappy-compressed files
You must install the
python-snappy library on your cluster to view
files compressed with Snappy using the Hue File Browser and the HBase Browser.
Post-installation, Hue automatically detects and displays the Snappy-compressed
files.
The
python-snappy library is incompatible with the python library
called
snappy. You must uninstall
snappy if it is
present on your cluster.
Run the following command to check whether the
snappy library is installed on your cluster:
/usr/bin/pip show snappy
No output on the console indicates that the
snappy library is not installed on your cluster. If you get any results for
snappy, then uninstall it by running the following command:
/usr/bin/pip uninstall snappy
Next, check whether the
python-snappy library is installed on your cluster by running the following command:
/usr/bin/pip show python-snappy
Sample output:
Name: python-snappy Version: 0.5.4 Location: /usr/lib64/python2.7/site-packages
- Stop the Hue service by going to .
- Change to the following directory depending on whether you have used parcels or packages to set up your CDH cluster. For parcels:
cd /opt/cloudera/parcels/CDH/lib/hue
For packages:
cd /usr/lib/hue
- Install the
python-snappy package by running the following commands:
yum install gcc gcc-c++ python-devel snappy-devel
./build/env/bin/pip install -U setuptools
./build/env/bin/pip install python-snappy
- Verify that the
python-snappy library is readable by all users by running the following commands:
ls -lart `locate snappy.py`
The output should be similar to the following:
-rw-r--r-- 1 root root 11900 Sep 1 12:25 /usr/lib64/python2.7/site-packages/snappy.py -rw-r--r-- 1 root root 10344 Sep 1 12:26 /usr/lib64/python2.7/site-packages/snappy.pyc
- Start the Hue service by going to .
- Verify that the
python-snappy library is working for Hue by running the following command:
sudo -u hue /bin/bash -c "echo 'import snappy' | python"
If the python-snappy library is working as expected, then no output is displayed for this command.
# Work orders
Work orders show planned work, previous work, reported issues, and problem resolutions for a piece of equipment. Note that some of the functionality described below is only available if the source work management system supports it.
In this article:
# Access work orders for a tag
Scroll down the Overview page or use the left-hand navigation to see open and closed work orders for a tag. On the Overview page, you can only see open work orders, while on the Work order page, you can see both open and closed work orders. Select a work order to see more details.
# Access notifications for a tag
If you are using SAP, navigate to Notifications on the left-hand menu to see the notifications sorted by creation date. Select a notification to see more details.
# View work plan and work orders in revisions
If you are using SAP, click Revisions on the Home page to see the active work plan and the work orders. To quickly search for work orders, type in any part of the work order ID in the Search field. Click a work order if you want to see more details:
- Details displays the descriptions added from the work management system.
- Objects displays the number of objects found in a work order. You can open all the objects as a checklist.
- Operations displays all the operations planned for a work order. Click an operation to see detailed information, such as planned work, duration, and remaining work. Checklists based on objects display the operation IDs for each element.
# View work orders in work packages
If you are using WorkMate, click Work packages on the Home page to see the active work orders. To quickly search for work orders, type in any part of the work order ID in the Search field. Click a work order if you want to see more details:
- Details displays the descriptions added from the work management system. You can open all the objects as a checklist.
- EQ History displays the entire equipment history for a tag.
# Access equipment history on tag
If you are using WorkMate, navigate to Equipment history on the left-hand menu to see the entire equipment history for a tag regardless of whether the history element is connected to a work order or not.
# Open objects as checklists
Navigate to an open work order and click Objects on the left side heading. You will see the number of objects retrieved from the source work management system. Click Copy to checklist to create a checklist that is automatically named with the work order code. Note that the operation ID is displayed for each checklist element.
If a colleague has already created a checklist, you can even join this list.
A TeamForge user who wants to leave a project must submit a request. The project administrator can approve or reject the request.
From the list of users who have submitted a request to leave the project, select the user whose request you want to approve or reject.
- Click Approve to approve the request and remove the user from the project.
- Click Reject to deny the request.
- To view the user details or add a comment before approving or rejecting the request, click the user name. This is optional.
The user receives an email notification when the request is approved or rejected.
GRS and Genesys Composer
You can use Genesys Composer to create applications for rules evaluation requests. Composer 8.1.0 is required, because that is the version in which the Business Rule block was introduced. Refer to the Composer documentation (new document) for more information about how to use the Business Rule block.
This function block is available on both the callflow and the workflow diagram palettes.
Running Serial Jobs on Niagara
General considerations
Use whole nodes...
When you submit a job to Niagara, it is run on one (or more than one) entire node - meaning that your job is occupying at least 40 processors for the duration of its run. The SciNet systems are usually fully utilized, with many researchers waiting in the queue for computational resources, so we require that you make full use of the nodes that your job is allocated, so other researchers don't have to wait unnecessarily, and so that your jobs get as much work done as possible.
... memory permitting
When running multiple serial jobs on the same node, it is essential to have a good idea of how much memory the jobs will require. The Niagara compute nodes have about 200GB of memory available to user jobs running on the 40 cores, i.e., a bit over 4GB per core. So the jobs also have to be bunched in ways that will fit into 200GB. If they use more than this, it will crash the node, inconveniencing you and other researchers waiting for that node.
If 40 serial jobs would not fit within the 200GB limit -- i.e. each individual job requires significantly in excess of ~4GB -- then it's allowed to just run fewer jobs so that they do fit. Note that in that case, the jobs are likely candidates for parallelization, and you can contact us at <[email protected]> and arrange a meeting with one of the technical analysts to help you with that.
If the memory requirements allow it, you could actually run more than 40 jobs at the same time, up to 80, exploiting the HyperThreading feature of the Intel CPUs. It may seem counter-intuitive, but for certain types of tasks, running 80 simultaneous jobs on 40 cores has increased some users' overall throughput.
Is your job really serial?
While your program may not be explicitly parallel, it may use some of Niagara's threaded libraries for numerical computations, which can make use of multiple processors. In particular, Niagara's Python and R modules are compiled with aggressive optimization and using threaded numerical libraries which by default will make use of multiple cores for computations such as large matrix operations. This can greatly speed up individual runs, but by less (usually much less) than a factor of 40. If you do have many such threaded computations to do, you often get more calculations done per unit time if you turn off the threading and run multiple such computations at once (provided that fits in memory, as explained above). You can turn off threading of these libraries with the shell script line export OMP_NUM_THREADS=1; that line will be included in the scripts below.
If your calculations implicitly use threading, you may want to experiment to see what gives you the best performance - you may find that running 4 (or even 8) jobs with 10 threads each (OMP_NUM_THREADS=10), or 2 jobs with 20 threads, gives better performance than 40 jobs with 1 thread (and almost certainly better than 1 job with 40 threads). We'd encourage to you to perform exactly such a scaling test to find the combination of number of threads per process and processes per job that maximizes your throughput; for a small up-front investment in time you may significantly speed up all the computations you need to do.
Serial jobs of similar duration
The most straightforward way to run multiple serial jobs is to bunch the serial jobs in groups of 40 or more that will take roughly the same amount of time, and create a job script that looks a bit like this
#!/bin/bash # SLURM submission script for multiple serial jobs on Niagara # #SBATCH --nodes=1 #SBATCH --ntasks-per-node=40 #SBATCH --time=1:00:00 #SBATCH --job-name serialx40 # Turn off implicit threading in Python, R export OMP_NUM_THREADS=1 # EXECUTION COMMAND; ampersand off 40 jobs and wait (cd serialjobdir01 && ./doserialjob01 && echo "job 01 finished") & (cd serialjobdir02 && ./doserialjob02 && echo "job 02 finished") & (cd serialjobdir03 && ./doserialjob03 && echo "job 03 finished") & (cd serialjobdir04 && ./doserialjob04 && echo "job 04 finished") & (cd serialjobdir05 && ./doserialjob05 && echo "job 05 finished") & (cd serialjobdir06 && ./doserialjob06 && echo "job 06 finished") & (cd serialjobdir07 && ./doserialjob07 && echo "job 07 finished") & (cd serialjobdir08 && ./doserialjob08 && echo "job 08 finished") & (cd serialjobdir09 && ./doserialjob09 && echo "job 09 finished") & (cd serialjobdir10 && ./doserialjob10 && echo "job 10 finished") & (cd serialjobdir11 && ./doserialjob11 && echo "job 11 finished") & (cd serialjobdir12 && ./doserialjob12 && echo "job 12 finished") & (cd serialjobdir13 && ./doserialjob13 && echo "job 13 finished") & (cd serialjobdir14 && ./doserialjob14 && echo "job 14 finished") & (cd serialjobdir15 && ./doserialjob15 && echo "job 15 finished") & (cd serialjobdir16 && ./doserialjob16 && echo "job 16 finished") & (cd serialjobdir17 && ./doserialjob17 && echo "job 17 finished") & (cd serialjobdir18 && ./doserialjob18 && echo "job 18 finished") & (cd serialjobdir19 && ./doserialjob19 && echo "job 19 finished") & (cd serialjobdir20 && ./doserialjob20 && echo "job 20 finished") & (cd serialjobdir21 && ./doserialjob21 && echo "job 21 finished") & (cd serialjobdir22 && ./doserialjob22 && echo "job 22 finished") & (cd serialjobdir23 && ./doserialjob23 && echo "job 23 finished") & (cd serialjobdir24 && ./doserialjob24 && echo "job 24 finished") & (cd serialjobdir25 && ./doserialjob25 && echo "job 25 finished") & (cd serialjobdir26 && ./doserialjob26 && echo "job 26 finished") & (cd serialjobdir27 && ./doserialjob27 && echo "job 27 finished") & (cd serialjobdir28 && ./doserialjob28 && echo "job 28 finished") & (cd serialjobdir29 && ./doserialjob29 && echo "job 29 finished") & (cd serialjobdir30 && ./doserialjob30 && echo "job 30 finished") & (cd serialjobdir31 && ./doserialjob31 && echo "job 31 finished") & (cd serialjobdir32 && ./doserialjob32 && echo "job 32 finished") & (cd serialjobdir33 && ./doserialjob33 && echo "job 33 finished") & (cd serialjobdir34 && ./doserialjob34 && echo "job 34 finished") & (cd serialjobdir35 && ./doserialjob35 && echo "job 35 finished") & (cd serialjobdir36 && ./doserialjob36 && echo "job 36 finished") & (cd serialjobdir37 && ./doserialjob37 && echo "job 37 finished") & (cd serialjobdir38 && ./doserialjob38 && echo "job 38 finished") & (cd serialjobdir39 && ./doserialjob39 && echo "job 39 finished") & (cd serialjobdir40 && ./doserialjob40 && echo "job 40 finished") & wait
There are four important things to take note of here. First, the wait command at the end is crucial; without it the job will terminate immediately, killing the 40 programs you just started.
Second is that every serial job is running in its own directory; this is important because writing to the same directory from different processes can lead to slow down because of directory locking. How badly your job suffers from this depends on how much I/O your serial jobs are doing, but with 40 jobs on a node, it can quickly add up.
Third is that it is important to group the programs by how long they will take. If (say) doserialjob08 takes 2 hours and the rest only take 1, then for one hour 39 of the 40 cores on that Niagara node are wasted; they are sitting idle but are unavailable for other users, and the utilization of this node over the whole run is only 51%. This is the sort of thing we'll notice, and users who don't make efficient use of the machine will have their ability to use Niagara resources reduced. If you have many serial jobs of varying length, use the submission script to balance the computational load, as explained below.
Fourth, if memory requirements allow it, you should try to run more than 40 jobs at once, with a maximum of 80 jobs.
Finally, writing out 80 cases (or even just 40, as in the above example) can become highly tedious, as can keeping track of all these subjobs. You should consider using a tool that automates this, like:
GNU Parallel
GNU parallel is a really nice tool written by Ole Tange to run multiple serial jobs in parallel. It allows you to keep the processors on each 40-core node busy, if you provide enough jobs to do.
GNU parallel is accessible on Niagara in the module gnu-parallel:
module load NiaEnv/2019b gnu-parallel
This also switches to the newer NiaEnv/2019b stack. The current version of the GNU parallel module in that stack is 20191122. In the older stack, NiaEnv/2018a (which is loaded by default), the version of GNU parallel is 20180322.
The command man parallel_tutorial shows much of GNU parallel's functionality, while man parallel gives the details of its syntax.
The citation for GNU Parallel is: O. Tange (2018): GNU Parallel 2018, March 2018,.
It is easiest to demonstrate the usage of GNU parallel by examples. First, suppose you have 80 jobs to do (similar to the above case), and that these jobs duration varies quite a bit, but that the average job duration is around 5 hours. You could use the following script (but don't, see below):
#!/bin/bash # SLURM submission script for multiple serial jobs on Niagara # #SBATCH --nodes=1 #SBATCH --ntasks-per-node=40 #SBATCH --time=12:00:00 #SBATCH --job-name gnu-parallel-example # Turn off implicit threading in Python, R export OMP_NUM_THREADS=1 module load NiaEnv/2019b gnu-parallel # EXECUTION COMMAND - DON'T USE THIS ONE parallel -j $SLURM_TASKS_PER_NODE <<EOF cd serialjobdir01 && ./doserialjob01 && echo "job 01 finished" cd serialjobdir02 && ./doserialjob02 && echo "job 02 finished" ... cd serialjobdir80 && ./doserialjob80 && echo "job 80 finished" EOF
The -j $SLURM_TASKS_PER_NODE parameter sets the number of jobs to run at the same time on each compute node, and is using the slurm value, which coincides with the --ntasks-per-node parameter. For gnu-parallel modules starting from version 20191122, if you omit the option -j $SLURM_TASKS_PER_NODE, you will get as many simultaneous subjobs as the ntasks-per-node parameter you specify in the #SBATCH part of the job script.
Each line in the input given to parallel is a separate subjob, so 80 jobs are lined up to run. Initially, 40 subjobs are given to the 40 processors on the node. When one of the processors is done with its assigned subjob, it will get a next subjob instead of sitting idle until the other processors are done. While you would expect that on average this script should take 10 hours (each processor on average has to complete two jobs of 5 hours), there's a good chance that one of the processors gets two jobs that take more than 5 hours, so the job script requests 12 hours to be safe. How much more time you should ask for in practice depends on the spread in expected run times of the separate jobs.
Serial jobs of varying duration
The script above works, and can be extended to more subjobs, which is especially important if you have to do a lot (100+) of relatively short serial runs of which the walltime varies. But it gets tedious to write out all the cases. You could write a script to automate this, but you do not have to, because GNU Parallel already has ways of generating subjobs, as we will show below.
GNU Parallel can also keep track of the subjobs with succeeded, failed, or never started. For that, you just add --joblog to the parallel command followed by a filename to which to write the status:
# EXECUTION COMMAND - DON'T USE THIS ONE parallel --joblog slurm-$SLURM_JOBID.log -j $SLURM_TASKS_PER_NODE <<EOF cd serialjobdir01 && ./doserialjob01 cd serialjobdir02 && ./doserialjob02 ... cd serialjobdir80 && ./doserialjob80 EOF
In this case, the job log gets written to "slurm-$SLURM_JOBID.log", where "$SLURM_JOBID" will be replaced by the job number. The joblog can also be used to retry failed jobs (more below).
Second, we can generate that set of subjobs instead of writing them out by hand. The following does the trick:
# EXECUTION COMMAND parallel --joblog slurm-$SLURM_JOBID.log -j $SLURM_TASKS_PER_NODE "cd serialjobdir{} && ./doserialjob{}" ::: {01..80}
This works as follows: "cd serialjobdir{} && ./doserialjob{}" is a template command, with placeholders {}. ::: indicated that a set of parameters follows that are to be put into the template, thus generating the commands for each subjob. After the ::: we can place a space-separated set of arguments, which in this case are generated using the bash-specific construct for a range, {01..80}.
The final script now looks like this:
#!/bin/bash # SLURM submission script for multiple serial jobs on Niagara # #SBATCH --nodes=1 #SBATCH --ntasks-per-node=40 #SBATCH --time=12:00:00 #SBATCH --job-name gnu-parallel-example # DIRECTORY TO RUN - $SLURM_SUBMIT_DIR is the directory from which the job was submitted cd $SLURM_SUBMIT_DIR # Turn off implicit threading in Python, R export OMP_NUM_THREADS=1 module load NiaEnv/2019b gnu-parallel # EXECUTION COMMAND parallel --joblog slurm-$SLURM_JOBID.log "cd serialjobdir{} && ./doserialjob{}" ::: {01..80}
Notes:
- As before, GNU Parallel keeps 40 jobs running at a time, and if one finishes, starts the next. This is an easy way to do load balancing.
- The -j option was omitted, which works if using GNU Parallel module version 20191122 or higher. Otherwise, you need to add the -j $SLURM_TASKS_PER_NODE flag to the parallel command.
- This script optimizes resource utility, but can only use 1 node (40 cores) at a time. The next section addresses how to use more nodes.
- While on the command line, the option "--bar" can be nice to see the progress, when running as a job, you would not see this status bar.
- The --joblog parameter also keeps track of failed or unfinished jobs, so you can later try to redo those with the same command, but with the option "--resume" added.
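For example, a follow-up run that should only execute the subjobs that did not finish in the first attempt can reuse the original log; the log file name below is illustrative:

parallel --resume --joblog slurm-1234567.log "cd serialjobdir{} && ./doserialjob{}" ::: {01..80}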
Version for more than 1 node at once
If you have many hundreds of serial jobs that you want to run concurrently and the nodes are available, then the approach above, while useful, would require tens of scripts to be submitted separately. Alternatively, it is possible to request more than one node and to use the following routine to distribute your processes amongst the cores.

module load gnu-parallel
HOSTS=$(scontrol show hostnames $SLURM_NODELIST | tr '\n' ,)
NCORES=40
parallel --env OMP_NUM_THREADS,PATH,LD_LIBRARY_PATH --joblog slurm-$SLURM_JOBID.log -j $NCORES -S $HOSTS --wd $PWD "cd serialjobdir{} && ./doserialjob{}" ::: {001..800}
- The parameter -S $HOSTS divides the work over different nodes. $HOSTS should be a comma separated list of the node names. These node names are also stored in $SLURM_NODELIST, but with a syntax that allows for ranges, which GNU parallel does not understand. The scontrol command in the script above fixes that.
- Alternatively, GNU Parallel can be passed a file with the list of nodes to which to ssh, using --sshloginfile, but your jobs script would first have to create that file.
- The parameter -j $NCORES tells parallel to run 40 subjobs simultaneously on each of the nodes (note: do not use the similarly named variable $SLURM_TASKS_PER_NODE, as its format is incompatible with GNU parallel).
- Commands running on the other nodes do not automatically inherit your environment; the --env option in the example above copies the most common variables that a remote command may need.
Much of this is automated in the GNU parallel modules starting from version 20191122, available in NiaEnv/2019b, and the script should look like this:

module load NiaEnv/2019b gnu-parallel
parallel --joblog slurm-$SLURM_JOBID.log --wd $PWD "cd serialjobdir{} && ./doserialjob{}" ::: {001..800}
- The automation of the number of tasks per node and of the node names that GNU Parallel can use all works through the environment variable $PARALLEL, which the module sets when loaded inside a job.
- The $PARALLEL environment variable is also already set to copy the most common variables $PATH, $LD_LIBRARY_PATH, and $OMP_NUM_THREADS.
Submitting several bunches to single nodes, as in the section above, is a more fail-safe way of proceeding, since a node failure would only affect one of these bunches, rather than all runs.
We reiterate that if memory requirements allow it, you should try to run more than 40 jobs at once, with a maximum of 80 jobs. The way the above example job scripts are written, you simply change #SBATCH --ntasks-per-node=40 to #SBATCH --ntasks-per-node=80 to accomplish this.
More on GNU parallel
- The documentation for GNU parallel can be found at
- Its man page can be found here
GNU Parallel Reference
The author of GNU parallel request that when using GNU parallel for a publication, you please cite:
- O. Tange (2018): GNU Parallel 2018, March 2018,. | https://docs.scinet.utoronto.ca/index.php?title=Running_Serial_Jobs_on_Niagara&oldid=2653&diff=prev | 2021-02-25T08:11:48 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.scinet.utoronto.ca |
Thursday, May 31, 2018
Part of the power of Packages is its plugin architecture. This system allows you to hook into different areas of Packages to enable new functionality.
Tip: It is recommended that you fork the Packages repository before modification. This will allow you to continue to use git while keeping access to upstream changes.
The plugin system has three main components:
- A Plugin class, which wires up services in the service container.
- Event subscribers, which listen for and react to Packages events.
- Action handlers, which handle registered plugin actions.
Notice: This section is a work-in-progress.
Create a
Plugin class that implements
Terramar\Packages\Plugin\PluginInterface.
Create any event subscribers, register them in your
Plugin's configure method.
Tip: Check the Event Reference for more details on which events you can listen to.
Create any action handlers, register them in your
Plugin's configure method.
Tip: Check the Action Reference for more details on which actions you can handle.
Admin ontology
Knora has an admin ontology where object properties, datatype properties, classes, individuals and permission class properties necessary for project administration are modelled.
The admin ontology is identified by the IRI. In our documents it will be identified by the prefix
knora-admin. The prefix
kb used here refers to the Knora-base ontology.
Projects
In Knora each item of data belongs to some particular project. Each project using Knora must define a
knora-admin:knoraProject, which has the following properties:
projectShortname: A short name that can be used to identify the project in configuration files and the like.
projectLongname: The full name of the project.
projectShortcode: A hexadecimal code that uniquely identifies the project. These codes are assigned to projects by the DaSCH.
projectDescription: A description of the project.
belongsToInstitution: The
knora-admin:Institutionthat the project belongs to.
Ontologies, resources and values are attached to projects by means of the
kb:attachedToProject property. Users are associated with a project by means of the
knora-admin:isInProject property.
Authorisation
Users and Groups
Each Knora user is represented by an object belonging to the class
knora-admin:User, which is a subclass of
foaf:Person, and has the properties in the following list. The numbers given in parentheses after each property are the so-called cardinalities. For more information on cardinalities see here.
* userid (1): A unique identifier that the user must provide when logging in.
* password (1): A cryptographic hash of the user's password.
* isInProject (0-n): Projects that the user is a member of.
* isInGroup (0-n): User-created groups that the user is a member of.
* foaf:familyName (1): The user's family name.
* foaf:givenName (1): The user's given name.
Knora’s concept of access control is that an object -a resource or value - can grant permissions to groups of users, but not to individual users. There are several built-in groups:
* knora-admin:UnknownUser: Any user who has not logged into Knora is automatically assigned to this group.
* knora-admin:KnownUser: Any user who has logged into Knora is automatically assigned to this group.
* knora-admin:ProjectMember: When checking a user's permissions on an object, the user is automatically assigned to this group if she is a member of the project that the object belongs to.
* knora-admin:Creator: When checking a user's permissions on an object, the user is automatically assigned to this group if he is the creator of the object.
* knora-admin:ProjectAdmin: When checking a user's permissions on an object, the user is automatically assigned to this group if she is an administrator of the project that the object belongs to.
* knora-admin:SystemAdmin: The group of Knora system administrators.
A user-created ontology can define additional groups, which must belong to the OWL class
knora-admin:UserGroup.
There is one built-in
knora-admin:SystemUser, which is the creator of link values created automatically for resource references in standoff markup (see StandoffLinkTag).
Permissions
Each resource or value can grant certain permissions to specified user groups. These permissions are represented as the object of the predicate
kb:hasPermissions, which is required on every
kb:Resource and on the current version of every
kb:Value. The permissions attached to the current version of a value also apply to previous versions of the value. Value versions other than the current one do not have this predicate.
The following permissions can be granted:
1. Restricted view permission (RV): Allows a restricted view of the object, e.g. a view of an image with a watermark.
2. View permission (V): Allows an unrestricted view of the object. Having view permission on a resource only affects the user’s ability to view information about the resource other than its values. To view a value, she must have view permission on the value itself.
3. Modify permission (M): For values, this permission allows a new version of a value to be created. For resources, this allows the user to create a new value (as opposed to a new version of an existing value), or to change information about the resource other than its values. When he wants to make a new version of a value, his permissions on the containing resource are not relevant. However, when he wants to change the target of a link, the old link must be deleted and a new one created, so he needs modify permission on the resource.
4. Delete permission (D): Allows the item to be marked as deleted.
5. Change rights permission (CR): Allows the permissions granted by the object to be changed.
Each permission in the above list implies all lower-numbered permissions. A user’s permission level on a particular object is calculated in the following way:
1. Make a list of the groups that the user belongs to, including
knora-admin:Creator and/or
knora-admin:ProjectMember if applicable.
2. Make a list of the permissions that she can obtain on the object, by iterating over the permissions that the object grants. For each permission, if she is in the specified group, add the specified permission to the list of permissions she can obtain.
3. From the resulting list, select the highest-level permission.
4. If the result is that she would have no permissions, give her whatever
permission
knora-admin:UnknownUser would have.
To view a link between resources, a user needs permission to view the source and target resources. He also needs permission to view the
kb:LinkValue representing the link, unless the link property is
kb:hasStandoffLinkTo (see StandoffLinkTag).
The format of the object of
kb:hasPermissions is as follows:
* Each permission is represented by the one-letter or two-letter abbreviation given above.
* Each permission abbreviation is followed by a space, then a comma-separated list of groups that the permission is granted to.
* The IRIs of built-in groups are shortened using the knora-admin prefix.
* Multiple permissions are separated by a vertical bar (|).
For example, if an object grants view permission to unknown and known users, and modify permission to project members, the resulting permission literal would be:
V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember | https://docs.dasch.swiss/developers/dsp-api/documentation/knora-admin/ | 2021-02-25T08:33:21 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.dasch.swiss |
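To make the format concrete, the following short Python sketch (not part of Knora itself) parses such a permission literal and determines the highest permission a given set of groups can obtain from it, following the calculation described above:

PERMISSION_LEVELS = {"RV": 1, "V": 2, "M": 3, "D": 4, "CR": 5}

def highest_permission(literal, user_groups):
    """Return the highest permission abbreviation the user's groups obtain, or None."""
    best = None
    for part in literal.split("|"):
        abbreviation, groups = part.strip().split(" ", 1)
        granted_to = {g.strip() for g in groups.split(",")}
        if granted_to & set(user_groups):
            if best is None or PERMISSION_LEVELS[abbreviation] > PERMISSION_LEVELS[best]:
                best = abbreviation
    return best

# A project member gets M (which implies V and RV); an unknown user only gets V.
literal = "V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember"
print(highest_permission(literal, ["knora-admin:ProjectMember", "knora-admin:KnownUser"]))  # -> M
print(highest_permission(literal, ["knora-admin:UnknownUser"]))                             # -> V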
This book supports the following releases:
Note: See “Displaying Information About Teradata Workload Analyzer” on page 33 to verify the Teradata WA version number.
Note: Teradata WA 15.10 supports the current Teradata Database version and versions 15.10, 14.10, 14.0, and 13.x. (It does not support versions of Teradata Database earlier than 13.x.) However, when used with Teradata Database 13.x, Teradata WA 15.10 is limited to its 13.x features — that is, only those features supported by the earlier database.
To locate detailed supported-release information:
1 Go to.
2 Under Online Publications, click General Search.
3 Type 3119 in the Publication Product ID box.
4 Under Sort By, select Date.
5 Click Search.
6 Open the version of the Teradata Tools and Utilities ##.# Supported Platforms and Product Versions spreadsheet associated with this release.
The spreadsheet includes supported Teradata Database versions, platforms, and product release numbers. | https://docs.teradata.com/r/X8xJ_REyI_AFhmE12yb6zg/CjDyhpH0wUWCL63bXm7SiA | 2021-02-25T07:51:04 | CC-MAIN-2021-10 | 1614178350846.9 | [] | docs.teradata.com |
Resetting Keyboard Shortcuts
You can reset the configuration of your keyboard shortcuts to the default values for the selected keyboard shortcut set.
IMPORTANT This will replace your current keyboard shortcut configuration and discard any custom keyboard shortcut you created.
Do one of the following to open the Keyboard Shortcuts dialog:
- Windows: In the top menu, select Edit > Keyboard Shortcuts.
- macOS: In the top menu, select Harmony Essentials > Keyboard Shortcuts.
In the Keyboard Shortcuts: drop-down, make sure the keyboard shortcut set that you want to restore to its default configuration is selected.
In the bottom-left corner of the Keyboard Shortcuts dialog, click on Restore All Defaults.
A confirmation prompt appears.
If you are sure you want to restore the default keyboard shortcut configuration for the selected keyboard shortcut set, click on Yes.
All the commands in the list are now set to their default keyboard shortcut. | https://docs.toonboom.com/help/harmony-17/essentials/keyboard-shortcuts/reset-keyboard-shortcuts.html | 2021-02-25T07:57:40 | CC-MAIN-2021-10 | 1614178350846.9 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |