On Windows Vista and later versions of Windows, the core stages of device installation are always run in a non-interactive context known as server-side installations. The host process for device installation (DrvInst.exe) runs under the security context of the LocalSystem account.
Because server-side installations run non-interactively and must complete without any user input, they present some challenges for driver package developers who want to debug the actions of the driver package's class-installer and co-installer DLLs. For the developer of a driver package, it is usually most desirable to debug the actions of a co-installer DLL during the installation of a device.
This section contains the following topics, which describe techniques that are used to debug co-installers during the core stages of device installation:
Enabling Support for Debugging Device Installations
Debugging Device Installations with a User-mode Debugger
Debugging Device Installations with the Kernel Debugger (KD)
For more information about co-installers, see Writing a Co-installer.
Source: https://docs.microsoft.com/en-us/windows-hardware/drivers/install/debugging-device-installations--windows-vista-and-later-
Using Jenkins API¶
JenkinsAPI lets you query the state of a running Jenkins server. It also allows you to change configuration and automate minor tasks on nodes and jobs.
Example 1: Get version of Jenkins¶
from jenkinsapi.jenkins import Jenkins

def get_server_instance():
    jenkins_url = 'http://jenkins_host:8080'
    server = Jenkins(jenkins_url, username='foouser', password='foopassword')
    return server

if __name__ == '__main__':
    print get_server_instance().version
The above code prints the version of Jenkins running on the host jenkins_host.
From Jenkins version 1.426 onward one can specify an API token instead of your real password while authenticating the user against the Jenkins instance. Refer to the Jenkins wiki page Authenticating scripted clients for details about how a user can generate an API token. Once you have the API token, you can pass it instead of your real password while creating a Jenkins server instance using Jenkins API.
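For instance, a minimal sketch of token-based authentication (the host, user name, and token value below are placeholders, not working credentials):

import requests  # not required; shown only because jenkinsapi uses requests under the hood
from jenkinsapi.jenkins import Jenkins

def get_server_instance_with_token():
    # The API token is passed in place of the real password.
    jenkins_url = 'http://jenkins_host:8080'
    server = Jenkins(jenkins_url, username='foouser',
                     password='<api-token-generated-in-jenkins>')
    return server

if __name__ == '__main__':
    print(get_server_instance_with_token().version)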
Example 2: Get details of jobs running on Jenkins server¶
"""Get job details of each job that is running on the Jenkins instance""" def get_job_details(): # Refer Example #1 for definition of function 'get_server_instance' server = get_server_instance() for job_name, job_instance in server.get_jobs(): print 'Job Name:%s' % (job_instance.name) print 'Job Description:%s' % (job_instance.get_description()) print 'Is Job running:%s' % (job_instance.is_running()) print 'Is Job enabled:%s' % (job_instance.is_enabled())
Example 3: Disable/Enable a Jenkins Job¶
"""Disable a Jenkins job""" def disable_job(): # Refer Example #1 for definition of function 'get_server_instance' server = get_server_instance() job_name = 'nightly-build-job' if (server.has_job(job_name)): job_instance = server.get_job(job_name) job_instance.disable() print 'Name:%s,Is Job Enabled ?:%s' % (job_name,job_instance.is_enabled())
Use the call job_instance.enable() to enable a Jenkins job.
Example 4: Get Plugin details¶
The chunk of code below gets the details of the plugins currently installed in the Jenkins instance.
def get_plugin_details():
    # Refer Example #1 for definition of function 'get_server_instance'
    server = get_server_instance()
    for plugin in server.get_plugins().values():
        print "Short Name:%s" % (plugin.shortName)
        print "Long Name:%s" % (plugin.longName)
        print "Version:%s" % (plugin.version)
        print "URL:%s" % (plugin.url)
        print "Active:%s" % (plugin.active)
        print "Enabled:%s" % (plugin.enabled)
Example 5: Getting version information from a completed build¶
This is a typical use of JenkinsAPI - it was the very first use I had in mind when the project was first built: In a continuous-integration environment you want to be able to programmatically detect the version-control information.
Source: http://jenkinsapi.readthedocs.io/en/latest/using_jenkinsapi.html
Creating CDAP Pipelines using CDAP System Artifacts
Source Code Repository: Source code (and other resources) for this guide are available at the CDAP Guides GitHub repository.
Using the built-in cdap-data-pipeline and cdap-data-streams system artifacts, you can create CDAP pipelines with just a JSON configuration file. CDAP ships with a set of built-in Sources, Sinks, Transforms, and other plugins (described here) which can be used to create batch and real-time data pipeline applications right out of the box.
Note: If you want to create your own Source, Sink, or other plugin, you can find more instructions on how to do that here.
Note: Both the cdap-etl-batch and cdap-etl-realtime system artifacts have been deprecated as of CDAP 3.5.0 and replaced with the artifacts cdap-data-pipeline and cdap-data-streams respectively.
What You Will Create
- Batch CDAP HBase Table to Database Table: This application exports the contents of a CDAP HBase Table to a Database Table in Batch.
- Batch Database Table to CDAP HBase Table: In this application, we will export the contents of a Database Table to a CDAP HBase table in Batch.
- Batch CDAP Stream to Impala: This application makes the events ingested in a CDAP Stream queryable through Impala.
- Real-time JMS to Stream: In this application, we will read messages from a JMS producer in real time and write to a CDAP Stream.
- Real-time Kafka to TPFS Avro: This application fetches messages from Kafka in real time and writes to Time-PartitionedFileSets in Avro format.
- Real-time Twitter to HBase: In this application, we will read Tweets from Twitter in real time and write to an HBase Table.
What You Will Need
Let's Begin!
For these guides, we will use the CDAP CLI to create and manage CDAP pipelines. The CLI commands assume that the cdap script is available on your PATH. If this is not the case, please add it:
$ export PATH=$PATH:<CDAP home>/bin
or, from within the <CDAP home> directory:
$ export PATH=${PATH}:`pwd`/bin
If you haven't already started a Standalone CDAP installation, start it with the command:
$ cdap sdk start
Now navigate to the CDAP pipeline example (see list above) that you want to create and you will find further instructions on how to create that specific application.
Source: http://docs.cask.co/cdap/4.1.1/en/examples-manual/how-to-guides/cdap-etl-guide.html
certmgr - restricted. The following error message will appear if the current user doesn't have the minimum access rights to remove the certificate:
Written by Sebastien Pouliot. Minor additions by Pablo Ruiz García.
Visit for details.
Visit for details
See also: makecert(1), setreg(1)
Source: http://docs.go-mono.com/monodoc.ashx?link=man%3Acertmgr(1)
Configuring the Pages tab
The "Pages" tab in "Options" is used for configuring your basic MemberPress pages. Below is a description of these options based on their title.
Reserved Pages
MemberPress requires a handful of pages for its operation. Here is where you can tell MemberPress which pages to use on your site. The pages marked with an * are required and must be set. If " - Auto Create New Page - " is selected when you click on "Update Options", the page(s) will be created and configured automatically for you. If you have existing pages you'd rather use, select them from the drop down menu.
You do not have to put any content on these pages, although if you do, MemberPress will add its content below yours. Below is a detailed description of each page and its function ( please click on each page name to learn how to create, edit, and use these pages):
Thank You Page - The thank you page is the page your members will be shown after they purchase one of your Memberships. Typically you'll want to hide this page from the front end navigation menus.
Account Page - After a member logs into your site, they are normally taken to this page unless you specify otherwise in the Options. Here the member can update their profile information, change their password, view payment histories, and upgrade/cancel their subscriptions. You will normally want to show this page in your front end navigation menus.
Login Page - This one is pretty straightforward, this page shows a login form where members can login to your site and access any content they have paid for. This page will also show a forgot password link in case the user can't remember their password.
Group and Membership Pages Slugs
Group Pages Slug - the text you set here will be your slug for all your MemberPress Group Pricing pages. For example, if you enter "plans" as your slug and you have a group called "Trial and Paid," your URL would look like by default.
Membership Pages Slug - the text you set here will be your slug for all your MemberPress Membership Registration pages. For example, if you enter "subscribe" as your slug and you have a membership called "Gold," your URL would look like by default.
Note: It isn't recommended that you change these values if you already have existing groups and membership pages on a production membership site because all your URLs for them will change (WordPress will attempt to redirect from old URLs to new URLs).
Unauthorized Access
Redirect unauthorized visitors to a specific URL - Enabling this setting allows you to redirect users to a specific page on your site instead of just seeing your default unauthorized message. To learn more about this powerful feature and what you can accomplish with it, please visit this page.
Show an excerpt to unauthorized visitors - Enable this option if you would like to show some type of excerpt on your protected pages for unauthorized visitors. This can be set to "More Tag", "Post Excerpt", or "Custom". When set to "Custom" you can select the number of characters that the user will see, which is useful if you would like to show more or less than the default post excerpt of 55 characters.
Show a login form on pages containing unauthorized content - If you would like to show a login form on any page that has unauthorized content, enable this option.
Default Unauthorized Message - Clicking this link will reveal a live editor where you can enter the default unauthorized message. Your unauthorized users will see this message on all protected pages that they are unauthorized to see unless you have set up a custom unauthorized message per rule. You can learn more about managing this per page, post, or custom post type here.
Source: https://docs.memberpress.com/article/39-pages
Bus¶
See also
Unit Systems and Conventions
Create Function¶
pandapower.create_bus(net, vn_kv, name=None, index=None, geodata=None, type='b', zone=None, in_service=True, max_vm_pu=nan, min_vm_pu=nan, **kwargs)¶
Adds one bus in table net[“bus”].
Buses are the nodes of the network that all other elements connect to.
- INPUT:
- net (pandapowerNet) - The pandapower network in which the element is created
- OPTIONAL:
name (string, default None) - the name for this bus
index (int, default None) - Force a specified ID if it is available
vn_kv (float) - The grid voltage level.
busgeodata ((x,y)-tuple, default None) - coordinates used for plotting
type (string, default "b") - Type of the bus. "n" - auxiliary node, "b" - busbar, "m" - muff
zone (string, None) - grid region
in_service (boolean) - True for in_service or False for out of service
- OUTPUT:
- eid (int) - The index of the created element
- EXAMPLE:
- create_bus(net, vn_kv=20., name="bus1")
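A slightly fuller sketch (the 20 kV and 0.4 kV voltage levels are arbitrary example values):

import pandapower as pp

net = pp.create_empty_network()
b1 = pp.create_bus(net, vn_kv=20., name="bus1")            # returns the index of the new bus
b2 = pp.create_bus(net, vn_kv=0.4, name="bus2", type="n")  # auxiliary node
print(net.bus)   # shows the input parameter table described below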
Input Parameters¶
net.bus
*necessary for executing a power flow calculation
**optimal power flow parameter
Note
Bus voltage limits can not be set for slack buses and will be ignored by the optimal power flow.
net.bus_geodata
Result Parameters¶
net.res_bus
The power flow bus results are defined as:
net.res_bus_est
The state estimation results are put into net.res_bus_est with the same definition as in net.res_bus.
Note
All power values are given in the consumer system. Therefore a bus with positive p_kw value consumes power while a bus with negative active power supplies power.
Source: https://pandapower.readthedocs.io/en/v1.3.0/elements/bus.html
REST API Introduction¶
This documentation section is a user guide for w3af’s REST API service, its goal is to provide developers the knowledge to consume w3af as a service using any development language.
We recommend you read through the w3af users guide before diving into this REST API-specific section.
Starting the REST API service¶
The REST API can be started by running:
$ ./w3af_api
 * Running on (Press CTRL+C to quit)
Or it can also be run inside a docker container:
$ cd extras/docker/scripts/
$ ./w3af_api_docker
 * Running on (Press CTRL+C to quit)
Authentication¶
It is possible to require HTTP basic authentication for all REST API requests by specifying a SHA512-hashed password on the command line (with -p <SHA512_HASH>) or in a configuration file using the password: directive (see the section below for more information about configuration files).
Linux or Mac users can generate a SHA512 hash from a plaintext password by running:
$ echo -n "secret" | sha512sum -
$ ./w3af_api -p "<SHA512_HASH>"
 * Running on (Press CTRL+C to quit)
In the above example, users are only able to connect using HTTP basic authentication with the default username admin and the password secret.
For example, using the curl command:
$ curl -u admin:secret http://127.0.0.1:5000/
{ "docs": "" }
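The same request can also be made from Python - a minimal sketch using the requests library with the credentials from this example (adjust the URL if you changed the host or port in your configuration):

import requests

# Default bind address is 127.0.0.1:5000; 'admin'/'secret' are the example credentials used above.
response = requests.get('http://127.0.0.1:5000/', auth=('admin', 'secret'))
response.raise_for_status()
print(response.json())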
Please note that even with basic authentication, traffic passing to and from the REST API is not encrypted, meaning that authentication and vulnerability information could still be sniffed by an attacker with “man-in-the-middle” capabilities.
When running the REST API on a publicly available IP address we recommend taking additional precautions including running it behind an SSL proxy server (such as Pound, nginx, or Apache with mod_proxy enabled).
Config file format¶
Using a configuration file is optional and is simply a convenient place to store settings that could otherwise be specified using command line arguments.
The configuration file is in standard YAML format and accepts any of the options found on the command line. A sample configuration file would look like this:
# This is a comment
host: '127.0.0.1'
port: 5000
verbose: False
username: 'admin'
# The SHA512-hashed password is 'secret'. We don't recommend using this.
password: '<SHA512_HASH>'
In the above example, all values except
password are the defaults and could
have been omitted from the configuration file without changing the way the API
runs.
Serve using TLS/SSL¶
w3af's REST API is served using Flask, which can be used to deliver content over TLS/SSL. By default w3af will generate a self-signed certificate and bind to port 5000 using the https protocol.
To disable https, users can set the --no-ssl command line argument.
Advanced users who want to use their own SSL certificates can:
- Start w3af in HTTP mode and use a proxy such as nginx to handle the SSL traffic and forward unencrypted traffic to the REST API.
- Copy the user-generated SSL certificate and key to ~/.w3af/ssl/w3af.crt and ~/.w3af/ssl/w3af.key and start ./w3af_api without --no-ssl.
Note
Using nginx to serve w3af's API will give the user more configuration options and security than running SSL in w3af_api.
REST API Source code¶
The REST API is implemented in Flask and is pretty well documented for your reading pleasure.
REST API clients¶
Wrote a REST API client? Let us know and get it linked here!
- Official Python REST API client which is also available at pypi
Source: http://docs.w3af.org/en/latest/api/index.html
Installing Cython¶
Many scientific Python distributions, such as Anaconda [Anaconda], Enthought Canopy [Canopy], Python(x,y) [Pythonxy], and Sage [Sage], bundle Cython and no setup is needed. Note however that if your distribution ships a version of Cython which is too old you can still use the instructions below to update Cython. Everything in this tutorial should work with Cython 0.11.2 and newer, unless a footnote says otherwise.
Unlike most Python software, Cython requires a C compiler to be present on the system. The details of getting a C compiler varies according to the system used:
- Linux The GNU C Compiler (gcc) is usually present, or easily available through the package system. On Ubuntu or Debian, for instance, the command
sudo apt-get install build-essentialwill fetch everything you need.
- Mac OS X To retrieve gcc, one option is to install Apple’s XCode, which can be retrieved from the Mac OS X’s install DVDs or from.
- Windows A popular option is to use the open source MinGW (a Windows distribution of gcc). See the appendix for instructions for setting up MinGW manually. Enthought Canopy and Python(x,y) bundle MinGW, but some of the configuration steps in the appendix might still be necessary. Another option is to use Microsoft’s Visual C. One must then use the same version which the installed Python was compiled with.
The newest Cython release can always be downloaded from. Unpack the tarball or zip file, enter the directory, and then run:
python setup.py install
If you have pip set up on your system (e.g. in a virtualenv or a recent Python version), you should be able to fetch Cython from PyPI and install it using pip install Cython.
Source: http://docs.cython.org/en/latest/src/quickstart/install.html
.. highlight:: cython
.. _compilation-reference:
=============
Compilation
=============
Cy.
Compiling from the command line
=============================== :file: :class:`Extension` class takes many options, and a fuller explanation can
be found in the `distutils documentation`_. Some useful options to know about
are ``include_dirs``, ``libraries``, and ``library_dirs`` which specify where
to find the ``.h`` and library files when linking to external libraries.
.. _distutils documentation:
Distributing Cython modules
---------------------------- :file: :func:
===================
The Sage notebook allows transparently editing and compiling Cython
code simply by typing ``%cython`` at the top of a cell and evaluate
it. Variables and functions defined in a Cython cell are imported into the
running session. Please check `Sage documentation | http://docs.cython.org/en/latest/_sources/src/reference/compilation.txt | 2017-01-16T17:12:04 | CC-MAIN-2017-04 | 1484560279224.13 | [] | docs.cython.org |
Working with Python arrays¶
Python has a builtin array module supporting dynamic 1-dimensional arrays of
primitive types. It is possible to access the underlying C array of a Python
array from within Cython. At the same time they are ordinary Python objects
which can be stored in lists and serialized between processes when using
multiprocessing.
Compared to the manual approach with malloc() and free(), this gives the safe and automatic memory management of Python, and compared to a Numpy array there is no need to install a dependency, as the array module is built into both Python and Cython.
Safe usage with memory views¶
from cpython cimport array
import array

cdef array.array a = array.array('i', [1, 2, 3])
cdef int[:] ca = a

print ca[0]
NB: the import brings the regular Python array object into the namespace while the cimport adds functions accessible from Cython.
A Python array is constructed with a type signature and sequence of initial values. For the possible type signatures, refer to the Python documentation for the array module.
Notice that when a Python array is assigned to a variable typed as memory view, there will be a slight overhead to construct the memory view. However, from that point on the variable can be passed to other functions without overhead, so long as it is typed:
from cpython cimport array
import array

cdef array.array a = array.array('i', [1, 2, 3])
cdef int[:] ca = a

cdef int overhead(object a):
    cdef int[:] ca = a
    return ca[0]

cdef int no_overhead(int[:] ca):
    return ca[0]

print overhead(a)      # new memory view will be constructed, overhead
print no_overhead(ca)  # ca is already a memory view, so no overhead
Zero-overhead, unsafe access to raw C pointer¶
To avoid any overhead and to be able to pass a C pointer to other functions, it is possible to access the underlying contiguous array as a pointer. There is no type or bounds checking, so be careful to use the right type and signedness.
from cpython cimport array
import array

cdef array.array a = array.array('i', [1, 2, 3])

# access underlying pointer:
print a.data.as_ints[0]

from libc.string cimport memset
memset(a.data.as_voidptr, 0, len(a) * sizeof(int))
Note that any length-changing operation on the array object may invalidate the pointer.
Cloning, extending arrays¶
To avoid having to use the array constructor from the Python module, it is possible to create a new array with the same type as a template, and preallocate a given number of elements. The array is initialized to zero when requested.
from cpython cimport array
import array

cdef array.array int_array_template = array.array('i', [])
cdef array.array newarray

# create an array with 3 elements with same type as template
newarray = array.clone(int_array_template, 3, zero=False)
An array can also be extended and resized; this avoids repeated memory reallocation which would occur if elements would be appended or removed one by one.
from cpython cimport array
import array

cdef array.array a = array.array('i', [1, 2, 3])
cdef array.array b = array.array('i', [4, 5, 6])

# extend a with b, resize as needed
array.extend(a, b)
# resize a, leaving just original three elements
array.resize(a, len(a) - len(b))
API reference¶
Data fields¶
data.as_voidptr
data.as_chars
data.as_schars
data.as_uchars
data.as_shorts
data.as_ushorts
data.as_ints
data.as_uints
data.as_longs
data.as_ulongs
data.as_floats
data.as_doubles
data.as_pyunicodes
Direct access to the underlying contiguous C array, with given type; e.g., myarray.data.as_ints.
Functions¶
The following functions are available to Cython from the array module:
int resize(array self, Py_ssize_t n) except -1
Fast resize / realloc. Not suitable for repeated, small increments; resizes underlying array to exactly the requested amount.
int resize_smart(array self, Py_ssize_t n) except -1
Efficient for small increments; uses growth pattern that delivers amortized linear-time appends.
cdef inline array clone(array template, Py_ssize_t length, bint zero)
Fast creation of a new array, given a template array. Type will be same as template. If zero is True, new array will be initialized with zeroes.
cdef inline array copy(array self)
Make a copy of an array.
cdef inline int extend_buffer(array self, char* stuff, Py_ssize_t n) except -1
Efficient appending of new data of same type (e.g. of same array type)
n: number of elements (not number of bytes!)
cdef inline int extend(array self, array other) except -1
Extend array with data from another array; types must match.
cdef inline void zero(array self)
Set all elements of array to zero.
Source: http://docs.cython.org/en/latest/src/tutorial/array.html
Subcommittee on Aviation (Committee on Transportation and Infrastructure)
Wednesday, April 30, 2014 (10:00 AM)
2167 RHOB Washington, D.C.
20515-6256
Mr. Bryan K. Bedford President and CEO, Republic Airways Holdings
Dr. Gerald L. Dillingham Director of Civil Aviation Issues, U.S. Government Accountability Office
The Honorable Susan Kurland Assistant Secretary for Aviation and International Affairs, U.S. Department of Transportation
Mr. Dan E. Mann Executive Director, Columbia Metropolitan Airport
Captain Lee Moak President, Air Line Pilots Association
Mr. Brian L. Sprenger Airport Director, Bozeman Yellowstone International Airport
First Published:
April 14, 2014 at 10:18 AM
Last Updated:
April 30, 2014 at 10:02 AM
Source: http://docs.house.gov/Committee/Calendar/ByEvent.aspx?EventID=102149
Migrate to v0.9¶
The purpose of this document is to give you a smooth experience when upgrading to xlwings v0.9.0 and above by laying out the concept and syntax changes in detail. If you want to get an overview of the new features and bug fixes, have a look at the release notes. Note that the syntax for User Defined Functions (UDFs) didn't change.
Full qualification: Using collections¶
The new object model allows to specify the Excel application instance if needed:
- old:
xw.Range('Sheet1', 'A1', wkb=xw.Workbook('Book1'))
- new:
xw.apps[0].books['Book1'].sheets['Sheet1'].range('A1')
See Syntax Overview for the details of the new object model.
Connecting to Books¶
- old:
xw.Workbook()
- new:
xw.Book() or via xw.books if you need to control the app instance.
See Connect to a Book for the details.
Round vs. Square Brackets¶
Round brackets follow Excel’s behavior (i.e. 1-based indexing), while square brackets use Python’s 0-based indexing/slicing.
As an example:
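A short sketch of the difference (the workbook, sheet, and cell addresses below are arbitrary placeholders):

import xlwings as xw

sht = xw.Book('Book1').sheets['Sheet1']   # 'Book1' must already be open
rng = sht.range('A1:D5')

rng[0, 0]     # square brackets, 0-based (Python style)  -> cell A1
rng(1, 1)     # round brackets, 1-based (Excel style)    -> cell A1
rng[:2, :2]   # slicing with square brackets             -> range A1:B2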
Access the underlying Library/Engine¶
- old:
xw.Range('A1').xl_range and xl_sheet etc.
- new:
xw.Range('A1').api, same for all other objects
This returns a
pywin32 COM object on Windows and an
appscript object on Mac. | http://docs.xlwings.org/en/stable/migrate_to_0.9.html | 2017-01-16T17:10:18 | CC-MAIN-2017-04 | 1484560279224.13 | [] | docs.xlwings.org |
Engineer: A Static Website Generator for Fellow Engineers¶
Note
Are you looking for documentation on the pre-release version of Engineer? If so, you can find them here:.
- The current release version of Engineer is version 0.5.1.
- This documentation is for version 0.5.1-2-gaef8668-dirty.
At its core, Engineer is a static website generator. In other words, Engineer lets you build a website from a bunch of files - articles written in Markdown, templates, and other stuff - and outputs another bunch of files - HTML, mostly - that you can then copy wherever you want. It has some very nice Features that will make you happy, but it's not for everybody.
Engineer was inspired by Brent Simmons, Marco Arment’s Second Crack, Jekyll, Octopress, and Hyde.
Note
The Engineer documentation is a work in progress. It is by-and-large up-to-date and the most relevant sections are complete, but some of the more ‘advanced’ sections are not yet complete.
Bugs and Feature Roadmap¶
If you find any bugs in Engineer please file an issue in the Github issue tracker (or fork and fix it yourself and send me a pull request). Feature ideas and other feedback are welcome as well!
Narrative Documentation¶
- Introduction
- Installation
- Upgrading to Engineer 0.5.0
- Getting Started
- Settings Files
- Themes
- Templates
- Included Plugins
- Engineer Commandline
- Deploying Engineer Sites
- EMMA: Engineer Miniature Management Automaton
- Compatibility With Other Static Site Generators
- Frequently Asked Questions
- How Do I...
- ...change my site theme?
- ...customize my site’s navigation links?
- ...use a custom RSS feed URL, e.g. Feedburner?
- ...add a flat page, like an ‘about’ or ‘contact’ page?
- ...add custom JavaScript or CSS?
- ...hook up Google Analytics (or another analytics system)?
- ...add a favicon or robots.txt file?
- ...put my site at a non-root path on my domain, such as?
- Release Notes
- version 0.5.1 - May 28, 2014
- version 0.5.0 - April 10, 2014
- version 0.4.6 - February 19, 2014
- version 0.4.5 - October 2, 2013
- version 0.4.4 - June 23, 2013
- version 0.4.3 - December 10, 2012
- version 0.4.2 - December 10, 2012
- version 0.4.1 - December 4, 2012
- version 0.4.0 - November 28, 2012
- version 0.3.2 - August 18, 2012
- version 0.3.1 - August 5, 2012
- version 0.3.0 - July 22, 2012
- version 0.2.4 - May 27, 2012
- version 0.2.3 - May 6, 2012
- version 0.2.2 - April 30, 2012
- version 0.2.1 - April 28, 2012
- version 0.2.0 - April 22, 2012
- version 0.1.0 - March 13, 2012
Developer Documentation¶
- The Build Pipeline
- Creating Your Own Themes
- Plugins
- Macros
Source: http://engineer.readthedocs.io/en/master/index.html
public interface SmartFactoryBean<T> extends FactoryBean<T>
Extension of the FactoryBean interface. Implementations may indicate whether they always return independent instances, for the case where their FactoryBean.isSingleton() implementation returning false does not clearly indicate independent instances.
Methods inherited from interface FactoryBean: getObject, getObjectType, isSingleton
boolean isPrototype()
Is the object managed by this factory a prototype? That is, will getObject() always return an independent instance?
See also: FactoryBean.getObject(), FactoryBean.isSingleton()

boolean isEagerInit()
Does this FactoryBean expect eager initialization, that is, eagerly initialize itself as well as expect eager initialization of its singleton object (if any)?
See also: ConfigurableListableBeanFactory.preInstantiateSingletons()
Source: http://docs.spring.io/spring/docs/3.2.0.BUILD-SNAPSHOT/api/org/springframework/beans/factory/SmartFactoryBean.html
Cython and Pyrex make it possible to write an extension without having to learn Python's C API.
If you need to interface to some C or C++ library for which no Python extension currently exists, you can try wrapping the library's data types and functions with a tool such as SWIG. SIP, CXX, Boost, or Weave are also alternatives for wrapping C++ libraries. To test the type of an object, first make sure it isn't NULL, and then use PyBytes_Check(), PyTuple_Check(), PyList_Check(), etc.
There is also a high-level API to Python objects which is provided by the so-called ‘abstract’ interface – read Include/abstract.h for further details. It allows interfacing with any kind of Python sequence using calls like PySequence_Length(), PySequence_GetItem(), etc. as well as many other useful protocols such as numbers (PyNumber_Index() et al.) and mappings in the PyMapping APIs.
You can’t. Use PyTuple_Pack() instead.()‘ing!
You can get a pointer to the module object as follows:
module = PyImport_ImportModule("<modulename>");
attr = PyObject_GetAttrString(module, "<attrname>");
Calling PyObject_SetAttrString() to assign to variables in the module also works.
For C++ libraries, ...

code[i] = '\0';                   /* keep strncat happy */
strncat (code, line, i);          /* append line to code */
code[i + j] = '\n';               /* append '\n' to code */
code[i + j + 1] = '\0';
src = Py_CompileString (code, "<stdin>", Py_single_input);
if (NULL != src)                  /* compiled just fine - */
{
    if (ps1 == prompt ||          /* ">>> " or */
        '\n' == code[i + j - 1])  /* "... " and double '\n' */
    {                             /* so execute it */
        dum = PyEval_EvalCode (src, glb, loc);
    }
}

Source: https://docs.python.org/dev/faq/extending.html
Welcome to pydas’s documentation!¶
Contents:
Module for the main user classes for pydas.
- class pydas.core.Communicator(url, drivers=None)[source]¶
Class for communicating with the Midas server through its drivers.
This module is for the drivers that actually do the work of communication with the Midas server. Any drivers that are implemented should use the utility functions provided in pydas.drivers.BaseDriver by inheriting from that class.
- class pydas.drivers.BaseDriver(url='')[source]¶
Base class for the Midas api drivers.
- login_with_api_key(cur_email, cur_apikey, ...)[source]

- class pydas.drivers.BatchmakeDriver(url='')[source]
Driver for the Midas batchmake module's API methods.
- add_condor_dag(token, batchmaketaskid, dagfilename, dagmanoutfilename)[source]¶
Adds a condor dag to the given batchmake task
- class pydas.drivers.CoreDriver(url='')[source]¶
Driver for the core API methods of Midas.
This contains all of the calls necessary to interact with a Midas.
provided. :returns: Dictionary containing the details of the created folder.
- folder_children(token, folder_id)[source]¶
Get the non-recursive children of the passed in folder_id.
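A minimal usage sketch built only from the methods documented on this page (the server URL, e-mail, API key, and folder id are placeholders):

from pydas.drivers import CoreDriver

driver = CoreDriver('http://example.com/midas')    # placeholder Midas server URL
token = driver.login_with_api_key('user@example.com', 'my-api-key')  # returns an authentication token
children = driver.folder_children(token, '42')     # placeholder folder id
print(children)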
- generate_upload_token(token, item_id, filename, checksum=None)[source]¶
Generate a token to use for upload.
Midas uses a individual token for each upload. The token corresponds to the file specified and that file only. Passing the MD5 checksum allows the server to determine if the file is already in the asset store.
bitstream. :param filename: The name of the file to generate the upload token for. :param checksum: (optional) The checksum of the file to upload. :returns: String of the upload token.
- destination folder.
- perform_upload(uploadtoken, filename, **kwargs)[source]¶
Upload a file into a given item (or just to the public folder if the item is not specified.
if ‘filepath’ is not set. :param mode: (optional) Stream or multipart. Default is stream. :param folderid: (optional) The id of the folder to upload into. :param item_id: (optional) If set, will create a new revision in the existing item. :param revision: (optional) If set, will add a new file into an existing revision. Set this to “head” to add to the most recent revision. :param filepath: (optional) The path to the file. :returns: Dictionary containing the details of the item created or changed.
- search(search, token=None)[source]¶
Get the resources corresponding to a given query.
dictionary item ‘results’, which is a list of item details.
- set_item_metadata(token, item_id, element, value, qualifier=None)[source]¶
Set the metadata associated with an item.
Share an item to the destination folder.
- class pydas.drivers.DicomextractorDriver(url='')[source]¶
Driver for the Midas dicomextractor module’s API methods.
- class pydas.drivers.MultiFactorAuthenticationDriver(url='')[source]¶
Driver for the multi-factor authentication module’s API methods.
- class pydas.drivers.SolrDriver(url='')[source]¶
Driver for the Midas solr module’s api methods.
- class pydas.drivers.ThumbnailCreatorDriver(url='')[source]¶
Driver for the Midas thumbnailcreator module’s API methods.
- create_big_thumbnail(token, bitstream_id, item_id, width=575)[source]¶
Create a big thumbnail for the given bitstream with the given width. It is used as the main image of the given item and shown in the item view page.
ratio will be preserved). Defaults to 575. :returns: The ItemthumbnailDao object that was created.
- class pydas.drivers.TrackerDriver(url='')[source]¶
Driver for the Midas tracker module’s api methods.
- add_scalar_data(token, community_id, producer_display_name, metric_name, producer_revision, submit_time, value, **kwargs)[source]¶
Create a new scalar data point.
point belongs to. :param producer_revision: The repository revision of the producer that produced this value. :param submit_time: The submit timestamp. Must be parsable with PHP strtotime(). :param value: The value of the scalar. silent: (optional) If true, do not perform threshold-based email notifications for this scalar. :param unofficial: (optional) If true, creates an unofficial scalar visible only to the user performing the submission. :returns: The scalar object that was created.
- associate_item_with_scalar_data(token, item_id, scalar_id, label)[source]¶
Associate a result item with a particular scalar value.
- upload_json_results(token, filepath, community_id, producer_display_name, metric_name, producer_revision, submit_time, **kwargs)[source]¶
Upload a JSON file containing numeric scoring results to be added as scalars. File is parsed and then deleted from the server.
that produced this value. :param submit_time: The submit timestamp. Must be parsable with PHP strtotime(). parent_keys: (optional) Semicolon-separated list of parent keys to look for numeric results under. Use '.' to denote nesting, like in normal javascript syntax. :param silent: (optional) If true, do not perform threshold-based email notifications for this scalar. :param unofficial: (optional) If true, creates an unofficial scalar visible only to the user performing the submission. :returns: The list of scalars that were created.
Source: http://pydas.readthedocs.org/en/latest/
MVC framework SPI interface, allowing parameterization of the core MVC workflow. Non-Ordered instances get treated as lowest priority.
public boolean supports(java.lang.Object handler)
A typical implementation:
return handler != null && MyHandler.class.isAssignableFrom(handler.getClass());
handler- handler object to check
public ModelAndView handle(javax.servlet.http.HttpServletRequest request, javax.servlet.http.HttpServletResponse response, java.lang.Object handler) throws javax.servlet.ServletException, java.io.IOException
request- current HTTP request
response- current HTTP response
handler- handler to use. This object must have previously been passed to the supports() method of this interface, which must have returned true. Implementations that generate output themselves (and return null from this method) may encounter IOExceptions.
javax.servlet.ServletException- if there is a general error
java.io.IOException- in case of I/O errors
public long getLastModified(javax.servlet.http.HttpServletRequest request, java.lang.Object handler)
request- current HTTP request
handler- handler to use
See also: HttpServlet.getLastModified(javax.servlet.http.HttpServletRequest)
Source: http://docs.spring.io/docs/tmp/com/interface21/web/servlet/HandlerAdapter.html
Leaderboards
OverviewOverview
The AccelByte Leaderboard Service enables you to keep track of players’ scores and ranking in the game by collecting data from the Statistics service such as the number of matches played, the player’s matchmaking rating, and their experience points. The Leaderboard service will use this data to calculate each player’s rank. This service supports multiple leaderboards in one game, including a daily leaderboard for day to day player activities, a weekly leaderboard for a weekly player recap, a monthly leaderboard to record player activities every month, a seasonal leaderboard for seasonal player activities such as holiday events, and also an all-time leaderboard to record and player activities for all time. Each type of leaderboard is explained in greater detail below.
Types of LeaderboardsTypes of Leaderboards
- The Daily Leaderboard collects players' daily scores and ranking once the leaderboard has been created. When the leaderboard has been generated, players can start playing the game and the leaderboard service will collect their score and ranking until the daily reset time. Players have to keep playing to maintain their position on the daily leaderboard.
- The Weekly Leaderboard is similar to the Daily Leaderboard, but player scores and rankings will be updated every week. Players have to keep playing to maintain their position on the weekly leaderboard.
- The Monthly Leaderboard is similar to the Daily and Weekly Leaderboards; the player's score and ranking will be updated every month.
- The All-Time Leaderboard doesn't have any time constraint; it will record the player’s score and ranking as long as the leaderboard exists in-game. As player scores continue to accumulate this leaderboard will be updated.
- The Season Leaderboard runs for a specific period of time that is set when the leaderboard is created. For example, if an event is running on the 1st of May between 00:30 AM and 2:30 AM, the season leaderboard will only record the rank and score of players who took part in that particular event. Once the event is over, the leaderboard will be reset.
How It WorksHow It Works
Single LeaderboardSingle Leaderboard
Leaderboard StartedLeaderboard Started
The leaderboard will start from the defined start time. At that time the leaderboard will retrieve the player’s current statistic scores, which will become the initial scores for the all-time leaderboard. If the leaderboard data also includes a daily, weekly, monthly, or seasonal suffix, it will also map to the statistic code.
Update a Player’s Statistic ScoreUpdate a Player’s Statistic Score
The leaderboard consumes statistic events to collect a player’s latest score. Then, players' delta score will be added to the existing score in related leaderboards such as daily, weekly, monthly, or seasonal, and the all-time leaderboard score will be replaced with the latest score.
Reset a LeaderboardReset a Leaderboard
A leaderboard will be reset based on its reset time that was calculated when the leaderboard started or when it is restarted.
Character LeaderboardCharacter Leaderboard
Update Character LeaderboardUpdate Character Leaderboard
When a player levels up or enters a new arena, the game client sends their score data to the Statistics service. There, the character event will be published with additional data, such as the character’s name and skills. On the other end, the Leaderboard service will fetch the character score data from Kafka and save the new score and rank data in its internal database.
Get Character LeaderboardGet Character Leaderboard
A game client call retrieves the leaderboard data from the Leaderboard service. It will return all related leaderboard data, including additional information such as character names and skills.
TutorialsTutorials
Create a New Leaderboard ConfigurationCreate a New Leaderboard Configuration
Before you create a leaderboard config, make sure you have created a statistic configuration in the same game namespace. Statistics configs are used to update a player's leaderboard rank.
Create a New Leaderboard Configuration Using APICreate a New Leaderboard Configuration Using API
Use the Create New Leaderboard POST - /leaderboard/v1/admin/namespaces/{namespace}/leaderboards endpoint.
Input the Namespace field with the game namespace.
Fill out the Request Body:
- Input the Reset Time for a Daily leaderboard. For other types of leaderboards, this field can be left blank.
- Define the order of the leaderboard in the Descending fields. Input this field with true if you want the leaderboard to appear in descending order.
- Input the iconURL field with your leaderboard icon URL.
- Input the Leaderboard Code. The code must be in lowercase and can contain a maximum of 32 characters.
- Input the Reset Date and Reset Time for a Monthly leaderboard. Input the Reset Date field with a number from 1 to 31. The default value is 1. For other types of leaderboards, these fields can be left blank.
- Input the Name of the leaderboard.
- If you want to create a Seasonal leaderboard, you need to input the Season Period with the number of days that will pass before the leaderboard is reset. This value must be greater than 31 days.
- Input the Start Time of the leaderboard with RFC3339 standard format, e.g. 2020-10-02T15:00:00.05Z.
- Input the Stat Code with the related statistic code that you’ve created. This will be the statistic the leaderboard draws from.
- Input the Reset Day and Reset Time for a Weekly leaderboard. For other types of leaderboards, this field can be left blank.
Upon successful request, a new leaderboard config will be created.
note
- The reset time must be in hours:minutes in 24-hour format, e.g. 01:30, 10:30, 15:30, 23:15. The default value is 00:00.
- The reset day must be in numerical order starting from 0 for Sunday to 6 for Saturday. The default value is 0.
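Putting the endpoint and request body together, a rough Python sketch of the call is shown below. The base URL, namespace, token, and field values are placeholders, and the exact JSON key names are assumptions inferred from the field list above - consult the API reference for the authoritative schema.

import requests

base_url = "https://<your-environment>.accelbyte.io"      # placeholder environment URL
namespace = "mygame"                                       # placeholder game namespace
headers = {"Authorization": "Bearer <admin_access_token>"}

# Key names below are assumed from the field descriptions above.
body = {
    "leaderboardCode": "weekly-mmr",
    "name": "Weekly MMR",
    "statCode": "mmr",
    "descending": True,
    "iconURL": "https://example.com/icons/mmr.png",
    "startTime": "2020-10-02T15:00:00.05Z",
    "weekly": {"resetDay": 0, "resetTime": "00:00"},
}

resp = requests.post(
    base_url + "/leaderboard/v1/admin/namespaces/" + namespace + "/leaderboards",
    json=body,
    headers=headers,
)
print(resp.status_code, resp.json())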
Create a New Leaderboard Through the Admin PortalCreate a New Leaderboard Through the Admin Portal
In the Leaderboard menu of the Admin Portal, click the Create Leaderboard button.
Fill in the required fields.
Input the Leaderboard Code with the appropriate format.
Input the Stat Code with the related statistic code that you’ve created. This will be the statistic that the leaderboard draws from.
Input the Start Date with RFC3339 standard format, e.g. 2020-10-02T15:00:00.05Z.
Input the Daily reset time for a daily leaderboard.
Input a day and time for the Weekly reset time for a weekly leaderboard.
Input the day and time for the Monthly reset time for a monthly leaderboard. For example, if you input 1st in the date field and 12:00 AM in the time field, the reset time will be the first day of the month at 12:00AM.
If you select the Seasonal option, you must input the number of days it will take for the leaderboard to reset in the Season Period Days field.
Choose the Order of the Leaderboard.
Select an Icon for your leaderboard config.
When you’re done, click the Add button to create your new leaderboard.
The new leaderboard will appear in the Leaderboard List.
Get Leaderboard RankingsGet Leaderboard Rankings
Get Leaderboard Rankings Using APIGet Leaderboard Rankings Using API
You can get the leaderboard ranking data for all-time, monthly, seasonal, and daily leaderboards by using their respective endpoint. But, if needed you can get all leaderboards by a particular namespace using the respective endpoint. Follow the steps below to make a request:
- Input the Namespace field with the game namespace.
- Input the Leaderboard Code. The code must be in lowercase and can contain a maximum of 32 characters.
- If you want to paginate the displayed data, input the limit with the number of lines of data to be returned, and offset with the start position of the queried data.
Upon successful request, you will retrieve the ranking data for the desired leaderboard.
Get Leaderboard Rankings Through the Admin PortalGet Leaderboard Rankings Through the Admin Portal
In the Leaderboard List of the Admin Portal, choose the leaderboard you want to see ranking data for by clicking on the Action column and selecting View.
In this example, we choose to see the all-time leaderboard rankings. You can view the rankings for all types of leaderboards.
Delete a User RankingDelete a User Ranking
To keep your leaderboards clean and fair, you can remove players who cheat.
Delete a User Ranking Using APIDelete a User Ranking Using API
Follow the steps below to remove players from the leaderboard.
- Use the Delete User Ranking: DELETE - /leaderboard/v1/admin/namespaces/{namespace}/leaderboards/{leaderboardCode}/users/{userId} endpoint.
- Input the Game Namespace.
- Input the Leaderboard Code.
- Input the User ID of the player you want to remove from the leaderboard. To remove multiple players, separate each player’s user ID with a comma.
Upon successful request, you will remove the player from the leaderboard.
Implementing Leaderboards Using the SDKImplementing Leaderboards Using the SDK
Get RankingsGet Rankings
Retrieve all player rankings using a specific leaderboard code. The data is presented in descending order.
- UE4
- Unity
Get User RankingGet User Ranking
Get a specific player’s ranking using their User ID and the desired leaderboard code.
- UE4
- Unity
Related ConceptsRelated Concepts
- Check out the API Reference to learn more about Leaderboards.
Source: http://docs-dev.accelbyte.io/docs/essential-service-guides/social/leaderboards/
Sharepoint¶
The SharePoint data source type allows end users to access and manipulate files and data stored in a Sharepoint Online instance. While similar to OneDrive, Sharepoint is intended for team collaboration and communication while working on a project. For more information on the differences between OneDrive and Sharepoint, see Microsoft documentation.
When configuring a connection to your Sharepoint instance, there are two different authentication mechanisms to choose from:
Using an OAuth 2.0 authentication provider
Using the SharePoint_SharedLogin custom authentication provider
Note
This method is only available to Skuid on Salesforce users.
Warning
Sharepoint connections are not supported when using Skuid on Salesforce in Lightning.
Connect Using OAuth 2.0 Authentication¶
To connect to SharePoint using OAuth2.0, you’ll need to complete three steps:
- Create a SharePoint app
- Create the SharePoint authentication provider
- Create the SharePoint data source
Prerequisites¶
A SharePoint Online instance, accessible online via a URL such as https://<CompanyName>.sharepoint.com, (where <CompanyName> is your SharePoint Online company name).
The app domain for your Skuid site.
- For Skuid Platform:
- Use the base URL for your Skuid site: example.skuidsite.com.
- For Skuid on Salesforce:
- Use the Salesforce App Menu to navigate to the Skuid app, and click it.
- In the Skuid app, copy the URL.
- Remove everything from /apex onwards.
The redirect URI.
In SharePoint¶
Create a SharePoint App / OAuth client¶
In order for Skuid to connect to SharePoint Online using OAuth 2.0, you must register a SharePoint Online “App”, which is a named OAuth Client.
Login to your SharePoint Online account.
Navigate to the Register an App page at https://<CompanyName>.sharepoint.com/_layouts/15/appregnew.aspx (where <CompanyName> is your SharePoint Online company name).
For example, if the company name is Acme, the location would be https://acme.sharepoint.com/_layouts/15/appregnew.aspx.
Client Id: Click Generate.
Client Secret: Click Generate.
Note
Copy the generated client Id and secret and store in a secure place. You’ll need them later in this process.
Title: Use a name that is descriptive.
App Domain: Use the app domain as noted in the Prerequisites section.
Redirect URI: Use the redirect URI as noted in the Prerequisites section.
Click Create.
Edit the App Permissions¶
Next, edit the SharePoint App’s permissions to define the client’s access to the SharePoint Online instance.
- Navigate to https://<CompanyName>.sharepoint.com/_layouts/15/appinv.aspx, and make the following edits:
App ID: Enter the App’s Client Id (previous section, Step 3) into the App Id field.
- Click Lookup to search for the App in the SharePoint Online instance’s registered apps.
- Select the app.
Permission Request XML: This XML determines the permissions for your Sharepoint app. Refer to Microsoft documentation to properly determine the best permissions settings for your app.
The examples below will assume the following Permission Request XML:
<AppPermissionRequests> <AppPermissionRequest Scope="" Right="Write" /> </AppPermissionRequests>
Note
You may want to change the “Write” in Right=”Write” to one of the following:
- Read
- Write
- Manage
- Full Control
Click Create to apply these changes to your App.
If presented with a “Do you trust …?” screen, click Trust It.
In Skuid¶
Create an authentication provider that uses the SharePoint app’s client ID and secret.
Create an authentication provider¶
Navigate to Configure > Data Sources > Authentication Providers.
Click New Authentication Provider.
Fill out the necessary fields:
Name: Enter a unique name, such as SharePoint_Auth
Authentication Method: OAuth 2.0 / OpenID
Provider Type: SharePoint Online
Grant Type: Authorization Code
Company Name: Your SharePoint Online Company Name.
Site Collection Path: Depending on the company’s SharePoint Online setup, it may be necessary to indicate the site collection path.
Note
Entering Company Name (and, if required, Site Collection Path) will auto-populate the following fields:
- The Authorize and Token Endpoint URLs
- Token Request Body Parameters
Client ID: Enter the Sharepoint app’s client ID (see the previous section).
Client Secret: Enter the Sharepoint app's client secret (see the previous section).
Click Save.
If asked to create Remote Site Settings, click OK.
Create a data source¶
With the authentication provider configured, create a data source that uses it to authenticate and gain access to SharePoint data.
- Navigate to Configure > Data Sources > New Data Sources.
- Click New Data Source.
- Select the SharePoint data source type.
- Enter a unique name for your data source, such as SharePointOnline.
- Click Next Step.
- Enter the required information as noted below:
- URL/Endpoint: https://<CompanyName>.sharepoint.com/_api where <CompanyName> is your SharePoint Online company name.
- OData Version: 3
- Check Use Proxy / Use Apex Proxy.
- Authentication Method: Authentication Provider
- Authentication Provider: Select the authentication provider you created in the previous section.
- Click Save.
- If Skuid asks to create a Remote Site Setting, click OK.
You can now use your SharePoint data source within a model on a Skuid page.
Connect using the Sharepoint_SharedLogin authentication provider¶
Note
This authentication option is only available for Skuid on Salesforce users.
To use the Sharepoint_SharedLogin authentication provider, you must save at least one set of Sharepoint username and password credentials within Skuid.
After saving these credentials, complete the following steps:
- Decide on a credentialing option
- Create the SharePoint data source
- Create a Remote Site setting for the SharePoint instance
Credential Options¶
Before creating the data source type, decide on the type of credentialing the data source will use.
Shared: Org-Wide credentials¶
With this option, all end users share a single Sharepoint login.
Warning
This option is not recommended unless all Skuid users have equal permissions for all data in the SharePoint instance.
Shared: Per-Profile credentials¶
If you have properly assigned data source profiles and permissions and would prefer to have logins shared by those profiles, select Shared: Per-Profile, with optional Org-Wide Default option.
Per-User credentials (recommended option)¶
If you want end users to enter their own credentials individually, choose Per-User, with optional Profile / Org-wide Defaults.
With Per-User credentials, users will need to enter their username and password for Sharepoint Online in the Credentials Management tab, accessible through the MyCredentials button in the Skuid navbar.
Note
If end users cannot access the My Credentials page, make sure that users’ Salesforce profiles or permission sets grant them access to the Credentials Management tab included in the Skuid app, and that this tab is both accessible and visible.
Create a data source¶
- Navigate to Configure > Data Sources > New Data Sources.
- Click New Data Source.
- Select the SharePoint data source type.
- Enter a unique name for your data source, such as SharePointOnline.
- Click Next Step.
- Enter the URL of the Sharepoint database that allows API access. This should be similar to https://<CompanyName>.sharepoint.com/_api where <CompanyName> is your SharePoint Online company name.
- Check Use Apex Proxy.
- For Authentication:
- Authentication Method: Authentication Provider (OAuth, Custom, etc)
- Authentication Provider: SharePoint_SharedLogin.
- Credential Source: (See the previous section for details on these options)
- Shared: Org-Wide
- Shared: Per-Profile, with optional Org-wide Default
- Per-User, with optional Profile / Org-wide Defaults
Once the Credential Source is selected, two new fields open. If using Shared: Org Wide, or using the option Org-wide default for the other two options, enter the following information:
Org-wide default username
Org-wide default password
Note
These credentials will NOT be visible or accessible to Skuid page users, but Skuid will use these credentials to authenticate users when making requests to SharePoint Online.
- Click Save.
- Skuid will ask to create a Remote Site Setting. Click OK.
Remote Site Settings¶
Skuid automatically created the Remote Site Setting for the actual Sharepoint instance (see Step 11, above); you must manually setup a Remote Site Setting for the Microsoft login URL that Skuid uses to authenticate to SharePoint Online.
In Salesforce Setup:
- Navigate to Security Controls > Remote Site Settings.
- Click New Remote Site.
- Give the Remote Site a descriptive name (something like “Skuid_Sharepoint_SharedLogin”).
- For the Remote Site URL, enter the Microsoft login URL, https://login.microsoftonline.com.
- Click Save.
If using Shared: Per-Profile credentials¶
After saving the data source and creating the Remote Site settings, find the data source in the list of data sources (Configure > Data Sources) and click Advanced Settings next to the SharePoint data source.
- Click Profile Credentials in the top right of the configuration area.
- Click Add to create new credential sets with the following fields:
- Applies to: Select from the picklist.
- Username
Using the Sharepoint Data Source Type¶
After configuring your authentication and data source settings, you can create models that reference Sharepoint objects.
In a Skuid page, click fa-plus-circle Add Model to create a new model, then edit the model:
- Name: Give the model a name
- Data Source Type: Sharepoint.
- Data Source: Select the data source that points to your Sharepoint database.
If using OAuth authentication [[]]¶
At the first use of this data source, Skuid opens a popup asking you to log into the Sharepoint app. Once logged in, SharePoint will display a screen asking, “Do you trust [the Skuid Sharepoint app title]?” Click Trust It.
When this is complete, the SharePoint object list will populate allowing you to select SharePoint external objects.
Warning
The browser may block this popup. In Google Chrome‚ for instance, a red X and a window icon appears in the URL bar.
Click on the icon, and then click Always allow popups from…
Click Done.
Click on this icon again, then click on the link that was blocked to open it up.
Other browsers will have similar processes to unblock popups.
Select the SharePoint External Object from the picklist.
- Model Label and Model Plural Label fields: enter appropriate labels.
Add Fields and Conditions, etc. to the model, as needed. Click Save.
Save and then Preview the page.
You can now use your SharePoint data source within a model on a Skuid page.
Using Sharepoint with the File Upload Component¶
You may use the File Upload File Upload Component to upload files to SharePoint. If you select a Sharepoint model for your File Upload Data Source Type, the following fields will appear:
- File Storage Location:
relative_server_url
- This default value simply means files will be sent to the URL specified during data source configuration..
- Relative folder URL: The parameter Skuid will append to the File Storage location URL.
- Update Metadata: Specify what, if any, metadata values will be sent to SharePoint along with the uploaded file. These values will update column records in the Sharepoint folder set as the storage location. Column values are updated by name-value pairs, which are separated by a colon and then delimited with a comma.
For example, if your SharePoint folder has an OpportunityName and RecordID column, inserting
"OpportunityName:KrakenConsulting, RecordID:0001" into this field would update the OpportunityName column record for the uploaded file to
KrakenConsulting and the RecordId column record to
0001.
Data Source Actions¶
- Download File: Commonly used as a row action with the
SP.Filesentity, this action downloads a file.
- File URL (Required): The URL Skuid uses to download files. Accepts merge syntax. In most use cases, this value should be {{Url}}.
Note
- Ensure that your
SP.Filesmodel is pulling in the Url field.
- Folders cannot be downloaded and will deliver an error page to the end user if an attempt is made. If using this action type in a row action, use display logic to display the action when
Size != 0, as folders do not have a file size in Sharepoint.
Troubleshooting¶
I’m having issues connecting to my instance [[]]¶
If using OAuth to connect to Sharepoint, verify that the authentication provider is properly configured.
Most importantly—even if the Company Name and Site Collection path properties have been set—ensure that the
<Company Name>and
<Site Collection Path>``values have been updated in the **authorize endpoint URL** and the ``<Company Name>value has been updated in the token endpoint URL.
If issues continue, try creating a remote site setting for
https://<Company Name>.sharepoint.com/_api/$metadata, where
<Company Name>matches the name of your organization.
Nothing appears when I try to select a model object. [[]]¶
- Verify that the Permission Request XML has been updated in your Sharepoint app registration.
General debugging [[]]¶
- See the data troubleshooting for general debugging advice.
- Visit community.skuid.com to ask questions or report problems and give feedback. | https://docs.skuid.com/v11.2.7/en/data/microsoft/sharepoint.html | 2021-06-13T00:20:28 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.skuid.com |
Scanning, as described in the next topic, Formatting. | https://www.docs4dev.com/docs/en/java/java8/tutorials/essential-io-scanning.html | 2021-06-12T23:41:33 | CC-MAIN-2021-25 | 1623487586465.3 | [] | www.docs4dev.com |
8.5.202.59
Web Services and Applications Release Notes
Helpful Links
Releases Info
Product Documentation
Web Services and Applications
Genesys Products
What's New
This release includes only resolved issues.
Resolved Issues
This release contains the following resolved issues:
Web Services API
A third-party library issue that occurred during resource-intensive Cassandra read operations, such as Full Sync, has been resolved. (HTCC-30488)
New elements that should not be visible in Contact History will not be parsed. (HTCC-30452)
Upgrade Notes
No special procedure is required to upgrade to release 8.5.202.59.
This page was last edited on April 24, 2019, at 17:31. | https://docs.genesys.com/Documentation/RN/latest/web-svr-apps85rn/web-svr-apps8520259 | 2021-06-13T01:01:28 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.genesys.com |
This section explains how to attach a custom workflow to the application creation operation in WSO2 API Manager (WSO2 API-M). First, see Workflow Extensions for information on different types of workflow executors.
Attaching a custom workflow to API creation allows you to control the creation of applications within the Store. An application is the entity that holds a set of subscribed API's that would be accessed by a authorization key specified for that praticular application. Hence, controlling the creation of these applications would be a decision based on the oragnization's requirement. Some example use cases would be
- Review the information of the application by a specific reviewer before the application is created.
- The application creation would be offered as a paid service.
- The application creation should be allowed only to users who are in a specific role.
Before you begin, if you have changed the API Manager's default user and role, make sure you do the following changes:
- Change the credentials of the workflow configurations in the registry resource
_system/governance/apimgt/applicationdata/workflow-extensions.xml.
- Point the database that has the API Manager user permissions to BPS.
- Share any LDAPs,. For more information, see Changing the Default Ports with Offset.
<Offset>2</Offset>
Tip: If you change the BPS port offset to a value other than 2 or run the
<API-M_HOME>/business-processes/epr management console (
https://<Server Host>:9443+<port-offset>/carbon).
If you are using Mac OS with High Sierra, you may encounter following warning when login into the Management console due to a compression issue <BPS_HOME>/repository/conf/tomcat/catalina-server.xml and change the compression="on" to compression="off" in Connector configuration and restart the BPS.
-.
Configuring WSO2 API Manager
Open the
<API creation workflow.
https://<Server-Host>:9443/carbon) and select Browse under Resources.>
All the workflow process services of the BPS run on port 9765 because you changed its default port (9763) with an offset of 2.
The application creation WS Workflow Executor is now engaged.
Go to the API Store, click Applications and create a new application.
It invokes the application creation process and creates a Human Task instance that holds the execution of the BPEL process until some action is performed on it.
Note the message that appears if the BPEL is invoked correctly, saying that the request is successfully submitted.
Sign in to the Admin Portal (), list all the tasks for application creation and approve the task. It resumes the BPEL process and completes the application creation.
Go back to the Applications page in WSO2 API Store and see the created application.
Whenever a user tries to create an application in the API Store, a request is sent to the workflow endpoint. Given below is a sample:
: | https://docs.wso2.com/pages/viewpage.action?pageId=80724606&navigatingVersions=true | 2021-06-12T22:49:48 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.wso2.com |
You can digitally sign or encrypt messages if you use a work email account that supports S/MIME-protected messages or IBM Notes email encryption on your BlackBerry device. Digitally signing or encrypting messages adds another level of security to email messages that you send from your device.
Digital signatures are designed to help recipients verify the authenticity and integrity of messages that you send. With S/MIME-protected messages, when you digitally sign a message using your private key, recipients use your public key to verify that the message is from you and that the message hasn't been changed.
Encryption is designed to keep messages confidential. With S/MIME-protected messages, when you encrypt a message, your device uses the recipient’s public key to encrypt the message. Recipients use their private key to decrypt the message.. | http://docs.blackberry.com/en/smartphone_users/deliverables/62002/cfl1391022272000.html | 2015-07-28T03:54:53 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.blackberry.com |
random float number between and
min [inclusive] and
max [inclusive] (Read Only).
// Instantiate the prefab somewhere between -10.0 and 10.0 on the x-z plane var prefab : GameObject; function Start () { var position: Vector3 = Vector3(Random.Range(-10.0, 10.0), 0, Random.Range(-10.0, 10.0)); Instantiate(prefab, position, Quaternion.identity); }); } }
Returns a random integer number between
min [inclusive] and
max [exclusive] (Read Only).
If
max equals
min,
min will be returned. The returned value will never be
max unless
min equals
max.
// Loads a random level from the level list
Application.LoadLevel(Random.Range(0, Application.levelCount));
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void Example() { Application.LoadLevel(Random.Range(0, Application.levelCount)); } } | http://docs.unity3d.com/ScriptReference/Random.Range.html | 2015-07-28T03:28:15 | CC-MAIN-2015-32 | 1438042981525.10 | [] | docs.unity3d.com |
...
Description / Features
This plugin enhances the Java Ecosystem to analyze Android projects within SonarQube:
- Adds a new set of set of rules based on Android lint
- On top of Java files, Android Manifest and resources (such as layouts or pictures) are analyzed
...
The Android SDK must be installed on the machine(s) running the SonarQube analyses. The
ANDROID_HOME environment variable should be configured to point to the installation directory of the Android SDK.
Note that you have to install the different Patforms/API whose that.
...
See Extending Coding Rules for Java.
SQALE
As Since the SQALE model for Java is already provided by the Java Ecosystem, the SQALE model for Android has to be applied manually:
- Download the XML file containing the SQALE model
- Log in as a System administratorAdministrator
- Go to Settings > SQALE > ImportBack Up / Export Restore > Merge modelModel
- Upload the XML file
- Click on "Merge selected files"
Change Log
- Restore | http://docs.codehaus.org/pages/diffpages.action?originalId=231737620&pageId=230400039 | 2014-04-16T10:52:02 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
I.
Parent topic:
Service books and diagnostic reports
Related tasks
Run, view, send, or delete a diagnostic report | http://docs.blackberry.com/en/smartphone_users/deliverables/48593/1593348.html | 2014-04-16T10:40:06 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.blackberry.com |
= loglaplace.cdf(x, c) >>> h = plt.semilogy(np.abs(x - loglaplace.ppf(prb, c)) + 1e-20)
Random number generation
>>> R = loglaplace.rvs(c, size=100)
Methods | http://docs.scipy.org/doc/scipy-0.12.0/reference/generated/scipy.stats.loglaplace.html | 2014-04-16T10:19:10 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.scipy.org |
Model Form Functions¶
- modelform_factory(model, form=ModelForm, fields=None, exclude=None, formfield_callback=None, widgets=None, localized_fields=None, labels=None, help_texts=None, error_messages=None)¶
Returns a ModelForm class for the given model. You can optionally pass a form argument to use as a starting point for constructing the ModelForm..
widgets is a dictionary of model field names mapped to a widget.
formfield_callback is a callable that takes a model field and returns a form field.
localized_fields is a list of names of fields which should be localized.
labels is a dictionary of model field names mapped to a label.
help_texts is a dictionary of model field names mapped to a help text.
error_messages is a dictionary of model field names mapped to a dictionary of error messages.
See ModelForm factory function for example usage.
You must provide the list of fields explicitly, either via keyword arguments fields or exclude, or the corresponding attributes on the form’s inner Meta class. See Selecting the fields to use for more information. Omitting any definition of the fields to use will result in an ImproperlyConfigured exception.Changed in Django Development version:
Previously, omitting the list of fields was allowed and resulted in a form with all fields of the model.
- modelformset_factory(model, form=ModelForm, formfield_callback=None, formset=BaseModelFormSet, extra=1, can_delete=False, can_order=False, max_num=None, fields=None, exclude=None, widgets=None, validate_max=False, localized_fields=None, labels=None, help_texts=None, error_messages=None)¶
Returns a FormSet class for the given model class.
Arguments model, form, fields, exclude, formfield_callback, widgets, localized_fields, labels, help_texts, and error_messages are all passed through to modelform_factory().
Arguments formset, extra, max_num, can_order, can_delete and validate_max, widgets=None, validate_max=False, localized_fields=None, labels=None, help_texts=None, error_messages. | https://docs.djangoproject.com/en/dev/ref/forms/models/ | 2014-04-16T10:10:47 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.djangoproject.com |
There are many ways in which a recorded test will not run as expected and result in a test failure. For example:
The pages in this section include diagnostic and troubleshooting guides that will help guide you in isolating the root cause and suggestions for how to resolve the problem.
A great first step for any troubleshooting scenario is to enable logging, and then view the log after the problematic behavior occurs. These steps can be done from the Help tab. If logging is already enabled, click Clear Log to remove old information. | http://docs.telerik.com/teststudio/user-guide/troubleshooting_guide.aspx | 2014-04-16T10:25:19 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.telerik.com |
Setting up taxes and tax rates is likely one of the first tasks you’ll want to perform when setting up a store. Taxes can be a complex matter, but WooCommerce aims to make setting them up as straightforward as possible. To get started you’ll want to go to WooCommerce > Settings > Tax.
Tax Options ↑ Back to Top
The tax tab displays several options which you can configure to suit your needs – the settings which you choose will ultimately be based on the tax jurisdiction under which your store is located.
The options are as follows:
Enable taxes
Define whether to enable taxes and tax calculations. If taxes are disabled, you can ignore the rest of the options on the page as they will have no effect.
Prices Entered With Tax
This option is perhaps the most important option when it comes to setting up how you manage taxes in your store as it determines how you will input product prices later on.
“Yes, I will enter prices inclusive of tax” would mean all catalog prices are input using your store’s base tax rate.
For example, in the UK you would input prices inclusive of the 20% tax rate e.g. You would enter a product price of £9.99 which includes £1.67 tax. A customer in the UK would pay £9.99 as defined, whereas in this example…
You can choose between:
- Customer billing address
- Customer shipping address (default)
- Store base address.
Shipping Tax Class
In most setups, shipping tax class is inherited from the item being shipping e.g. shipping a reduced rate item like baby clothes would also used a reduced rate.
If this is not the case in your jurisdiction, choose a different tax class here.
Round tax at subtotal level, instead of per line
If in your tax jurisdiction rounding is done last (when the subtotal is calculated) enable this option.
Additional Tax Classes
Tax Classes are assigned to your products. In most cases you will want to use the default “standard” class. If however you sell goods which require a different tax class (for example Tax except zero-rated products) you can add the classes here. To get you started we include “standard”, “reduced-rate” and “zero-rate” tax classes.
You will notice that at the top of the tax settings page each class is listed – click a class to view the tax rates assigned to the class.
Display prices during cart/checkout
This option determines how prices are displayed on your cart and checkout pages – it works independently from your catalog prices. Choose from inclusive/exclusive tax display.
Setting up Tax Rates ↑ Back to Top
At the top of the tax screen you will notice your tax classes are displayed – click on one to view the tax rates for the class.
Once viewing your tax class, you will see the tax rates table. Here you can define tax rates (1 per row). Click ‘insert row’ to get started.
Each tax rate has the following you may for California which has a 7% tax rate and then a local tax rate of 2% for ZIP code 90210. Notice the priorities – this demonstrates how you can ‘layer’ rates on top of one another.
Importing and exporting rates
There is an export button within the table which you can use to export a CSV of your input rates.
There is also an import function which you can use to import a CSV. The CSV requires 10 columns;
country code, state code, postcodes, cities, rate, tax name, priority, compound, shipping, tax class
Leave tax class blank for standard rates.
Viewing Tax Reports ↑ Back to Top
Tax reporting can be found in Reports > Taxes by Month. This report lets you view the taxes for the year:
The ‘toggle tax rows’ shows a different report breaking up taxes based on your rules. This is useful if you need a report showing local taxes separately for example. | http://docs.woothemes.com/document/setting-up-taxes-in-woocommerce/ | 2014-04-16T10:09:54 | CC-MAIN-2014-15 | 1397609523265.25 | [array(['http://docs.woothemes.com/wp-content/uploads/2014/03/WooCommerce-Settings-Tax.png',
'WooCommerce-Settings-Tax'], dtype=object)
array(['http://docs.woothemes.com/wp-content/uploads/2014/02/C26Z.png',
'CA-Tax-Rate-example'], dtype=object) ] | docs.woothemes.com |
Help Center
Local Navigation
Wi-Fi Roaming Threshold configuration setting
Description
This setting determines how often the Wi-Fi® transceiver scans for nearby wireless access points and roams to one of them if the signal quality is better than the signal of the current access point.
Default value
The default value is Auto. A BlackBerry® device selects roaming thresholds automatically.
Usage
When you configure this setting to Low, a BlackBerry device roams only when signal quality is very low.
When you configure this setting to Medium, a BlackBerry device roams when the signal quality is medium to low.
When you configure this setting to High, a BlackBerry device roams aggressively to access points with better signal strength.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/10872/WLAN_Roaming_Thresholdc_607199_11.jsp | 2014-04-16T10:58:58 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.blackberry.com |
Switching the Series Type
AnyChart provides a method allowing to change the series type if the current type and the new one have the same or similar fields. See the list of supported chart types to find out what series types can be converted to each other.
To switch the series type, use the seriesType() method of a series and set the name of the series type as a string parameter. The name of the series type used as a parameter is identical to the method used to create series of this type, e.g. bar() method turns into "bar", line() turns into "line" and so on.
The sample below demonstrates how the feature works with line, column, and area series, which require only one value:
// set the data var data = anychart.data.set([ ["Spring", 10], ["Summer", 15], ["Autumn", 8], ["Winter", 23] ]); // set the series type using method var series = chart.line(data); // change the series type to area series.seriesType("area");
In the following sample, the seriesType() method is applied to OHLC and Japanese candlestick series, which require four values, as well as to a range area series:
// set the series type var series = chart.ohlc(data); // change the series type series.seriesType("rangeArea");. | https://docs.anychart.com/Basic_Charts/Series_Type | 2017-03-23T04:21:44 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.anychart.com |
Release Note 20161101
This is a summary of new features and improvements introduced in the November 1, 2016 release. If you have any product feature request, please file it at feedback.treasuredata.com.
Table of Contents
Tool: Presto API-compatible JDBC / ODBC driver (Private Beta)
Treasure Data has released Presto API compatible interface for querying. This will allow us to work with a lot of other tools who have already supported Presto as a backend.
This feature is currently in Private Beta. Please contact your account representative if you’d like access to this new feature.
Integration: Mode Analytics (Private Beta)
Mode Analytics has released the native connectivity with Treasure Data, via new Presto-compatible API.
Integration: Looker (Private Beta)
Looker has released the native connectivity with Teasure Data, via new Presto-compatible API.
Integration: Datorama (Private Beta)
Datorama has released the native connectivity with Treasure Data, via new Presto-compatible API.
Client: JavaScript SDK v1.7.1
JavaScript SDK v1.7.1 was released. This reduced the size of SDK to load the file much faster. Also
Treasure#fetchGlobalID(success, failure, forceFetch) interface was added to retrieve client’s global ID. This method can be used for personalization, A/B testing, etc.
Last modified: Nov 11 2016 08:01:06 UTC
If this article is incorrect or outdated, or omits critical information, please let us know. For all other issues, please see our support channels. | https://docs.treasuredata.com/articles/releasenote-20161101 | 2017-03-23T04:23:21 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.treasuredata.com |
Information for "Language form field type" Basic information Display titleLanguage form field type Default sort keyLanguage form field type Page length (in bytes)1,629 Page ID218vangeest (Talk | contribs) Date of page creation07:28, 21 May 2011 Latest editorMvangeest (Talk | contribs) Date of latest edit07:28, 21 May 2011 Total number of edits1 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded template (1)Template used on this page: Template:Ambox (view source) (protected) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=Language_form_field_type&action=info | 2015-04-18T08:57:25 | CC-MAIN-2015-18 | 1429246634257.45 | [] | docs.joomla.org |
.
Installed Plugins
The "Installed Plugins" tab displays the list of installed plugins and the list of System plugins. For each none System plugins, clicking on the the plugin name expand the description area to get additional information like the license, the author, the homepage of the plugin. Finally, this tab allows to uninstall a plugin by clicking on the "Uninstall" button :
Available Plugins
This "Available Plugins" tab displays all available plugins according to the version of your Sonar platform. Those available plugins are grouped by category like "Additional Languages", "Visualization/Reporting", ... | http://docs.codehaus.org/pages/viewpage.action?pageId=185598096 | 2015-04-18T09:04:18 | CC-MAIN-2015-18 | 1429246634257.45 | [array(['/download/attachments/185598091/sonar-installed-plugins.png?version=1&modificationDate=1289403195344&api=v2',
None], dtype=object) ] | docs.codehaus.org |
Write bitmap text
The jit.gl.text2d object lets you draw bitmap text in the named drawing context. The text which is drawn can be sent as a symbol, a list of symbols, or as a jit.matrix containing char data. When a jit.matrix is used, each row of the matrix is interpreted as one line of text.
Notes:
Due to differences in implementation, the rotate, scale, screenmode, classic, and interp attributes are currently only supported on Macintosh. On PC, it behaves similar to when classic mode on Macintosh is turned on.
On Macintosh, this object interprets the input text as Unicode, allowing the display of non-Roman fonts. The Unicode encoding to use is determined based on the current font, set using the font message, and the operating system's script system and region code settings.
Note: The Windows version of this object does not suport Unicode. | https://docs.cycling74.com/max5/refpages/jit-ref/jit.gl.text2d.html | 2015-04-18T09:07:13 | CC-MAIN-2015-18 | 1429246634257.45 | [] | docs.cycling74.com |
Project Item Options
There are three Source Control specific options when you right click on a project item that is connected to TFS and hover over Source Control.
- Check In to Source Control - check-in the selected project item.
- Check Out from Source Control - check-out the selected project item.
- Get Latest - obtain the current copy of the selected project item from Source Control.
- Revert to Server Version - undo changes made to the selected project item since check-out.
- Disconnect from Source Control - disconnects the selected item from Source Control. | http://docs.telerik.com/teststudio/features/source-control/project-item-options | 2015-04-18T08:48:00 | CC-MAIN-2015-18 | 1429246634257.45 | [array(['/teststudio/img/features/source-control/project-item-options/fig1.png',
'Source Control'], dtype=object) ] | docs.telerik.com |
All docs This doc
The following procedure to view alerts is relevant only for system administrators. System administrators, API publishers as well as API subscribers receive alerts via notification emails if they have subscribed to one or more alert type.
Follow the procedure below to view alerts that were generated for the APIs deployed in your WSO2 API Manager installation.
https://<API-M_HOST>:<API-M_port>/admin | https://docs.wso2.com/display/AM250/Viewing+Alerts | 2021-11-27T09:21:02 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.wso2.com |
Registering Users and Assigning Roles
In this topic:
Registering new users and assigning roles
Assigning or unassigning roles from existing users
Adding or changing user email addresses
Also see:
Software users must be registered and assigned roles in order to gain access to this software. Before a user can connect to a server in a server group, that user must be registered as a user in the SANsymphony Management Console. The role assigned to the user determines the privilege level.
The Administrator account is added by default as a user when the software is installed. In this manner, a user may gain access to the SANsymphony Management Console after software installation in order to register users.
Credentials and User Names
- Windows operating system credentials are used to authenticate registered users when connecting to a server in a server group. Credentials can be domain-wide or local (workgroup).
- This software assumes domain authentication if the machine is a member of a domain. If the user requires local authentication, credentials should be specified as "machinename\username".
- If the machine is not a member of a domain, then local authentication is assumed. If the user requires domain authentication, credentials should be specified as "domainname\username".
- Domain credentials should be added to the Administrator group or other groups with administrator privileges.
- In order to perform maintenance of the software, such as software upgrades, the user account should have installation and administrator privileges.
- The same Windows user accounts and passwords should exist on all servers in the server group. If connecting to remote server groups, the same user accounts and passwords must exist on all servers in both local and remote groups. See Connecting to a Server Group and Name Resolution for important information.
- User names registered in this software must be identical to the Windows account created for that user. If connecting to remote server groups, users should be registered in both local and remote server groups.
- When the account is a domain account, register the user name as "domainname\username".
- When the account is a local account, register just the user name.
User Roles
There are three predefined roles:
- Full Privileges - Users are granted full privileges in using SANsymphony software. These users should have administrator privileges.
- View - Users may only view information in the SANsymphony Management Console and cannot make any changes to the configuration.
- VVol Managers - VVol Managers are granted permission to perform actions on VVOLs and protocol endpoints in the DataCore VASA Provider. This role is applied to the VASA Provider and should only be assigned to users that login to this software from the VASA Provider. Only users with this role will be able to perform actions on VVOLs and protocol endpoints.
Predefined roles cannot be edited or deleted. Also see Access Control for creating, editing, and deleting custom roles.
Registering Users and Assigning Roles
A user can have multiple roles assigned.
To register a new user and assign a role:
- Click the Register User link to open the Register User page.
- Enter user information:
- Name of the user. The user name must match the user account name in the Windows® operating system. (If credentials are domain-wide, include the domain with the name for example: DOMAIN\user name.)
- Email address. Email notifications can be sent to users when events occur, such as when pool thresholds have been reached or warnings are received from the System Health Tool. See Automated Tasks. (This step is optional and can be added later.)
- User description, if desired.
- In the list, click on the roles required and click Register. A details page will be created for the user. The User Details page contains the role and privileges assigned to the user, as well as the virtual disks owned by the user and the event log for the user.
After the user is registered, a User Details page is created for the user and the user is added to the Users List.
Assigning or Unassigning Roles for Registered Users
To assign a role to an existing user:
- In the Ribbon>Home tab, click Users to open the Users List.
- In the Users List, right-click on the name and select Assign Role.
(Alternatively, roles can be assigned by clicking Assign Role from the Ribbon>User Actions tab when the User Details page is open in the workspace.)
- In the list, select the role to add and click Assign. The role is added.
To unassign a role:
- In the Ribbon>Home tab, click Users to open the Users List.
- In the Users List, right-click on the name and select Unassign Role.
- In the list, select the role to remove. The role will be removed.
Deleting Users
When a user is deleted, the user can no longer log in to the SANsymphony Management Console.
To delete a user:
- In the Ribbon>Home tab, click Users to open the Users List.
- From the list, select one or more users to delete, then right-click and select Delete.
- You will receive a message to confirm deletion. Click Yes to continue. The user is deleted.
Adding or Changing User Email Addresses
Email notifications can be sent to users if an email address is entered.
In order to change the Administrator email address, the Administrator must be logged in to the console.
To add or change a user email addresses:
- In the Ribbon>Home tab, click Users to open the Users List.
- In the Users List, double-click on a name in the list to open the User Details page for the user.
- At the top of the page, click Edit.
- In Email address field, enter or make changes for the email address and click Done. | https://docs.datacore.com/SSV-WebHelp/SSV-WebHelp/Registering_Users_Assigning_Roles.htm | 2021-11-27T09:19:45 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.datacore.com |
Date: Fri, 10 Aug 2012 16:27:43 +0100 From: Matthew Seaman <[email protected]> To: Robert Huff <[email protected]> Cc: [email protected] Subject: Re: partial sendmail breakage Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enig7EBB65CABD1FAAB22287D0E5 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable On 10/08/2012 14:32, Robert Huff wrote: > I have restarted sendmail and get this in /var/log/messages: >=20 > Aug 10 08:26:56 jerusalem sm-mta[87853]: sql_select option missing > Aug 10 08:26:56 jerusalem sm-mta[87853]: auxpropfunc error no mechanism= available >=20 > I'm (obviously) not a sendmail expert; what other information > should I provide to help figure out what went wrong? You've implemented saslauth in this sendmail instance against some sort of SQL database. However something has caused sendmail to lose the ability to look up user accounts in that DB. Could be all sorts of things: is the DB running? Can you login to it manually using the same credentials as sendmail? Has there been any changes to DB schemas or user grants recently? How about changes to /usr/local/lib/sasl2/Sendmail.conf ? One thing you can try is turning up the log level in Sendmail.conf to get a better idea of what SASL is trying to do. Add a line log_level: N where N is an integer, bigger meaning more verbose logging. Cheers, Matthew --=20 Dr Matthew J Seaman MA, D.Phil. PGP: --------------enig7EBB65CABD1FAAB22287D0EFAlAlKG8ACgkQ8Mjk52CukIyLsgCfePBFfG9UV2ngvg4N5PF/3Y1s xzkAn0+5Gzwts56F5mAwrV56qpVEFQMl =RiAN -----END PGP SIGNATURE----- --------------enig7EBB65CABD1FAAB22287D0E5--
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=509703+0+/usr/local/www/mailindex/archive/2012/freebsd-questions/20120812.freebsd-questions | 2021-11-27T08:30:17 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.freebsd.org |
In-house Customers connecting to SharePoint Online - This configuration has been certified by frevvo. Follow these steps:
These instructions assume you are a frevvo Cloud customer or have an in-house installation of
up and running using the frevvo Tomcat bundle. Follow these steps:.
Remember SharePoint only allows https.
Cloud and in-house customers must configure their
tenant to connect to SharePoint as a client. frevvo expects that customers will only be integrating with one SharePoint instance for their organization. You will need the SharePoint Client Id and the Client Secret from Step 1.
Follow these steps:
Login to
as the tenant admin
Click the Edit Tenant link.
Enter the Connector URL. - This URL is needed if you are using
in the cloud and installing the connector locally. Enter the URL where your
does not receive a response back from SharePoint with a status code of 200, you can find information about the error from the
and posted to SharePoint.
To determine the version of the SharePoint connector you are using: | https://docs.frevvo.com/d/plugins/viewsource/viewpagesrc.action?pageId=22458987 | 2021-11-27T07:42:50 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.frevvo.com |
Create. To ensure availability for business-critical applications, take safe transition steps.. | https://docs.paloaltonetworks.com/best-practices/9-1/data-center-best-practices/data-center-best-practice-security-policy/how-to-create-data-center-best-practice-security-profiles/create-the-data-center-best-practice-vulnerability-protection-profile.html | 2021-11-27T09:17:45 | CC-MAIN-2021-49 | 1637964358153.33 | [array(['/content/dam/techdocs/en_US/dita/_graphics/9-1/best-practice/vuln-protection-profile-bp-pcap.png/_jcr_content/renditions/original',
None], dtype=object) ] | docs.paloaltonetworks.com |
Eventmie Pro is a flexible event hosting solution. You can use it as a Multi-organization (multi-vendor) or as a single organization website.
In the case of the multi-vendor, you as a website owner can invite event organizers to signup and create events on your website. And that's where you'll also need a commission system, for sharing the profit.
So, here's a semi-automatic commission system in which you can set
commission percentage (%) from Admin Panel, and then commissions on each booking will be calculated automatically.
At first, all bookings credit goes to the website owner's (Admin) bank account and then the owner distributes the organizer's payout manually and can update the
Payout Transfer status on the Admin Panel.
Eventmie Pro makes the payout transfer system as smooth as cheese 🧀. Follow the below guidelines, and you'll become master in it, in no time.
Admin Panel -> Settings -> Multi-vendortab.
Admin Commissione.g
5(in percent).
{primary} Commissions are recorded only if, the
Multi-vendor mode On&
Admin Commission Set.
After setting the
Admin Commission. Eventmie Pro will start recording commissions on each ticket sale.
Go to
Admin Panel -> Commissions.
Click on
You can see the Organizer's every Event monthly
Total Bookings,
Organizer earnings, and
Admin Commission.
Suppose, you've transferred an Organiser's payout for a particular month.
Check
Transferred checkbox on that particular row, and click on
{success} Doing the above, means, being a website owner, you can keep a record of the organizer's payouts transfers.
This is how, the Admin Commission and Organizer earnings are calculated behind the scenes. Let us explain from an example.
{primary} Admin Commission = 5% of (Ticket Price + Organizer Tax)
{primary} Admin Tax won't be a part of Organizer earning. It'll completely go to Admin account.
In case of Online Payment the Is Paid is set to Yes, while in case of Offline/Direct payment, Organizer or Admin needs to update the Is Paid status to Yes. The bookings will appear in Commissions only after the Is Paid = Yes.
If a booking is made after making the Organizer Earning's Transferred status set to Yes, then that new booking of the same month will appear as a new entry in Commission, as Un-Transferred payout.
Once, the Un-Transferred Organiser earning for the same month is set to Transferred, then it'll be merged in one single Transferred Payout for the month.
The commission will record and show the overall calculations with floating-point precision.
In case of booking cancellations & refunds regarding the Organizer payouts, that has already been transferred, then the refunded amount will come into Refund Settlement that needs to be claimed back from the Organiser.
The organizer can add their bank account details from their profile page on the front-end.
Admin can find the Organizer Bank Details directly from
Admin Panel -> Commissions page, for transferring the payouts to the organizer's bank account.
When a booking is canceled, the commission of that canceled booking get excluded automatically and won't be sum-up in the Organizer earnings.
{primary} Make sure you turn the booking
statusto
Disabledafter making a refund.
Eventmie Pro supports a single organization and multi-organization (multi-vendor, like a SaaS platform). You can toggle between these two modes with a click of a button from the
Admin Panel -> Settings -> Multi-vendor tab.
In the case of
multi-vendor mode
on. A user, after
Become Organizer.
After signup as
customer, go to
Profile.
Click on
In the popup, fill in your
Organization name/Brand name and click on
After submission, the user's
Group will be changed from
Customer to
Organizer.
There are some set of rules about what an Organizer CAN and CANNOT do.
Organizer CANNOT become Customer again. The process is
irreversible.
Organizer CAN create and manage their own events, but CANNOT book any event for their own.
Organizer CAN book only their own events for any other Customer (users) of the site.
Organizer CAN book event for a customer ONLY IF, the organizer knows the customer email.
Organizers CAN manage bookings of their own events only.
{success} The above actions can be performed by the Organizer from the front-end.
{primary} While Admin can do anything, without any limitations.
When the multi-vendor mode is
Off, then on the front-end, all the options for
Become Organizer,
Create Event or anything else related to
multi-vendor functionality, get invisible. In that case, Admin manages everything on behalf of an Organizer.
Admin can create a separate
User from the admin panel and assign the
User the
Organizer - Role. Follow these steps to do so.
Go to
Admin Panel -> Manage Users.
Click on
Fill in the Organizer's name, email, and password (you can fill your own).
Select the
Default Role to
Organizer.
{primary} After creating an organizer, Admin can use the organizer everywhere, whenever it asks to
Select Organizer.
When you set the Admin Commission to ZERO, it calculates the Organizer payouts with ZERO commission and shows all the payouts on the Admin Panel -> Commissions page.
In case of ZERO commission, Admin can still set Admin Tax, and all the detailed payout info will be shown on Admin Commission & Organizer earning page.
If a booking is deleted, refunded, or disabled, you'll see proper effects and changes in the Admin Commission & Organizer Earnings. | https://eventmie-pro-docs.classiebit.com/docs/1.5/admin/commissions | 2021-11-27T08:07:55 | CC-MAIN-2021-49 | 1637964358153.33 | [] | eventmie-pro-docs.classiebit.com |
Query Performance Insight for Azure SQL Database
APPLIES TO:
Azure SQL Database
Query Performance Insight provides intelligent query analysis for single and pooled databases. It helps identify the top resource consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resource that you are paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
- Deeper insight into your databases resource (DTU) consumption
- Details on top database queries by CPU, duration, and execution count (potential tuning candidates for performance improvements)
- The ability to drill down into details of a query, to view the query text and history of resource utilization
- Annotations that show performance recommendations from database advisors
Prerequisites
Query Performance Insight requires that Query Store is active on your database. It's automatically enabled for all databases in Azure SQL Database by default. If Query Store is not running, the Azure portal will prompt you to enable it.
Note
If the "Query Store is not properly configured on this database" message appears in the portal, see Optimizing the Query Store configuration.
Permissions
You need the following Azure role-based access control (Azure RBAC) permissions to use Query Performance Insight:
- Reader, Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are required to view the top resource-consuming queries and charts.
- Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are required to view query text.
Use Query Performance Insight
Query Performance Insight is easy to use:
Open the Azure portal and find a database that you want to examine.
From the left-side menu, open Intelligent Performance > Query Performance Insight.
On the first tab, review the list of top resource-consuming queries.
Select an individual query to view its details.
Open Intelligent Performance > Performance recommendations and check if any performance recommendations are available. For more information on built-in performance recommendations, see Azure SQL Database Advisor.
Use sliders or zoom icons to change the observed interval.
Note
For Azure SQL Database to render the information in Query Performance Insight, Query Store needs to capture a couple hours of data. If the database has no activity or if Query Store was not active during a certain period, the charts will be empty when Query Performance Insight displays that time range. You can enable Query Store at any time if it's not running. For more information, see Best practices with Query Store.
For database performance recommendations, select Recommendations on the Query Performance Insight navigation blade.
Review top CPU-consuming queries
By default, Query Performance Insight shows the top five CPU-consuming queries when you first open it.
Select or clear individual queries to include or exclude them from the chart by using check boxes.
The top line shows overall DTU percentage for the database. The bars show CPU percentage that the selected queries consumed during the selected interval. For example, if Past week is selected, each bar represents a single day.
Important
The DTU line shown is aggregated to a maximum consumption value in one-hour periods. It's meant for a high-level comparison only with query execution statistics. In some cases, DTU utilization might seem too high compared to executed queries, but this might not be the case.
For example, if a query maxed out DTU to 100% for a few minutes only, the DTU line in Query Performance Insight will show the entire hour of consumption as 100% (the consequence of the maximum aggregated value).
For a finer comparison (up to one minute), consider creating a custom DTU utilization chart:
- In the Azure portal, select Azure SQL Database > Monitoring.
- Select Metrics.
- Select +Add chart.
- Select the DTU percentage on the chart.
- In addition, select Last 24 hours on the upper-left menu and change it to one minute.
Use the custom DTU chart with a finer level of details to compare with the query execution chart.
The bottom grid shows aggregated information for the visible queries:
- Query ID, which is a unique identifier for the query in the database.
- CPU per query during an observable interval, which depends on the aggregation function.
- Duration per query, which also depends on the aggregation function.
- Total number of executions for a specific query.
If your data becomes stale, select the Refresh button.
Use sliders and zoom buttons to change the observation interval and investigate consumption spikes:
Optionally, you can select the Custom tab to customize the view for:
- Metric (CPU, duration, execution count).
- Time interval (last 24 hours, past week, or past month).
- Number of queries.
- Aggregation function.
Select the Go > button to see the customized view.
Important
Query Performance Insight is limited to displaying the top 5-20 consuming queries, depending on your selection. Your database can run many more queries beyond the top ones shown, and these queries will not be included on the chart.
There might exist a database workload type in which lots of smaller queries, beyond the top ones shown, run frequently and use the majority of DTU. These queries don't appear on the performance chart.
For example, a query might have consumed a substantial amount of DTU for a while, although its total consumption in the observed period is less than the other top-consuming queries. In such a case, resource utilization of this query would not appear on the chart.
If you need to understand top query executions beyond the limitations of Query Performance Insight, consider using Azure SQL Analytics for advanced database performance monitoring and troubleshooting.
View individual query details
To view query details:
Select any query in the list of top queries.
A detailed view opens. It shows the CPU consumption, duration, and execution count over time.
Select the chart features for details.
- The top chart shows a line with the overall database DTU percentage. The bars are the CPU percentage that the selected query consumed.
- The second chart shows the total duration of the selected query.
- The bottom chart shows the total number of executions by the selected query.
Optionally, use sliders, use zoom buttons, or select Settings to customize how query data is displayed, or to pick a different time range.
Important
Query Performance Insight does not capture any DDL queries. In some cases, it might not capture all ad hoc queries.
Review top queries per duration
Two metrics in Query Performance Insight can help you find potential bottlenecks: duration and execution count.
Long-running queries have the greatest potential for locking resources longer, blocking other users, and limiting scalability. They're also the best candidates for optimization. For more information, see Understand and resolve Azure SQL blocking problems.
To identify long-running queries:
Open the Custom tab in Query Performance Insight for the selected database.
Change the metrics to duration.
Select the number of queries and the observation interval.
Select the aggregation function:
- Sum adds up all query execution time for the whole observation interval.
- Max finds queries in which execution time was maximum for the whole observation interval.
- Avg finds the average execution time of all query executions and shows you the top ones for these averages.
Select the Go > button to see the customized view.
Important
Adjusting the query view does not update the DTU line. The DTU line always shows the maximum consumption value for the interval.
To understand database DTU consumption with more detail (up to one minute), consider creating a custom chart in the Azure portal:
- Select Azure SQL Database > Monitoring.
- Select Metrics.
- Select +Add chart.
- Select the DTU percentage on the chart.
- In addition, select Last 24 hours on the upper-left menu and change it to one minute.
We recommend that you use the custom DTU chart to compare with the query performance chart.
Review top queries per execution count
A user application that uses the database might get slow, even though a high number of executions might not be affecting the database itself and resources usage is low.
In some cases, a high execution count can lead to more network round trips. Round trips affect performance. They're subject to network latency and to downstream server latency.
For example, many data-driven websites heavily access the database for every user request. Although connection pooling helps, the increased network traffic and processing load on the server can slow performance. In general, keep round trips to a minimum.
To identify frequently executed ("chatty") queries:
Open the Custom tab in Query Performance Insight for the selected database.
Change the metrics to execution count.
Select the number of queries and the observation interval.
Select the Go > button to see the customized view.
Understand performance tuning annotations
While exploring your workload in Query Performance Insight, you might notice icons with a vertical line on top of the chart.
These icons are annotations. They show performance recommendations from Azure SQL Database Advisor. By hovering over an annotation, you can get summarized information on performance recommendations.
If you want to understand more or apply the advisor's recommendation, select the icon to open details of the recommended action. If this is an active recommendation, you can apply it right away from the portal.
In some cases, due to the zoom level, it's possible that annotations close to each other are collapsed into a single annotation. Query Performance Insight represents this as a group annotation icon. Selecting the group annotation icon opens a new blade that lists the annotations.
Correlating queries and performance-tuning actions might help you to better understand your workload.
Optimize the Query Store configuration
While using Query Performance Insight, you might see the following Query Store error messages:
- "Query Store is not properly configured on this database. Click here to learn more."
- "Query Store is not properly configured on this database. Click here to change settings."
These messages usually appear when Query Store can't collect new data.
The first case happens when Query Store is in the read-only state and parameters are set optimally. You can fix this by increasing the size of the data store, or by clearing Query Store. (If you clear Query Store, all previously collected telemetry will be lost.)
The second case happens when Query Store is not enabled, or parameters are not set optimally. You can change the retention and capture policy, and also enable Query Store, by running the following commands provided from SQL Server Management Studio (SSMS) or the Azure portal.
Recommended retention and capture policy
There are two types of retention policies:
- Size based: If this policy is set to AUTO, it will clean data automatically when near maximum size is reached.
- Time based: By default, this policy is set to 30 days. If Query Store runs out of space, it will delete query information older than 30 days.
You can set the capture policy to:
- All: Query Store captures all queries.
- Auto: Query Store ignores infrequent queries and queries with insignificant compile and execution duration. Thresholds for execution count, compile duration, and runtime duration are internally determined. This is the default option.
- None: Query Store stops capturing new queries, but runtime statistics for already captured queries are still collected.
We recommend setting all policies to AUTO and the cleaning policy to 30 days by executing the following commands from SSMS or the Azure portal. (Replace
YourDB with the database name.)
ALTER DATABASE [YourDB] SET QUERY_STORE (SIZE_BASED_CLEANUP_MODE = AUTO); ALTER DATABASE [YourDB] SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)); ALTER DATABASE [YourDB] SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
Increase the size of Query Store by connecting to a database through SSMS or the Azure portal and running the following query. (Replace
YourDB with the database name.)
ALTER DATABASE [YourDB] SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024);
Applying these settings will eventually make Query Store collect telemetry for new queries. If you need Query Store to be operational right away, you can optionally choose to clear Query Store by running the following query through SSMS or the Azure portal. (Replace
YourDB with the database name.)
Note
Running the following query will delete all previously collected monitored telemetry in Query Store.
ALTER DATABASE [YourDB] SET QUERY_STORE CLEAR;
Next steps
Consider using Azure SQL Analytics for advanced performance monitoring of a large fleet of single and pooled databases, elastic pools, managed instances and instance databases. | https://docs.microsoft.com/en-gb/azure/azure-sql/database/query-performance-insight-use | 2021-11-27T10:04:29 | CC-MAIN-2021-49 | 1637964358153.33 | [array(['media/query-performance-insight-use/opening-title.png',
'Query Performance Insight'], dtype=object)
array(['media/query-performance-insight-use/ia.png',
'The Recommendations tab'], dtype=object)
array(['media/query-performance-insight-use/annotation.png',
'Query annotation'], dtype=object)
array(['media/query-performance-insight-use/annotation-details.png',
'Query annotation details'], dtype=object)
array(['media/query-performance-insight-use/qds-off.png',
'Query Store details'], dtype=object) ] | docs.microsoft.com |
Windows.
Devices. Bluetooth. Advertisement Namespace
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Classes
Enums
Remarks
The Windows.Devices.Bluetooth.Advertisement namespace provides an app with a simple but powerful set of methods that allow the following:
- Receive advertisement data from Bluetooth LE peripherals with configurable filtering capabilities.
- Send out Bluetooth LE advertisements allowing the app to operate as a source of beacon advertisements.
This namespace has two sets of classes used for the following:
- Advertisement watcher for receiving.
- Advertisement publisher for sending. | https://docs.microsoft.com/en-us/uwp/api/windows.devices.bluetooth.advertisement?view=winrt-20348 | 2021-11-27T07:46:51 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.microsoft.com |
Modify schedule of reallocate job
You can delete an existing reallocation scan schedule. However, if you do this, the job's scan interval reverts to the schedule that was defined for it when the job was created with the volume reallocation start command..
cluster1::> volume reallocation schedule -s "0 23 6 *" /vol/db/lun1 | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-920/volume__reallocation__schedule.html | 2021-11-27T08:59:58 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.netapp.com |
The Move Equipment activity will cause a staff to move an equipment or transport to a destination.
The staff will exit their current location, travel to the equipment (or transport), pick it up, and then travel to the destination. If the destination is an object, the staff will drop off the equipment and enter the destination.
The Move Equipment activity only allows one connector out. See Adding and Connecting Activities for more information.
The following image shows properties for the Move Equipment Equipment defines the equipment or transport to be moved.
The Destination defines the object or position to walk to.
The Staff defines staff that will be moving the equipment. If an array of staff is specified, the first will move the equipment while the rest follow along. | https://docs.flexsim.com/en/22.0/Reference/PeopleObjects/ProcessFlowActivities/SubFlows/MoveEquipment/MoveEquipment.html | 2021-11-27T09:33:25 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.flexsim.com |
Date: Sat, 27 Nov 2021 09:27:55 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_6562_1783362170.1638005275241" ------=_Part_6562_1783362170.1638005275241 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
There are many paths to Rome.... frev= vo suggests the following best practices for managing your tenants, project= s, forms and workflows.
Multiple Live Forms server installati= ons are the most flexible and best practice for maintaining a production en= vironment. In this scenario, you may have a development server, a test/stag= ing server and a production server. Or you may have only a development serv= er also used for testing and a production server for deployed forms/workflo= ws.
Create roles and users in your development environment. If you = are using the default security manager, simply create the users and roles i= n the tenant, otherwise refer to customers using the LDAP/SAML/Azure Security Manager.=
The role names in your development en= vironment should be the same as the role names in your production environme= nt. If they are different, modifications to your workflows will have to be = made to users and workflows to reflect the production roles when they are m= oved to the production.
Create a generic production user account (ex: =E2=80=9Cproduction@&l= t;your tenant>=E2=80=9D) in your production environment and give this us= er the frevvo.Designer role. All your production forms/workflows will be in this user accou= nt.
If you are using a non-default security manager, this step and the next = step would be done via your IDP software.
When a designer is ready to deploy a form/workflow to produc= tion or update one already in production, a frevvo.Publisher will check-out= the new project from source code (a repository outside of a frevvo server)= and upload/replace the project into the generic production use= r account in your production environment.
The tenants in your development and p= roduction environments may have the same name although this is not required= .
We recommend that all in-house customers with a single server have two t= enants: a development/test tenant and a production tenant. A separate devel= opment/test tenants is recommended for the following reasons:
Follow the setup steps above for In-house Test/Staging Server Installations. The o= nly difference is that in your case your "test environment" is simply a dev= /test tenant on the same server as your "production environment" and not a = separate frevvo server.
Cloud customers have a single tenant. In this scenario, best practices a= re to create a set of test roles and test users. Your form/workflow designe= rs will use test roles/users during the dev/test phase.
Cloud customers may optionally purchase a 2nd tenant for development and= testing.
We recommend that the forms/workflows=
be created & tested by one/multiple designers in their own accounts. A=
fter the forms are designed/tested, they can be downloaded from the individ=
ual designer user accounts and uploaded to a generic production user accoun=
t (ex: =E2=80=9Cproduction@<your tenant>" where the forms can be publ=
ished and used by your end users.
We recommend using a generic production user account to publish projects= /forms/workflows into production for the following reasons:
Follow these best practices:
Create a generic production user (ex: =E2=80=9Cproduction@<your t= enant>=E2=80=9D) a= s the Production User, select the form/workflow, and click Deploy.
To upda= te a form/workflow that is already in production, che= ck "replace" on the upload screen. The deployment state of the form/workflo= w being replaced will be maintained for the updated version.
If you need to update a form/workflow that has been deployed to producti= on, there are specific steps to follow to avoid issues with submissions. Su= bmissions are tied to a specific form/workflow. It is very important that y= ou make your changes to the form/workflow that has the same typeId as the p= roduction version. This ensures that the production version of your form/wo= rkflow will be replaced by the updated version when you upload it to your p= roduction account and check the Replace checkbox= on the Upload screen.
When uploading a form/workflow wit= h the same ID as an existing form/workflow, without checking Replac= e, a copy will be created and the designer will see an error messa= ge: "The form/workflow that was uploaded matches the id of one that already= existed so a copy was made. If you intended to replace the existing form/w= orkflow, delete the form/workflow you just uploaded and upload it again but= check off the =E2=80=98Replace=E2=80=99 option."
When uploading a form/workflow wit= h Replace checked that is currently being edited by anothe= r user, the designer will see this error message: "This form/workflow is cu= rrently being edited by <user@tenant>. Please try again later."
Let's say you have a form/workflow in production that requires some chan= ges. contai= n the changes the next time they are "performed" from the task list. For ex= ample, let's say you
When you edit a workflow and change business rule or add/remove fields, = all the pending tasks pick up the latest version of the workflow. Pending t= asks for a form/workflow that integrates with a Google sheet reflects any c= hanges e= xisting workflows to finish) by temporarily changing the Access Control for= Who can start the form/workflow? to "Designer/Owner only" that no one else= can access it.
The Access Control feature i= n frevvo allows the designer to assign o= ther users permission to make changes to forms and workflows.
The ability to edit a form/workflo=
w should not be given to other users if the form/workflow is in production.=
Giving this pe=
rmission would enable those users to edit your production forms directly th=
ereby to produ=
ction.
Create test users in your development tenant. If you are using = the default security manager, simply create the test users in the tenant. R= efer to Customers us= ing the LDAP/SAML/Azure Security Manager if you are not using the defau= lt security manager.
The role names in your development te= nant should be the same as the role names in your production tenant. If the= y are different, modifications to your workflows will have to be made to us= ers and workflows to reflect the production roles.
When further updates/modification= s are required, the forms/workflows should again be edited in the designer = user accounts and then upload/replaced in the generic production user accou= nt.
If you are testing in a multiple tenant scenario, we recommend that both= your dev/test and production tenants are configured with the same security= manger. This is recommended for the following reasons: | https://docs.frevvo.com/d/exportword?pageId=22454396 | 2021-11-27T09:27:55 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.frevvo.com |
GroupDocs.Assembly for .NET 17.9 Release Notes
This page contains release notes for GroupDocs.Assembly for .NET 17.9.
Major Features
This release of GroupDocs.Assembly comes up with several fixes for recently supported email file formats.
Full List of Issues Covering all Changes in this Release
Public API and Backward Incompatible Changes
This section lists public API changes that were introduced in GroupDocs.Assembly for .NET 17.9. | https://docs.groupdocs.com/assembly/net/groupdocs-assembly-for-net-17-9-release-notes/ | 2021-11-27T08:07:33 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.groupdocs.com |
I have Created a webjob to fetch data from external site and entring data in SPOnline list using MS Graph. The webjob is hosted in App Service to run continuously with "Always On" feature activated. I can see job is running, but very slow, intermittently restarting without any error on logs.
Kindly suggest to fix the issue. | https://docs.microsoft.com/en-us/answers/questions/33320/continuoys-webjob-is-restarting.html | 2021-11-27T10:27:01 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.microsoft.com |
My WebLink
|
|
About
|
Sign Out
Search
01-19-2011 South Bend resumes curbside Christmas tree pickup
sbend
>
Public
>
News Releases
>
2011
>
01-19-2011 South Bend resumes curbside Christmas tree pickup
Metadata
Thumbnails
Annotations
Entry Properties
Last modified
1/19/2011 10:25:45 AM
Creation date
1/19/2011 10:25 /> <br /> <br /> <br /> <br />Office of the Mayor <br /> <br /> <br />MEDIA ADVISORY <br />Wednesday, January 19, 2011 <br />10:00 am <br /> <br /> <br /> <br />Contact: <br />Mikki Dobski, Dir. of Communications & Special Projects, 574-235-5855/876-1564c. <br /> Sam Hensley, Dir. of Streets, or Pete Kaminski, Manager, 574-235-9244 <br /> <br /> <br />South Bend resumes curbside Christmas tree pickup <br /> <br />South Bend, IN: As a result of the recent 40+ inches of snow, collection of live <br />Christmas trees was put on hold. With the break in the weather, and now that crews <br />have caught up with snow removal, the city of South Bend will resume curbside tree <br />pickup this week. “Barring any additional major snow or ice, crews will now collect <br />” <br />trees on a call-in basis, stated Sam Hensley, Dir. of Streets. So far, approximately <br />1200+ trees have been recycled. <br /> <br />call the <br />Residents are asked to place trees at curbside only, and not in alleys and then <br />Division of Streets at 235-9244 to schedule a pickup. <br /> Hensley also noted that <br />“many trees have been buried or covered by snow. Please assist crews in locating <br />trees for pickup by digging them out of the snow piles and then call us.” <br /> <br />In the event of major new winter weather, snow clearance takes precedence. <br />Residents can also drop off live trees at the following locations: <br />Rum Village Park <br /> - - Ewing Ave. at Gertrude St. <br />Pinhook Park <br /> - - 2800 block of Riverside Dr. <br />Veterans’ Memorial Park <br /> - - Twyckenham at Northside Blvd. <br />Please Note: Drop off is for live Christmas trees ONLY – all ornaments and <br />decorations must be removed - no artificial trees or trash are permitted under any <br />circumstances! Placing of any items other than live Christmas trees is against the <br />South Bend Municipal Code and tickets can be issued. <br /> <br />The drop-off locations are open seven days a week from 7 am to 5 pm and signs are <br />posted directing residents to designated drop-off points at all three locations. All trees <br />are recycled into high quality mulch, at Organic Resources, which is then provided, <br />free, to South Bend residents. <br />- 30 - <br />
The URL can be used to link to this page
Your browser does not support the video tag. | https://docs.southbendin.gov/WebLink/0/doc/16506/Page1.aspx | 2021-11-27T09:00:23 | CC-MAIN-2021-49 | 1637964358153.33 | [] | docs.southbendin.gov |
Overview
The walkthrough in this section is intended for users who are familiar with developing applications for WPF by using Visual Studio and know how to work with Telerik UI for WPF. You need the following components to complete this walkthrough:
Visual Studio 2010
ADO.NET Entity Framework 4.1
Prior knowledge of the following concepts is also helpful, but not required to complete the walkthrough:
- Entity Data Models and the ADO.NET Entity Framework. For more information, see Entity Data Model and Entity Framework. | https://docs.telerik.com/devtools/wpf/controls/radscheduleview/populating-with-data/binding-to-database/binding-to-db-overview | 2019-02-15T21:23:27 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.telerik.com |
Integrating SSO with Your App
This topic describes how to integrate SSO with Java and non-Java apps.
Integrate SSO with an App
Because SSO service is based on the OAuth protocol, any app that uses SSO must be OAuth-aware.
Java Apps
If you are using Java, see Single Sign-On Service Sample Applications. These are sample apps created using Spring Boot for all four app types. These apps use the SSO Service Connector, which auto-configures the app for OAuth. For more information about the SSO Service Connector, see spring-cloud-sso-connector on GitHub.
After binding the app to an SSO service instance, you must restart the app for the new SSO configuration to take effect.
Non-Java Apps
To configure non-Java apps for OAuth, supply the following properties as environment variables to your app after the SSO service bind. You can view this information on the Next Steps page of the SSO. | https://docs.pivotal.io/p-identity/1-7/integrating-sso.html | 2019-02-15T21:01:06 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pivotal.io |
Using Your Own Load Balancer
This guide describes how to use your own load balancer and forward traffic to your Pivotal Application Service (PAS) router IP address.
Pivotal Cloud Foundry (PCF) deploys with a single instance of HAProxy for use in lab and test environments. Production environments should use a highly-available customer-provided load balancing solution that does the following:
- Provides load balancing to each of the PCF Router IP addresses
- Supports SSL termination with wildcard DNS location
- Adds appropriate
x-forwarded-forand
x-forwarded-protoHTTP headers to incoming requests
- Sets an HTTP keepalive connection timeout greater than five seconds
- (Optional) Supports WebSockets the topic Deploying Operations Manager the configuration procedure for your deployment IaaS:
Step 5: Finalize Changes
Return to the Ops Manager Installation Dashboard
Click Install. | https://docs.pivotal.io/pivotalcf/2-1/customizing/custom-load-balancer.html | 2019-02-15T21:33:17 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.pivotal.io |
File:WM 852 nw trd rsp-accpt-icon.png
WM_852_nw_trd_rsp-accpt-icon.png (29 × 35 pixels, file size: 560 B, MIME type: image/png)
File history
Click on a date/time to view the file as it appeared at that time.
- You cannot overwrite this file.
File usage
The following 4 pages link to this file:
This page was last modified on December 9, 2016, at 05:24. | 2019-02-15T21:13:42 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.genesys.com |
|
Edit and delete groups
From PeepSo Docs
Contents
Edit Groups
In order to edit groups you go to the Group that needs to be edited -> Settings -> Click the edit button on the item you need to edit.
Group Name
Edit the name.
Group Description
Edit the description.
Categories
Select group categories
Has no effect on Site Administrators
Has no effect on Owner and Site Administrators
Deleting Groups
Go to PeepSo -> Groups and select the group you want to delete. Then open Bulk Actions and select delete option. | https://docs.peepso.com/wiki/Edit_and_delete_groups | 2019-02-15T21:31:12 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.peepso.com |
Delete all user accounts -f
This documentation applies to the following versions of Splunk® Enterprise: 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/4.3.1/Admin/DeleteuserdatafromtheCLI | 2019-02-15T21:30:27 | CC-MAIN-2019-09 | 1550247479159.2 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Remote participants can participate in the same way as people who are present.
- Either they are connected at the same time and receive the votes on their smartphones just like the other participants.
- Or you want them to receive your votes in order to answer them at their own pace. In this case, you simply have to create or import votes in the Surveys tab and send them the link. They will be able to participate as long as the survey is active. | https://docs.wooclap.com/faq-en/3-event-settings/how-can-i-share-my-questions-with-a-remote-audience | 2019-02-15T22:00:02 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.wooclap.com |
UDOO Neo Full¶. Zephyr was ported to run on the Cortex-M4 core only. In a future release, it will also communicate with the Cortex-A9 core (running Linux) via OpenAMP.
Hardware¶
- MCIMX6X MCU with a single Cortex-A9 (1 GHz) core and single Cortex-M4 (227 MHz) core
- Memory
- 1 GB RAM
- 128 KB OCRAM
- 256 KB L2 cache (can be switched into OCRAM instead)
- 16 KB OCRAM_S
- 32 KB TCML
- 32 KB TCMU
- 32 KB CAAM (secure RAM)
- A9 Boot Devices
- NOR flash
- NAND flash
- OneNAND flash
- SD/MMC
- Serial (I2C/SPI) NOR flash and EEPROM
- QuadSPI (QSPI) flash
- Display
- Micro HDMI connector
- LVDS display connector
- Touch (I2C signals)
- Multimedia
- Integrated 2d/3d graphics controller
- 8-bit parallel interface for analog camera supporting NTSC and PAL
- HDMI audio transmitter
- S/PDIF
- I2S
- Connectivity
- USB 2.0 Type A port
- USB OTG (micro-AB connector)
- 10/100 Mbit/s Ethernet PHY
- Wi-Fi 802.11 b/g/n
- Bluetooth 4.0 Low Energy
- 3x UART ports
- 2x CAN Bus interfaces
- 8x PWM signals
- 3x I2C interface
- 1x SPI interface
- 6x multiplexable signals
- 32x GPIO (A9)
- 22x GPIO (M4)
- Other
- MicroSD card slot (8-bit SDIO interface)
- Power status LED (green)
- 2x user LED (red and orange)
- Power
- 5 V DC Micro USB
- 6-15 V DC jack
- RTC battery connector
- Debug
- pads for soldering of JTAG 14-pin connector
- Sensor
- 3-Axis Accelerometer
- 3-Axis Magnetometer
- 3-Axis Digital Gyroscope
- 1x Sensor Snap-In I2C connector
- Expansion port
- Arduino interface
For more information about the MCIMX6X SoC and UDOO Neo Full board, see these references:
- NXP i.MX 6SoloX Website [8]
- NXP i.MX 6SoloX Datasheet [9]
- NXP i.MX 6SoloX Reference Manual [10]
- UDOO Neo Website [1]
- UDOO Neo Getting Started [2]
- UDOO Neo Documentation [3]
- UDOO Neo Datasheet [4]
- UDOO Neo Schematics [5]
Supported Features¶
The UDOO Neo Full board configuration supports the following hardware features:
The default configuration can be found in the defconfig file:
boards/arm/udoo_neo_full_m4/udoo_neo_full_m4_defconfig
Other hardware features are not currently supported by the port.
Connections and IOs¶
The UDOO Neo Full board was tested with the following pinmux controller configuration.
System Clock¶
The MCIMX6X SoC is configured to use the 24 MHz external oscillator on the board with the on-chip PLL to generate core clock. PLL settings for M4 core are set via code running on the A9 core.
Programming and Debugging¶
The M4 core does not have a flash memory and is not provided a clock at power-on-reset. Therefore it needs to be started by the A9 core. The A9 core is responsible to load the M4 binary application into the RAM, put the M4 in reset, set the M4 Program Counter and Stack Pointer, and get the M4 out of reset. The A9 can perform these steps at the bootloader level or after the Linux system has booted.
The M4 core can use up to 5 different RAMs (some other types of memory like a secure RAM are not currently implemented in Zephyr). These are the memory mappings for A9 and M4:
References¶
- NXP i.MX 6SoloX Reference Manual [10] Chapter 2 - Memory Maps
You have to choose which RAM will be used at compilation time. This configuration
is done in the file
boards/arm/udoo_neo_full_m4/udoo_neo_full_m4.dts.
If you want to have the code placed in the subregion of a memory, which will likely be the case when using DDR, select “zephyr,flash=&flash” and set the DT_FLASH_SIZE macro to determine the region size and DT_FLASH_ADDR to determine the address where the region begins.
If you want to have the data placed in the subregion of a memory, which will likely be the case when using DDR, select “zephyr,sram=&sram” and set the DT_SRAM_SIZE macro to determine the region size and DT_SRAM_ADDR to determine the address where the region begins.
Otherwise set “zephyr,flash” and/or “zephyr,sram” to one of the predefined regions:
"zephyr,flash" - &tcml - &ocram_s - &ocram - &ddr "zephyr,sram" - &tcmu - &ocram_s - &ocram - &ddr
Below you will find the instructions how a Linux user space application running on the A9 core can be used to load and run Zephyr application on the M4 core.
The UDOOBuntu Linux distribution contains a udooneo-m4uploader [6] utility, but its purpose is to load UDOO Neo “Arduino-like” sketches, so it doesn’t work with Zephyr applications in most cases. The reason is that there is an exchange of information between this utility and the program running on the M4 core using hardcoded shared memory locations. The utility writes a flag which is read by the program running on the M4 core. The program is then supposed to end safely and write the status to the shared memory location for the main core. The utility then loads the new application and reads its status from the shared memory location to determine if it has successfully launched. Since this functionality is specific for the UDOO Neo “Arduino-like” sketches, it is not implemented in Zephyr. However Zephyr applications can support it on their own if planned to be used along with the UDOOBuntu Linux running on the A9 core. The udooneo-uploader utility calls another executable named mqx_upload_on_m4SoloX which can be called directly to load Zephyr applications. Copy the Zephyr binary image into the Linux filesystem and invoke the utility as a root user:
mqx_upload_on_m4SoloX zephyr.bin
If the output looks like below, the mqx_upload_on_m4SoloX could not read the status of the stopped application. This is expected if the previously loaded application is not a UDOO Neo “Arduino-like” sketch and ignores the shared memory communication:
UDOONeo - mqx_upload_on_m4SoloX 1.1.0 UDOONeo - Waiting M4 Stop, m4TraceFlags: 00000000 UDOONeo - Waiting M4 Stop, m4TraceFlags: 00000000 UDOONeo - Waiting M4 Stop, m4TraceFlags: 00000000 UDOONeo - Waiting M4 Stop, m4TraceFlags: 00000000 UDOONeo - Failed to Stop M4 sketch: reboot system !
In such situation, the mqx_upload_on_m4SoloX utility has reset the trace flags, so it will succeed when called again. Then it can have this output below:: 000001E0 UDOONeo - M4 sketch is running
Or the one below, if the utility cannot read the status flag that the M4 core applications has started. It can be ignored as the application should be running, the utility just doesn’t know it:: 00000000 UDOONeo - Waiting M4 Run, m4TraceFlags: 00000000 UDOONeo - Waiting M4 Run, m4TraceFlags: 00000000 UDOONeo - Waiting M4 Run, m4TraceFlags: 00000000 UDOONeo - Failed to Start M4 sketch: reboot system !
The stack pointer and the program counter values are read from the binary. The memory address where binary will be placed is calculated from the program counter as its value aligned to 64 KB down, or it can be provided as a second command line argument:
mqx_upload_on_m4SoloX zephyr.bin 0x84000000
It is necessary to provide the address if the binary is copied into a memory region which has different mapping between the A9 and the M4 core. The address calculated from the stack pointer value in the binary file would be wrong.
It is possible to modify the mqx_upload_on_m4SoloX utility source code to not exchange the information with the M4 core application using shared memory.
It is also possible to use the imx-m4fwloader [7] utility to load the M4 core application.
One option applicable in UDOOBuntu Linux is to copy the binary file into the file /var/opt/m4/m4last.fw in the Linux filesystem. The next time the system is booted, Das U-Boot will load it from there.
Another option is to directly use Das U-Boot to load the code.
Debugging¶
The UDOO Neo Full board includes pads for soldering the 14-pin JTAG connector. Zephyr applications running on the M4 core have only been tested by observing UART console output. | https://docs.zephyrproject.org/latest/boards/arm/udoo_neo_full_m4/doc/udoo_neo_full.html | 2019-02-15T21:27:32 | CC-MAIN-2019-09 | 1550247479159.2 | [] | docs.zephyrproject.org |
For LON devices, the SmartServer provides two network management modes called LON Device Management Mode (DMM) and LON Independent Management Mode (IMM). In LON DMM, the SmartServer manages the configuration of all LON devices provisioned by the SmartServer. In LON IMM, the SmartServer acts as an independent device in the LON network, and you must provide a network management tool such as the IzoT Commissioning Tool (CT) to manage the network configuration of the LON devices. DMM is the default setting where the SmartServer manages the LON devices in the network. When operating in DMM, all the internal LON routers in the SmartServer are configured by the SmartServer and function as repeaters. In IMM, the SmartServer does not configure its internal routers and they must be configured with the independent network management tool.
If you will be using your SmartServer on a LON network managed by another network manager such as the IzoT Net Server or LNS Server, or if you will be using your SmartServer solely as an IP-852 router or an LNS Remote Network Interface (RNI), switch to LON Independent Management Mode (IMM) as described on this page. Example network management tools include the IzoT CT and LonMaker tool.
Note: In DMM, using both IzoT CT and the SmartServer CMS to add devices to their own networks is not supported.
Switching LON Management Mode
- If you defined any LON device interfaces or LON devices prior to switching off LON management mode, delete the interface and device definitions as described in Provisioning Devices or delete all definitions as described in Resetting the SmartServer to Factory Defaults.
- To set the LON management mode, click the Action button () on the Devices widget.
- Select the Switch to LON Device Management Mode or Switch to LON Independent Management Mode action as appropriate for your system.
- Click YES in the Confirmation dialog box to confirm the management mode change. | https://docs.adestotech.com/pages/viewpage.action?pageId=43375845 | 2022-01-29T04:28:24 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adestotech.com |
Risk webinars
You can also learn how to use Adyen's fraud and dispute management tools in an upcoming Risk webinar.
You can use our risk management system, RevenueProtect, to minimize fraud by applying risk rules before processing a transaction. For most point-of-sale transactions, you don't need RevenueProtect because the risk for in-person payments is significantly lower than for ecommerce and most risk rules do not apply.
However, for riskier point-of-sale transactions like Mail Order/Telephone Order (MOTO) and Manual Key Entry (MKE), you can enable risk rules in your Customer Area. Based on your risk settings, every transaction gets a risk score ranging from 0 to 100. When the risk score reaches 100, the transaction is declined and the terminal shows Card blocked.
MOTO and MKE payments are considered insecure. There is no liability shift and you are fully liable for fraud chargebacks when accepting MOTO and MKE payments.
Step 1: Enable risk rules for point of sale
To turn on the Adyen risk management system for point of sale:
- Log into your Customer Area and select an account:
- Company account: to enable risk checks by default for all point-of-sale transactions on all your merchant accounts.
- Merchant account: to enable risk checks only for point-of-sale transactions on a specific merchant account.
- Go to Risk > Settings and stay on the Global settings tab.
- Under Enable risk, select On.
- Under Perform risk checks on point of sale (POS), select Enable.
On a merchant account, you first need to select Override company setting.
- At the bottom, select Save configuration.
Step 2: Create a risk profile for point of sale
When you turn on the risk management system, the default risk profile of the company applies. Many rules in the default risk profile are not suitable for point-of-sale transactions.
Therefore, if you only process point-of-sale transactions on your merchant account, we recommend you create a dedicated risk profile with risk rules configured specifically for point of sale.
- In your Customer Area, select your company account.
- Go to Risk > Risk profiles.
- In the top right, select Create new profile.
- Enter a name for your profile.
- For the template, under Based on Profile, select the default company profile.
- Select Create, refresh the page, and open your profile.
- Under Used by, select the point-of-sale merchant accounts that you want to apply the risk rules to.
- Disable unnecessary risk rules.
At least, you must disable:
- Multiple distinct IP address used
- Shopper used shared IP address
- Multiple distinct shopper references used
- Configure custom risk rules.
- At the bottom, select Save profile.
Step 3: Disable unnecessary risk rules
Most risk rules are designed to minimize the risk of ecommerce transactions. To ensure the point-of-sale transactions are not declined unnecessarily:
- In your risk profile for point of sale, under ShopperDNA, disable the following risk rules:
- Multiple distinct IP addresses used and Shopper used shared IP address: because terminals use the IP address of the store, the cards of multiple shoppers will use the same IP address. If you don't disable these rules, point-of-sale transactions will be declined.
- Multiple distinct shopper references: the shopper reference is a unique identifier for a shopper that you send in the payment request. If you don't disable this rule, the transaction is declined if the same shopper has multiple shopper references (for example, due to using the card in different stores).
- Optionally, disable all other rules, except for Velocity and Consistency and rules that you want to customize.
Step 4: (Optional) Configure custom risk rules
MOTO and MKE transactions typically send just the card number, expiry date, and CVV. To enable the most important risk rules for point-of-sale transactions:
- In your Customer Area, go to your risk profile for point of sale.
- Under Consistency, enable rules based on:
- AVS checks (only if you send the street address and the ZIP/postal code of the shopper)
- CVV checks
- Under Velocity, enable rules based on the number of transactions a shopper attempts in a given time.
If you send additional data fields in your payment request (for example
shopperName,
shopperEmail) different risk rules may make sense for your exact use case.
To target specific behaviors, add Custom Rules to your point-of-sale risk profile.
Testing
When the transaction gets declined due to a risk rule, the
PaymentResponse
includes:
Result: Failure
error condition: Refusal
AdditionalResponse: provides more information about why the transaction was declined in the following fields:
refusalReason: 199 Card blocked
messageBLOCK_CARD
Here's an example failure response for a declined payment:
{ "SaleToPOIResponse": { "MessageHeader": {...}, "PaymentResponse": { "POIData": {...}, "PaymentReceipt": {...}, "PaymentResult": {...}, "SaleData": {...}, "Response": { "AdditionalResponse": "refusalReason=199%20Card%20blocked...&message=BLOCK_CARD...", "ErrorCondition": "Refusal", "Result": "Failure" } } } }
To test how your integration handles refusals due to a risk rule, simulate a specific declined payment:
- Make a test payment for an amount with 125 as the last three digits of the
RequestedAmount(for example, 101.25 or 21.25).
- In the response, check that the error condition is Refusal and the refusal reason is Card blocked.
- Make sure your integration doesn't retry the transaction. | https://docs.adyen.com/point-of-sale/risk-management-pos | 2022-01-29T04:15:08 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.adyen.com |
<resource ...> <name>Resource1</name> <administrativeOperationalState> <administrativeAvailabilityStatus>maintenance</administrativeAvailabilityStatus> <!-- values: maintenance/operational --> </administrativeOperationalState> ... </resource>
Resource Maintenance State
MidPoint provides feature that puts Resource into maintenance mode by administrative decission. It is also known as administrative operational state of the resource. This setting is useful when target system is e.g. undergoing planned maintenance to save computing time of the midPoint and to limit error messages that would normally appear from communication exceptions and so. Administrative operational state signalizes whether resource is up and operational to receive provisioning requests or is down in maintenance and midPoint should not contact it during the provisioning.
When under maintenance, operations are cached to the repository shadow and processed later when resource is back in operational mode. This is possible due to synergy with midPoint’s .
Administrative operational state is set manually by midPoint power user (e.g. administrator) in the ResourceType object - see example below. This is in contrast with OperationalStateType setting of the Resource, which is set automatically by the midPoint after each provisioning operation.
Operations executed on the resource which is under maintenance do not throw common provisioning errors. When change is requested, it’s result is IN_PROGRESS to indicate that pending operation has been saved to the ShadowType. When no change is requested or pending delta is propagated to operational resource, SUCCESS result is returned.
Limitations
Feature was designed for "outbound" resources. It was not tested for authoritative sources with inbound mappings (e.g. HR resource), but theoretically it may work.
Background tasks that were fitted for the administrative operational state of the resource are Reconciliation and LiveSync tasks. The rest of the task handlers may or may not work well when resource is in the maintenance state. The recompute task is compatible with resourceadministrative operational state by default since it launches individual user reconciles, which are supported.
Some midPoint screens do not make use of administrative operational state, this is e.g. case of Resource detail - Accounts tab - Search In: Resource. These screens invoke resource connector operations regardless of the maintenance setting. | https://docs.evolveum.com/midpoint/reference/resources/maintenance-state/ | 2022-01-29T05:28:09 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.evolveum.com |
If your integration already supports XML 3D Secure (3DS) 1.0, to support the new directives within the PSD2 regulations it is necessary to make some small changes.
When using 3DS 1.0 it is not necessary to add the Cardholder Name in the REQUEST BODY for the 3D Secure Verification. For 3DS 2.0 it is necessary to add the CARDHOLDERNAME tag containing the Cardholder Name entered by the user when performing the transaction.
PSD2 and Strong Customer Authentication (SCA)
The Payment Services Directive 2 (PSD2) comes into force in 2020 (only applicable in EU) and you might need to be prepared to provide SCA for your payments. Take a closer look at our F.A.Q in case you have more questions.
The process is described in the flowchart below.
1. A POST request is made to the Worldnet server. The server will handle the user authentication.
2. After authentication, the server will redirect to the URL set in the MPI Receipt URL in the `Selfcare > Settings > Terminal` section with the authentication results passed in the URL.
3. In your integration add the MPIREF code to your XML payment and send a payment load to the XML Request URL. For further details, check XML Payment Features.
4. If the payment is successful, the server will return an approval message.
The following resources are the same for all the requests and responses you find in this page:
3D SECURE LIVE
This URL should be used in test mode only. Please contact the Worldnet support team to receive the live URL.
ND001 - Hash Formation
The general rule to build HASH field is given at the Special Fields and Parameters page. For this specific feature, you should use the following formats:
TERMINALID:ORDERID:CARDNUMBER:CARDEXPIRY:CARDTYPE:AMOUNT:DATETIME:SECRET
The response body fields will be the same as from your previous integration. For details about the response, check the XML 3D Secure page. | https://docs.worldnettps.com/doku.php?id=developer:api_specification:upgrading_xml_to_3ds_version_2 | 2022-01-29T05:47:35 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.worldnettps.com |
Hardware Requirements¶
Do you ask yourself how much RAM do you need to give your new IntelMQ virtual machine?
The honest answer is simple and pointless: It depends ;)
Contents
IntelMQ and the messaging queue (broker)¶
IntelMQ uses a messaging queue to move the messages between the bots. All bot instances can only process one message at a time, therefore all other messages need to wait in the queue. As not all bots are equally fast, the messages will naturally “queue up” before the slower ones. Further, parsers produce many events with just one message (the report) as input.
The following estimations assume Redis as messaging broker which is the default for IntelMQ. When RabbitMQ is used, the required resources will differ, and RabbitMQ can handle system overload and therefore a shortage of memory.
As Redis stores all data in memory, the data which is processed at any point in time must fit there, including overheads. Please note that IntelMQ does neither store nor cache any input data. These estimates therefore only relate to the processing step, not the storage.
For a minimal system, these requirements suffice:
4 GB of RAM
2 CPUs
10 GB disk size
Depending on your data input, you will need the twentiethfold of the input data size as memory for processing.
When using Redis persistence, you will additionally need twice as much memory for Redis.
Disk space¶
Disk space is only relevant if you save your data to a file, which is not recommended for production setups, and only useful for testing and evaluation.
Do not forget to rotate your logs or use syslog, especially if you use the logging level “DEBUG”. logrotate is in use by default for all installation with deb/rpm packages. When other means of installation are used (pip, manual), configure log rotation manually. See Logging.
Background on memory¶
For experimentation, we used multiple Shadowserver Poodle reports for demonstration purpose, totaling in 120 MB of data. All numbers are estimates and are rounded. In memory, the report data requires 160 MB. After parsing, the memory usage increases to 850 MB in total, as every data line is stored as JSON, with additional information plus the original data encoded in Base 64. The further processing steps depend on the configuration, but you can estimate that caches (for lookups and deduplication) and other added information cause an additional size increase of about 2x. Once a dataset finished processing in IntelMQ, it is no longer stored in memory. Therefore, the memory is only needed to catch high load.
The above numbers result in a factor of 14 for input data size vs. memory required by Redis. Assuming some overhead and memory for the bots’ processes, a factor of 20 seems sensible.
To reduce the amount of required memory and disk size, you can optionally remove the raw data field, see Removing raw data for higher performance and less space usage in the FAQ.
Additional components¶
If some of the optional components of the IntelMQ Ecosystem are in use, they can add additional hardware requirements.
Those components do not add relevant requirements:
IntelMQ API: It is just an API for intelmqctl.
IntelMQ Manager: Only contains static files served by the webserver.
IntelMQ Webinput CSV: Just a webinterface to insert data. Requires the amount of processed data to fit in memory, see above.
Stats Portal: The aggregation step and Graphana require some resources, but no exact numbers are known.
Malware Name Mapping
Docker: The docker layer adds only minimal hardware requirements.
EventDB¶
When storing data in databases (such as MongoDB, PostgreSQL, ElasticSearch), it is recommended to do this on separate machines for operational reasons. Using a different machine results in a separation of stream processing to data storage and allows for a specialized system optimization for both use-cases.
IntelMQ cb mailgen¶
While the Fody backend and frontend do not have significant requirements, the RIPE import tool of the certbund-contact requires about 8 GB of memory as of March 2021. | https://intelmq.readthedocs.io/en/latest/user/hardware-requirements.html | 2022-01-29T03:27:03 | CC-MAIN-2022-05 | 1642320299927.25 | [] | intelmq.readthedocs.io |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
Symptomargeterror on the Message Processor. The Message Processor will send
502 Bad Gateway
TargetServerdefinition by including the
SSLInfoattributes where the
enabledflag"> <Host>mocktarget.apigee.net</Host> <Port>443</Port> <IsEnabled>true</IsEnabled> attributes with
ClientAuthEnabled,
Keystore,
KeyAlias, and
Truststoreflags unexpectederror and fix the issue appropriately in your backend server.
- If you don't find any errors or information in your backend server, collect the
tcpdumpoutput on the Message Processors:
-d | https://docs.apigee.com/api-platform/troubleshoot/runtime/502-bad-gateway?hl=pt-br | 2022-01-29T04:46:35 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://docs.apigee.com/api-platform/images/502-UnexEOF-502-logs.png?hl=pt-br',
None], dtype=object) ] | docs.apigee.com |
Field visibility adds another level of customization to the listing packages. Apart from setting the “Listings Limit” and “Expiration Date” on a package, you can now also show or hide any listing field based on the package the user decides to purchase.
To take advantage of this feature, first you must edit a listing type under wp-admin > listing types. In the General tab, choose Packages
There, you’ll be able to choose what listing packages are available on this particular listing type, set the order they appear in, set whether they’re featured (this adds a featured badge), and optionally override the package title and description for this type.
To use field visibility on a field, click on it and choose "Enable package visibility" option. Once you do this, you are able to add one or multiple rules and select the packages where this field will be available.
You can also select No Package which is useful to display premium fields on listings added through WP backend.
I can show or hide the fields based on package, what about listing blocks e.g the contact form block?
Majority of listing blocks will not show if the field is not available or does not output any value. So the contact form will not show if there's no email field, and the gallery won't show if there's no gallery field. So using field visibility on a field, will impact the block which displays that field too
What about the static code block which does not require any field available?
When you add a static code block you can find the visibility option in the block options. This block can be used as an example to display advertisements for listings which have a certain listing package, while hiding them on others. | https://docs.mylistingtheme.com/article/using-field-visibility-to-limit-fields-based-on-the-package-the-user-purchases/ | 2022-01-29T05:06:01 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.mylistingtheme.com |
Configure Tunnels with Cisco ISR
Table of Contents
Prerequisites
The following prerequisites must be met for the tunnel to work successfully.
Licensing and Hardware
- A valid Cisco Umbrella SIG Essentials or SIG Add-On subscription or a free SIG trial.
- A router (ISR-G2, ISR4K or CSR) with a security K9 license to establish an IPsec tunnel.
Network Access
- You must select an Umbrella SIG Data Center IP address to use when creating the IPsec tunnel.
In the sample commands,
<umbrella_dc_ip> refers to this IP address. We recommend choosing the IP address based on the data center located closest to your device.
The following ports must be open before connecting to the tunnel:
- UDP ports 500 and 4500.
Cisco router (ISR-G2, ISR4K or CSR) devices do not require public static IPv4 address(es) configured on the interface that will connect to the public internet and Cisco Umbrella SIG service. They can be behind a NAT device. This is because we can specify a text as its IKE ID. This ID in combination with the PSK is used to successfully authenticate the Cisco router (ISR-G2, ISR4K or CSR) devices with the Cisco Umbrella SIG service.
Text as an IKE ID also allows for multiple tunnels to be established from the same Cisco router device with a single IP address. This provides an opportunity to increase bandwidth by increasing the number of tunnels.
Umbrella Configuration
- Navigate to Deployments > Core Identities > Network Tunnels, then click Add.
- Give your tunnel a meaningful Tunnel Name, from the Device Type drop-down list choose ISR, and then click Save.
- Select your Tunnel ID from the drop-down list. Enter the Pre-Shared-Key (PSK) Passphrase and click Save.
The new tunnel appears in the Umbrella dashboard with a status of Not Established. The tunnel status is updated once it is fully configured and connected with the ISR.
Configuration for ISR (G2, 4K) or CSR
Follow these steps to connect the Cisco router to the Cisco Umbrella Cloud-Delivered Firewall. Some configurations are preliminary will not be required in the final product.
- Configure the IKEv2 keyring.
ISR routers support a default proposal and policy for IKEv2, with a predefined encryption, integrity and DH group. These values change across different software versions. You can either use the default proposal or you can create your own proposal. Your proposal needs to be attached to the policy with matching parameters. Create an IKEv2 keyring profile and configure the peer address and pre-shared key, associate the keyring profile to the IKEv2 profile, set the local identity as email and configure the IKE ID (email) which you get from the Tunnel Configuration dashboard.
For example, the default IKE proposal of an ISR running 16.11.01a image:
ISR-4221#sh ver Cisco IOS XE Software, Version 16.11.01a ISR-4221#show crypto ikev2 proposal default IKEv2 proposal: default Encryption : AES-CBC-256 Integrity : SHA512 SHA384 PRF : SHA512 SHA384 DH Group : DH_GROUP_256_ECP/Group 19 DH_GROUP_2048_MODP/Group 14 DH_GROUP_521_ECP/Group 21 DH_GROUP_1536_MODP/Group 5
Define the IKEv2 profile and policy with the following parameters.
match address local would be the tunnel source IP address.
crypto ikev2 proposal umbrella encryption aes-gcm-256 prf sha256 group 19 20 ! crypto ikev2 policy umbrella proposal umbrella match address local <x.x.x.x> !
Define the IKEv2 Keyring and profile with the following parameters.
The highlighted parameters, Cisco Umbrella team will share the VPN IP address with customers, local-id and pre-shared keys can be obtained while customers provision the Network Tunnels through the Cisco Umbrella dashboard. Substitute the IP address of the Umbrella data center closest to your location for
[umbrella_dc_ip]. You can find the entries marked
xxxxx... from the Umbrella dashboard.
crypto ikev2 keyring Umbrella-Key peer umbrella address [umbrella_dc_ip] pre-shared-key [Portal_Tunnel_Passphrase] ! crypto ikev2 profile umbrella match identity remote address [umbrella_dc_ip] identity local email [Portal_Tunnel_ID] authentication remote pre-share authentication local pre-share keyring local Umbrella-Key dpd 10 2 periodic !
In the above commands, replace [Portal_Tunnel_ID] and [Portal_Tunnel_Passphrase] with the Tunnel ID and Passphrase you configured in the previous section Umbrella Dashboard Configuration.
- Define the IPSec profile and transform-set.
Create the transform-set and IPsec profile. Then associate the transform-set and IKEv2 Profile with the IPSec profile. Refer to Supported IPsec Parameters for the recommended algorithms.
crypto ipsec transform-set Umb-Transform esp-aes 256 esp-sha-hmac mode tunnel ! crypto ipsec profile umbrella set transform-set Umb-Transform set ikev2-profile umbrella
- Create the tunnel interface.
Define the static tunnel interface with the peer IP as the Umbrella VPN headend IP and associate the IPsec profile under the tunnel. Make sure the tunnel interface does not contain NAT related commands; traffic sent to Umbrella should not have NAT applied.
interface Tunnel1 ip unnumbered GigabitEthernet <WAN Interface of the router > tunnel source GigabitEthernet <WAN Interface of the router > tunnel mode ipsec ipv4 tunnel destination [umb_dc_ip] tunnel protection ipsec profile umbrella
- Configure routing rules.
Define the traffic which needs to be tunneled to the CDFW. Based on the requirements, these ACL rules can be modified.
The route-map needs to be associated with the LAN interface of the router where the device receives the traffic.
In the following examples,
192.168.20.0/24 is the LAN subnet, and
GigabitEthernet is the LAN interface.
ip access-list extended To_Umbrella permit ip 192.168.20.0 0.0.0.255 any ! route-map Umbrella-PBR permit 10 match ip address To_Umbrella set interface Tunnel1 ! interface GigabitEthernet < LAN Interface > ip policy route-map Umbrella-PBR <Associate the Route-map to the LAN Interface>
Test Your Configuration
Check Tunnel Status
Use the following command to verify the tunnel status on your ISR.
show crypto session detail and the output must show the tunnel status as UP-ACTIVE.
Substitute the IP address of the Umbrella data center nearest your location for
[umbrella_dc_ip].
ISR#show crypto session detail Crypto session current status Code: C - IKE Configuration mode, D - Dead Peer Detection K - Keepalives, N - NAT-traversal, T - cTCP encapsulation X - IKE Extended Authentication, F - IKE Fragmentation R - IKE Auto Reconnect, U - IKE Dynamic Route Update Interface: Tunnel1 Profile: umbrella Uptime: 14:53:47 Session status: UP-ACTIVE Peer: [umbrella_dc_ip] port 4500 fvrf: (none) ivrf: (none) Phase1_id: [umbrella_dc_ip] Desc: (none) Session ID: 1 IKEv2 SA: local 10.10.10.201/4500 remote [umbrella_dc_ip]/4500 Active Capabilities:DFNXU connid:4 lifetime:09:06:13 IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0 Active SAs: 2, origin: crypto map Inbound: #pkts dec'ed 0 drop 0 life (KB/Sec) 4608000/2499 Outbound: #pkts enc'ed 0 drop 0 life (KB/Sec) 4608000/2499
Manually Trigger the Tunnel
Should the tunnel not come up immediately, or should it need to be manually triggered for any reason, select the tunnel interface, and issue the
shutdown and
no shutdown commands.
ISR(config)#int T1 ISR(config-if)#shutdown ISR(config-if)#no shutdown
Verify Tunnel Status
Verify the tunnel status with
show crypto session remote <Headend IP> detail. See the example output.
ISR:11 Session status: UP-ACTIVE Peer: 192.0.2.0 port 4500 fvrf: (none) ivrf: (none) Phase1_id: 192.0.2.0 Desc: (none) Session ID: 1 IKEv2 SA: local 192.0.2.0/4500 remote 146.112.66.2/4500 Active Capabilities:DN connid:3 lifetime:23:20:49 IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0 Active SAs: 2, origin: crypto map Inbound: #pkts dec'ed 0 drop 0 life (KB/Sec) 4607996/1248 Outbound: #pkts enc'ed 0 drop 0 life (KB/Sec) 4607997/1248
Validate the Data Path Through the Tunnel Using a Client
Once the tunnel is up, we can validate a single host, in the case the IP address of the host is 192.168.50.1 which is behind the tunnel, by making the following changes on the router, after successful validation we can add the entire subnet.
! ip access-list extended testip permit ip host 192.0.2.0 any ! route-map To_Umbrella permit 10 match ip address testip set interface Tunnel1 !
Validate the Data Path Through the Tunnel Using a Router
Validate the data-path through the tunnel, in case if we don’t have the client behind the routers. In that case, leverage local policy-based routing for router generated traffic and associate to the route-map.
Once the local PBR is configured, use the ping command with local LAN subnet as source IP address (LAN subnet), and verify that the encryption/decryption counters in the
show crypto session command.
ISR(config)#ip local policy route-map To_Umbrella ISR#ping 8.8.8.8 source 192.168.10.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 8.8.8.8, timeout is 2 seconds: Packet sent with a source address of 192.0.2.0 !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 79/79/80 ms ip-10-10-10-50:21 Session status: UP-ACTIVE Peer: 192.0.2.0 port 4500 fvrf: (none) ivrf: (none) Phase1_id: 146.112.66.2 Desc: (none) Session ID: 1 IKEv2 SA: local 192.0.2.0/4500 remote 192.0.2.0/4500 Active Capabilities:DN connid:3 lifetime:23:20:39 IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0 Active SAs: 2, origin: crypto map Inbound: #pkts dec'ed 5 drop 0 life (KB/Sec) 4607995/1238 Outbound: #pkts enc'ed 5 drop 0 life (KB/Sec) 4607997/1238
Configure Tunnels with Cisco ASA < Configure tunnels with Cisco ISR > Configure Tunnels with Cisco Firepower Threat Defense (FTD)
Updated about 1 month ago | https://docs.umbrella.com/umbrella-user-guide/docs/add-a-tunnel-cisco-isr | 2022-01-29T04:08:35 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://files.readme.io/36289ed-Add_Network_Tunnel.png',
'Add_Network_Tunnel.png'], dtype=object)
array(['https://files.readme.io/36289ed-Add_Network_Tunnel.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/252236c-isr.png', 'isr.png'], dtype=object)
array(['https://files.readme.io/252236c-isr.png', 'Click to close...'],
dtype=object)
array(['https://files.readme.io/616cbc8-tunnel_id.png', 'tunnel_id.png'],
dtype=object)
array(['https://files.readme.io/616cbc8-tunnel_id.png',
'Click to close...'], dtype=object) ] | docs.umbrella.com |
Viral Spread Theory in RKnot¶
RKnot is built up from the contact level. A contact is an interaction between an infected person and a susceptible person that could result in transmission of the virus (more details below). Contacts occur in a 2d environment in which subjects move randomly according to pre-defined distributions or deterministically according to event subscription.
When a contact occurs, transmission is determined stochastically according to a multi-faceted likelihood of transmission tailored specifically to the infected and susceptible subjects in contact at that specific time and location.
The expected number of new infections at any time can be found as the sum of all contacts at that time, each multiplied by that contact's specific transmission risk.
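\[E\big[\text{new infections at } t\big] \;=\; \sum_{\text{contacts } c \text{ at } t} \tau_c\]

where \(\tau_c\) is the likelihood of transmission for that particular contact.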
\(\tau\) is a fundamental property of the virus that can be further augmented for properties of the infected, the susceptible subject, or the location of the contact.
Thus, the transmission risk of the virus is required in order to mimic its spread. This can be provided directly or derived from a virus's estimated \(R_0\).
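As a rough sketch of this contact-level bookkeeping (standalone and illustrative only — the names and risk values below are assumptions, not RKnot's internal API):

import numpy as np

rng = np.random.default_rng(0)

# Each contact pairs one infected with one susceptible subject and carries its
# own transmission risk, already adjusted for subject and location properties.
contact_risks = np.array([0.03, 0.05, 0.02, 0.08])   # tau per contact (assumed values)

# Expected new infections at this step: the sum of the per-contact risks.
expected_new = contact_risks.sum()

# Realized new infections: one Bernoulli draw per contact.
realized_new = (rng.random(contact_risks.size) < contact_risks).sum()

print(expected_new, realized_new)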
\(R_0\)¶
The propensity for a virus to spread is most commonly referenced by its Basic Reproduction Number, \(R_0\). \(R_0\) is the number of new infections that will be caused by a single infected person in an entirely susceptible population.
At an individual level, \(R_0\) is influenced by many factors: the number of contacts the infected person has, the location of contacts, the social setting of contacts, etc.
At a population level, many models of \(R_0\) assume that the individual factors average out, thus leaving us with a property that is fundamental to the virus itself. We can see there is a broad range of \(R_0\) values for different viruses in the Wikipedia entry.
\(R_0\) can be described as:
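\[R_0 = \beta \, d\]

where \(\beta\) is the transmission rate (new infections generated per infected person per unit time) and \(d\) is the duration of infectiousness.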
The above ignores the number of contacts made; however, \(\beta\) can be further broken down as:
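\[\beta = \tau \, \overline{c}\]

where \(\tau\) is the likelihood of transmission per contact and \(\overline{c}\) is the average number of contacts per unit time.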
which yields:
Using the relationship above, we can simulate viral spread contact by contact, however, we must have a value(s) for \(\tau\).
Static Transmission Risk¶
There are many mathematical models used to describe viral transmission, the most commonly-referenced being the SIR model (Susceptible-Infected-Recovered).
The SIR model makes several simplifying assumptions:
- Closed, well-mixed population with no demography
- Constant Rates
This allows viral spread to be described in 3 equations (the standard SIR system, with \(S\), \(I\), \(R\) as population fractions and \(d\) the infectious period):
\[\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \frac{I}{d}, \qquad \frac{dR}{dt} = \frac{I}{d}\]
The simplified model allows for some quick estimates of various spread characteristics. The Herd Immunity Threshold (HIT), for instance, can be found as:
\[HIT = 1 - \frac{1}{R_0}\]
For instance, an \(R_0\) of 2.5x, the prevailing estimate for sars-cov-2, yields a HIT of 60%.
The assumption of constant rates presents problems for simulating at the contact level, however. SIR assumes that transmission risk is constant during the infection period and that each subject in the population has the same number of contacts and so is able to ignore \(\tau\) and \(c\).
Referring to the relationship \(R_0 = \tau \cdot \overline{c} \cdot d\) above:
- \(R_0\) is known (from external analysis and provided by the user)
- \(d\) is known (from external analysis and provided by the user)
Thus, unknowns are \(\tau\) & \(\overline{c}\)
As per above, we must know \(\tau\) in order to simulate spread.
Expected Contact Rate¶
While we do not know \(\overline{c}\), the simulation space is given a number of parameters that allow us to estimate the expected number of contacts. We know:
- The population size
- The number of locations
- The movement patterns of subjects
- The likelihood that a subject will be at a particular location at a particular time given 1/2/3 above
A simple method to estimate \(\overline{c}\) is to assume that each subject is equally likely to be in any one location at any time. The probability of a single dot being in a single location is:
The probability of another dot being there at the same time:
The probability of \(k\) dots being at the same location at the same time:
Then, the number of ordered contacts is:
And for all possible orders:
In fact, we can prove that the contact rate for an equal mover is the density using probability theory. We can view the probability of contact with other dots in the space as a binomial random variable with the trials representing the number of dots at a location, so:
If there are three dots (and 3 locations), the distribution governing their interactions is the sum of the two Binomial functions:
We know the sum of two Binomial distributions is:
so:
and for k dots:
which simplifies to:
and we know the mean of a binomial distribution is the product \(n*p\)
We can see that relationship below, where an environment with a density of 5 generates a \(\overline{k}\) of 5.
[85]:
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt

density = 5
n_dots = 10000
n_locs = n_dots // density

# binom always returns n + 1
p = 1 / n_locs
n = n_dots - 1
x = np.arange(n + 1)
kontact = st.binom(n, p)

fig, ax = plt.subplots(1, 1, figsize=(16, 12))
counts, bins, _ = plt.hist(
    np.random.binomial(n, p, 100000),
    bins=x, density=True, rwidth=.9
)
ax.plot(x + .5, kontact.pmf(x))
ax.set_xticks([i + .5 for i in range(20) if not i % 5])
ax.set_xticklabels([i for i in range(20) if not i % 5])
ax.set_xlabel('Number of Contacts')
plt.text(.5, .5, f'$\mu$: {n*p} contacts / day', transform=ax.transAxes)
plt.xlim(0, 20)
plt.gcf().set_facecolor('white')  # Or any color
plt.tight_layout()
plt.savefig('imgs/binom.png')
Likelihood of Transmission¶
With the expected contact rate known, the probability of transmission under the SIR model is found as:
\[\tau = \frac{R_0}{\overline{c} \cdot d}\]
So, again, if a susceptible subject has contact with an infected at the same location, at any time, its probability of infection is \(\tau\).
If a susceptible comes in contact with multiple infected, we assume that this results in multiple contacts that occur in succession. So we must sum all the branches of the probability tree that end in an infection:
\[P_{\text{infection}} = \sum_{j=1}^{k} \tau (1-\tau)^{j-1} = 1 - (1-\tau)^{k}\]
This ensures that the likelihood of transmission is asymptotic to 1 as the number of infected contacts \(k\) grows.
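A quick numerical check of this saturation (the value of \(\tau\) below is an arbitrary illustration, not an estimate from the model):

import numpy as np

tau = 0.05                          # illustrative per-contact transmission risk
k = np.arange(1, 101)               # number of infected dots at the same location
p_infection = 1 - (1 - tau)**k      # probability of at least one transmission

print(p_infection[[0, 9, 49, 99]])  # k = 1, 10, 50, 100 -> values approach 1.0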
Dynamic Transmission Risk¶
The SIR model makes several assumptions that make for simpler math, but that do not map well onto reality.
In particular, SIR assumes a constant transmission risk, \(\tau\), when in reality we know that the infectiousness of an individual changes over time as a function of viral load. It takes time for the virus to accumulate in the subject and, in turn, it takes time for the subject to diminish the virus via its immune response.
Generally, the greater the viral load, the greater the transmission risk. And so the likelihood of transmission should follow a similar pattern as the viral load (or vice versa). This New York Times piece has a nice visualisation of this concept for sars-cov-2.
There are several techniques available for incorporating viral load in a viral spread model, including serial interval, explored here, and generation time, explored here. These techniques, however, again tend to ignore \(\overline{c}\) and focus on \(\beta\).
Hutch Model¶
A paper from a team at the Fred Hutchinson Cancer Research Center, however, maps transmission risk directly onto an estimated viral load curve and infectiousness factor and optimizes it at a specific contact rate and contact variance (henceforth known as the Hutch model).
First, we will show how the dynamic transmission risk curve is derived. The goal is to produce an array of non-zero probabilites reflecting the likelihood of transmission of virus from one infected to a susceptible in a single contact, based on the known viral load characteristics of the virus.
The Hutch model combines quantities of infectiousness and viral load. Viral load is found via a system of 6 differential equations, implemented in the code below.
The variable v is the viral load.
With viral load known, the transmission risk, \(\tau\), can be found as:
\[\tau(v) = \left(\frac{v^{\alpha}}{v^{\alpha} + \gamma^{\alpha}}\right)^{2}\]
The paper provides estimates for several of the parameters including \(\gamma\), \(\alpha\), etc. The system can be described and solved using the odeint method in scipy.
[65]:
import math
import numpy as np
from scipy.integrate import odeint

def inf_rate(beta, v, s):
    return beta*v*s

def sfunc(beta, v, s):
    return -inf_rate(beta, v, s)

def vfunc(pi, i, c, v):
    return pi*i - c*v

def ifunc(beta, v, s, delta, i, k, m, e, r, phi):
    inf_r = inf_rate(beta, v, s)
    dens_rate = delta*(i**k)
    acq_res = (m * e**r) / (e**r + phi**r)
    return inf_r - dens_rate*i - acq_res*i

def m1func(omega, i, m1, q):
    return omega*i*m1 - q*m1

def m2func(q, m1, m2):
    return q * (m1-m2)

def efunc(q, m2, dE, e):
    return q*m2 - dE*e

def model(z, t):
    beta = 10**-7.23
    k = 0.08
    delta = 3.13
    pi = 10**2.59
    m = 3.21
    omega = 10**-4.55
    r = 10
    dE = 1
    q = 2.4*10**-5
    c = 15
    s, i, v, m1, m2, e = z[0], z[1], z[2], z[3], z[4], z[5]
    dsdt = sfunc(beta, v, s)
    didt = ifunc(beta, v, s, delta, i, k, m, e, r, phi)
    dvdt = vfunc(pi, i, c, v)
    dm1dt = m1func(omega, i, m1, q)
    dm2dt = m2func(q, m1, m2)
    dedt = efunc(q, m2, dE, e)
    dzdt = [dsdt, didt, dvdt, dm1dt, dm2dt, dedt]
    return dzdt

# FROM PRIOR RESEARCH
pi = 10**2.59
c = 15
S0 = 10**7
I0 = 1
V0 = pi*I0/c
M10 = 1
M20 = 0
E0 = 0
phi = 100

z0 = [S0, I0, V0, M10, M20, E0]
t = np.linspace(0, 20, 30*4)
z = odeint(model, z0, t)
v = z[:, 2]
This results in the curve below, which shows the level of virus present in an infected person over time.
We can see that sars-cov-2 viral load can have a long tail; however, the Hutch paper showed that the amount of virus present during the tail is unlikely to result in high transmission, as per below.
Transmission risk is derived from the viral load above as well as two properties of the infectiousness of the subjects in the contact, \(\alpha\) and \(\gamma\) (see the paper for more details).
The paper estimated \(\alpha\) = 0.8 and \(\gamma\) = \(10^7\).
[67]:
def infness(v, alpha, gamma):
    num = v**alpha
    den = gamma**alpha + v**alpha
    return num/den

def taufunc(v, alpha, gamma):
    return infness(v, alpha, gamma)**2

alpha = 0.8
gamma = 10**7

vlin = np.linspace(1, 10**10, 10**5)
tmr = taufunc(vlin, alpha, gamma)
tmr is the transmission risk curve that we will utilize in our simulations. It is a 1d array with each element representing the likelihood of transmission of the virus at that point in the infection’s life cycle.
We can see from the plot below, that transmission risk only increases materially at exponentially higher viral loads:
We can further combine Chart 1 and Chart 2 above to show that transmission of sars-cov-2 is likely only during a very narrow range in the early stage of infection.
Note that there is only a significant risk of transmission of the virus during the first few days of the infection period (shown as the more orange color under the curve).
If we assume random \(\tau\) values from the tmr curve above for 30 different infected dots, the likelihood of transmission to a susceptible as a function of the number of dots at the same location scales as follows:
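A minimal sketch of that scaling, reusing the tmr array computed above (the seed, the draw of 30 values and the printed counts are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
taus = rng.choice(tmr, size=30)      # one random per-dot transmission risk drawn from the curve above
p_any = 1 - np.cumprod(1 - taus)     # P(at least one transmission) vs. number of infected dots present

for k in (1, 5, 10, 30):
    print(k, round(float(p_any[k - 1]), 3))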
Other Forms of Heterogeneity¶
RKnot seeks to address the shortfalls in \(R_0\) models by allowing the user to introduce customized, heterogeneous populations across several axes including:
- Fatality Rate
- Population Density
- Movement - frequency and distance of location changes in space according to different probability distributions.
- Events - in the real world, people do not move and interact according to smooth probability functions. In fact, they typically have a small subset of movements that are huge outliers from any distribution. These are the professional sporting events, vacation trips, church functions, house parties, etc. that are scheduled and often times recurring. Thankfully, they are more often than not deterministic, which allows us to incorporate them in a simulation.
- Susceptibility - segments of population can be made immune (without requiring vaccination) to mimic phenomena like possible T-cell immunity.
- Subject Transmission Factor, \(T_i\): \(R_0\) assumes that all contacts have the same transmission risk, \(\tau\) (subject to the viral load at the time of the interaction). RKnot introduces a unitless Transmission Factor, \(T\), for each subject at each contact that can modulate \(\tau\). This can be used to mimic social distancing or mask wearing or different socio-cultural norms that may impact spread (i.e. east Asian bows versus southern European double-kisses).
Still To Be Incorporated
- Location Transmission Factor, \(T_{\text{xy}}\): similar to \(T_i\) above, we can introduce a transmission factor to specific locations that might result in higher or lower likelihood of spread. This could be used to simulate certain work environments (like enclosed office spaces or meat-packing plants). It can also be used to mimic seasonality, by changing \(T_{xy}\) over time to account for, say, more time outdoors in temperate seasons.
- Testing and Isolation - with the heightened awareness of a pandemic, individuals in a population are more likely to self-isolate or quarantine themselves upon symptom onset, thereby helping to reduce spread.
The Average Contact¶
Currently, RKnot assumes that each and every contact is an Average Contact.
The average contact is a purely theoretical interaction that would result in roughly the average likelihood of transmission relative to all other possible interactions. It is not influenced by external factors such as the demographics of the subjects, the properties of the location, etc. Thus, the \(\tau\) of an Average Contact is a fundamental property of the virus.
I like to think of the Average Contact as the Elevator Case, i.e.:
- Two people on an elevator, standing three feet apart, having a conversation for several minutes before one person exits. No masks nor other conscious social distancing, but no particularly reckless behaviour either.
Every other conceivable interaction can now be scaled relative to the Elevator Case on a continuum of higher or lower probability of transmission using transmission factors, \(T\). For instance:
- two college students pressed closely together on a concert floor and yelling at the band on stage would have \(T >>> 1x\)
- two people standing in a open field, 6 feet apart with masks on exchanging limited pleasantries would have \(T <<< 1x\)
References¶
- | https://rknotdocs.readthedocs.io/en/latest/theory.html | 2022-01-29T04:08:34 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['_images/theory_14_0.png', '_images/theory_14_0.png'], dtype=object)
array(['https://storage.googleapis.com/rknotvids/imgs/tau_v_contacts.png',
'Drawing'], dtype=object)
array(['_images/theory_23_0.png', '_images/theory_23_0.png'], dtype=object)
array(['_images/theory_27_0.png', '_images/theory_27_0.png'], dtype=object)
array(['_images/theory_29_0.png', '_images/theory_29_0.png'], dtype=object)
array(['_images/theory_34_0.png', '_images/theory_34_0.png'], dtype=object)] | rknotdocs.readthedocs.io |
Concepts in the AWS Encryption SDK
This section introduces the concepts used in the AWS Encryption SDK, and provides a glossary and reference. It's designed to help you understand how the AWS Encryption SDK works and the terms we use to describe it.
Need help?
Learn how the AWS Encryption SDK uses envelope encryption to protect your data.
Learn about the elements of envelope encryption: the data keys that protect your data and the wrapping keys that protect your data keys.
Learn about the keyrings and master key providers that determine which wrapping keys you use.
Learn about the encryption context that adds integrity to your encryption process. It's optional, but it's a best practice that we recommend.
Then you're ready to use the AWS Encryption SDK in your preferred programming language.
Topics
Envelope encryption
The security of your encrypted data depends in part on protecting the data key that can decrypt it. One accepted best practice for protecting the data key is to encrypt it. To do this, you need another encryption key, known as a key-encryption key or wrapping key. The practice of using a wrapping key to encrypt data keys is known as envelope encryption.
- Protecting data keys
The AWS Encryption SDK encrypts each message with a unique data key. Then it encrypts the data key under the wrapping key you specify. It stores the encrypted data key with the encrypted data in the encrypted message that it returns.
To specify your wrapping key, you use a keyring or master key provider.
- Encrypting the same data under multiple wrapping keys
You can encrypt the data key under multiple wrapping keys. You might want to provide different wrapping keys for different users, or wrapping keys of different types, or in different locations. Each of the wrapping keys encrypts the same data key. The AWS Encryption SDK stores all of the encrypted data keys with the encrypted data in the encrypted message.
To decrypt the data, you need to provide a wrapping key that can decrypt one of the encrypted data keys.
- Combining the strengths of multiple algorithms
To encrypt your data, by default, the AWS Encryption SDK uses a sophisticated algorithm suite with AES-GCM symmetric encryption, a key derivation function (HKDF), and signing. To encrypt the data key, you can specify a symmetric or asymmetric encryption algorithm appropriate to your wrapping key.
In general, symmetric key encryption algorithms are faster and produce smaller ciphertexts than asymmetric or public key encryption. But public key algorithms provide inherent separation of roles and easier key management. To combine the strengths of each, you can encrypt your data with symmetric key encryption, and then encrypt the data key with public key encryption.
Data key
A data key is an encryption key that the AWS Encryption SDK uses to encrypt your data. Each data key is a byte array that conforms to the requirements for cryptographic keys. Unless you're using data key caching, the AWS Encryption SDK uses a unique data key to encrypt each message.
You don't need to specify, generate, implement, extend, protect or use data keys. The AWS Encryption SDK does that work for you when you call the encrypt and decrypt operations.
To protect your data keys, the AWS Encryption SDK encrypts them under one or more key-encryption keys known as wrapping keys or master keys. After the AWS Encryption SDK uses your plaintext data keys to encrypt your data, it removes them from memory as soon as possible. Then it stores the encrypted data keys with the encrypted data in the encrypted message that the encrypt operations return. For details, see How the AWS Encryption SDK works.
In the AWS Encryption SDK, we distinguish data keys from data encryption keys. Several of the supported algorithm suites, including the default suite, use a key derivation function that takes the data key as input and returns the data encryption key that is actually used to encrypt your data. For this reason, we often say that data is encrypted "under" a data key even though it is encrypted with the derived data encryption key.
Each encrypted data key includes metadata, including the identifier of the wrapping key that encrypted it. This metadata makes it easier for the AWS Encryption SDK to identify valid wrapping keys when decrypting.
Wrapping key
A wrapping key (or master key) is a key-encryption key that the AWS Encryption SDK uses to encrypt the data key that encrypts your data. Each plaintext data key can be encrypted under one or more wrapping keys. You determine which wrapping keys are used to protect your data when you configure a keyring or master key provider.
Wrapping key refers to the keys in a keyring or master key provider. Master key is typically associated with the MasterKey class that you instantiate when you use a master key provider.
The AWS Encryption SDK supports several commonly used wrapping keys, such as AWS Key Management Service (AWS KMS) symmetric AWS KMS keys, raw AES-GCM (Advanced Encryption Standard/Galois Counter Mode) keys, and raw RSA keys. You can also extend or implement your own wrapping keys.
When you use envelope encryption, you need to protect your wrapping keys from unauthorized access. You can do this in any of the following ways:
Use a web service designed for this purpose, such as AWS Key Management Service (AWS KMS).

Use a hardware security module (HSM) such as those offered by AWS CloudHSM.
Use other key management tools and services.
If you don't have a key management system, we recommend AWS KMS. The AWS Encryption SDK integrates with AWS KMS to help you protect and use your wrapping keys. However, the AWS Encryption SDK does not require AWS or any AWS service.
Keyrings and master key providers
To specify the wrapping keys you use for encryption and decryption, you use a keyring (C and JavaScript) or a master key provider (Java, Python, CLI). You can use the keyrings and master key providers that the AWS Encryption SDK provides or design your own implementations. The AWS Encryption SDK provides keyrings and master key providers that are compatible with each other subject to language constraints. For details, see Keyring compatibility.
A keyring generates, encrypts, and decrypts data keys. When you define a keyring, you can specify the wrapping keys that encrypt your data keys. Most keyrings specify at least one wrapping key or a service that provides and protects wrapping keys. You can also define a keyring with no wrapping keys or a more complex keyring with additional configuration options. For help choosing and using the keyrings that the AWS Encryption SDK defines, see Using keyrings. Keyrings are supported in C and JavaScript.
A master key provider is an alternative to a keyring. The master key provider returns the wrapping keys (or master keys) you specify. Each master key is associated with one master key provider, but a master key provider typically provides multiple master keys. Master key providers are supported in Java, Python, and the AWS Encryption CLI.
You must specify a keyring (or master key provider) for encryption. You can specify the same keyring (or master key provider), or a different one, for decryption. When encrypting, the AWS Encryption SDK uses all of the wrapping keys you specify to encrypt the data key. When decrypting, the AWS Encryption SDK uses only the wrapping keys you specify to decrypt an encrypted data key. Specifying wrapping keys for decryption is optional, but it's an AWS Encryption SDK best practice.
For details about specifying wrapping keys, see Select wrapping keys.
Encryption context
To improve the security of your cryptographic operations, include an encryption context in all requests to encrypt data. Using an encryption context is optional, but it is a cryptographic best practice that we recommend.
An encryption context is a set of name-value pairs that contain arbitrary, non-secret additional authenticated data. The encryption context can contain any data you choose, but it typically consists of data that is useful in logging and tracking, such as data about the file type, purpose, or ownership. When you encrypt data, the encryption context is cryptographically bound to the encrypted data so that the same encryption context is required to decrypt the data. The AWS Encryption SDK includes the encryption context in plaintext in the header of the encrypted message that it returns.
The encryption context that the AWS Encryption SDK uses consists of the encryption context that you specify and a public key pair that the cryptographic materials manager (CMM) adds. Specifically, whenever you use an encryption algorithm with signing, the CMM adds a name-value pair to the encryption context that consists of a reserved name, aws-crypto-public-key, and a value that represents the public verification key. The aws-crypto-public-key name in the encryption context is reserved by the AWS Encryption SDK and cannot be used as a name in any other pair in the encryption context. For details, see AAD in the Message Format Reference.
The following example encryption context consists of two encryption context pairs specified in the request and the public key pair that the CMM adds.
"Purpose"="Test", "Department"="IT", aws-crypto-public-key=
<public key>
To decrypt the data, you pass in the encrypted message. Because the AWS Encryption SDK can extract the encryption context from the encrypted message header, you are not required to provide the encryption context separately. However, the encryption context can help you to confirm that you are decrypting the correct encrypted message.
In the AWS Encryption SDK Command Line Interface (CLI), if you provide an encryption context in a decrypt command, the CLI verifies that the values are present in the encryption context of the encrypted message before it returns the plaintext data.
In other languages, the decrypt response includes the encryption context and the plaintext data. The decrypt function in your application should always verify that the encryption context in the decrypt response includes the encryption context in the encrypt request (or a subset) before it returns the plaintext data.
When choosing an encryption context, remember that it is not a secret. The encryption context is displayed in plaintext in the header of the encrypted message that the AWS Encryption SDK returns. If you are using AWS Key Management Service, the encryption context also might appear in plaintext in audit records and logs, such as AWS CloudTrail.
For examples of submitting and verifying an encryption context in your code, see the examples for your preferred programming language.
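For instance, a minimal sketch with the Python implementation of the AWS Encryption SDK (the KMS key ARN is a placeholder and error handling is omitted; adapt it to your own keys and preferred language):

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

# Placeholder ARN -- replace with a real AWS KMS key in your account
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"]
)

context = {"Purpose": "Test", "Department": "IT"}

ciphertext, _ = client.encrypt(
    source=b"my secret data",
    key_provider=key_provider,
    encryption_context=context,
)

plaintext, decrypt_header = client.decrypt(
    source=ciphertext,
    key_provider=key_provider,
)

# Best practice: verify the submitted pairs are present in the decrypted header
assert all(decrypt_header.encryption_context.get(k) == v for k, v in context.items())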
Encrypted message
When you encrypt data with the AWS Encryption SDK, it returns an encrypted message.
An encrypted message is a portable formatted data structure that includes the encrypted data along with encrypted copies of the data keys, the algorithm ID, and, optionally, an encryption context and a digital signature. Encrypt operations in the AWS Encryption SDK return an encrypted message and decrypt operations take an encrypted message as input.
Combining the encrypted data and its encrypted data keys streamlines the decryption operation and frees you from having to store and manage encrypted data keys independently of the data that they encrypt.
For technical information about the encrypted message, see Encrypted Message Format.
Algorithm suite
The AWS Encryption SDK uses an algorithm suite to encrypt and sign the data in the encrypted message that the encrypt and decrypt operations return. The AWS Encryption SDK supports several algorithm suites. All of the supported suites use Advanced Encryption Standard (AES) as the primary algorithm, and combine it with other algorithms and values.
The AWS Encryption SDK establishes a recommended algorithm suite as the default for all encryption operations. The default might change as standards and best practices improve. You can specify an alternate algorithm suite in requests to encrypt data or when creating a cryptographic materials manager (CMM), but unless an alternate is required for your situation, it is best to use the default. The current default is AES-GCM with an HMAC-based extract-and-expand key derivation function (HKDF), key commitment, an Elliptic Curve Digital Signature Algorithm (ECDSA) signature, and a 256-bit encryption key.
If your application requires high performance and the users who are encrypting data and those who are decrypting data are equally trusted, you might consider specifying an algorithm suite without a digital signature. However, we strongly recommend an algorithm suite that includes key commitment and a key derivation function. Algorithm suites without these features are supported only for backward compatibility.
Cryptographic materials manager
The cryptographic materials manager (CMM) assembles the cryptographic materials that are used to encrypt and decrypt data. The cryptographic materials include plaintext and encrypted data keys, and an optional message signing key. You never interact with the CMM directly. The encryption and decryption methods handle it for you.
You can use the default CMM or the caching CMM that the AWS Encryption SDK provides, or write a custom CMM. And you can specify a CMM, but it's not required. When you specify a keyring or master key provider, the AWS Encryption SDK creates a default CMM for you. The default CMM gets the encryption or decryption materials from the keyring or master key provider that you specify. This might involve a call to a cryptographic service, such as AWS Key Management Service (AWS KMS).
Because the CMM acts as a liaison between the AWS Encryption SDK and a keyring (or master key provider), it is an ideal point for customization and extension, such as support for policy enforcement and caching. The AWS Encryption SDK provides a caching CMM to support data key caching.
Symmetric and asymmetric encryption
Symmetric encryption uses the same key to encrypt and decrypt data.
Asymmetric encryption uses a mathematically related data key pair. One key in the pair encrypts the data; only the other key in the pair can decrypt the data. For details, see Cryptographic algorithms in the AWS Cryptographic Services and Tools Guide.
The AWS Encryption SDK uses envelope encryption. It encrypts your data with a symmetric data key. It encrypts the symmetric data key with one or more symmetric or asymmetric wrapping keys. It returns an encrypted message that includes the encrypted data and at least one encrypted copy of the data key.
- Encrypting your data (symmetric encryption)
To encrypt your data, the AWS Encryption SDK uses a symmetric data key and an algorithm suite that includes a symmetric encryption algorithm. To decrypt the data, the AWS Encryption SDK uses the same data key and the same algorithm suite.
- Encrypting your data key (symmetric or asymmetric encryption)
The keyring or master key provider that you supply to an encrypt and decrypt operation determines how the symmetric data key is encrypted and decrypted. You can choose a keyring or master key provider that uses symmetric encryption, such as an AWS KMS keyring, or one that uses asymmetric encryption, such as a raw RSA keyring or JceMasterKey.
Key commitment
The AWS Encryption SDK supports key commitment (sometimes known as robustness), a security property that guarantees that each ciphertext can be decrypted only to a single plaintext. To do this, key commitment guarantees that only the data key that encrypted your message will be used to decrypt it. Encrypting and decrypting with key commitment is an AWS Encryption SDK best practice.
Most modern symmetric ciphers (including AES) encrypt a plaintext under a single secret key, such as the unique data key that the AWS Encryption SDK uses to encrypt each plaintext message. Decrypting this data with the same data key returns a plaintext that is identical to the original. Decrypting with a different key will usually fail. However, it's possible to decrypt a ciphertext under two different keys. In rare cases, it is feasible to find a key that can decrypt a few bytes of ciphertext into a different, but still intelligible, plaintext.
The AWS Encryption SDK always encrypts each plaintext message under one unique data key. It might encrypt that data key under multiple wrapping keys (or master keys), but the wrapping keys always encrypt the same data key. Nonetheless, a sophisticated, manually crafted encrypted message might actually contain different data keys, each encrypted by a different wrapping key. For example, if one user decrypts the encrypted message it returns 0x0 (false) while another user decrypting the same encrypted message gets 0x1 (true).
To prevent this scenario, the AWS Encryption SDK supports key commitment when encrypting and decrypting. When the AWS Encryption SDK encrypts a message with key commitment, it cryptographically binds the unique data key that produced the ciphertext to the key commitment string, a non-secret data key identifier. Then it stores key commitment string in the metadata of the encrypted message. When it decrypts a message with key commitment, the AWS Encryption SDK verifies that the data key is the one and only key for that encrypted message. If data key verification fails, the decrypt operation fails.
Support for key commitment is introduced in version 1.7.x, which can decrypt messages with key commitment, but won't encrypt with key commitment. You can use this version to fully deploy the ability to decrypt ciphertext with key commitment. Version 2.0.x includes full support for key commitment. By default, it encrypts and decrypts only with key commitment. This is an ideal configuration for applications that don't need to decrypt ciphertext encrypted by earlier versions of the AWS Encryption SDK.
Although encrypting and decrypting with key commitment is a best practice, we let you decide when it's used, and let you adjust the pace at which you adopt it. Beginning in version 1.7.x, AWS Encryption SDK supports a commitment policy that sets the default algorithm suite and limits the algorithm suites that may be used. This policy determines whether your data is encrypted and decrypted with key commitment.
Key commitment results in a slightly larger (+ 30 bytes) encrypted message and takes more time to process. If your application is very sensitive to size or performance, you might choose to opt out of key commitment. But do so only if you must.
For more information about migrating to versions 1.7.x and 2.0.x, including their key commitment features, see Migrating to version 2.0.x. For technical information about key commitment, see AWS Encryption SDK algorithms reference and AWS Encryption SDK message format reference.
Commitment policy
A commitment policy is a configuration setting that determines whether your application encrypts and decrypts with key commitment. Encrypting and decrypting with key commitment is an AWS Encryption SDK best practice.
Commitment policy has three values.
The commitment policy setting is introduced in AWS Encryption SDK version 1.7.x. It's valid in all supported programming languages.
ForbidEncryptAllowDecrypt decrypts with or without key commitment, but it won't encrypt with key commitment. This is the only valid value for commitment policy in version 1.7.x and it is used for all encrypt and decrypt operations. It's designed to prepare all hosts running your application to decrypt with key commitment before they ever encounter a ciphertext encrypted with key commitment.
RequireEncryptAllowDecrypt always encrypts with key commitment. It can decrypt with or without key commitment. This value, introduced in version 2.0.x, lets you start encrypting with key commitment, but still decrypt legacy ciphertexts without key commitment.
RequireEncryptRequireDecrypt encrypts and decrypts only with key commitment. This value is the default for version 2.0.x. Use this value when you are certain that all of your ciphertexts are encrypted with key commitment.
The commitment policy setting determines which algorithm suites you can use. Beginning in version 1.7.x, the AWS Encryption SDK supports algorithm suites with key commitment, both with and without signing. If you specify an algorithm suite that conflicts with your commitment policy, the AWS Encryption SDK returns an error.
For help setting your commitment policy, see Setting your commitment policy.
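As an illustration, in the Python implementation the commitment policy is set when constructing the client (a sketch; the enum values correspond to the policy names described above):

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy

# Step 1 of a migration: decrypt messages with or without key commitment,
# but do not yet encrypt with it.
client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.FORBID_ENCRYPT_ALLOW_DECRYPT
)

# Once every host can decrypt key-committed messages, switch to
# REQUIRE_ENCRYPT_ALLOW_DECRYPT, and finally REQUIRE_ENCRYPT_REQUIRE_DECRYPT.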
Digital signatures
To ensure the integrity of a digital message as it goes between systems, you can apply a digital signature to the message. Digital signatures are always asymmetric. You use your private key to create the signature, and append it to the original message. Your recipient uses a public key to verify that the message has not been modified since you signed it.
The AWS Encryption SDK encrypts your data using an authenticated encryption algorithm, AES-GCM, and the decryption process verifies the integrity and authenticity of an encrypted message without using a digital signature. But because AES-GCM uses symmetric keys, anyone who can decrypt the data key used to decrypt the ciphertext could also manually create a new encrypted ciphertext, causing a potential security concern. For instance, if you use an AWS KMS key as a wrapping key, this means that it is possible for a user with KMS Decrypt permissions to create encrypted ciphertexts without calling KMS Encrypt.
To avoid this issue, the AWS Encryption SDK supports adding an Elliptic Curve Digital Signature Algorithm (ECDSA) signature to the end of encrypted messages. When a signing algorithm suite is used, the AWS Encryption SDK generates a temporary private key and public key pair for each encrypted message. The AWS Encryption SDK stores the public key in the encryption context of the data key and discards the private key, and no one can create another signature that verifies with the public key. Because the algorithm binds the public key to the encrypted data key as additional authenticated data in the message header, a user who can only decrypt messages cannot alter the public key.
Signature verification adds a significant performance cost on decryption. If the users encrypting data and the users decrypting data are equally trusted, consider using an algorithm suite that does not include signing. | https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/concepts.html | 2022-01-29T05:35:07 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.aws.amazon.com |
...
- Log in to the Cloud Service Portal.
- Click Manage -> Data Connector.
- On the Destination Configuration tab, from the Create drop-down list, choose Splunk. The Create Splunk Destination Configuration screen appears.
- In the Name field, enter the name of the destination. Select a name that best describes the destination and can be distinguished from other destinations. The field length is 256 characters.
- In the Description field, enter the description of the destination. The field length is 256 characters.
- Use the State slider to enable or disable the destination.
- Index Name: Enter the name of the Splunk index. An index is a collection of directories and files that are located under
$SPLUNK_HOME/var/lib/splunk.
-.
... | https://docs.infoblox.com/pages/diffpages.action?originalId=54137834&pageId=58468004 | 2022-01-29T03:52:23 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.infoblox.com |
Interface IDataSourceConfiguration
This helps a data source get configured. It manages all the properties which the data source will want to look up, as well as the LookUp engine which will perform the token resolution
Namespace: ToSic.Eav.DataSources
Assembly: ToSic.Eav.DataSources.dll
Syntax
[PublicApi_Stable_ForUseInYourCode] public interface IDataSourceConfiguration
Properties
IsParsed
Tell us if the values have already been parsed or not. Ideal to check / avoid multiple calls to parse, which would just slow the system down.
Declaration
bool IsParsed { get; }
Property Value
Item[String]
Quick read / add for values which the DataSource will use.
Declaration
string this[string key] { get; set; }
Parameters
Property Value
LookUpEngine
The internal look up engine which manages value sources and will resolve the tokens
Declaration
ILookUpEngine LookUpEngine { get; }
Property Value
Values
The values (and keys) used in the data source which owns this Configuration
Declaration
IDictionary<string, string> Values { get; }
Property Value
Methods
Parse()
Parse the values and change them so placeholders in the values are now the resolved value. This can only be called once - then the placeholder are gone. In scenarios where multiple parses are required, use the Parse(IDictionary) overload.
Declaration
void Parse()
Parse(IDictionary<String, String>)
This will parse a dictionary of values and return the result. It's used to resolve the values list without actually changing the values on the configuration object, in scenarios where multiple parses will be required.
Declaration
IDictionary<string, string> Parse(IDictionary<string, string> values) | https://docs.2sxc.org/api/dot-net/ToSic.Eav.DataSources.IDataSourceConfiguration.html | 2022-01-29T04:57:21 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.2sxc.org |
Content Assets / Images / Documents
Content has Assets like images and documents.
Private Assets
By default, all Assets belong to the item and field they were uploaded on. This happens automatically and is managed by ADAM.
Shared Assets
You can also have shared assets which are stored in the Site content folder, but we don't recommend this as it makes clean-up very difficult.
Learn More About
- ADAM - Automatic Digital Asset Management
- Image Resizer
- Fields for Links / Files / Folders
- Asset Metadata
- Asset Permissions / Protected folders in Dnn / Oqtane
- App Assets like icons / logos used in an App
#todoc
- Add screenshots of drag-drop upload | https://docs.2sxc.org/basics/content/content-assets.html | 2022-01-29T05:12:28 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.2sxc.org |
gluster.gluster.gluster_heal_info – Gather information on self-heal or rebalance status
Note
This plugin is part of the gluster.gluster collection (version 1.0.2).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install gluster.gluster.
To use it in a playbook, specify: gluster.gluster.gluster_heal_info.
Synopsis
Gather facts about either self-heal or rebalance status.
This module was called gluster_heal_facts before Ansible 2.9, returning ansible_facts. Note that the gluster.gluster.gluster_heal_info module no longer returns ansible_facts!
Requirements
The below requirements are needed on the host that executes this module.
GlusterFS > 3.2
Examples
- name: Gather self-heal facts about all gluster hosts in the cluster gluster.gluster.gluster_heal_info: name: test_volume status_filter: self-heal register: self_heal_status - debug: var: self_heal_status - name: Gather rebalance facts about all gluster hosts in the cluster gluster.gluster.gluster_heal_info: name: test_volume status_filter: rebalance register: rebalance_status - debug: var: rebalance_status
Return Values
Common return values are documented here, the following are the fields unique to this module: | https://docs.ansible.com/ansible/latest/collections/gluster/gluster/gluster_heal_info_module.html | 2022-01-29T03:37:16 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.ansible.com |
ASPxGridLookup
The ASPxGridLookup control combines the functionality of the ASPxDropDownEdit and ASPxGridView controls, and allows users to select values from a drop-down grid.
You can use the settings listed in the following topic to customize the embedded ASPxGridView control: Member Table: GridView Specific Settings.
Editor-specific settings are listed in the following topic: Member Table: Lookup Editor Specific Settings.
Concepts
Limitations
- ASPxGridLookup does not work in the ASPxGridView’s EditItemTemplate or DataItemTemplate in batch edit mode.
- ASPxGridLookup does not support endless paging.
Feedback | https://docs.devexpress.com/AspNet/9073/components/grid-view/concepts/aspxgridlookup | 2022-01-29T04:35:51 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['/AspNet/images/gridlookup_control.png12682.png',
'GridLookup_Control.png'], dtype=object) ] | docs.devexpress.com |
Introduction
ag|knowledge is a REST API for mobile and web applications allowing easy access and integration of satellite remote sensing data & analytics into your agricultural application. The API provides access to field monitoring products for registered parcels (= fields or other plots of land). The data for each parcel will be immediately updated as soon as new measurements are available.
Typical image products are:
- Visible images (True colour images, RGB) of the parcel
- Live green vegetation indicator (Vitality images, NDVI)
- Vegetation variations maps
- A variety of specialised vegetation indices providing more specific information about the vegetation status such as: Chlorophyll, Nitrogen or vegetation water content
This comes with:
- Time series of the above listed products including statistics
- History of up 5 years
- Notification messages
- Preconfigured color maps
- Backend field object storage and management
- Geometry validation
- Open-Source Widgets for visualisation
For more details please see below or refer to the product pages: Basic Monitoring and Professional Monitoring.
This set of data is used by decision ready information products:
- Data Validation Package, making sure that your data will yield reliable results
- Crop Performance Monitoring allowing crop status and biomass monitoring and benchmarking
- Harvest Maturity monitoring supporting harvest planning and identifying the optimum quality of the produce
- Farm Management Zones for reducing inputs and increasing yield
- Production Monitoring providing fast production status updates on large field portfolios
- Agronomic Weather services to support your farming activities and yield prediction
- Regional Drought Monitoring for better risk assessment
For further informations regarding our product pricing please refer here.
We support and speed up the integration of the ag|knowledge API into your application by Open Source Widgets. We also develop custom analytics based on our analytics engine, integrating state-of-the-art remote sensing and statistical analysis with AI technology. This allows us to achieve a high reliability of products required e.g. for crop maps, anomaly detection, forest water stress monitoring or specific crop markers. For own modelling and dedicated data analysis purposes we also offer access to the underlying index and raw sensor values. Radar data is also available on request. The functionality is constantly being updated.
Please have a look at our demo web client to get a first impression. We have a bunch of different visualization components available for free that can help you make the most out of the data.
Ag|knowledge is free of charge for development purposes. For operational use ag|knowledge is available on an annual subscription basis. A small fee per hectare and year will be charged. To obtain a developer account please contact us directly at [email protected].
Let us know your area of interest and give us an indication about the domain you want to use the service for. With a developer account you will also receive direct support for getting started.
Ag|Knowledge focuses on agriculture, but can also be used for forest monitoring or other tasks such as environmental monitoring. If you have any specific needs please let us know.
Basic Monitoring Package
If you are new to remote sensing technology, but want to demonstrate to your customers the benefits of this technology, the Basic Monitoring Package is your starting point. It provides core functionality and services for easy integration of remote sensing in your application. It is easy to use and understand and can be integrated into your application very quickly. For accessing the REST API you just need a user key and the base URL. Using the API is simple: register your parcel and seconds later you can retrieve the information stack available for it. This stack is updated as soon as new sensor measurements are available. To further speed up the integration please have a look at our set of Open Source visualisation components on GitHub.
The sample syntax for parcel registration is:
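As a rough illustration only — the base URL, endpoint path, field names and key parameter below are assumptions, so consult the API reference for the exact registration syntax — a registration call might look like:

import requests

BASE_URL = "https://geocledian.com/agknow/api/v3"   # assumed base URL
API_KEY = "your-api-key"                            # from your developer account

# Hypothetical field names for illustration; see the API reference for the real ones.
payload = {
    "name": "My field",
    "crop": "wheat",
    "planting": "2021-04-01",
    "harvest": "2021-09-15",
    "geometry": "POLYGON((11.10 48.20, 11.11 48.20, 11.11 48.21, 11.10 48.21, 11.10 48.20))",
}

resp = requests.post(f"{BASE_URL}/parcels/", params={"key": API_KEY}, json=payload)
parcel = resp.json()   # response shape depends on the API version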
Once the parcel is registered a number of products will be available. These can be accessed by a simple get request:
GET<parcel-id>/vitality/<raster-id>
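Continuing the sketch above, such a GET request can be issued in Python as follows (the base URL and key query parameter remain assumptions; the path follows the pattern shown):

resp = requests.get(
    f"{BASE_URL}/{parcel_id}/vitality/{raster_id}",   # parcel_id and raster_id are placeholders
    params={"key": API_KEY},
)
resp.raise_for_status()
product = resp.json()   # metadata / links for the requested vitality raster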
The Basic Monitoring Package contains a number of products such as
- Visible (true color) Images
- Vitality Index (NDVI)
- Variations Map
- Vitality Time Series statistics
The Vitality Index statistics allow to visualise and interpret a time series of the parcel from seeding to harvest. The Vitality Image can also be downloaded as PNG or TIFF.
Use Cases: Basic Package for all applications
Professional Monitoring package
You understand Remote Sensing technology but don't want to have the hassle of handling gigabytes of data daily? You want to develop your own algorithms and models? In this case you should have a look at the Professional Monitoring Package. It provides a wide range of specialised vegetation indices out of the box. Access to the original reflectance data allows you to calculate your own indices or ask us to integrate them directly. We care about the data management at object level. A major concern is data quality. Outliers caused by clouds or cloud shadows are largely removed, as well as mixed pixels at parcel borders. The standard set of indices comprises:
- NDVI – Normalised Difference Vegetation Index based on reflectance measurements
- NDRE1 – Normalised Difference Red Edge Index 1
- NDRE2 – Normalised Difference Red Edge Index 2
- NDWI – Normalised Difference Water Index
- SAVI – Soil Adjusted Vegetation Index
- EVI2 – Enhanced Vegetation Index 2
- CI-RE – Chlorophyll Index – Red Edge
Other indices on request. Our reflectance product allows to access the raw reflectance values of the sensor for all spectral bands in GeoTIFF format.
Use cases: research, crop modelling, data analytics
Data Validation
All decision level products rely on correct input data. Faulty data may lead to wrong decisions or unjustified payments. To support our customers we provide a set of efficient data validation tools in our Data Validation Package. Parcels are the basic object Ag|Knowledge deals with. At registration time already the geometric properties of boundaries are checked and validated. The Land Use Homogeneity and Crop Cycle detection make sure that only one crop is cultivated in its boundaries and between the given sowing and harvest date. A generic crop type validation allows to identify easily parcels with wrongly declared crop types.
Use Cases: crop analytics, government subsidies control, insurance
Crop Performance Monitoring
Determining the actual crop performance and in particular biomass in the field is of particular importance for managing crops. If not scientifically measured it remains guesswork and these measurements mean costs. On large farms or cooperatives you need to identify fields or farms which need attention. Our Crop Performance Monitoring product provides objective and fast tools for determining and comparing biomass and crop performance across fields and seasons. Key features are actual biomass development by index, including a field map and time series view, phenology markers, growth rate and duration, cultivated area determination as well as biomass benchmarking and comparison to nominal growth pattern of a crop.
Use Cases: crop growth management, risk management
Harvest Maturity and Scheduling
A wrong timing of harvest can cause significant harvest losses in quantity and quality. They occur if harvest happens too early or delayed – both scenarios are undesirable. Also fields may not mature homogeneously. During harvest time the availability of machinery is limited and needs to be planned carefully. We developed a Harvest Maturity Monitoring and harvest scheduling service using spectral indices sensitive to crop senescence. The start and progress of maturation per field is shown in a timeline. Maturity field maps differentiates the maturation status within each field. This information is used to determine the best harvest order of fields for better machinery use. With an individual crop calibration it may predict the optimal harvest dates to maximize crop yields as maturity approaches.
Optimising the harvest date is vital for all farm managers and farmers. The best harvest schedule and timing may improve yield by 5-10%. It also helps in mitigating restrictions in manpower and machinery availability, logistics and storage and thus saving costs at the same time.
Use Cases: harvest planning, yield optimisation
Farm Management Zones
The Farm Management Zones is an important product for field management. It serves as basis for optimized soil sampling, fertilisation, irrigation or for planning other management actions. It allows to use fertilisers more efficiently, reduce run-offs, reduce number of soil samplings or improve irrigation.
No field is perfectly homogeneous. Managing and in particular fertilising each sector the same may mean wasting inputs on the one side while undersupplying crops on the other side. Our Farm Management Zones Map identifies the homogeneous zones within a field representing different soil characteristics and yield potentials. This is the basis for precision farming. By evaluation sensor images from previous seasons the zones with previously similar characteristics are derived. Zones with higher fertility are shown in green, followed by the medium zone colored in yellow, zones with lower fertility are displayed in red.
Use Cases: Nutrition advisory
Production Monitoring
Accurate monitoring of crop production based on large field portfolios currently requires continuous and consistent collection of crop status information in the field. This is costly and thus often not reliably available. Ag|Knowledge specifically supports the monitoring and management of large field portfolios as they occur in crop procurement, contract farming, at large enterprises or within cooperatives. The Production Monitoring package allows the effective and timely capturing of the crop production status. Users know the actual emerged area of the crop. During crop growth anomalies and low & high performing fields are identified and highlighted. The progress of crop maturation and the progress of harvests are timely and accurately monitored. Our information service makes production management much more effective, reducing costs, detects early potential risks early and helps to plan harvest and processing activities.
Use Cases: Crop procurement, contract farming, production planning, insurance risk management
Agronomic weather
Weather and soil are the two main influential factors in crop development. Free weather forecasts are helpful, but they are often not accurate enough and often do not provide some essential information. We offer high resolution Agronomic Weather at field level to support your farming activities, crop modelling and yield prediction. This not only includes a 7 day forecast but also a full weather history for any location. Parameters include temperatures (mean, min, max), precipitation, wind, humidity, dew point, soil moisture, soil temperature, evapotranspiration and solar radiation on a daily basis (hourly on request). Weather has a big impact on farming activities like spraying and harvest planning, but is also essential for irrigation planning, crop modelling, pest and disease risk assessment and yield prediction. Such decision ready services will be made available in the near future.
Use Cases: farm management, crop modelling, yield prediction, insurance
Regional Drought Monitoring
Droughts are an increasing problem and cause severe crop losses. Early understanding the timely and regional development of the situation is essential for proper risk management for insurances, government and buyers. Our standard Regional Crop Performance gives a clear indication about the actual vegetation status in agricultural areas in comparison to the long term average. This drought status classification follows an international developed and recognised classification also used by FAO and other international organisations. It is available for India and it is currently updated on a monthly basis. On request it can also be provided for other regions or with weekly updates.
Use Cases: Agro-Insurance risk assessment, governments, food processing, contract farming
Custom services
Geocledian supports you with customised services. Every crop and region is different, every user has specific needs. As we are specialists in analysis of remote sensing and geographic data we provide customised algorithms, crop specific calibration, validation or just in individual data analysis. Below some examples
Crop type verification
Integrated and pre-trained machine learning algorithms to identify or verify planted crop types and other labeled datasets. We support a range of different approaches internally, from untrained verification by similarity analysis to highly accurate crop identification. The latter achieves the best accuracy but requires custom training with reference data sets.
Use Cases are subsidies controls and IAC Systems.
Outlier & Anomaly detection
Our Outlier and Anomaly detection feature compares fields with their neighbours and indicates whether a crop field is deviating from the norm due to different management practices, weather conditions, pests, delayed sowing or drought.
Use Cases: Portfolio Management.
Forest Water Stress Map
Our Forest Water Stress Map detects and maps forest water stress and potential damage areas. It enables forest practitioners to locate areas of concern and react quickly. Different thresholds for broadleaf and coniferous forest are used and up to four categories have been derived. 2018 and 2019 were outstanding years with severe drought issues not only in agriculture but also in forestry in large parts of Germany. The year 2018 was one of the driest years of the century in Germany. The product has been validated and used by the Bayerische Staatsforsten.
Use Cases: Forest Management.
Other important information
Update Frequency
The underlying sensor data will be updated every few days mainly depending on the local weather conditions and the geographical latitude. Without clouds you typically get new images every 3-5 days. Generally it can be said the further North or South of the Equator the higher the revisit rate but the worse the cloud situation. More arid areas will be updated more frequently than humid areas.
Ground Resolution
Since 2016 the native resolution has increased by a factor of 9. One pixel measured by the best satellite sensors we use covers an area of approx. 100 m2 (10 m x 10 m). Thus for a parcel of the size of 1 ha approx. 100 measured points (pixels) are available. The minimum parcel size should currently be not below 0.2 ha or 0.5 acres. It has to be taken into account that pixels from the border of the parcel may contain significant signal components from neighbouring objects and thus are not representative. These pixels will be filtered in our products. On request we can also integrate commercial HHR or VHR data.
Geographical Areas Covered
The service works globally and is fully operational. We are setting up the areas you request within a few days if it is not already available on or data base. We are already routinely covering large areas in Europe and Africa and India.
Image Formats
Supported image formats are PNG for visual images and GEOTIFF for measured values. In general the imagery is geocoded to Google Mercator projection (EPSG: 3857) and can be directly used within various map frameworks such as Google Maps, Mapbox or OpenLayers. The API supports also other image formats and projections on the fly.
Attribute information will be delivered in JSON format, coordinate data as GEOJSON or WKT.
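As a rough illustration of consuming such deliveries, the sketch below reads a GeoTIFF and a JSON attribute file with common Python libraries. The file names and the attribute key are hypothetical; only the formats themselves (GeoTIFF in EPSG:3857, JSON, GeoJSON/WKT) come from the description above.

# Hypothetical file names; the formats match the delivery description above.
import json

import rasterio              # pip install rasterio
from shapely import wkt      # pip install shapely

# Measured values: GeoTIFF geocoded to Web Mercator (EPSG:3857).
with rasterio.open("parcel_ndvi.tif") as src:
    print(src.crs)           # expected: EPSG:3857
    values = src.read(1)     # first band as a NumPy array

# Attribute information: JSON; coordinates: GeoJSON or WKT.
with open("parcel_attributes.json") as fh:
    attributes = json.load(fh)

geometry = wkt.loads("POLYGON ((11.10 48.10, 11.20 48.10, 11.20 48.20, 11.10 48.10))")
print(attributes.get("parcel_id"), geometry.area)   # "parcel_id" is an assumed key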
Remote Sensing Basics
Here you can find more remote sensing basics.
This web service is provided to you by geo|cledian. All rights reserved ©2021 geo|cledian. | https://docs.geocledian.com/product-overview/ | 2022-01-29T04:26:54 | CC-MAIN-2022-05 | 1642320299927.25 | [array(['https://docs.geocledian.com/wp-content/uploads/2021/09/Product_packages_overview-1024x629.png',
Canceled Bookings – Overview
Your Canceled Bookings page will provide you with a list of all bookings that have been Cancelled by either your customer or staff. From the Cancelled Bookings page you can either edit the Cancellation Price for that booking, or you can restore the booking to an Active Booking.
Restore a Booking
- Go to your Cancelled Bookings Page
- Select the blue restore icon to the right of the booking
- A pop up will appear asking “Are you sure you want to Restore this Booking?”, select Restore.
- Go to your Active Bookings page to find your booking
Edit your Cancellation Price for a Booking
- Go to your Cancelled Bookings Page
- Select the Green Edit Icon
- Enter your new Cancellation Price
- Select Save | https://docs.launch27.com/knwbase/canceled-bookings/ | 2022-01-29T04:45:04 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.launch27.com |
Characters¶
One of Game Creator's main systems is the Character. It represents any interactive playable or non-playable entity and comes packed with a collection of flexible and independent features that can be used to enhance and speed up the development process.
Main Features¶
A Character is defined by a Character component that can be attached to any game object. It is organized into multiple collapsible sections, each of which controls a very specific feature of this system.
Some of the most noticeable features are:
- Player Input: An input system that allows you to change how the Player is controlled at any given moment, including directional, point & click, tank controls, and more.
- Rotation Modes: Controls how and when the character rotates. For example facing the camera's direction, its movement direction or strafing around a world position.
- World Navigation: Manages how the character moves around a scene. It can use a Character Controller, a Navigation Mesh Agent, or plug-in a custom controller.
- Gestures & States: An animation system built on top of Unity's Mecanim which simplifies how to play animations on characters.
- Inverse Kinematics: An extendable IK system with feet-to-ground alignment or realistic body orientation when looking at points of interest.
- Footstep Sounds: A very easy to use footstep system that mixes different sounds based on the multiple layers of the ground's materials and textures.
- Dynamic Ragdoll: Without having to configure anything, the Ragdoll system allows a character to seamlessly transition to (and from) a ragdoll state.
- Breathing & Twitching: Procedural animations that can be tweaked at runtime which change a character's perceived exertion and breathing rate and amount.
Player Character¶
The Player character uses the same Character component as any other non-playable character but with the difference that it has the Is Player checkbox enabled. A Character with this option enabled processes the user's input based on its Player section.
Shortcut Player
Note that when creating a Player game object from the Hierarchy menu or the Game Creator Toolbar, it ticks the Is Player checkbox by default. | https://docs.gamecreator.io/gamecreator/characters/ | 2022-01-29T05:09:56 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.gamecreator.io |
URL:
Creates a new projection of an existing table. A projection represents a subset of the columns (potentially including derived columns) of a table.
For projection details and examples, see Projections. For limitations, see Projection Limitations and Cautions.
Window functions, which can perform operations like moving averages, are available through this endpoint as well as /get/records/bycolumn.
A projection can be created with a different shard key than the source table. By specifying shard_key, the projection will be sharded according to the specified columns, regardless of how the source table is sharded. The source table can even be unsharded or replicated.
If input parameter table_name is empty, selection is performed against a single-row virtual table. This can be useful in executing temporal (NOW()), identity (USER()), or constant-based functions (GEODIST(-77.11, 38.88, -71.06, 42.36)).
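As a rough sketch only, a request to this endpoint could be issued over HTTP as shown below. The host, port, credentials, and exact field names are assumptions for illustration; the parameter tables that follow remain the authoritative description of the request and response schema.

# Rough sketch: host, port, credentials and field names are assumptions;
# consult the input/output parameter tables for the authoritative schema.
import requests

ENDPOINT = "http://kinetica-host:9191/create/projection"   # assumed address

payload = {
    "table_name": "trip_data",                 # source table; empty string targets the single-row virtual table
    "projection_name": "trip_summary",         # name of the projection to create
    "column_names": [                          # subset of (possibly derived) columns
        "vendor_id",
        "AVG(fare_amount) OVER (PARTITION BY vendor_id) AS avg_vendor_fare",  # window function
    ],
    "options": {
        "shard_key": "vendor_id",              # shard the projection independently of the source table
    },
}

response = requests.post(ENDPOINT, json=payload, auth=("user", "password"))
response.raise_for_status()
print(response.json())                         # standard wrapper with status information and the result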
Input Parameter Description
Output Parameter Description
The GPUdb server embeds the endpoint response inside a standard response structure which contains status information and the actual response to the query. Here is a description of the various fields of the wrapper: | https://docs.kinetica.com/7.1/azure/api/rest/create_projection_rest/ | 2022-01-29T04:37:59 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.kinetica.com |
Enable ADEM in Panorama Managed Prisma Access for Mobile Users
Learn how to enable Autonomous DEM for your Panorama:
- Generate the certificate the agent will use to authenticate to the Autonomous DEM service.
- From Panorama, select Panorama > Cloud Services > Configuration > Service Setup.
- In the GlobalProtect App Log Collection section under Service Operators, click Generate Certificate for GlobalProtect App Collection and Autonomous DEM. A confirmation message indicates that the certificate was successfully generated in the Mobile_User_Template Shared location.
- Configure the portal to push the DEM settings to the GlobalProtect agent.
- Select Network > GlobalProtect > Portals > GlobalProtect Portal.
- To create an agent configuration to push to your DEM users only, in the Mobile_User_Template, select the GlobalProtect Portal Configuration.
- On the Agent tab, select the DEFAULT agent configuration, Clone it, and give it a new Name.
- To enable the portal to push the DEM authentication certificate you just generated to the end user systems, on the Authentication tab set Client Certificate to Local and then select the globalprotect_app_log_cert.
- To ensure that this agent configuration is only pushed to agents running on supported operating systems, on the Config Selection Criteria > User/User Group tab, click Add in the OS column and select Mac and/or Windows only.
- If you only want to deploy the DEM configuration to a subset of your Mac and/or Windows users, in the User/User Group column Add the specific users or user groups to push this configuration to.
- To enable Autonomous DEM functionality for the selected users, on the App tab, enable Autonomous DEM endpoint agent for Prisma Access (Windows & Mac Only). You can select whether to let users enable and disable ADEM by selecting Install and user can enable/disable agent from GlobalProtect or Install and user cannot enable/disable agent from GlobalProtect.
- Also on the App tab, set Enable Autonomous DEM and GlobalProtect App Log Collection for Troubleshooting to Yes to enable the GlobalProtect app to use the certificate you just created to authenticate to the DEM service.
- Starting in GlobalProtect version 5.2.8, you have the option to suppress receiving all Autonomous DEM update notifications (pertaining to installing, uninstalling and upgrading an agent) on the endpoints. To suppress the notifications, set the Display Autonomous DEM Update Notifications to No. By default, the Display Autonomous DEM Update Notifications is set to Yes.
- Click OK to save the new app configuration settings and click OK again to save the portal configuration.
- Make sure you have security policy rules required to allow the GlobalProtect app to connect to the ADEM service and run the synthetic tests.
- In Panorama, go to Objects > Addresses. Click Add and add the following ADEM Service Destination FQDNs.
-
- Create an address group to contain the addresses above by going to Objects > Address Groups, clicking Add, and providing a name for the address group.
- Add the address group you just created into the security policy. Go to Policies > Security > Pre Rules. Click Add and add the address group to the policy.
- To enable the GlobalProtect users to connect to and register with the ADEM service and to run the synthetic application tests, make sure there is a security policy rule that allows traffic to HTTPS-based applications.
- To enable the app to run network monitoring tests, you must have a security policy rule to allow ICMP and TCP traffic.
- (Optional) If you plan to run synthetic tests that use HTTP, you must also have a security policy rule to allow the GlobalProtect users to access applications over HTTP.
- Commit all your changes to Panorama and push the configuration changes to Prisma Access.
- Click Commit > Commit to Panorama.
- Click Commit > Push to Devices and click Edit Selections.
- On the Prisma Access tab, make sure Prisma Access for users is selected and then click OK.
- Click Push.
Actions in the Workbench
Select various Actions in the Workbench based upon the object and control type selected. Actions are allowed on HTML, .NET, and Java Swing/AWT controls.
Actions allowed on HTML controls
The following Actions are allowed on HTML Controls:
- Click
- DoubleClick
- RightClick
- SetText
- AppendText
- GetProperty
- GetVisibility
- GetTotalItems
- GetSelectedIndex
- GetSelectedText
- SelectItembyText
- SelectItembyIndex
- GetChildrenName
- GetChildrenValue
Get Text, SetText and AppendText
GetText, SetText and AppendText actions are available when the selected object types are Text/Text Box, Password, Windows Control, or Custom Objects.
The properties and relevant actions in the Logic are controlled by the Object properties configuration set for the selected Screen.
GetProperty
Use the GetProperty action when you want to search the objects based on their properties during play time.
When you select action as GetProperty, you will be able to select properties names such as Object ID, Name, Value, Class, Type, Index, Description, State, IsVisible, IsProtected etc based on the object control selected.
Get Visibility
Use the GetVisibility action to build a logic based on an object's visibility during play time. This screen area could be a custom object or an object with Play Type Image. The GetVisibility action returns the visibility status as True or False.
To add a GetVisibility action in the Logic Editor, the object should be configured for Play Type Image.
Actions allowed on Window Controls
The following Actions are allowed on Window Controls:
- Click
- DoubleClick
- RightClick
- LeftClick
- SetText
- AppendText
- GetProperty
- GetChildrenName
- GetChildrenValue
SetText for Window Control: Use the action type 'SetText' for Window Controls. Select the entire window and specify the action type.
OCR screens
For screens that are captured using OCR technology, when you select play type as Image for custom objects, you are allowed actions - SetText, GetText, LeftClick, RightClick, DoubleClick, and GetVisibility. | https://docs.automationanywhere.com/zh-TW/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/metabots/getting-started/selecting-actions-in-the-logic-editor.html | 2022-01-29T04:20:11 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.automationanywhere.com |
This guide shows you how to install CloudBees CI on Azure Kubernetes Service (AKS). To perform the installation, you should be knowledgeable in AKS, Helm, Kubernetes, and NGINX Ingress.
CloudBees CI is a fully-featured, cloud native CD solution that can be hosted on-premise or in the public cloud. It provides a shared, centrally managed, self-service experience for all your development teams.
Before you install CloudBees CI on modern cloud platforms on AKS, be sure to:
Review the Learn and Plan stages from Onboarding for CloudBees CI on modern cloud platforms.
Review the pre-installation requirements. | https://docs.cloudbees.com/docs/cloudbees-ci/2.289.3.2/aks-install-guide/ | 2022-01-29T03:34:54 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.cloudbees.com |
SMTP port 25 is blocked on all Droplets for some new accounts to prevent spam and other abuses of our platform. To send mail on these accounts, use a dedicated email deliverability platform (such as Sendgrid and Mailgun), which are better at handling deliverability factors like IP reputation.
Even on accounts where SMTP is available, we recommend against running your own mail server in favor of using a dedicated email deliverability platform. | https://docs.digitalocean.com/support/why-is-smtp-blocked/ | 2022-01-29T04:16:22 | CC-MAIN-2022-05 | 1642320299927.25 | [] | docs.digitalocean.com |
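As a minimal sketch, mail can be handed to such a platform over the SMTP submission port instead of port 25. The SendGrid host name and the "apikey" login convention below are assumptions to verify against your provider's documentation.

# Minimal sketch: relay mail through a deliverability provider on port 587
# rather than sending directly over port 25. The host name and "apikey"
# login convention are assumptions -- check your provider's documentation.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "Hello from a Droplet"
msg.set_content("Delivered via a third-party platform, not port 25.")

with smtplib.SMTP("smtp.sendgrid.net", 587) as smtp:   # submission port, not 25
    smtp.starttls()
    smtp.login("apikey", "YOUR_API_KEY")               # hypothetical credentials
    smtp.send_message(msg)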
CommandBinding.Command Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets the ICommand associated with this CommandBinding.
public: property System::Windows::Input::ICommand ^ Command { System::Windows::Input::ICommand ^ get(); void set(System::Windows::Input::ICommand ^ value); };
[System.Windows.Localizability(System.Windows.LocalizationCategory.NeverLocalize)] public System.Windows.Input.ICommand Command { get; set; }
[<System.Windows.Localizability(System.Windows.LocalizationCategory.NeverLocalize)>] member this.Command : System.Windows.Input.ICommand with get, set
Public Property Command As ICommand
Property Value
The command associated with this binding.
- Attributes: LocalizabilityAttribute
Examples
The following example creates a CommandBinding that maps an ExecutedRoutedEventHandler and a CanExecuteRoutedEventArgs handler to the Open command.
The example also defines the CanExecuteRoutedEventHandler and the ExecutedRoutedEventHandler that implement the command logic for this binding.
struct in UnityEngine
Representation of 2D vectors and points.
This structure is used in some places to represent 2D positions and vectors (e.g. texture coordinates in a Mesh or texture offsets in Material). In the majority of other cases a Vector3 is used.
[ aws . directconnect ]
Creates a hosted connection on the specified interconnect or a link aggregation group (LAG) of interconnects.
Allocates a VLAN number and a specified amount of capacity (bandwidth) for use by a hosted connection on the specified interconnect or LAG of interconnects. AWS polices the hosted connection for the specified capacity and the AWS Direct Connect Partner must also police the hosted connection for the specified capacity.
Note
Intended for use by AWS Direct Connect Partners only.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
allocate-hosted-connection --connection-id <value> --owner-account <value> --bandwidth <value> --connection-name <value> --vlan <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--connection-id (string)
The ID of the interconnect or LAG.
--owner-account (string)
The ID of the AWS account of the customer for the connection.

--bandwidth (string)

The bandwidth of the connection.

--connection-name (string)

The name of the hosted connection.
--vlan (integer)
The dedicated VLAN provisioned to the hosted connection.

The following example creates a hosted connection on the specified interconnect.
Command:
aws directconnect allocate-hosted-connection --bandwidth 500Mbps --connection-name mydcinterconnect --owner-account 123456789012 --connection-id <value> --vlan <value>
- unknown : The state of the connection is not available.
hasLogicalRedundancy -> (string)
Indicates whether the connection supports a secondary BGP peer in the same address family (IPv4/IPv6). | https://docs.aws.amazon.com/cli/latest/reference/directconnect/allocate-hosted-connection.html | 2019-05-19T14:54:07 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.aws.amazon.com |
For an example, see feature_id_element.
- The default setting for the feature_id_element configuration is used when you do not override it.
- For information on generating a choropleth map, see Choropleth maps in the Dashboards and Visualizations manual.
Search commands and geospatial lookups
After you save a geospatial lookup stanza and restart Splunk Enterprise, you can interact with the new geospatial lookup through search.
Steps:
- From the Search and Reporting app, use the inputlookup command to search on the contents of your geospatial lookup.
| inputlookup geo_us_states
- Check to make sure that your featureIds are in the lookup with the featureId column.
- Click on the Visualization tab.
- Click on Cluster Map and select Choropleth Map for your visualization.
Google Cloud Platform (GCP) can provide memory and CPU for your ThoughtSpot instance.
Your database capacity will determine the number of instances you’ll need and the instance network/storage requirements. In addition, you can go with multiple virtual machines (VMs) based on your dataset size.
You will need to setup the appropriate Firewall Rules in your GCP environment for your ThoughtSpot deployment. See the GCP Firewall Rules article for configuration details.
You can find more information about appropriate network policies for your ThoughtSpot deployment in the network ports reference.
Data Serialization¶
What is data serialization?¶
Data serialization is the concept of converting structured data into a format that allows it to be shared or stored in such a way that its original structure can be recovered. In some cases, the secondary intention of data serialization is to minimize the size of the serialized data, which in turn minimizes disk space or bandwidth requirements.
Pickle¶
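The pickle module in Python's standard library turns most Python objects into a byte stream and restores them later with their original structure. A minimal sketch:

# Minimal sketch of round-tripping a Python object with the standard library.
import pickle

grades = {"alice": [88, 92], "bob": [75, 81]}

# Serialize to a byte string that preserves the original structure.
blob = pickle.dumps(grades)

# Later (or in another process) recover an equivalent object.
restored = pickle.loads(blob)
assert restored == grades

Note that pickle is Python-specific and should not be used to load data from untrusted sources.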
Configuring REX-Ray
Tweak this, turn that, peek behind the curtain...
Overview
This page reviews how to configure REX-Ray to suit any environment, beginning with the most common use cases, exploring recommended guidelines, and finally, delving into the details of more advanced settings.
Quick Configuration
Upon installing REX-Ray, create a configuration file by hand or using the REX-Ray Configuration Generator, and then start REX-Ray as a service:
$ rexray start -c rexray.yml
Basic Configuration
This section outlines the two most common configuration scenarios encountered by REX-Ray's users:
- REX-Ray as a stand-alone CLI tool
- REX-Ray as a service.
Stand-alone CLI Mode
It is possible to use REX-Ray directly from the command line without any configuration files. The following example uses REX-Ray to list the storage volumes available to a Linux VM hosted by VirtualBox:
note
The examples below assume that the VirtualBox web server is running on the host OS with authentication disabled and accessible to the guest OS. For more information please refer to the VirtualBox storage driver documentation.
$ rexray volume --service virtualbox ls
ID                                    Name             Status    Size
1b819454-a280-4cff-aff5-141f4e8fd154  libStorage.vmdk  attached  16
In addition to listing volumes, the REX-Ray CLI can be used to create and remove them as well as manage volume snapshots. For an end-to-end example of volume creation, see Hello REX-Ray.
Embedded Server Mode
When operating as a stand-alone CLI, REX-Ray actually loads an embedded libStorage server for the duration of the CLI process and is accessible by only the process that hosts it. This is known as Embedded Server Mode.
While commonly used when executing one-off commands with REX-Ray as a stand-alone CLI tool, Embedded Server Mode can be utilized when configuring REX-Ray to advertise a static libStorage server as well. The following qualifications must be met for Embedded Server Mode to be activated:
- The property libstorage.host must not be defined via configuration file, environment variable, or CLI flag.
- If the libstorage.host property is defined then the property libstorage.embedded can be set to true to explicitly activate Embedded Server Mode.
- If the libstorage.host property is set and libstorage.embedded is set to true, Embedded Server Mode will still only activate if the address specified by libstorage.host (whether a UNIX socket or TCP port) is not currently in use.
Auto Service Mode
The Stand-alone CLI Mode example also uses the --service flag. This flag's argument sets the libstorage.service property, which has a special meaning inside of REX-Ray -- it serves to enable Auto Service Mode.
Services represent unique libStorage endpoints that are available to libStorage clients. Each service is associated with a storage driver. Thus Auto Service Mode minimizes configuration for simple environments.
The value of the libstorage.service property is used to create a default service configured with a storage driver. This special mode is only activated if all of the following conditions are met:
- The libstorage.service property is set via:
  - The CLI flags -s|--service or --libstorageService
  - The environment variable LIBSTORAGE_SERVICE
  - The configuration file property libstorage.service
- The libstorage.host property is not set. This property can be set via:
  - The CLI flags -h|--host or --libstorageHost
  - The environment variable LIBSTORAGE_HOST
  - The configuration file property libstorage.host
- The configuration property libstorage.server.services must not be set. This property is only configurable via a configuration file.
Because the above example met the auto service mode conditions, REX-Ray created a service named virtualbox configured to use the virtualbox driver. This service runs on the libStorage server embedded inside of REX-Ray and is accessible only by the executing CLI process for the duration of said process.
When used in this manner, the service name must also be a valid driver name.
Service Mode
REX-Ray can also run as a persistent service that advertises both Docker Volume Plug-in and libStorage endpoints.
Docker Volume Plug-in
This section refers to the only operational mode that REX-Ray supported in versions 0.3.3 and prior. A UNIX socket is created by REX-Ray that serves as a Docker Volume Plugin compliant API endpoint. Docker is able to leverage this endpoint to deliver on-demand, persistent storage to containers.
The following is a simple example of a configuration file that should be located at /etc/rexray/config.yml. This file can be used to configure the same options that were specified in the previous CLI example. Please see the advanced section for a complete list of configuration options.

libstorage:
  service: virtualbox
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
Once the configuration file is in place, rexray service start can be used to start the service. Sometimes it is also useful to add -l debug to enable more verbose logging. Additionally, it's also occasionally beneficial to start the service in the foreground with the -f flag.

$ rexray start
Starting REX-Ray...SUCCESS!

The REX-Ray daemon is now running at PID 15724. To shutdown the daemon execute the following command:

sudo /usr/bin/rexray stop

At this point requests can now be made to the default Docker Volume Plugin and Volume Driver advertised by the UNIX socket rexray at /run/docker/plugins/rexray.sock. More details on configuring the Docker Volume Plug-in are available on the Schedulers page.
libStorage Server and Client
In addition to Embedded Server Mode, REX-Ray can also expose the libStorage API statically. This enables REX-Ray to serve a libStorage server and perform only a storage abstraction role.
If the desire is to establish a centralized REX-Ray server that is called on from remote REX-Ray instances then the following example will be useful. The first configuration is for running REX-Ray purely as a libStorage server. The second defines how one would use one or more REX-Ray instances in a libStorage client role.
The following examples require multiple systems in order to fulfill these different roles. The Hello REX-Ray section on the front page has an end-to-end illustration of this use case that leverages Vagrant to provide and configure the necessary systems.
libStorage Server
The example below illustrates the necessary settings for configuring REX-Ray as a libStorage server:
rexray:
  modules:
    default-docker:
      disabled: true
libstorage:
  host: tcp://127.0.0.1:7979
  embedded: true
  client:
    type: controller
  server:
    endpoints:
      public:
        address: tcp://:7979
    services:
      virtualbox:
        driver: virtualbox
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
In the above sample, the default Docker module is disabled. This means that while the REX-Ray service would be running, it would not be available to Docker on that host.
The libstorage section defines the settings that configure the libStorage server.

Start the REX-Ray service with rexray service start.
libStorage Client
On a separate OS instance running REX-Ray, the following command can be used to list the instance's available VirtualBox storage volumes:
$ rexray volume -h tcp://REXRAY_SERVER:7979 -s virtualbox
An alternative to the above CLI flags is to add them as persistent settings to the /etc/rexray/config.yml configuration file on this instance:

libstorage:
  host: tcp://REXRAY_SERVER:7979
  service: virtualbox
Now the above command can be simplified further:
$ rexray volume
Once more, the REX-Ray service can be started with rexray service start and the REX-Ray Docker Volume Plug-in endpoint will utilize the remote libStorage server as its method for communicating with VirtualBox.
Again, a complete end-to-end Vagrant environment for the above example is available at Hello REX-Ray.
Example sans Modules
Let's review the major sections of the configuration file:
rexray:
  logLevel: warn
libstorage:
  service: virtualbox
  integration:
    volume:
      operations:
        create:
          default:
            size: 1
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
Settings occur in three primary areas:
rexray
libstorage
virtualbox
The rexray section contains all properties specific to REX-Ray. The YAML property path rexray.logLevel defines the log level for REX-Ray and its child components. All of the rexray properties are documented below.

Next, the libstorage section defines the service with which REX-Ray will communicate via the property libstorage.service. This property also enables the Auto Service Mode discussed above since this configuration example does not define a host or services section. For all information related to libStorage and its properties, please refer to the libStorage documentation.

Finally, the virtualbox section configures the VirtualBox driver selected or loaded by REX-Ray, as indicated via the libstorage.service property. The libStorage Storage Drivers page has information about the configuration details of each driver, including VirtualBox.
Default TLS
REX-Ray now uses TLS by default to secure libStorage client-to-controller communications.
Server Certificates
When REX-Ray is installed a self-signed certificate and private key are generated for use by the libStorage controller and saved as /etc/rexray/tls/rexray.crt and /etc/rexray/tls/rexray.key.
Peer Verification
Clients can use the fingerprint of the controller's certificate to validate the peer connection, similar to the way SSH works. In fact, peer fingerprints are stored in the file $HOME/.rexray/known_hosts that has the same format as the SSH file $HOME/.ssh/known_hosts.
When a REX-Ray client is executed it may now prompt a user to verify a remote peer. For example:
$ rexray device ls
Rejecting connection to unknown host 127.0.0.1.
sha fingerprint presented: sha256:6389ca7c87f308e7/73c4.
Do you want to save host to known_hosts file? (yes/no): yes
Permanently added host 127.0.0.1 to known_hosts file $HOME/.rexray/known_hosts
It is safe to retry your last rexray command.
Please note that the known_hosts file is stored in the home directory of the account executing the REX-Ray process. Once the fingerprint has been added the command may be retried.
Disabling Default TLS
The default TLS behavior can be disabled by setting libstorage.client.tls to false.
Logging
The -l|--logLevel option or rexray.logLevel configuration key can be set to any of the following values to increase or decrease the verbosity of the information logged to the console or the REX-Ray log file (defaults to /var/log/rexray/rexray.log).
- panic
- fatal
- error
- warn
- info
- debug
Troubleshooting
The command rexray env can be used to print out the runtime interpretation of the environment, including configured properties, in order to help diagnose configuration issues.

$ rexray env | grep DEFAULT | sort -r
REXRAY_MODULES_DEFAULT-DOCKER_TYPE=docker
REXRAY_MODULES_DEFAULT-DOCKER_SPEC=/etc/docker/plugins/rexray.spec
REXRAY_MODULES_DEFAULT-DOCKER_LIBSTORAGE_SERVICE=vfs
REXRAY_MODULES_DEFAULT-DOCKER_HOST=unix:///run/docker/plugins/rexray.sock
REXRAY_MODULES_DEFAULT-DOCKER_DISABLED=false
REXRAY_MODULES_DEFAULT-DOCKER_DESC=The default docker module.
REXRAY_MODULES_DEFAULT-ADMIN_TYPE=admin
REXRAY_MODULES_DEFAULT-ADMIN_HOST=unix:///var/run/rexray/server.sock
REXRAY_MODULES_DEFAULT-ADMIN_DISABLED=false
REXRAY_MODULES_DEFAULT-ADMIN_DESC=The default admin module.
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_TYPE=
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_SIZE=16
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_IOPS=
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_FSTYPE=ext4
LIBSTORAGE_INTEGRATION_VOLUME_OPERATIONS_CREATE_DEFAULT_AVAILABILITYZONE=
Advanced Configuration
The following sections detail every last aspect of how REX-Ray works and can be configured.
Example with Modules
Modules enable a single REX-Ray instance to present multiple personalities or volume endpoints, serving hosts that require access to multiple storage platforms.
Defining Modules
The following example demonstrates a basic configuration that presents two modules using the VirtualBox driver: default-docker and vb2-module.
rexray:
  logLevel: warn
  modules:
    default-docker:
      type: docker
      desc: The default docker module.
      host: unix:///run/docker/plugins/vb1.sock
      libstorage:
        service: virtualbox
        integration:
          volume:
            operations:
              create:
                default:
                  size: 1
      virtualbox:
        volumePath: $HOME/VirtualBox/Volumes
    vb2-module:
      type: docker
      desc: The second docker module.
      host: unix:///run/docker/plugins/vb2.sock
      libstorage:
        service: virtualbox
        integration:
          volume:
            operations:
              create:
                default:
                  size: 1
      virtualbox:
        volumePath: $HOME/VirtualBox/Volumes
libstorage:
  service: virtualbox
Whereas the previous example did not use modules and the example above does, they both begin by defining the root section rexray. Unlike the previous example, however, the majority of the libstorage section and all of the virtualbox section are no longer at the root. Instead the section rexray.modules is defined. The modules key in the rexray section is where all modules are configured. Each key that is a child of modules represents the name of a module.
note
Please note that while most of the libstorage section has been relocated as a child of each module, the libstorage.service property is still defined at the root to activate Auto Service Mode as a quick-start method of properly configuring the embedded libStorage server.
The above example defines two modules:
default-docker

This is a special module, and it's always defined, even if not explicitly listed. In the previous example without modules, the libstorage and virtualbox sections at the root actually informed the configuration of the implicit default-docker module. In this example the explicit declaration of the default-docker module enables several of its properties to be overridden and given desired values. The Advanced Configuration section has more information on Default Modules.

vb2-module

This is a new, custom module configured almost identically to the default-docker module with the exception of a unique host address as defined by the module's host key.

Notice that both modules share many of the same properties and values. In fact, when defining both modules, the top-level libstorage and virtualbox sections were simply copied into each module as sub-sections. This is perfectly valid as any configuration path that begins from the root of the REX-Ray configuration file can be duplicated beginning as a child of a module definition. This allows global settings to be overridden just for specific modules.

As noted, each module shares identical values with the exception of the module's name and host. The host is the address used by Docker to communicate with REX-Ray. The base name of the socket file specified in the address can be used with docker --volume-driver=. With the current example the value of the --volume-driver parameter would be either vb1 or vb2.
Modules and Inherited Properties
There is also another way to write the previous example while reducing the number of repeated, identical properties shared by two modules.
rexray:
  logLevel: warn
  modules:
    default-docker:
      host: unix:///run/docker/plugins/vb1.sock
      libstorage:
        integration:
          volume:
            operations:
              create:
                default:
                  size: 1
    vb2:
      type: docker
libstorage:
  service: virtualbox
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
The above example may look strikingly different than the previous one, but it's actually the same with just a few tweaks.
While there are still two modules defined, the second one has been renamed from vb2-module to vb2. The change is a more succinct way to represent the same intent, and it also provides a nice side-effect. If the host key is omitted from a Docker module, the value for the host key is automatically generated using the module's name. Therefore since there is no host key for the vb2 module, the value will be unix:///run/docker/plugins/vb2.sock.

Additionally, the virtualbox sections from each module definition have been removed and now only a single, global virtualbox section is present at the root. When accessing properties, a module will first attempt to access a property defined in the context of the module, but if that fails the property lookup will resolve against globally defined keys as well.

Finally, the libstorage section has been completely removed from the vb2 module whereas it still remains in the default-docker section. Volume creation requests without an explicit size value sent to the default-docker module will result in 1GB volumes whereas the same request sent to the vb2 module will result in 16GB volumes (since 16GB is the default value for the libstorage.integration.volume.operations.create.default.size property).
Defining Service Endpoints
Multiple libStorage services can be defined in order to leverage several different combinations of storage provider drivers and their respective configurations. For example, a ScaleIO service and a VirtualBox service can be defined side by side under the libstorage.server.services section, each naming its driver and any driver-specific settings.
Once the services have been defined, it is then up to the modules to specify which service to use. Notice how the default-docker module specifies the virtualbox service as its libstorage.service. Any requests to the Docker Volume Plug-in endpoint /run/docker/plugins/virtualbox.sock will utilize the libStorage service virtualbox on the backend.
Defining a libStorage Server
The following example is very similar to the previous one, but in this instance there is a centralized REX-Ray server which services requests from many REX-Ray clients.
rexray:
  modules:
    default-docker:
      disabled: true
libstorage:
  host: tcp://127.0.0.1:7979
  embedded: true
  client:
    type: controller
  server:
    endpoints:
      public:
        address: tcp://:7979
One of the larger differences between the above example and the previous one is the removal of the module definitions. Docker does not communicate with the central REX-Ray server directly; instead Docker interacts with the REX-Ray services running on the clients via their Docker Volume Endpoints. The client REX-Ray instances then send all storage-related requests to the central REX-Ray server.
Additionally, the above sample configuration introduces a few new properties:
Defining a libStorage Client
The client configuration is still rather simple. As mentioned in the previous section, the rexray.modules configuration occurs here. This enables the Docker engines running on remote instances to communicate with local REX-Ray exposed Docker Volume endpoints that then handle the storage-related requests via the centralized REX-Ray server. On each client the libstorage section simply points at that server, for example host: tcp://REXRAY_SERVER:7979.
Data Directories
The first time REX-Ray is executed it will create several directories if they do not already exist:
/etc/rexray
/etc/rexray/tls
/var/lib/rexray
/var/log/rexray
/var/run/rexray
The above directories will contain configuration files, logs, PID files, and mounted volumes.
The location of these directories can be influenced in two ways. The first way is via the environment variable REXRAY_HOME. When REXRAY_HOME is defined, the normal, final token of the above paths is removed. Thus when REXRAY_HOME is defined as /opt/rexray the above directory paths would be:
/opt/rexray/etc
/opt/rexray/etc/tls
/opt/rexray/var/lib
/opt/rexray/var/log
/opt/rexray/var/run
It's also possible to override any one of the above directory paths manually using the following environment variables:
REXRAY_HOME_ETC
REXRAY_HOME_ETC_TLS
REXRAY_HOME_LIB
REXRAY_HOME_LOG
REXRAY_HOME_RUN
Thus if REXRAY_HOME was set to /opt/rexray and REXRAY_HOME_ETC was set to /etc/rexray the above paths would be:
/etc/rexray
/etc/rexray/tls
/opt/rexray/var/lib
/opt/rexray/var/log
/opt/rexray/var/run
Configuration Methods
There are three ways to configure REX-Ray:
- Configuration files
- Environment variables
- Command line options
There are two REX-Ray configuration files - global and user:
/etc/rexray/config.yml
$HOME/.rexray/config.yml
Please note that while the user configuration file is located inside the user's home directory, this is the directory of the user that starts REX-Ray. And if REX-Ray is being started as a service, then sudo is likely being used, which means that $HOME/.rexray/config.yml won't point to your home directory, but rather /root/.rexray/config.yml.
The next section has an example configuration with the default configuration.
Configuration Properties
The section Configuration Methods mentions there are three ways to configure REX-Ray: config files, environment variables, and the command line. However, this section will illuminate the relationship between the names of the configuration file properties, environment variables, and CLI flags.
Here is a sample REX-Ray configuration:
rexray:
  logLevel: warn
libstorage:
  service: virtualbox
virtualbox:
  volumePath: $HOME/VirtualBox/Volumes
The properties rexray.logLevel, libstorage.service, and virtualbox.volumePath are strings. These values can also be set via environment variables or command line interface (CLI) flags, but to do so requires knowing the names of the environment variables or CLI flags to use. Luckily those are very easy to figure out just by knowing the property names.

All properties that might appear in the REX-Ray configuration file fall under some type of heading. For example, take the rexray.logLevel property from the sample above: it can equally be set with the environment variable REXRAY_LOGLEVEL or the CLI flag -l|--logLevel.

- See the verbose help for exact global flags using rexray --help -v as they may be chopped to minimize verbosity.
The following table illustrates the transformations: | https://rexray.readthedocs.io/en/stable/user-guide/config/ | 2019-05-19T14:32:31 | CC-MAIN-2019-22 | 1558232254889.43 | [] | rexray.readthedocs.io |
[ Tcllib Table Of Contents | Tcllib Index ]
yencode(n) 1.1.2 "Text encoding & decoding binary data"
Table Of Contents
Synopsis
- package require Tcl 8.2
- package require yencode ?1.1.2?
Description
This package provides a Tcl-only implementation of the yEnc file encoding. This is a recently introduced method of encoding binary files for transmission through Usenet. This encoding packs binary data into a format that requires an 8-bit clean transmission layer but that escapes characters special to the NNTP posting protocols.
Bugs, Ideas, Feedback
This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category base64 of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation. | http://docs.activestate.com/activetcl/8.5/tcl/tcllib/base64/yencode.html | 2019-05-19T15:29:13 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.activestate.com |
You can assign a business entity to an asset ( host, port, storage, switch, virtual machine, qtree, share, volume, or internal volume) without having associated the business entity to an application; however, business entities are assigned automatically to an asset if that asset is associated with an application related to a business entity.
While you can assign business entities directly to assets, it is recommended that you assign applications to assets and then assign business entities to assets. | http://docs.netapp.com/oci-73/topic/com.netapp.doc.oci-acg/GUID-CFFC8AE9-7195-4910-BFBC-FB3BB222A7FD.html | 2019-05-19T15:06:53 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netapp.com |
You can configure GemFire XD to write immediately to disk and you may be able to modify your operating system behavior to perform buffer flushes more frequently.
Typically, GemFire XD writes disk data into the operating system's disk buffers and the operating system periodically flushes the buffers to disk. Increasing the frequency of writes to disk decreases the likelihood of data loss from application or machine crashes, but it impacts performance.
You can have GemFire XD flush the disk buffers on every disk write. Do this by setting the system property gemfire.syncWrites to true at the command line when you start your GemFire XD member. You can modify this setting only when you start a member. When this property is set, GemFire XD uses a Java RandomAccessFile with the flags "rwd", which causes every file update to be written synchronously to the storage device. This only guarantees your data if your disk stores are on a local device. See the Java documentation for java.io.RandomAccessFile.
Configure this property when you start a GemFire XD member:
gfxd server start -J-Dgemfire.syncWrites=true | http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/disk_storage/managing_disk_buffer_flushes.html | 2019-05-19T15:44:14 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
Renamed "Error Code Analysis" dashboard
The "Error Analysis" dashboard has been renamed to "Error Code Analysis." The dashboard includes API calls with HTTP status codes of 4xx and 5xx. (EDGEUI-738)
TPS data on proxy dashboards
When an analytics dashboard received a 500 error, the management UI displayed "Report timed out" regardless of the error. To provide better troubleshooting capabilities, the UI now displays the actual error. (EDGEUI-753)
Inactive developer indicators in the UI
Bugs fixed
The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users. | https://docs.apigee.com/release/notes/161026-ui-apigee-edge-public-cloud-release-notes | 2020-10-20T05:17:52 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.apigee.com |
Health Overview Dashboard
In the Detection screen, the Detection score focuses on the incoming activity of Armor services. You can use these scores to determine if Armor is receiving the necessary data to perform useful security checks for your environment.
For Armor Anywhere, these services are:
Widget
Description
Detection Score
This widget calculates a score based on:
Score range
Health status
Fair
Events Analyzed
An event is any log that passes an Armor agent.
Malware Protection, File Integrity Monitoring, and Log and Event Management contain a subagent.
This widget displays data from the previous month.
Services Reporting
Detection Score Trend
The Detection Events table displays information for the past seven days. This table will update every day.
Column
The Highest Risk Assets table displays virtual machines that contain the installed Armor Anywhere agent that are considered highly vulnerable. This table is based on the findings of the weekly vulnerability scanning report.
The Top Vulnerabilities table displays the most critical vulnerabilities found in your environment. This table is based on the findings of the weekly vulnerability scanning report.
Vulnerability Name
This column displays the name of the vulnerability.
Affected Assets
This column displays the virtual machines (host / asset) affected by the vulnerability.
If you are unfamiliar with the name of a virtual machine, you can use the Virtual Machines screen to search.
Date Discovered
This column displays the date the vulnerability was discovered.
CVSS
This column displays the CVSS, a score attached to a vulnerability to determine the vulnerability's severity.
Severity
This column displays the severity of the vulnerability.
There are four severity types, based on the vulnerability's CVSS:
There is an additional severity type called Info. Although Info is listed as a severity type, in reality, Info simply displays activity information for corresponding plugins from third-party vendors. | https://docs.armor.com/pages/diffpages.action?pageId=54535104&originalId=54532509 | 2020-10-20T05:37:34 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.armor.com |
Fill out the 3 fields and click the Transfer button to complete transaction.
The Multisig page will have 4 fields to fill in:
Proposer - The account you are proposing the multisig from.
Proposal Name - Name of the proposal and can be 2-12 characters long.
Requested Approvals - List which accounts can approve this transaction. These are accounts listed as authorizations.
Actor - Account name
Permission - Owner or Active
Authorization
Actor
Permission
You will receive a Success message along with a transaction hash to check the completed action, as well as a link to check the proposal. You can share your MSIG with this link provided.
The link to the proposal will show the proposal in detail, with the Proposer, Proposal Name, Approval Status and the current Requested Approvals by the account(s) and their individual status.
The Show Transaction button will show the action in code.
The Approve Transaction button lets anyone who is listed as a Requested Approval approve the transaction after review.
Execute Transaction
If you are logged into an account that is listed as one of the Request Approvals, find the proposal and click Approve Transaction.
Once the Approval Status reaches the minimum threshold, you can execute the transaction. Press Execute Transaction.
You will receive a Success message with a transaction hash attached. You can also look up the account and verify the transaction to confirm that the transfer has been executed.
You will need to open up Scatter, go to Settings, Firewall, and remove eosio.msig approve as a Blacklisted Action. | https://docs.bloks.io/wallet/msig | 2020-10-20T06:53:25 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.bloks.io |
Web Data Extraction Part II
The web data extraction can also take place in an actual IE window if you have the "Extract data from Web Page" action open while you move your mouse pointer to the page of interest.
Should you click in the webpage, then the "Live Web Helper- Extract Data From Web Page" window will pop up. In this Window you will be able to preview the extracted data.
Extracting a List:
Let's say that you wish to extract the title for all available results in a webpage.
Having the "Extract data from Web Page" action open, hover your mouse on the page (or click on a blank area). Then right click on the first result and extract its Text as in the screenshot below:
Do the same for the second result and a list of all the items' text will be automatically extracted. Click on the "Advanced Settings" icon to review the CSS selector, which you can modify to make it even more efficient.
1. As you can see while extracting a list, we have the Base Selector and the CSS selector. The Base selector is the root element in the HTML code, under which the items of the list are listed. This means that the extraction starts from the ".....div:eq(1) > ul > li"
2. For each list item from the list "...div:eq(1) > ul > li" it then gets the "h3 > a" element.
3. The attribute that you are extracting is "Own Text" and it can be changed to "Title", "Href", "SourceLink", "Exists" or any other attribute that is available in the HTML code of the page for this element.
4. You also have the option to apply Regular Expressions on the extracted text, in order to get just a part of it.
If you change the selector by hand, you can then click on the "Recalculate now" button to see the extraction's result.
Extracting a Table:
In order to extract more than one piece of info for each result you would have to extract a table.
Let's say that we want to extract the Title of the product, the link behind it and the price.
For the first result we right click on the title, extract its "Text", then right click again to extract the "Href" and finally we right click on the price element to extract its "Text".
We move on to the second result/product to do the same and the table is automagically created in the extraction preview window.
For the table, in the same notion as extracting the list, we have the Base CSS Selector, which is the root element in the HTML code under which the data of each result/product exist. This means that the extraction starts from the ".....div:eq(1) > ul > li" and then for each of the items we extract the following (a stand-alone Python sketch of this pattern follows the list):
h3 > a Attribute "Own Text"
h3 > a Attribute "Href"
ul:eq(0) > li:eq(0) > span Attribute "Own Text
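The same base-selector / per-item pattern can be expressed outside WinAutomation with Python and BeautifulSoup, purely as an illustration. The URL, HTML structure and selectors below are hypothetical, and standard CSS selectors are used instead of the jQuery-style ":eq()" syntax shown above.

# Illustration only: hypothetical URL and HTML structure; standard CSS selectors
# replace the jQuery-style ":eq()" syntax used by WinAutomation.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/results").text
soup = BeautifulSoup(html, "html.parser")

rows = []
for item in soup.select("div.results > ul > li"):      # base selector: one item per result
    link = item.select_one("h3 > a")                    # per-item selector
    price = item.select_one("ul > li > span")
    rows.append({
        "title": link.get_text(strip=True),             # "Own Text"
        "href": link["href"],                            # "Href"
        "price": price.get_text(strip=True) if price else None,
    })

print(rows)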
Attributes to extract:
In the Attribute field of the "Advanced Settings" of the "Extraction Preview" window, other than the attributes that are listed in the drop down list, you can specify any other attribute that the element has. For example if an element in the HTML code of the page is:
<li class="..." id="...">...</li>
Then in the attribute dropdown list you can write "class" if you want to extract its class, "id" if you want to extract its id...and so on.
NOTE
You can extract the plain html code of the element -and all its children elements- should you write "outerhtml"
You can extract the plain html code of all the children elements of the element should you write "innerhtml"
This is very helpful if you want to extract a piece of info that resides in the html for this element by applying some Regular Expressions on the extracted code. | https://docs.winautomation.com/en/web-data-extraction-part-ii.html | 2020-10-20T05:41:41 | CC-MAIN-2020-45 | 1603107869933.16 | [array(['image/15f1820132e76b.png', 'Extract1.png'], dtype=object)
Vectorization¶
To find parallel instructions the tracer must provide enough information about memory load/store operations. They must be adjacent in memory. The requirement for that is that they use the same index variable and that the offset can be expressed as a linear or affine combination.
Command line flags:
- --jit vec=1: turns on the vectorization for marked jitdrivers (e.g. those in the NumPyPy module).
- --jit vec_all=1: turns on the vectorization for any jit driver. See parameters for the filtering heuristics of traces.
Features¶
Currently the following operations can be vectorized if the trace contains parallel operations:
- float32/float64: add, subtract, multiply, divide, negate, absolute
- int8/int16/int32/int64 arithmetic: add, subtract, multiply, negate, absolute
- int8/int16/int32/int64 logical: and, or, xor
Constant & Variable Expansion¶
Packed arithmetic operations expand scalar variables or constants into vector registers.
Guard Strengthening¶
Unrolled guards are strengthened on an arithmetical level (see GuardStrengthenOpt). The resulting vector trace will only have one guard that checks the index.
Calculations on the index variable that are redundant (because of the merged load/store instructions) are not removed. The backend removes these instructions while assembling the trace.
In addition a simple heuristic (enabled by --jit vec_all=1) tries to remove array bound checks for application level loops. It tries to identify the array bound checks and adds a transitive guard at the top of the loop:
label(...)
...
guard(i < n) # index guard
...
guard(i < len(a))
a = load(..., i, ...)
...
jump(...)

# becomes

guard(n < len(a))
label(...)
guard(i < n) # index guard
...
a = load(..., i, ...)
...
jump(...)
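For orientation only, a hypothetical application-level loop of the kind this heuristic targets is sketched below; the traces above are emitted by the tracing JIT rather than written by hand.

# Hypothetical application-level loop of the kind the vec_all heuristic targets:
# a single index variable, adjacent loads/stores, and a per-access bound check
# that the transitive guard(n < len(a)) can hoist out of the loop body.
def add_arrays(a, b, c, n):
    i = 0
    while i < n:              # index guard: i < n
        a[i] = b[i] + c[i]    # each access implies a bound check such as i < len(a)
        i += 1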
Future Work and Limitations¶
- The only SIMD instruction architecture currently supported is SSE4.1
- Packed mul for int8,int64 (see PMUL). It would be possible to use PCLMULQDQ. Only supported by some CPUs and must be checked in the cpuid.
- Loops that convert types from int(8|16|32|64) to int(8|16) are not supported in the current SSE4.1 assembler implementation. The opcode needed spans over multiple instructions. In terms of performance there might be little to no advantage to using SIMD instructions for these conversions.
- For a guard that checks true/false on a vector integer register, it would be handy to have 2 xmm registers (one filled with zero bits and the other with every bit set to one). This cuts down 2 instructions for guard checking, trading for higher register pressure.
- prod, sum are only supported by 64 bit data types
- The isomorphic function prevents the following cases from being combined into a pair: 1) getarrayitem_gc, getarrayitem_gc_pure 2) int_add(v,1), int_sub(v,-1)
Description
CouchDB is an open source NoSQL database that stores your data with JSON documents, which you can access via HTTP. It allows you to index, combine, and transform your documents with JavaScript.
First steps with the Bitnami CouchDB Stack
The default access port for CouchDB's HTTP server is 5984. HTTPS is disabled and remote access is disabled by default.
- Learn how to enable SSL for HTTPS on CouchDB.
- Learn how to connect to CouchDB from a different machine.
- Learn how to connect to Fauxton, CouchDB's administration panel.

How to connect to CouchDB from a different machine?
For security reasons, the CouchDB port in this solution cannot be accessed over a public IP address. To connect to CouchDB from a different machine, you must open port 5984 for remote access. Once the firewall rule is in place, perform these additional steps:
Stop your CouchDB server and edit the /opt/bitnami/couchdb/etc/local.ini file. Change the bind_address from 127.0.0.1 to 0.0.0.0:
[chttpd] port = 5984 bind_address = 0.0.0.0 ... [httpd] bind_address = 0.0.0.0 ...
Restart your server for the changes to take effect.
$ sudo /opt/bitnami/ctlscript.sh restart couchdb
You should now be able to connect to the CouchDB server from a different machine using the server's public IP address and receive a welcome message. This is shown in the example command and output below:
$ curl {"couchdb":"Welcome","version":"2.1.1","features":["scheduler"],"vendor":{"name":"The Apache Software Foundation"}}
How to change the CouchDB admin password?
You can modify the CouchDB admin password with these steps:
Stop your server.
$ sudo /opt/bitnami/ctlscript.sh stop couchdb
Edit your /opt/bitnami/couchdb/etc/local.ini configuration file, editing the respective admin password to what you want, in the [admin] section. For example:
admin = my_new_password
CouchDB will take care of hashing your password, so the only thing you need is to start your server again.
$ sudo /opt/bitnami/ctlscript.sh restart couchdb
How to start/stop the CouchDB server?
To start the CouchDB server, access your machine and execute the following:
$ sudo /opt/bitnami/ctlscript.sh start couchdb
To stop the CouchDB server execute the following:
$ sudo /opt/bitnami/ctlscript.sh stop couchdb
How to enable SSL (for HTTPS) on CouchDB?
You can enable SSL on CouchDB using these steps:
Stop CouchDB.
$ sudo /opt/bitnami/ctlscript.sh stop couchdb
Edit your /opt/bitnami/couchdb/etc/local.ini file and edit the [daemons] section so that the line activating the httpsd daemon is uncommented.
[daemons] httpsd = {couch_httpd, start_link, [https]}
Within the same file, make sure your [ssl] section includes at least the following lines uncommented:
[ssl] port = 6984.
Finally, start your CouchDB server again and you will be able to access CouchDB over SSL at the selected port eg. at.
$ sudo /opt/bitnami/ctlscript.sh restart couchdb
How to connect to Fauxton, the CouchDB management panel?
By default, CouchDB is configured to listen on the local interface only, so an SSH tunnel is needed to access it. Use port 5984 (the default CouchDB port) for both ends of your SSH tunnel. Follow these instructions to remotely connect safely and reliably.
Once your SSH tunnel is running, you can connect to Fauxton, CouchDB's management panel, by browsing to. To log in, use the username and password obtained from the server dashboard. Remember to replace LOCAL-PORT with the local port number specified during the SSH tunnel creation.
How can I run a command in the Bitnami CouchDB Stack?
Log in to the server console as the bitnami user and run the command as usual. The required environment is automatically loaded for the bitnami user.
How to create a full backup of CouchDB?
Backup
The Bitnami CouchDB CouchDB errors?
To debug CouchDB's errors, use the log files at /opt/bitnami/couchdb/var/log/couchdb/. | https://docs.bitnami.com/oracle/infrastructure/couchdb/ | 2018-07-15T20:50:37 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.bitnami.com |
Virtual machines can be configured like physical computers and can perform the same tasks as physical computers. Virtual machines also support special features that physical computers do not support.
You can use the VMware Host Client to create, register, and manage virtual machines, and to conduct daily administrative and troubleshooting tasks. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.html.hostclient.doc/GUID-4ECD8CE7-6362-4FC3-A2DA-CD3D68882306.html | 2018-07-15T21:28:37 | CC-MAIN-2018-30 | 1531676588972.37 | [] | docs.vmware.com |
Device Details page
Overview
This page displays information about the last backup details, profile settings, DLP details, device location, and device platform details.
Summary
The following table lists the fields in the Summary area.
Profile Settings
The following table lists the fields in the Profile Settings area.
DLP Summary
The DLP Summary area displays the location of the device on the world map if device tracing is activated.
Summary tab
Device Details
The following table lists the fields in the Device Details area.
Backup Summary
The following table lists the fields in the Backup Summary area.
DLP Details
The following table lists the fields in the DLP Details area.
Map
The Map area displays the last known device location if device trace is set to ON for the device.
Data Activity Trend
Note: Available only to inSync ElitePlus customers.
The Data Activity Trend is a visual representation of updates in each snapshot. Over a set of backup jobs, inSync gathers information on how the files inside configured folders on a device have changed. Every time the inSync Client executes a backup job and creates a snapshot, inSync Cloud uses a set of parameters, and compares the updates in the current snapshot with the previous snapshots. With every backup job, inSync monitors:
- The number of files deleted
- The number of files added
- The number of files modified
- The number of files encrypted
Using the parameters above, the Data Activity Trend area renders a chart that indicates the updates in each snapshot. If inSync detects anomalous behavior, the snapshot is flagged in the graph. Such an anomalous behavior can indicate a potential threat in the device such as a ransom ware attack.
You can click each bar in the Data Activity Trend chart to launch the Restore Data window. When the Restore Data dialog is launched, under the list of snapshots, the anomalous snapshot is highlighted with a warning symbol.
Note: If the end-user has enabled the privacy settings, you cannot run admin restore on the device. When you click on a bar in this case, you see an error message in place of the Restore Data window.
When you select the snapshot, a banner is displayed at the top of the Restore Data dialog that indicates unusual data activity detected in this snapshot.
You can use the Download Logs button in the Restore Data dialog to download a .csv log that describes the changes in files in this snapshot compared to the last snapshot, and can help you identify files that led inSync to flag this snapshot. You can either manually analyze the .csv log or use a third party tool for analysis that can detect potential threats. The .csv log contains the following fields that can help you in analyzing it:
The Download Logs button is available for snapshots created after the feature is enabled. The Download Logs option is not available for the snapshots created before the feature is enabled.
Note: Logs are not available for the first snapshot after unusual data activity is enabled. If you click Download Logs for the first snapshot, you see the Page Not Found (HTTP 404) error message.
From the Restore Data dialog, you can also select a snapshot that inSync did not flag for unusual data activity, and restore data on your device using that snapshot. | https://docs.druva.com/001_inSync_Cloud/Cloud/020_Backup_and_Restore/030_inSync_Master_Management_Console_User_Interface/Device_Details_page | 2018-07-15T21:22:01 | CC-MAIN-2018-30 | 1531676588972.37 | [array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/cross.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20432/Device_summarynew.png?revision=1&size=bestfit&width=394&height=186',
None], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20431/Device_profilenew.png?revision=1&size=bestfit&width=401&height=186',
None], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20436/Manage_Devices_DLP_Summary.jpg?revision=1&size=bestfit&width=350&height=181',
'Manage Devices DLP Summary.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20434/Manage_Devices_Device_Details.jpg?revision=1&size=bestfit&width=350&height=190',
'Manage_Devices_Device_Details.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20433/Manage_Devices_Backup_Summary.jpg?revision=1&size=bestfit&width=350&height=247',
'Manage Devices Backup Summary.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20435/Manage_Devices_DLP_Details.jpg?revision=1&size=bestfit&width=350&height=195',
'Manage Devices DLP Details.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/20437/Manage_Devices_Map.jpg?revision=1&size=bestfit&width=350&height=205',
'Manage Devices Map.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/31241/data-activity-trend.png?revision=1&size=bestfit&width=1162&height=354',
None], dtype=object)
array(['https://docs.druva.com/@api/deki/files/34234/snapshots.png?revision=1&size=bestfit&width=550&height=191',
None], dtype=object)
array(['https://docs.druva.com/@api/deki/files/31243/banner-snapshot.png?revision=1&size=bestfit&width=598&height=131',
None], dtype=object) ] | docs.druva.com |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.