Create a dynamic asset listing using keywords
This tutorial creates an asset listing that can have dynamic parameters passed to it when nested into other assets to function differently based on those different parameters.
When creating asset listings, you often need to re-purpose them for multiple scenarios. In other words, you may need to tweak various small settings within the asset listing while keeping most of the other configurations the same.
Instead of creating multiple asset listings with slightly different settings and presentation layouts, it’s sometimes better to just create one and make it dynamic in those areas.
You can set dynamic parameters when nesting in the asset listing into a template such as a paint layout or Design. However, they can also be automatic based on various factors.
This tutorial teaches you how to configure the following dynamic parts of an asset listing:
- Root node
The root node from which to list assets.
- Assets per page
The maximum number of assets to list per page.
- Listing heading
The heading above the list.
- Listing format
The presentation layout of each asset listed.
To make things dynamic, we’ll use several keywords and modifiers and break each of these down to help you learn and understand a bit more about the power of keywords.
Before you start
This tutorial assumes you are already familiar with the Admin Interface and understand the basic concepts of Keyword Replacements, Components, and asset listings.
Create the assets
For our example, we’ll create a basic asset listing that lists some standard pages from different folders. The first step is to create assets used within the dynamic listing.
Create one top-level folder called "Top Folder".
Inside the Top Folder folder, create two sub-folders called "Folder A" and "Folder B".
Inside each of the sub-folders, create three Standard page assets each and give them names in the following format:
"Page A-1"
"Page A-2"
"Page A-3"
Next to the "Top Folder" folder, create an asset listing named "Dynamic Listing" that lists these assets.
Create another standard page asset named "Nester Page" at the same level; this page will nest in the asset listing and pass the dynamic parameters to it.
You should now have an asset structure similar to the image below:
Configure asset listing settings
Configure the asset listing and make the root node and assets per page settings dynamic.
Go to the Details screen of the asset listing and click Acquire Locks.
Configure the following areas:
- Asset Types to List
Set this to Pages and select the Inherit option.
- Root Nodes
Set this to our Top Folder asset ID, which in our example is "9528".
- Assets Per Page
This is where the first piece of dynamic configuration appears. In this field, enter %nested_get_max^empty:10%. This keyword replacement gets the value from a GET parameter called max, and the ^empty:10 modifier specifies a default of 10 for when that value is empty.
- Dynamic Parameter
This controls our dynamic root node functionality. In the Parameter drop-down, select "Replacement root node" for the listing and leave the Source as GET Variable Name.
- If dynamic root not found
Set this to "Return empty result". If a dynamic root node value can not be found, this setting limits unnecessary page loads by preventing all assets from being listed from the fixed root node.
Save the screen, scroll down to the Dynamic Parameters section again, and in the GET Variable Name field that is now available, enter "root".
Now that we have our dynamic asset listing configured with a couple of dynamic settings, we can work on the presentation layout and make that more dynamic as well.
Configure the asset listing page contents layout
Make the listing heading dynamic, which appears just before the list.
Load the Edit Contents screen of the asset listing’s page contents bodycopy.
Add the dynamic heading in the form of a keyword and a modifier and also the standard keyword for the asset listing content itself:
<h2>%nested_get_heading^empty:{nested_get_root^as_asset:asset_name}^empty:Page List%</h2> <ul> %asset_listing% </ul>
Let’s break down that keyword to see what it’s doing:
nested_get_heading
This gets the value from a GET parameter called "heading" which can be passed to the asset listing when nested into another asset. We’ll show where and how this is done later in the tutorial.
^empty:
Print something if the preceding value is empty (in our case, if nested_get_heading is empty).
{nested_get_root^as_asset:asset_name}
This is what gets printed if the preceding value is empty. In our case, it’s a whole other keyword, which we can break down further:
nested_get_root
Get the value from a GET parameter called "root". We expect this to be an asset ID value.
^as_asset:
Allows us to print any attribute or metadata value from the asset ID value sourced from the preceding nested_get_root part.
asset_name
Print the Asset Name of the asset ID sourced from
nested_get_root.
^empty:Page List
We finish off with another fallback and default value if both nested_get_heading and nested_get_root are empty.
In other words: try to print the value from the GET parameter "heading". If that value is empty, try to print the Asset Name of the asset ID sourced from a GET parameter called "root". If that is also empty, just print "Page List".
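As a quick illustration of that fallback chain, here are a few hypothetical parameter combinations and the heading each would produce (the asset ID and asset names follow this tutorial's examples; the heading text is arbitrary):
heading=Our Pages → "Our Pages"
heading empty, root=9528 → "Top Folder" (the asset name of asset 9528)
heading and root both empty → "Page List"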
Configure the asset listing default format layout
Make the listing format dynamic, which is the layout format for each asset produced by the asset listing.
Go to the Edit Contents screen of the asset listing’s Default Format Bodycopy.
Make sure you set Component to Code.
Add the following keyword functionality to the Code component:
a basic keyword for printing the Asset Name wrapped in a link to itself,
some conditional keywords to print the published date, created date, or nothing.
<li>
  %asset_name_linked%
  %begin_nested_get_date^eq:published% (1)
    (Published: %asset_published_readabledate%)
  %else_begin_nested_get_date^eq:created% (2)
    (Created: %asset_created_readabledate%)
  %end_nested% (3)
</li>
We now have a dynamic layout generated for each asset that gets listed.
Using the asset listing
Our dynamic asset listing is complete, and we can now start embedding it into other assets. Here are two common ways of doing this:
Use the nested content component method
In our Nester Page that we created earlier, go to the Edit Contents screen and click Edit.
Change the Component to "Nested content".
Choose the Dynamic asset listing as the asset to nest within.
Click on the "Toggle Additional Options" link to display the Additional GET Parameters area. This area allows us to specify our settings to pass to the asset listing.
Click on Add a new variable four times to add four variable Name and Value fields.
Enter values for all four of our dynamic settings, for example:
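A hedged example of the four name/value pairs, using the asset IDs from this tutorial (your own IDs and heading text will differ):
root = 9528 (the asset ID of "Top Folder", or of "Folder A"/"Folder B" to list a single sub-folder)
max = 4
heading = Our Pages
date = published (or created, or leave it empty to print no date)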
Check that your Nested content component looks something like this example:
Save the screen and preview the Nester Page on the frontend. Your page should look similar to this example:
Use the global keyword method
This method uses a Global Keyword together with the
^with_get: keyword modifier.
In the same Nester Page, create another Component but with a type of Code.
In the code block, enter the following keyword:
%globals_asset_contents_raw:9568^with_get:root=9528&max=4&date=published%
globals_asset_contents_raw:9568
Grab the Asset Contents, without any Paint Layout applied, of asset 9568 (our Dynamic asset listing).
^with_get:
This modifier lets us pass GET parameters to asset 9568 when we request its Asset Contents with the globals_asset_contents_raw keyword.
root=9528&max=4&date=published
These are the GET parameters that we are passing (make sure you replace 9528 with the asset ID of your Top Folder). As you can see, it’s in the same format as if you were requesting a page with a URL query string.
Save the screen and preview the Nester Page again. Another list displays with some different content (including the name of the root node asset as the heading fallback):
Set up a custom repository location
You can set up a custom default location for Python and R code package repositories. This is especially useful for air-gapped clusters that are isolated from the PIP and CRAN repositories on the public internet.
Python PIP repository
Custom PIP repositories can be set as the default for all engines at the site or project level. The environment variables can be set at the Project or Site level; values set at the Site level can be overridden at the Project level.
- Set the environment variables at the appropriate level.
- For Site level, go to:
- For Project level, go to:
- To set a new default URL for the PIP index, enter:
PIP_INDEX_URL = <new url>
PIP_EXTRA_INDEX_URL = <new url>
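For example, pointing engines at a hypothetical internal mirror (the URLs here are only placeholders for your own repository):
PIP_INDEX_URL = https://pypi.mirror.example.com/simple
PIP_EXTRA_INDEX_URL = https://pypi.mirror.example.com/extra/simple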
CRAN repository
Custom CRAN repositories must be set in a session or as part of a custom engine. To set a new default URL for a CRAN repository, set the following in the /home/cdsw/.Rprofile file:
options(repos=structure(c(CRAN="<mirror URL>"))) | https://docs.cloudera.com/machine-learning/1.3.1/engines/topics/ml-pip-cran-custom-repo.html | 2021-10-16T11:47:57 | CC-MAIN-2021-43 | 1634323584567.81 | [] | docs.cloudera.com |
These Cluster Administration topics cover the day-to-day tasks for managing your Azure Red Hat OpenShift cluster and other advanced configuration topics.
As a Customer cluster administrator of an Azure Red Hat OpenShift cluster, your account has increased permissions and access to all user-created projects. If you are new to the role, check out the Getting Started topic on Administering an Azure Red Hat OpenShift Cluster for a quick overview.
When your account has the customer-admin-cluster authorization role bound to it, you are automatically bound to the customer-admin-project role for any new projects that are created by users in the cluster.
You can perform actions associated with a set of verbs (e.g., create) to operate on a set of resource names (e.g., templates). To view the details of these roles and their sets of verbs and resources, run the following:
$ oc describe clusterrole/customer-admin-cluster
$ oc describe clusterrole/customer-admin-project
The verb names do not necessarily all map directly to oc commands, but rather equate more generally to the types of CLI operations you can perform. For example, having the list verb means that you can display a list of all objects of a given resource name (e.g., using oc get), while get means that you can display the details of a specific object if you know its name (e.g., using oc describe).
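As a concrete sketch of that distinction, using the templates resource mentioned above (substitute any resource and object name you have access to):

$ oc get templates
$ oc describe template <template-name>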
At the project level, an administrator of an Azure Red Hat OpenShift cluster can perform all actions that a project administrator can perform. In addition, the Azure Red Hat OpenShift administrator can set resource quotas and limit ranges for the project.
New in version 2014.7.0.
The mod_aggregate system was added in the 2014.7.0 release of Salt and allows for runtime modification of the executing state data. Simply put, it allows for the data used by Salt's state system to be changed on the fly at runtime, kind of like a configuration management JIT compiler or a runtime import system. All in all, it makes Salt much more dynamic.
The best example is the pkg state. One of the major requests in Salt has long been adding the ability to install all packages defined at the same time. The mod_aggregate system makes this a reality. While executing Salt's state system, when a pkg state is reached, the mod_aggregate function in the state module is called. For pkg, this function scans all of the other states that are slated to run, picks up the references to name and pkgs, and then adds them to pkgs in the first state. The result is a single call to yum, apt-get, pacman, etc. as part of the first package install.
Note
Since this option changes the basic behavior of the state runtime, after it is enabled states should be executed using test=True to ensure that the desired behavior is preserved.
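For example, a hedged dry run of a state apply from the master (the target and SLS name here are placeholders):

salt '*' state.apply mystates test=True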
The first way to enable aggregation is with a configuration option in either the master or minion configuration files. Salt will invoke mod_aggregate the first time it encounters a state module that has aggregate support.
If this option is set in the master config it will apply to all state runs on all minions, if set in the minion config it will only apply to said minion.
Enable for all states:
state_aggregate: True
Enable for only specific state modules:
state_aggregate:
  - pkg
The second way to enable aggregation is with the state-level aggregate keyword. In this configuration, Salt will invoke the mod_aggregate function the first time it encounters this keyword. Any additional occurrences of the keyword will be ignored as the aggregation has already taken place.
The following example will trigger mod_aggregate when the lamp_stack state is processed, resulting in a single call to the underlying package manager.
lamp_stack:
  pkg.installed:
    - pkgs:
      - php
      - mysql-client
    - aggregate: True

memcached:
  pkg.installed:
    - name: memcached
Adding a mod_aggregate routine to an existing state module only requires adding an additional function to the state module called mod_aggregate.
The mod_aggregate function just needs to accept three parameters and return the low data to use. Since mod_aggregate is working on the state runtime level it does need to manipulate low data.
The three parameters are low, chunks, and running. The low option is the low data for the state execution which is about to be called. The chunks is the list of all of the low data dictionaries which are being executed by the runtime and the running dictionary is the return data from all of the state executions which have already be executed.
This example, simplified from the pkg state, shows how to create mod_aggregate functions:
def mod_aggregate(low, chunks, running):
    """
    The mod_aggregate function which looks up all packages in the
    available low chunks and merges them into a single pkgs ref in the
    present low data
    """
    pkgs = []
    # What functions should we aggregate?
    agg_enabled = [
        "installed",
        "latest",
        "removed",
        "purged",
    ]
    # The `low` data is just a dict with the state, function (fun) and
    # arguments passed in from the sls
    if low.get("fun") not in agg_enabled:
        return low
    # Now look into what other things are set to execute
    for chunk in chunks:
        # The state runtime uses "tags" to track completed jobs, it may
        # look familiar with the _|-
        tag = __utils__["state.gen_tag"](chunk)
        if tag in running:
            # Already ran the pkg state, skip aggregation
            continue
        if chunk.get("state") == "pkg":
            if "__agg__" in chunk:
                continue
            # Check for the same function
            if chunk.get("fun") != low.get("fun"):
                continue
            # Pull out the pkg names!
            if "pkgs" in chunk:
                pkgs.extend(chunk["pkgs"])
                chunk["__agg__"] = True
            elif "name" in chunk:
                pkgs.append(chunk["name"])
                chunk["__agg__"] = True
    if pkgs:
        if "pkgs" in low:
            low["pkgs"].extend(pkgs)
        else:
            low["pkgs"] = pkgs
    # The low has been modified and needs to be returned to the state
    # runtime for execution
    return low
Microsoft Azure #
Starburst Enterprise platform (SEP) can be used with many components of Microsoft Azure. You can deploy there, access data sources in Microsoft Azure and use some of the other features of Microsoft Azure.
Find all the relevant information for using Starburst and Microsoft Azure in the following sections and guides.
Deployments #
You can install SEP on Microsoft Azure Kubernetes Service (AKS), with Azure Virtual Machines, or through the Azure Marketplace, once you have established your enterprise account.
To get started Starburst offers a trial license and help with your proof of concept.
Azure AKS #
You can use the Microsoft Azure Kubernetes Service (AKS) directly. AKS is certified to work with SEP.
More information is available in our Kubernetes reference documentation, including a customization guide, examples and tips.
We also have a helpful installation checklist for an overview of the general Helm-based installation and upgrade process.
Azure VMs and App Services #
As an alternative, you can use Azure App Services and linux-based Azure VMs for your SEP cluster, managing an RPM or tarball-based installation with Starburst Admin.
Azure Marketplace #
Documentation #
You are currently viewing the Microsoft Azure section of our user guides. In the left-hand navigation are guides for Azure
Azure Marketplace customers #
Starburst offers the following support for our marketplace subscribers without an enterprise contract:
- Five email issues to [email protected] per month
- First response SLA of one business day
- Support hours of 9 AM - 6 PM US Eastern Time
Messaging is one integration style out of many, used for connecting various applications in a loosely coupled, asynchronous manner. Messaging decouples the applications from data transferring, so that applications can concentrate on data and related logic while the messaging system handles the transferring of data.
This chapter introduces the basic patterns used when implementing enterprise integration using messaging and how they are simulated using the WSO2 ESB. These patterns are the fundamentals on which the rest of the chapters in this guide are built.
Set up an active-active cluster
This topic describes how to set up an active-active cluster for Deploy. Running Deploy in this mode enables you to have a Highly Available (HA) Deploy setup with improved scalability.
Requirements
Running Deploy in an active-active cluster requires the following:
- Deploy must be installed according to the system requirements. For more information, see requirements for installing Deploy.
- A load balancer that receives HTTP(S) traffic and forwards that to the Deploy master nodes. For more information, see the HAProxy load balancer documentation.
- Two or more Deploy master nodes that are stateless and provide control over the workers and other functions (e.g. CI editing and reporting).
- Two or more Deploy worker nodes that contain and execute tasks and are configured to connect to all masters.
- A database server.
- A shared drive location to store exported CIs and reports.
Basic functions and communication in a cluster
The communication between the masters and the workers is done through a two-way peer-to-peer protocol using a single port for each master or worker node.
The majority of Deploy functions and configurations are identical for a cluster setup as for a single instance. The exception is that, in the cluster setup, the functions can operate on all masters and/or on all workers.
Planning phase for cluster setup
When planning an active-active cluster for Deploy, make sure you are aware of the following:
All masters and workers must have the same configuration, which consists of:
- The plugins
- The extensions folder (e.g. for scripts)
- The configuration files (some parts will be node specific)
- All masters and workers must have access to the database.
- All masters and workers must have access to the artifacts.
- Communication between masters and workers requires a low latency, high bandwidth network.
- All masters and workers need access to all target hosts (and Deploy Satellites, if applicable).
- For the HTML5 UI to function correctly, all requests for a single user session must be handled by the same master.
- For exports of CIs and reports to work correctly across masters, the export/ folder should be a shared, read-write accessible volume for each master and worker.
Recommendations
Based on the planning phase considerations, these settings are strongly recommend:
- All masters and workers are part of the same network segment.
- The network segment for the masters and workers is properly secured.
- The hostnames and IP addresses for all masters and workers are stored and maintained in a DNS server that is accessible to all masters and workers.
- The load balancer is configured to be highly available and can properly reach the masters.
- The load balancer handles SSL and forwards unencrypted data.
- The load balancer is configured with session affinity (“Sticky sessions”).
- The database is configured for high availability and can be properly reached by masters and workers.
- Artifacts are stored in the database (or preferably in (an) external system(s)).
- When Deploy Satellite is used, all communication between masters, workers and satellites is secured using SSL Certificates.
The configuration of the load balancer, the network, and the database is not covered in this document.
Setup and configuration
When setting up a new system, the setup procedure should be executed on a single master node and the resulting configuration files shared with other nodes (masters and workers).
When upgrading, the upgrade procedure should be executed on all masters and workers.
In both cases, the configuration files to be shared between the masters and the workers include:
- The deployit.conf, deployit-defaults.properties, and xl-deploy.conf files
- The license (deployit-license.lic)
- The repository keystore (repository-keystore.jceks or repository-keystore.p12)
- The truststore (if applicable)
Each master and worker should:
- define its fully qualified host name in the configuration property xl.server.hostname in XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf.
- configure the correct key-store and trust-store in the xl.server.ssl section (if SSL is enabled), including certificates for Deploy Satellites (if applicable). Follow these instructions to set it up; a minimal sketch is shown below.
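As a minimal sketch of that section in xl-deploy.conf (the file paths are placeholders, and the full set of SSL keys — keystore passwords, aliases, and so on — is covered by the SSL setup instructions referenced above):

xl.server.ssl {
  key-store = "conf/keystore.p12"
  trust-store = "conf/truststore.p12"
}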
To start master and worker nodes:
- Masters can be started with the normal procedure, e.g. invoking bin/run.sh or bin\run.cmd.
- Workers can be started with the literal 'worker' as the first argument to bin/run.sh or bin\run.cmd; one -api flag pointing to the load balancer; and one or more -master flags, one for each fully qualified master name. E.g.:
bin/run.sh worker -api -master xld1.example.com:8180 -master xld2.example.com:8180
See also Scalability for Masters below. Further switches can be applied when starting workers, see the documentation on workers for more information.
NOTE if no DNS server is used and the mapping is done using /etc/hosts or a similar local mechanism, the configuration setting xl.tasks.system.akka.io.dns.resolver must be set to inet-address in XL_DEPLOY_SERVER_HOME/conf/xl-deploy.conf on all masters and hosts.
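For reference, that setting is a single line in xl-deploy.conf:

xl.tasks.system.akka.io.dns.resolver = inet-address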
Scalability
A running active-active cluster for Deploy can be scaled for better performance if properly configured.
When using SSL for communication between masters, workers and satellites, the certificates of new masters and workers must be trusted by the other nodes and satellites. In this case it is recommended to use a trusted root certificate to sign all certificates used by masters and workers and satellites. A (self-signed) root certificate can be added to the trust store.
Scalability for workers
Additional workers can be started and directed to an existing cluster of workers without additional configuration.
It is important to note that scheduled or on-going work (tasks) will not be re-balanced when adding workers. All workers are assigned tasks in a round-robin fashion when a task is created on one of the masters. Once a task is assigned to a worker, it cannot be moved to another worker.
Scalability for masters
To enable workers to find masters that are added while the workers are running, available workers should be registered in a DNS SRV record.
xld-masters IN SRV 1 0 0 xld-master-1 xld-masters IN SRV 1 0 0 xld-master-2 ...
The workers can now be started with a single
-master parameter that points to the SRV record:
-master xld-masters:8180.
The port number for a master can be configured in the DNS SRV record or in the parameter value:
xld-masters IN SRV 1 0 9001 xld-master-1
defines the port to be used for
xld-master-1 to be 9001. If the port in the DNS SRV record is
0, it is ignored.
A parameter value of
-master xld-masters:9002 means that all masters found in the DNS SRV record will use port 9002. The port number in the DNS SRV record has higher preference.
Upgrading from PXF 5.x
If you have installed, configured, and are using PXF 5.x in your Greenplum Database 5 or 6 cluster, you must perform some upgrade actions when you install PXF 6.x.
The PXF upgrade procedure has three steps. You perform one pre-install procedure, the install itself, and then a post-install procedure to upgrade to PXF 6.x:
- Step 1: Perform the PXF Pre-Upgrade Actions
- Step 2: Install PXF 6.x
- Step 3: Complete the Upgrade to PXF 6.x
Step 1: Performing the PXF Pre-Upgrade Actions
Perform this procedure before you upgrade to a new version of PXF:
Log in to the Greenplum Database master node. For example:
$ ssh gpadmin@<gpmaster>
Identify and note the version of PXF currently running in your Greenplum cluster:
gpadmin@gpmaster$ pxf version
Identify the file system location of the $PXF_CONF setting in your PXF 5.x installation; you will need this later. If you are unsure of the location, you can find the value in pxf-env-default.sh.
Stop PXF on each Greenplum host as described in Stopping PXF.
Step 2: Installing PXF 6.x
Install PXF 6.x and identify and note the new PXF version number.
Check out the new installation layout in About the PXF Installation and Configuration Directories.
Step 3: Completing the Upgrade to PXF 6.x
After you install the new version of PXF, perform the following procedure:
Log in to the Greenplum Database master node. For example:
$ ssh gpadmin@<gpmaster>
You must run the pxf commands specified in subsequent steps using the binaries from your PXF 6.x installation. Ensure that the PXF 6.x installation bin/ directory is in your $PATH, or provide the full path to the pxf command. You can run the following command to check the pxf version:
gpadmin@gpmaster$ pxf version
(Optional, Advanced) If you want to relocate $PXF_BASE outside of $PXF_HOME, perform the procedure described in Relocating $PXF_BASE.
Auto-migrate your PXF 5.x configuration to PXF 6.x $PXF_BASE:
- Recall your PXF 5.x $PXF_CONF setting.
Run the migrate command (see pxf cluster migrate). You must provide PXF_CONF. If you relocated $PXF_BASE, provide that setting as well.
gpadmin@gpmaster$ PXF_CONF=/path/to/dir pxf cluster migrate
Or:
gpadmin@gpmaster$ PXF_CONF=/path/to/dir PXF_BASE=/new/dir pxf cluster migrate
The command copies PXF 5.x conf/pxf-profiles.xml, servers/*, lib/*, and keytabs/* to the PXF 6.x $PXF_BASE directory. The command also merges configuration changes in the PXF 5.x conf/pxf-env.sh into the PXF 6.x file of the same name and into pxf-application.properties.
The migrate command does not migrate PXF 5.x $PXF_CONF/conf/pxf-log4j.properties customizations; you must manually migrate any changes that you made to this file to $PXF_BASE/conf/pxf-log4j2.xml. Note that PXF 5.x pxf-log4j.properties is in properties format, and PXF 6 pxf-log4j2.xml is in xml format. See the Configuration with XML topic in the Apache Log4j 2 documentation for more information.
If you migrated your PXF 6.x $PXF_BASE configuration (see previous step), be sure to apply any changes identified in subsequent steps to the new, migrated directory.
If you are upgrading from PXF version 5.9.
If you are upgrading from PXF version 5.11.x or earlier: The PXF Hive and HiveRC profiles (named hive and hive:rc in PXF version 6.x).
If you are upgrading from PXF version 5.15.x or earlier:
- The pxf.service.user.name property in the pxf-site.xml template file is now commented out by default. Keep this in mind when you configure new PXF servers.
- The default value for the jdbc.pool.property.maximumPoolSize property is now 15. If you have previously configured a JDBC server and want that server to use the new default value, you must manually change the property value in the server's jdbc-site.xml file.
- PXF 5.16 disallows specifying relative paths and environment variables in the CREATE EXTERNAL TABLE LOCATION clause file path. If you previously created any external tables that specified a relative path or environment variable, you must drop each external table, and then re-create it without these constructs.
Filter pushdown is enabled by default for queries on external tables that specify the Hive, HiveRC, or HiveORC profiles (named hive, hive:rc, and hive:orc in PXF version 6.x). If you have previously created an external table that specifies one of these profiles and queries are failing with PXF v5.16+, you can disable filter pushdown at the external table-level or at the server level:
- (External table) Drop the external table and re-create it, specifying the &PPD=false option in the LOCATION clause.
- (Server) If you do not want to recreate the external table, you can disable filter pushdown for all Hive* (named as described here in PXF version 6.x) profile queries using the server by setting the pxf.ppd.hive property in the pxf-site.xml file to false:

<property>
    <name>pxf.ppd.hive</name>
    <value>false</value>
</property>

You may need to add this property block to the pxf-site.xml file.
Register the PXF 6.x extension files with Greenplum Database (see pxf cluster register).
$GPHOME must be set when you run this command.
gpadmin@gpmaster$ pxf cluster register
The register command copies only the pxf.control extension file to the Greenplum cluster. In PXF 6.x, the PXF extension .sql file and library pxf.so reside in $PXF_HOME/gpextable. You may choose to remove these now-unused files from the Greenplum Database installation on the Greenplum Database master, standby master, and all segment hosts. For example, to remove the files on the master host:
gpadmin@gpmaster$ rm $GPHOME/share/postgresql/extension/pxf--1.0.sql
gpadmin@gpmaster$ rm $GPHOME/lib/postgresql/pxf.so
PXF 6.x includes a new version of the pxf extension. You must update the extension in every Greenplum database in which you are using PXF. A database superuser or the database owner must run this SQL command in the psql subsystem or in an SQL script:
ALTER EXTENSION pxf UPDATE;
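For example, a sketch of applying the update to one database from the shell (substitute your own database name, and repeat for each database that uses PXF):

gpadmin@gpmaster$ psql -d <database-name> -c 'ALTER EXTENSION pxf UPDATE;'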
Ensure that you no longer reference previously-deprecated features that were removed in PXF 6.0:
PXF 6.x distributes a single JAR file that includes all of its dependencies, and separately makes its HBase JAR file available in $PXF_HOME/share. If you have configured a PXF Hadoop server for HBase access, you must register the new pxf-hbase-<version>.jar with Hadoop and HBase as follows:
- Copy $PXF_HOME/share/pxf-hbase-<version>.jar to each node in your HBase cluster.
- Add the location of this JAR to $HBASE_CLASSPATH on each HBase node.
- Restart HBase on each node.
In PXF 6.x, the PXF Service runs on all Greenplum Database hosts. If you used PXF 5.x to access Kerberos-secured HDFS, you must now generate principals and keytabs for the Greenplum master and standby master hosts, and distribute these to the hosts as described in Configuring PXF for Secure HDFS.
Synchronize the PXF 6.x configuration from the master host to the standby master and each Greenplum Database segment host. For example:
gpadmin@gpmaster$ pxf cluster sync
Start PXF on each Greenplum host. For example:
gpadmin@gpmaster$ pxf cluster start
Verify that PXF can access each external data source by querying external tables that specify each PXF server.
Type Filtered Metadata Grid
The type-filtered metadata grid is specially designed and recommended for orgs handling extensive metadata. You can always switch between the classic and type-filtered grids.
Functionalities
- Scalability:
- The type-filtered metadata grid enables Copado users to handle orgs with extensive metadata components (hundreds of thousands).
- Retrieves by metadata type:
- The type-filtered metadata grid retrieves metadata components from the source org filtering by selected metadata type.
- Tab view on the grid:
- The type-filtered metadata grid has tab views where you can switch between retrieved metadata components and selected metadata components.
- User-friendly:
- A smooth animation is shown on the tab header when selecting or deselecting metadata components.
Activation/Deactivation Instructions
Activate
To activate the type-filtered metadata grid, follow the steps below:
- Go to Setup.
- Click on Manage Records for Copado Setting.
- Edit Big MetaData.
- Check the Enabled checkbox
- Save it.
After activation, the type-filtered metadata grid will be displayed within the following pages:
- Org Credentials
- Org Differences
- User Story
- Commit files
- Add Metadata
- Snapshot Differences
- Deployments:
- Metadata step
- Delete Metadata step
Deactivate
To deactivate the type-filtered metadata grid, follow the steps below:
- Go to Setup.
- Click on Manage Records for Copado Setting.
- Edit Big MetaData.
- Uncheck the Enabled checkbox
- Save it.
Refreshing Metadata with the Type-Filtered Metadata Grid
Copado allows you to schedule a job to refresh the metadata index. In order to do so, you will first need to create a deployment with a URL Callout step and paste the following URL in the URL field:{ORG_CREDENTIAL_ID}?api_key={YOUR_API_KEY}&typeFiltered=true.
Once you have fulfilled this prerequisite, follow the steps below to create a scheduled job:
- Navigate to the Scheduled Jobs tab and click on New.
- Name your scheduled job.
- Click on Look up Copado Webhook and select Execute a Deployment.
- Select the Deployment record with the URL Callout.
- Select a running user.
- Click on Save.
- Once the record has been saved, click on Schedule.
- Select the desired scheduling criteria and click on Create Cron Expression.
- Save the scheduled job.
You can decompose a server that has not been assigned to a VI workload domain.
Prerequisites
The server to be decomposed must be unassigned.
Procedure
- In the navigation pane, click .
- From the Server Composition Summary table, select the server to be decomposed.
- Click Decompose.
- In the Decompose Servers dialog box, click Decompose.
AWS Lambda Quickstart
- Objectives
- Before You Begin
- Visual Summary
- Step 1: Install and Launch the Shell Script Delegate
- Step 2: Add a Harness AWS Cloud Provider
- Step 3: Add Your Lambda Function
- Step 4: Define Your Lambda Target Infrastructure
- Step 5: Build a Basic Lambda Workflow
- Step 6: Deploy Your Lambda Function
- Next Steps
This quickstart shows you how to deploy a Node.js function to your AWS Lambda service using a Basic Deployment strategy in Harness.
Objectives
You'll learn how to:
- Set up and verify AWS IAM and EC2 for the Harness Shell Script Delegate.
- Install the Harness Shell Script Delegate.
- Connect Harness with AWS.
- Add your Lambda function file and specification to Harness.
- Create and deploy a Lambda Basic Workflow.
Once you have the prerequisites set up, the tutorial should only take about 10 minutes.
Before You Begin
- Review Harness Key Concepts to establish a general understanding of Harness.
- Create an AWS IAM Role for Harness Lambda Deployments:
Create an IAM role with the following policies attached and use the role when you create the EC2 instance for the Harness Shell Script Delegate:
- AmazonEC2FullAccess: Needed for Shell Script Delegate on EC2 instance.
- IAMReadOnlyAccess: Needed to verify required policies.
- AWSLambdaRole: Needed to invoke function.
- AWSLambdaFullAccess: Needed to write to Lambda.
- AmazonS3ReadOnlyAccess: Needed to pull the function file from S3.
For example, if the role you created was named
LambdaTutorial, you can attach the policies like this:
$ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --role-name LambdaTutorial
$ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaRole --role-name LambdaTutorial
$ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess --role-name LambdaTutorial
$ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess --role-name LambdaTutorial
$ aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess --role-name LambdaTutorial
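To double-check that all five policies ended up attached, one option is to list them (the role name matches the example above):

$ aws iam list-attached-role-policies --role-name LambdaTutorial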
AWS Lambda Execution Role — As a Lambda user, you probably already have the AWS Lambda Execution Role set up. If you do not, follow the steps in AWS Lambda Execution Role from AWS. You will use this role when you set up an Infrastructure Definition later in the tutorial.
- Create an EC2 instance for the Harness Shell Script Delegate.
The instance should be in the same region and VPC as your Lambda functions and, ideally, in one of the subnets used by your functions.
- Attach the IAM role you created to this instance.
- Create an AWS S3 bucket or use an existing S3 bucket to upload the function used here.
We'll be using a file named index.js containing the following function:
exports.handler = function(event, context, callback) {
console.log("Received event: ", event);
var data = {
"greetings": "Hello, " + event.firstName + " " + event.lastName + "."
};
callback(null, data);
}
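For reference, given the code above, a test event and the response it produces would look something like this (the names are arbitrary):

Event: {"firstName": "Jane", "lastName": "Doe"}

Response: {"greetings": "Hello, Jane Doe."}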
You can download the zip file here:.
If you want to create the file yourself, in a code editor, create the file index.js and paste in the above code. Zip the file and name the zip file function.zip.
Upload the function.zip file to a bucket in your AWS S3. In our example, we named the bucket lambda-harness-tutorial.
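If you are preparing and uploading the file from a terminal, a minimal sketch (assuming the bucket name used in this tutorial) is:

$ zip function.zip index.js

$ aws s3 cp function.zip s3://lambda-harness-tutorial/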
Visual Summary
The following diagram shows the very simple topology for this tutorial:
You will install the Harness Shell Script Delegate on an EC2 instance in your AWS account, select a target VPC for deployment, and then use Harness to pull a Node.js function from S3 and deploy it to Lambda.
Step 1: Install and Launch the Shell Script Delegate
First we'll install the Harness Shell Script Delegate on the EC2 instance you set up with the IAM role you created for Harness Lambda deployments.
Delegate Selector
Add a Delegate Selector to the Delegate so you can use it when you create a Harness AWS Cloud Provider, in the next step in this tutorial.
Step 2: Add a Harness AWS Cloud Provider
In this section, we will add a Harness AWS Cloud Provider to your Harness account to connect to AWS S3, Lambda, and the VPC. You can use a single AWS Cloud Provider or separate ones for these connections, but using a single AWS Cloud Provider is easiest, and it gets you to your first Lambda deployment in just a few minutes.
Step 3: Add Your Lambda Function
Now we'll add a Lambda function and configure its Lambda Function Specification. We'll start by creating a Harness Application.
An Application in Harness represents a logical group of one or more entities, including Services, Environments, Workflows, Pipelines, Triggers, and Infrastructure Provisioners. Applications organize all of the entities and configurations in Harness CD. For more information, see Create an Application.
Create a Harness Application
To create the Harness Application, do the following:
- In Harness, click Setup, and then click Add Application. The Application settings appear.
- Enter the name Lambda-Tutorial and click SUBMIT. The new Application is added.
We won't cover all of the Application entities in this tutorial. We assume you've read Harness Key Concepts.
Create a Harness Service
To add your function and spec, you create a Harness Service. Services represent your microservices, apps, or functions. You define their sources as artifacts and you add your specs.
- Click Services. The Services page appears.
- Click Add Service. The Service dialog appears.
- Enter the following settings and click Submit:
The new Service is listed.
Add an Artifact Source
- Use the AWS S3 bucket to which you uploaded the function.zip file as part of the prerequisites. In our example, we named the bucket lambda-harness-tutorial.
We use S3 in this quickstart, but Harness supports the following artifact sources with Lambda:
- In your Harness Service, click Add Artifact Source and select Amazon S3. The Amazon S3 settings appear. Enter the following settings:
This grabs the file from your AWS S3 bucket, for example:
- Click SUBMIT. The Lambda function file is added as an Artifact Source.
Add AWS Lambda Function Specification
Next, we'll add the Lambda Function Specification to provide details about the Lambda functions in the zip file in Artifact Source.
- Click Lambda Function Specification. The AWS Lambda Function Specifications dialog appears. Enter the following settings:
These settings are similar to the options of the aws lambda create-function command. For more information, see create-function from AWS.
If you remember the code sample for our function, the value of the Handler setting is the file name (index) and the name of the exported handler module, separated by a dot. In our example, the handler is index.handler.
Click SUBMIT. Your function is added to the Service.
Step 4: Define Your Lambda Target Infrastructure
Now that we've added a Lambda Service to your Application, we'll define an Environment where your function will be deployed. In an Environment, you specify the AWS VPC settings as an Infrastructure Definition.
- Use the breadcrumb navigation to jump to Environments.
- Click Add Environment. The Environment dialog appears. Enter the following settings:
- Click SUBMIT. The new Environment page appears. Next we will add an Infrastructure Definition to identify the related VPC information.
- On your Environment page, click Add Infrastructure Definition. Enter the following settings:
The Infrastructure Definition settings are similar to the --role and --vpc-config options in the aws lambda create-function command. For example:
$ aws lambda create-function --function-name example-function \
--runtime nodejs12.x --handler index.handler --zip-file lambda/function.zip \
--role execution-role-arn \
--vpc-config SubnetIds=<subnet-ids>,SecurityGroupIds=<security-group-ids>
These are the same as the Network section of a Lambda function:
- Click Submit. The new Infrastructure Definition is added to the Harness Environment.
You will select this Environment and Infrastructure Definition when you create your Harness Workflow.
Step 5: Build a Basic Lambda Workflow
The Lambda Basic Workflow you will create has two steps, generated by Harness automatically:
- AWS Lambda - This step deploys the function and also sets the Lambda aliases and tags for the function.
- Rollback AWS Lambda - If a deployment fails, this step uses aliases to roll back to the last successful version of a Lambda function.
- Use the breadcrumb navigation to jump to Workflows, and then click Add Workflow. The Workflow settings appear.
- Click SUBMIT. The new Basic Workflow is created and pre-configured with the AWS Lambda step.
Next, let's look at the pre-configured AWS Lambda step.
AWS Lambda Step
When you deploy the Workflow, the AWS Lambda step creates the Lambda functions defined in the Service you attached to the Workflow. This is the equivalent of the aws lambda create-function API command.
The next time you run the Workflow, manually or as the result of a Trigger, the AWS Lambda step updates the Lambda functions. This is the equivalent of the aws lambda update-function-configuration API command.
- In the Workflow, click the AWS Lambda step. The Configure AWS Lambda settings appear. Enter or review the following settings:
The AWS Lambda step in the Workflow applies the alias just like you would using the AWS Lambda console.
By default, Harness names the alias with the name of the Environment by using the built-in Harness variable ${env.name}. You can replace this with whatever alias you want, or use other built-in Harness variables by entering $ and seeing what variables are available.
You can set the tags for your Lambda functions in the AWS Lambda step and, once deployed, you can see the tags in the AWS Lambda console.
- Click Submit.
Your Lambda Workflow is complete. You can run the Workflow to deploy the Lambda function to your AWS Lambda service.
Step 6: Deploy Your Lambda Function
Now that the Basic Workflow for Lambda is set up, you can click Deploy in the Workflow to deploy the Lambda functions in the Harness Service to your AWS Lambda environment.
- If you're not already on the main Workflow page, use the breadcrumb navigation to navigate to MyFunction Lambda Tutorial.
- Click the Deploy button. The Deploy settings appear. Enter the following settings:
- Click Submit. The deployment executes.
Here's an example of a typical deployment:
To see the completed deployment, log in to your AWS Lambda console. The Lambda function is listed:
You can also log into AWS and use the aws lambda get-function command to view the function.
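For example, a quick check from the AWS CLI (substitute the name of the function that was deployed):

$ aws lambda get-function --function-name <your-function-name>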
View Lambda Deployments in the Serverless Functions Dashboard
Harness Manager's Serverless Functions Dashboard offers views of your Lambda deployment data.
Here is an individual Lambda deployment and how it is displayed on the Serverless Functions dashboard:
See Serverless Functions Dashboard.
Next Steps
In this tutorial, you learned how to:
- Set up and verify AWS IAM and EC2 for the Harness Shell Script Delegate.
- Install the Harness Shell Script Delegate.
- Connect Harness with AWS.
- Add your Lambda function file and specification to Harness.
- Create and deploy a Lambda Basic Workflow.
Read the following related How-tos:
- Lambda Deployment Overview.
- Triggers show you how to automate deployments in response to different events.
- CloudFormation Provisioner will show you how to add provisioning as part of your Workflow.
ARDBCSetEntry
Description
This function modifies the contents of a single entry or BLOB. The plug-in server calls this function when:
- The BMC Remedy AR System server receives a call to
ARSetEntryor
ARMergeEntryfrom the client.
- Users enter information to create or merge entries into an external form.
- A Push Fields filter or an escalation action modifies an entry.
If the specified entry does not exist, the plug-in creates it. An
ARDBCSetEntry operation on a nonexistent entry can occur when the BMC Remedy AR System server receives a call to
ARMergeEntry from a client.
Important
If you do not define this function, the BMC Remedy AR System server receives an error message and the entry is not modified.
Synopsis
#include "ardbc.h" int ARDBCSetEntry( void *object, char *tableName, ARVendorFieldList *vendorFieldList, ARInternalId transId, AREntryIdList *entryId, ARFieldValueList *fieldValueList, ARTimestamp getTimestamp, ARStatusList *status)
Input arguments
object
Pointer to the plug-in instance that the call to
ARPluginCreateInstance returned.
tableName
Name of the external table on which the plug-in sets the entry.
entryId
ID of the entry to set. This must correspond to a value in the external data source and must be unique, non-NULL, and either character or numeric data. For an external data source, the entry ID can be longer than 15 characters. Therefore, the entry ID can consist of one or more values of type AREntryIdType and is represented by the AREntryIdList structure.
fieldValueList
List of the data in a new entry. The list consists of one or more field and value pairs that the server specifies in any order.
getTimestamp
Time stamp that specifies when the client last retrieved the entry. The ARDBC plug-in can provide functionality like that provided by the BMC Remedy AR System server by returning an error if an entry was modified (presumably by another client) after the calling client last retrieved it. For example, the plug-in can store a modification time stamp and compare this value with it to determine whether the entry changed since the last retrieval. If the value of this parameter is
0, ignore it.
For a description of the BMC Remedy AR System functionality, see ARSetEntry.
Return values
status
List of zero or more notes, warnings, or errors generated from a call to this function. For a description of all possible values, see Error checking.
See also
ARDBCCreateEntry, ARDBCDeleteEntry, ARDBCGetEntry, ARDBCGetEntryBLOB.
Before reading this topic, you might want to ensure you are familiar with the concepts in Key Concepts About Emulation and Validating PLC Logic.
In this topic, you'll learn more about the phase in which you validate PLC logic by connecting your model to a PLC or server. We'll refer to an OPC DA Connection, but the setup is similar for other connection types.
Once the PLCs have been programmed, you could possibly use FlexSim to then validate whether the PLCs had been programmed correctly before they are actually implemented in a manufacturing system. To validate this logic, you'd need to connect FlexSim to a PLC server.
After inputting all of the server's permissions, you'd also have to assign all your emulation variables to their appropriate tag IDs on the server. You'd then make the connection shared asset active in FlexSim so that it will directly read or write values on the actual server. This will also cause your internal ladder logic to stop working. Sensors will no longer trigger their on Change event, meaning that any event listening activities will no longer be notified when a sensor variable is changed. Instead, this logic will be handled by the PLC.
To connect a PLC server to a FlexSim simulation model:
1. Use localhost as the server address if you're running it locally on the same machine as the simulation model.
One thing you need to pay attention to when running the simulation model while connected to an active server is the speed at which you are running the model. While you can technically set the run speed to a greater value, you should ideally run it at the 1.00 speed, which means it will run in real time.
Be aware that FlexSim will run in high precision time when the server is active. High precision run time means it will keep the run speed of FlexSim in sync with the run speed of the computer. Stepping will be unavailable while you are connected to an active server.
During the simulation run, it might be good to open the server and watch the values change. You'll be able to better ensure that they match what happens in the simulation model. The ladder logic used with Sensors should no longer be used when the connection is active. You should ensure that there are no tokens moving through this ladder logic during the simulation run.
Objective
Taiwan Docs offers an integrative information platform to facilitate dialogue between art works, audiences, and society. Our aim is to raise the visibility of Taiwan documentary, to foster communication and exchange within the creative community, and to promote Taiwan documentary at home and abroad.
Service
The Taiwan Docs website provides up-to-date information on all aspects of documentary filmmaking in Taiwan. We offer a free online archive for film professionals, academics, and the general public. The Taiwan Docs website thus serves as a platform that connects Taiwan filmmakers, international film festivals, and academic film studies.
The Doc Calendar is a service for film professionals who wish to stay informed about important events in their field. We collect information, dates and deadlines related to international film festivals, film industry events, workshops, funding opportunities etc. to serve the international doc film community.
Taiwan Docs supports local filmmakers in promoting their works internationally to reach audiences world-wide. By showcasing Taiwan documentary in the world, we hope to foster international cooperation and dialogue.
Taiwan Docs also welcomes special requests from film professionals and academics. We provide assistance with
- Festival programming and curating
- Organizing screenings and after-screening Q&As
- Information searching and industry news
- Consulting for film professionals who wish to collaborate with partners in Taiwan
and much more …
hashchain is a Python package developed to join the ease of use of Python with the security of blockchain to certify that your important records haven’t been tampered with.
The core module creates a hash chain, attesting that no record can be tampered with once saved. The blockchain module saves a proof of your hash chain permanently in the most secure way. It is then impossible to alter the hash chain without causing a discrepancy with the blockchain.
No need for third party certification anymore. No more single point of failure nor certification costs.
Note
The package is in beta release. Even though no major change to the hashing mechanism should be expected, we can't provide any kind of guarantee. Use it in a production environment at your own risk.
This instruction explains how you receive goods that are to be used for a maintenance job with a work order, and that have been purchased from a supplier.
The goods are received at your plant and are assigned unique receiving numbers. Supplier statistics are updated with lead times and interest costs for early deliveries. The inventory value rises. Documents may have been printed.
The following files are updated:
Start 'Purchase Order. Receive Goods' (PPS300/A) or use option 23=Goods receipt in 'Work Order. Open Line' (MOS101).
Enter the purchase order number.
An F13 parameter defines whether completed lines should be displayed.
Enter the delivery note number, if applicable.
Enter the received quantity.
Enter the serial number if the component is serialized.
Press Enter to finish.
If the item that is received is a non-stocked item, the system will update the associated material line on the work order to status 90. This will also be the case if the receipt is linked to a sub-contracted operation. In addition, if it is the last option, then 'Work Order. Close' (MOS050) will be displayed to allow the work order to be closed as well. | https://docs.infor.com/help_m3beud_16.x/topic/com.infor.help.maintmgmths_16.x/c001410.html | 2020-09-18T11:30:44 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.infor.com |
Configuring the Passive FTP Mode on a Microsoft Azure Instance
By default, Plesk only allows active FTP connections. This may result in customers being unable to connect to the server via FTP. To avoid this, we recommend enabling passive FTP. This topic explains how to enable passive FTP mode in Plesk installed on a Microsoft Azure Platform instance.
To configure passive FTP:
Log in to Microsoft Azure portal.
Go to Virtual machines > click the name of the virtual machine for which you want to configure passive FTP > Networking (under “Settings”).
Click the Add inbound port rule button.
In the “Add inbound security rule” panel, specify the following settings:
- “Service”. Keep the “Custom” value in the drop-down list.
- “Port ranges”. Specify the following port range:
49152-65535.
- “Priority”. This value determines the order in which firewall rules are applied. Rules with lower priority values are applied before rules with higher priority values. We recommend keeping the automatically assigned Priority value.
- “Name”. Give the rule a recognizable name so you can tell it apart from others.
- (Optional) “Description”. If desired, you can add the description to the rule.
Click OK.
On Plesk for Linux instances, perform the following steps to complete configuring passive FTP:
The required configuration is completed. Now you can use the passive FTP mode on the Microsoft Azure instance. | https://docs.plesk.com/en-US/obsidian/deployment-guide/plesk-installation-and-upgrade-on-public-cloud-services/installing-plesk-on-microsoft-azure/configuring-the-passive-ftp-mode-on-a-microsoft-azure-instance.79079/ | 2020-09-18T11:32:02 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.plesk.com |
In the status bar of the Transformer page, click the Eye icon to review the list of visible and hidden columns.
Figure: Visible Columns Panel
Click the Eye icon next to a column name to toggle its visibility in the Transformer page.
NOTE: Columns that are not visible in the Transformer page are still generated in the output file. Before you run a job, you should review the Visible Columns dialog.
NOTE: Filters applied to the data grid or column browser are also applied in this panel. For more information, see Filter Panel.
To toggle display of multiple columns at the same time, use CTRL or SHIFT to select columns. Then, click the Selected link and choose to show or hide them.
- Use the Search box to find matches for column names.
- To close the dialog, click the X icon.
| https://docs.trifacta.com/display/SS/Visible+Columns+Panel | 2020-09-18T11:41:28 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.trifacta.com
Location Codes
The Location Codes indicate the methodology used to compute the geocode and may also provide some information about the quality of the geocode.
A Location Code of ""E" indicates a location code is not available. This usually occurs when you have requested ZIP Code centroids of a high quality, and one is not available for that match. It can occur infrequently when the Enterprise Tax Module does not have a 5-digit centroid location. An "E" location code type may also be returned when the input address cannot be standardized and there is no input ZIP Code. In this case, do not assume the ZIP Code returned with the nonstandardized address is the correct ZIP Code because the Enterprise Tax Module did not standardize the address; therefore, the Enterprise Tax Module does not return geocoding or Census Block information. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/WebServicesGuide/EnterpriseTaxGuide/source/AssignGeoTaxInfo/output_location_codes.html | 2020-09-18T09:55:03 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.precisely.com |
Extraneous Data Within an Address Line
Extraneous data that is within an address line is returned in AdditionalInputData. For example, in the following addresses "John Smith" would be returned in AdditionalInputData.
123 Main St John Smith
05674
123 Main St Apt 5 John Smith
05674
123 Main St John Smith
Apt 5
05674
123 Main St
Apt 5 John Smith
05674
For U.S. addresses, only extraneous data at the end of the address line is returned in AdditionalInputData. Extraneous data that is not at the end of an address line is not returned for U.S. addresses. For example, in the following addresses "John Smith" is not returned.
John Smith 123 Main St
05674
123 Main John Smith St
05674
The AdditionalInputData will sometimes contain the original street name or suffix if the street name was changed to obtain a match and the street name or suffix was at the end of a line. For example this address:
Pitney Bowes
4200 Parlament
Lanham MD
ValidateAddress would correct the spelling of the street name and add the suffix, returning "4200 Parliament Pl" as the corrected street address and "Parlament" in AdditionalInputData. | https://docs.precisely.com/docs/sftw/spectrum/12.2/en/webhelp/WebServicesGuide/UNC/source/ValidateAddress/additional_input_data-extrawithin-1.html | 2020-09-18T11:56:30 | CC-MAIN-2020-40 | 1600400187390.18 | [] | docs.precisely.com |
Kay CHOU
Introduction
Kay CHOU, a long-time producer of TV programs and short films. Her works have been nominated for the Golden Bell Awards several times. She is also a third-generation Christian, who becomes an outcast from the church and family after disclosing her sexual orientation. She also has been a LGBT supporter for a long time. As a core member in the production team, she now aims to illuminate the dilemmas faced by homosexual Christians through documentary. | https://docs.tfi.org.tw/en/filmmakers/5387 | 2020-09-18T10:48:04 | CC-MAIN-2020-40 | 1600400187390.18 | [array(['https://docs.tfi.org.tw/sites/default/files/styles/maker_photo/public/photo/14598532831151010769.jpg?itok=O34aep7U',
None], dtype=object) ] | docs.tfi.org.tw |
Remote Audit with FTP Delivery
Introduced in 8.6
The Remote Audit is a method of WAN audit, based on the deployment of standalone audit agents to a remote network. With this method, you can regularly audit offsite computers and remote networks that have no direct connection to the local network. Depending on how audit snapshots are delivered to Alloy Discovery, the Remote Audit method comes in two modes: FTP delivery and e-mail delivery.
The Remote Audit method offers two deployment scenarios for audit agents:
- Install the audit agent to every remote computer.
- Deploy the Inventory Analyzer package to a centralized location in the remote network and automate the audit agent using domain logon scripts or scheduled tasks.
When using this audit method, there is no direct link between the Inventory Server and deployed audit agents; this is why any configuration changes or updated versions of the audit agents have to be manually re-deployed.
Before configuring the audit, make sure you have configured a proper audit profile.
To set up the audit, take the following steps:
Make sure the Internet connectivity is available for transferring data via FTP/SFTP.
Create an FTP Audit Source as described in Configuring FTP Audit Sources.
NOTE: Before creating the FTP Audit Source, make sure that you know the settings of the FTP server on the remote network and have a valid user name and password if the server requires users to authenticate.
On the remote site, create a folder on the dedicated file server. This network folder will serve as an intermediary repository, and it must have both the Modify permission and the Change Permissions special permission assigned for your account. Share this folder and grant the Full Control share permission to the Everyone group for this network share.
Set the minimally necessary permissions for that folder. These minimally necessary permissions are used for the audit method to create the most secure environment for the network share and audit snapshot files stored there.
Choose how to deliver the audit agent to the computers you want audited. You can use any or both of these options:
Create an installer and install the audit agent on every computer. For details, see Installing the Audit Agent for FTP Delivery.
Create an Inventory Analyzer package, deploy it to remote networks, and automate the audit using domain logon scripts or scheduled tasks. For details, see Preparing the Inventory Analyzer Package for the FTP Delivery. You can use the automation scenarios offered for the Network Folder Audit (for details, see Network Folder Audit).
Optional: You can check the source for new snapshots on demand, as described in Checking FTP Audit Sources for New Snapshots. | https://docs.alloysoftware.com/alloydiscovery/8/help/using-alloy-discovery/ftp-audit/ftp-audit.html | 2020-11-23T21:24:47 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.alloysoftware.com |
Click CTRL + F on a Windows computer, or COMMAND + F on a Mac to search for the term you'd like to know more about.
Premium- An insurance premium is the amount of money that an individual must pay for an insurance policy. The insurance premium is income for the insurance company, once it is earned. Premiums are a fixed amount paid to the insurance company once a month.
Deductible- The amount a patient must pay for health care or prescriptions, before Original Medicare, their Medicare drug plan, their Medicare Advantage plan, or any other insurance plan begins to pay. During the deductible, patients will pay 100% of their medication costs until they reach the set deductible amount. These amounts can change every year.
PSAO- (Pharmacy Services Administration Organization) gives a group of independent pharmacies access to managed care and PBM contracting advantages that are normally associated with large, multi-location chains.
PBM- (Pharmacy Benefit Managers) PBMs were originally established as “middlemen” entities, designed to process drug claims (for a small fee per claim) for insurance companies and plan sponsors. Over the past 20 years, as drug prices have skyrocketed and with the advent of Medicare Part D and the ACA, PBMs have leveraged their position to play a much larger, more powerful role in the delivery of healthcare/prescription drugs.
Beyond just processing drug claims, their responsibility has shifted to keeping drug prices and spending down for plans and improving patient health outcomes.
DIR Fee- “Direct and Indirect Remuneration” fees. There is no set all-encompassing definition of what these fees are for. PBMs use them to define fees for a variety of “pharmacy perks”- fees for participating in preferred pharmacy networks, network access fees, administrative fees, technical fees, service fees, credentialing fees, refill rates, generic dispensing rates, audit performance rates, error rates, and more.
Copay- A fixed out-of-pocket amount patients will pay for their medications.
Coinsurance- Copays that are percentage-based. For example, if a coinsurance for Tier 4 drugs on a certain plan = 34%, patients would pay 34% of the full cost of the drug during Initial Coverage.
Subsidy: Any federal (medicaid) or state based financial help for Medicare costs.
Dual Eligible-the general term that describes individuals who are enrolled in both Medicare and Medicaid, or receive extra help through Social Security.
Full Cost- The full cost of a drug includes the patient’s copay amount plus the insurance reimbursement- This is the profit for the pharmacy before DIR fees are taken out
Reimbursement- What an insurance plan pays the pharmacy for a drug provided to a patient.
Special Needs Plan- Specific type of Advantage Plan that provide specialized coverage for patients with specific needs. These plans often offer more benefits in regards to coordination of care. Like MA-PDs, they cover all services provided by Medicare Parts A and B, prescription drugs (Part D), and sometimes vision, dental, and hearing. There are three types of SNP plans:
D-SNPs- This is the most common SNP. The patient must be dual eligible (receiving both Medicare and Medicaid) to enroll.
C-SNP- Special Needs plans for patients with certain chronic health conditions. They are available for patients in certain counties. The most common C-SNPs are for patients with diabetes or heart disease.
I-SNP- These SNPs are only available for institutionalized patients in LTCs, such as skilled nursing facilities, LTC nursing facilities, intermediate care facilities, or assisted living facilities. They are also available for patients who live at home but require an institutional level of care, also called Institutional Equivalent.
Advantage Plan (MA-PD)- MA-PDs cover all services provided by Medicare Parts A, B, and D, and sometimes, vision, dental, and hearing. MA-PDs, also referred to as “Part C” or “MA Plans”, and are provided by private insurers. There are three types of MA-PD plans:
Health Maintenance Organization (HMO)- HMO plans almost always require the patient to select a primary care physician (PCP). The patient is required to visit their PCP who can then refer them to other specialists. The only case when this is not applicable is in emergency situations where urgent care is needed. HMO premiums tend to be lower.
Preferred Provider Organization (PPO)- PPO plans offer a much wider range of providers than HMOs. However, out-of-pocket costs (premiums & deductibles) can be slightly higher. Referrals to outside specialist are typically not required.
Point-of-Service plan (POS)- POS plans are a hybrid between HMO and PPO plans. Patients designate an in-network primary physician, but can also receive services out-of-network for higher copayment or coinsurance.
Private Fee for Service (PFFS)- PFFS plans do not require referrals to visit specialists, but copayments and coinsurance tend to be higher for an out-of-network provider. Out-of-network providers may also refuse service for PFFS plan holders (with the exception of emergencies).
Prescription Drug Plan (PDP)- PDPs are your standard Medicare Part D plan. These plans include coverage just for patient’s prescription drug needs. All other coverage (e.g. hospital and medical) will come from "Original Medicare" Parts A and B. Patients on these plans can also enroll in Medigap coverage.
Medigap (Supplemental Insurance)- Medigap policies are supplemental insurance policies for patients with Original Medicare (NOT enrolled in a Medicare Advantage plan). These policies help pay some of the health or medical costs such as deductibles, copays, and coinsurance. These can be very beneficial for patients with chronic health conditions who may see many doctors and/or have many hospital visits. A patient cannot join a Medicare Part D plan and also have a Medigap policy with drug coverage
BIN (Bank Identification Number)- 6-8 digit number that health plans can use to process electronic pharmacy claims
Tier Copay- Medications are assigned to one of six categories known as copayment or coinsurance tiers, based on drug usage, cost and clinical effectiveness. A Tier Copay will be a fixed cost limit that a patient will pay for a medication, and this limit cannot be exceeded. For example, if the full cost of a Tier 3 medication is $100 and the Tier 3 Copay is $37, the patient will never have to pay over $37 for a Tier 3 covered drug. A coinsurance is a little different, because it is a fixed PERCENTAGE. So in this same case, if a Tier 3 coinsurance is 50%, the patient will never have to pay over 50% of the full cost of the medication. Full Costs of drugs can change anytime throughout the year, however, so when full cost increases so does the Tier coinsurance. Copays and coinsurances change based on the Tier of the drug, and usually the higher Tiers have higher copays/coinsurances.
Benchmark Plan- This plan has a $0 monthly premium and $0 deductible for dual eligible patients. Benchmark plans vary between states.
Prior Authorization (PA)- Prior authorization means that you will need prior approval from an insurance plan before you fill the prescription.
Quantity Limit (QL)- For safety and cost reasons, plans may limit the quantity of drugs that they cover over a certain period of time. This drug is technically covered, but only a set amount/quantity.
Step Therapy (ST)- When a plan requires a patient to first try one drug to treat their medical condition before they will cover another drug for that condition
Drug Not Covered (NC)- When a drug is "Not Covered" by a plan, the enrollee receives no coverage benefits for that medication. This means the patient will pay full cost (as determined by the pharmacy) for this medication.
Formulary Exception- One kind of coverage determination process where an enrollee or their doctor can request an off-formulary drug to be covered for a patient
Tier Exception- One kind of coverage determination process that should be requested when an enrollee needs to obtain a non-preferred drug at the lower cost-sharing terms applicable to a preferred tier on the plan's formulary. This is different than a formulary exception in that the enrollee must obtain a drug that is on the plan's formulary but at a cost he/she cannot afford.
Straddle Claim - Prescription drug purchases or claims that cross different phases of your Medicare Part D prescription drug plan (PDP) benefit (or your Medicare Advantage plan that offers prescription drug coverage). For example, Drug A’s cost exceeds the Deductible limit of your plan so instead of being in the Deductible phase for the month of January you will be in the transition period, not fully in the Deductible or fully in Initial Coverage.
Transition Period- In Amplicare, some months may be marked "Transition". This isn't a specific phase of coverage, it simply means the patient is transitioning from one phase of coverage to the next during that month!
Pre-Initial Coverage -- Dual Eligible patients don’t experience a Deductible. Therefore in Amplicare this period for Duals is listed as the “Pre-Initial Coverage” phase, and their copays will remain their subsidized amount.
Post-Initial Coverage -- Dual Eligible patients don’t experience the Donut Hole (or Coverage Gap) to the degree that non-dual patients do. Their copays may increase, but only to the highest level that their subsidy allows. Therefore this phase in Amplicare for duals is listed as Post Initial coverage.
Maximum Allowable Cost (MAC)- A “Maximum allowable cost” or “MAC” list refers to a payer or PBM-generated list of products that includes the upper limit or maximum amount that a plan will pay for generic drugs and brand name drugs that have generic versions available (“multi-source brands”). Essentially, no two MAC lists are alike and each PBM has free reign to pick and choose products for their MAC lists.
Center for Medicare and Medicaid (CMS)- A part of the U.S. Department of Health and Human Services, CMS oversees many federal healthcare programs
Long Term Care Facility- Includes Nursing Homes, Assisted Living Facilities, and Skilled Nursing Facilities
Max Out of Pocket Limit (OOP)- For Medicare Advantage plans (MA-PDs) this is the maximum the patient can pay out-of-pocket for health related costs, before the plan starts to pay 100% of the expenses. Max OOP excludes monthly premiums and prescription medications, and may exclude certain medical procedures. The Max OOP limit is decided by each plan, and can change every year. | https://docs.amplicare.com/en/articles/2775874-glossary-of-terms | 2020-11-23T21:44:39 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.amplicare.com |
Who is this OutOfMemory guy and why does he make my process crash when I have plenty of memory left? An OutOfMemoryException is usually a matter of exhausting (or fragmenting) the process's virtual address space rather than running out of physical RAM; that address space is consumed by, among other things:
- Dll’s
- Native heaps (non .net heaps)
- Threads (each thread reserves 1 MB for the stack)
- .net heaps (for managed variables)
- .net loader heap (for assemblies and related structures)
- Virtual allocations made by com components. | https://docs.microsoft.com/en-us/archive/blogs/tess/who-is-this-outofmemory-guy-and-why-does-he-make-my-process-crash-when-i-have-plenty-of-memory-left | 2020-11-23T23:12:50 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.microsoft.com |
You can modify a local admin user's account to update the full display name or the group membership that governs access to system features. You can also temporarily prevent a user from accessing the system.
You can edit only local users. Federated user details are automatically synchronized with the external identity source, for example, the LDAP server. | https://docs.netapp.com/sgws-111/topic/com.netapp.doc.sg-admin/GUID-139A4654-55B2-446E-ADD7-13A5ADFF8009.html?lang=en | 2020-11-23T22:43:49 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.netapp.com |
Before installing StorageGRID Webscale software, verify and configure hardware so that it is ready to support the StorageGRID Webscale system.
The following table lists the supported minimum resource requirements for each StorageGRID Webscale node. Use these values to ensure that the number of StorageGRID Webscale nodes you plan to run on each physical or virtual host does not exceed the number of CPU cores or the physical RAM available. If the hosts are not dedicated to running StorageGRID Webscale, be sure to consider the resource requirements of the other applications. | https://docs.netapp.com/sgws-111/topic/com.netapp.doc.sg-install-ub/GUID-84F773C9-5063-4CCF-AF7C-21E758134AF1.html?lang=en | 2020-11-23T23:04:47 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.netapp.com |
Document type: medical_safety_alert
There are 843 pages with document type 'medical_safety_alert' in the GOV.UK search index.
Rendering apps
This document type is rendered by:
Supertypes
Example pages
- Field Safety Notice: 26 to 30 October
- Class 4 Medicines Defect Information, Kolanticon Gel 200ml, (PL 17509/0084), EL (20) A/51
- Medical Devices Safety Bulletin (MDSB/2020/02)
- Field Safety Notice: 2 to 6 November 2020
- Class 2 Medicines Recall, Mylan UK Healthcare Ltd, Ancotil 2.5 g/250 ml Solution for Infusion, PL 46302/0116, EL (20) A/53
- Class 2 Medicines Recall, medac GmbH (T/A medac Pharma LLP) Sodiofolin 50mg/ml Solution for Injection 100mg/2ml, PL 11587/0005, EL (20) A/52
- Field Safety Notice: 19 to 23 October
- Class 3 Medicines Recall: Theramex Ireland Ltd T/A Theramex HQ UK Ltd, AlfaD 0.25 microgram capsules (PL 49876/0001), EL (20) A/50
- Company led drug alert - Optiray® 300mg I/ml Solution for Injection or Infusion (PL 12308/0028) and Optiray® 350mg I/ml Solution for Injection or Infusion (PL 12308/0032)
- Class 3 Medicines Recall: Metoprolol 50 mg Tablets (PL 20075/0304), EL (20)A/49
Source query from Search API | https://docs.publishing.service.gov.uk/document-types/medical_safety_alert.html | 2020-11-23T22:36:31 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.publishing.service.gov.uk |
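The source query can also be reproduced from a script. The sketch below uses Python with the requests library and assumes the public Search API at www.gov.uk/api/search.json and its filter_content_store_document_type parameter; check the source query link above for the exact parameters it uses.

import requests

response = requests.get(
    "https://www.gov.uk/api/search.json",
    params={
        "filter_content_store_document_type": "medical_safety_alert",  # assumed filter name
        "fields": "title,link",
        "count": 10,
    },
)
data = response.json()
print(data.get("total"))  # number of matching pages
for result in data.get("results", []):
    print(result.get("title"), "https://www.gov.uk" + result.get("link", ""))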
Tugboat has four different types of users:
Owner users can:
Owners can also do everything the admin user and general user can do.
Admin users can:
Admin can also do everything the general user can do.
Tugboat’s general User’s permissions include:
Tugboat users with Read-only permissions can:
These users have no access to anything else. | https://docs.tugboat.qa/administer-tugboat-crew/user-admin/ | 2020-11-23T22:12:53 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.tugboat.qa |
2020.10.3
Release date: 17 November 2020
What’s New
In this patch, we've fixed an issue with uploading new files to MinIO buckets.
Action Center Integration with Automation Cloud
Release date: 9 November 2020
What's New
See below the highlights of this release.
Triggering Unattended Processes Through Forms
The new Action Center offers a way for business users to launch unattended processes directly from the Automation Cloud portal and provide inputs to Robots, which are subsequently used for successful process execution. They can trigger unattended processes through forms (including file upload) and track the executions.
Classification Station Integration
Modern Folder Support
Intermediate Save
Beginning with this release, you no longer have to fill in form fields in one go. Simply save the form at any point, and the progress is saved for you to get back to it later on.
Adding File Attachments
As of now, business users can upload files in action forms to facilitate agent verification scenarios such that the Robot can proceed with downstream processing.
The particularities of the upload (e.g., single or multi-file upload, minimum and maximum allowed file size) are configured at design time in Studio, using the Form Designer incorporated in the Create Form Task activity (UiPath.Persistence.Activities activity package v1.1.8+).
PDF Viewer Support
Form actions bring support for PDFs so you can have a look at your PDF files directly from the portal.
Safe JavaScript Executions in Actions Forms
Action Center now allows for JavaScript-based advanced validations and conditions logic.
Resources
Action Center Documentation
The documentation for Action Center services residing in Automaton Cloud can be found in the official Action Center guide.
For the time being, actions can also be managed in Orchestrator services as a way to help you smoothly transition to the new paradigm. To reflect this, Action Center documentation can be found in the Orchestrator guide as well. However, we recommend taking advantage of the new experience, as after the transition is complete, Action Center will be removed from Orchestrator, and so will be the documentation.
Sample Workflows
- Triggering unattended processes with file upload control through a queue schema.
- In Orchestrator, create a new queue and name it Relocation Expenses.
- Provide a default value for the BucketName in the queue schema file. The schema can be found in the .zip as the JSON schema. For this workflow, the bucket's name is ActionsTest.
- In Orchestrator, upload the queue schema file for the Specific Data field of the queue.
- Create a queue trigger for every new queue item that is added to the Relocation Expenses queue.
- PDF Viewer Support in Action Forms
- Upload the PDF to the storage bucket using storage bucket activities.
- Pass the uploaded file path to the form data through an attribute name having _storage as a suffix (e.g., pdf_storage).
- Refer to the attribute name (e.g., pdf_storage) in the HTML Element control using an <embed> tag. For example: <embed src={{ data.pdf_storage }}></embed>.
- Do not check the RefreshOnChange property.
- Navigate to Edit JSON, search for the RefreshOn field, set the value to the form data attribute (e.g., pdf_storage), and then save.
- JavaScript Expressions Support in Action Forms
- In the property of a form element, go to the Logic tab, then add Advanced Logic.
- Select JavaScript for writing data validations and advanced condition expressions.
- This JavaScript interpreter does not support DOM APIs. Hence, objects such as fetch, window, browser are not exposed.
Release Notes Per Product
To find out what changed on each and every component of the UiPath family, feel free to visit the following links:
2020.10.2
Release date: 5 November 2020
What’s New
This release brings a couple of fixes to the Orchestrator functionality and installation experience.
Updated 6 days ago | https://docs.uipath.com/releasenotes/docs/platform-november-2020 | 2020-11-23T22:19:11 | CC-MAIN-2020-50 | 1606141168074.3 | [array(['https://files.readme.io/257a9b6-ac_portal.png', 'ac_portal.png'],
dtype=object)
array(['https://files.readme.io/257a9b6-ac_portal.png',
'Click to close...'], dtype=object)
array(['https://files.readme.io/dab9cf7-JS_based_actions.png',
'JS_based_actions.png'], dtype=object)
array(['https://files.readme.io/dab9cf7-JS_based_actions.png',
'Click to close...'], dtype=object) ] | docs.uipath.com |
The DB Browse binding is easy to get started, and helps create queries for you. Let's get started.
In the DB Browse binding, you can also do ordering of data in the table. There is a Sort button that allows you to sort in ascending and descending order. Using the same example, let's have the area column of our table be the key column so you can only see the entries for area B, and return all the operators, siteid, and supervisors columns in the table. We'll also order the data by the number of operators at each site in ascending order.
Confirm the binding by clicking OK to produce the specified data in the table. Only area B rows are shown, and our data is being sorted in ascending order of our operators column.
This example only had one key and one order column, but you can add as many as you want. Just select a second column and hit the Key or Sort buttons.
DB Browse bindings also give the ability to bind a property to the key column to allow for dynamic filtering of the returned data. This allows you to give the operators some control over the data they are seeing. This example is using a list of companies that has their respective city and state. | https://docs.inductiveautomation.com/pages/viewpage.action?pageId=19956030 | 2020-11-23T22:51:12 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.inductiveautomation.com |
Separator Property
Separator property as it applies to the CaptionLabel object.
Returns or sets the character between the chapter number and the sequence number. Read/write WdSeparatorType.
expression.Separator
expression Required. An expression that returns a CaptionLabel object.
Separator property as it applies to the Endnotes and Footnotes objects.
Returns a Range object that represents the endnote or footnote separator.
expression.Separator
expression Required. An expression that returns one of the above objects.
Separator property as it applies to the TableOfAuthorities object.
Returns or sets the characters (up to five) between the sequence number and the page number. A hyphen (-) is the default character. This property corresponds to the \d switch for a Table of Authorities (TOA) field. Read/write String.
expression.Separator
expression Required. An expression that returns a TableOfAuthorities object.
Example
As applies to the CaptionLabel object.
This example inserts a Figure caption that has a colon (:) between the chapter number and the sequence number.
With CaptionLabels("Figure")
    .Separator = wdSeparatorColon
    .IncludeChapterNumber = True
End With
Selection.InsertCaption "Figure"
As applies to the Footnotes object.
This example changes the footnote separator to a single border indented 3 inches from the right margin.
With ActiveDocument.Footnotes.Separator
    .Delete
    .Borders(wdBorderTop).LineStyle = wdLineStyleSingle
    .ParagraphFormat.RightIndent = InchesToPoints(3)
End With
As applies to the TableOfAuthorities object.
This example inserts a table of authorities at the beginning of the active document, and then it formats the table to include a sequence number and a page number, separated by a hyphen (-).
Set myRange = ActiveDocument.Range(0, 0)
With ActiveDocument.TablesOfAuthorities.Add(Range:=myRange)
    .IncludeSequenceName = "Chapter"
    .Separator = "-"
End With
Applies to | CaptionLabel Object | Endnotes Collection Object | Footnotes Collection Object | TableOfAuthorities Object
See Also | Add Method | ContinuationSeparator Property | EntrySeparator Property | NumberStyle Property | PageNumberSeparator Property | PageRangeSeparator Property | ResetSeparator Method | https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2003/aa196806(v=office.11) | 2020-11-23T23:36:42 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.microsoft.com |
Cellbase is a centralised database that integrates lots of information from several main genomic and biological databases used for genomic annotation and clinical variant prioritisation. See Overview for details.
CellBase is open-source and freely available at
You can search CellBase using your favourite programming language:
Recent space activity
Some important fixes made, please check
Web services updated and accessible at:
Some important fixes made, please check
Web services updated and accessible at:
New data sources, new web services and many variant annotation improvements (structural variants annotation, new population frequencies datasets and much more).
Accessible now at
Please, have a look to the release notes document at
variation_chr*.full.json.gz GRCh37 files in our http download server have been updated to include gnomAD frequencies:
An R CellBase client (CellBaseR) is now distributed by Bioconductor
gnomAD exomes and genomes population frequencies (GRCh37) are now provided as part of the variant annotation results:
Space contributors | http://docs.opencb.org/display/cellbase | 2020-11-23T22:24:23 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.opencb.org |
The Quay.io API is a full OAuth 2, RESTful API.
Note: The Quay API is currently marked as version 1 and considered stable within minor versions of Quay Enterprise. The API may (but, in practice, never has) experience breaking changes across major versions of Quay Enterprise or at any time on Quay.io.
All APIs are accessed from the endpoint.
For Enterprise customers, the endpoint is
http(s)://yourdomain/api/v1/.
All data is sent and received as JSON.
The full list of defined methods and endpoints can be found in the API Explorer.
The majority of calls to the Quay.io API require a token with one or more scopes, which specify the permissions granted to the token to perform some work.
All calls to the Quay.io REST API must occur via a token created for a defined Application.
A new application can be created under an Organization in the Applications tab.
All calls to API methods which are not read-only and public require the use of an OAuth 2 access token, specified via a header. Access tokens for the Quay.io are long-lived and do not expire.
If your application will be used by various users of Quay.io or the Enterprise Registry, then generating a token requires running the OAuth 2 web flow (See Google’s example).
To do so, your application must make a request like so (replace
quay.io with your domain for Enterprise Registry):
GET{your redirect URI}&realm=realm&client_id={application client ID}&scope={comma delineated set of scopes to request}
Once the user has approved the permissions for your application, the browser will load the specified redirect URI with the created access token appended to it.
This access token can then be saved to make API requests.
Note: The generated token will be created on behalf of the currently logged in user.
If the API call will be conducted by an internal application, an access token can be generated simply by clicking on the Generate Token tab under the application, choosing scopes, and then clicking the Generate Access Token button. After conducting the OAuth flow for the current account, the newly generated token will be displayed.
API requests are made by executing the documented HTTP verb (GET, POST, PUT or DELETE) against the API endpoint URL, with an Authorization header containing the access token and (if necessary) body content in JSON form.
Authorization: Bearer AccessTokenGoesHere GET
Authorization: Bearer AccessTokenGoesHere PUT { "role": "read" }
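The same two requests can be issued from a script. The sketch below uses Python with the requests library; the access token and repository path are placeholders, and the endpoint paths are meant to mirror the examples above (the authoritative list of endpoints is in the API Explorer).

import requests

API_ROOT = "https://quay.io/api/v1"   # or http(s)://yourdomain/api/v1 for Enterprise
ACCESS_TOKEN = "AccessTokenGoesHere"  # placeholder token
HEADERS = {"Authorization": "Bearer " + ACCESS_TOKEN}

# Read a repository (GET request with the Authorization header).
repo = requests.get(API_ROOT + "/repository/yournamespace/yourreponame", headers=HEADERS)
print(repo.status_code, repo.json())

# Grant a user read permission on the repository (PUT request with a JSON body).
perm = requests.put(
    API_ROOT + "/repository/yournamespace/yourreponame/permissions/user/someusername",
    headers=HEADERS,
    json={"role": "read"},
)
print(perm.status_code)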
OAuth 2 access tokens granted by Quay.io applications can invoke
docker pull and
docker push on behalf of the user if they have the
repo:read and
repo:write scopes (respectively).
To login, the
docker login command can be used with the username
$oauthtoken and the access token as the password:
$ docker login quay.io
Username: $oauthtoken
Password: ThisIsTheAccessToken
Email: [email protected] | http://docs.quay.io/api/ | 2020-11-23T22:55:01 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.quay.io
There are two different ways to retrieve data from an RDBMS in an expression,
a!queryEntity() and
a!queryRecord. This page describes some of the properties of these queries and details the differences between them.
These queries can be called from any expression, so they can be reused across interface expressions, process models, and record type definitions. Parameters can be used to filter or page the array of data returned.
An a!queryEntity() is an expression function that takes a Data Store Entity and a Query as parameters and executes the query against that Entity. You can easily create simple entity queries using the Query Editor.
An a!queryRecord() is an expression function that takes a process-backed or entity-backed Record Type and a Query as parameters and executes the query against that Record Type.
See the following table for comparisons between
a!queryEntity() and
a!queryRecord().
See the following table for information on the filter operators that can be used with
a!queryEntity() and
a!queryRecord.
Case-sensitivity or insensitivity for text comparisons is determined by your RDBMS settings - not a setting configured in Appian.
You can limit the number of results returned by your query by passing an optional paging parameter of type PagingInfo. This allows you to specify how many items should be returned and the sort of those items.
When a paging parameter is specified, a value of type DataSubset is returned. Use the a!pagingInfo() function to construct the paging parameter.
a!queryEntity
The following rules apply to a!queryEntity when applying multiple sorts:
The queries you define are limited in how long they wait to return results (10 seconds). This setting can be configured by a system administrator.
There is also a limit to the amount of memory that can be returned by a single query. This is set by the
conf.data.query.memory.limit value. The system will display an error with code
APNX-1-4164-024 if this error is reached by a query. Use the paging parameter to return less data (or return data in batches) to avoid the limit.
See also: Configuring Query Limits
It is important that you tailor your queries to provide only desired information, especially when substantial growth in the data-sets queried is expected.
When designing queries, keep the following considerations in mind:
By default, queries do not return a constrained subset of matching data records, unless you configure query conditions and filters and call the query using the PagingInfo parameter.
The following improper design example illustrates how not to effectively implement queries.
Given a form that also displays the value of a previous item using a query, the following configuration might be used:
It is possible to implement a query with an additional rule input to function as a boundary for the attribute that you're filtering on.
For example, instead of id < currentId, you could implement two query conditions:
Id < currentId
— and —
Id > minimumBoundary
The query would then use two rule inputs GetPrior(currentId, minimumBoundary). This allows you to nest the query in one or more if statements within your expression.
For example:
On This Page | https://docs.appian.com/suite/help/19.4/Querying_Data_From_an_RDBMS.html | 2020-11-23T22:31:15 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.appian.com |
Note: All pages below are subject to having relevant Roles and Permissions.
Hover on the Config heading on the Menu select Setup and click on ParentPay Integration Settings.
To set up the ParentPay Integration, please follow the steps below. If you get a message saying that the ParentPay Integration is not licensed, please contact [email protected].
This will open the Parentpay Integration Settings page.
Enter the School ID from ParentPay, which you will find in the top right-hand corner of the ParentPay homepage, into the Supplier ID field.
You will need to create an Admin Account in ParentPay for Bromcom. Add the Account Credentials to the Username and Password fields.
Click Save when finished.
You can then either manually upload or set a schedule to run once a day at a time you want. Just toggle the Schedule Enabled radio button and set the time you want it to run. | https://docs.bromcom.com/knowledge-base/how-to-manage-parentpay-integration-settings/ | 2020-11-23T21:57:11 | CC-MAIN-2020-50 | 1606141168074.3 | [array(['https://docs.bromcom.com/wp-content/uploads/2020/07/image-53.png',
None], dtype=object)
array(['https://docs.bromcom.com/wp-content/uploads/2020/07/image-54.png',
None], dtype=object) ] | docs.bromcom.com |
Provision machines for data science research
Data science research frequently requires high-powered machines for limited periods of time.
Machines for this purpose are created in the
govuk-tools AWS account
using the
app-data-science Terraform project.
Note
The machines provisioned using this method are usually high powered and costly. Therefore, please ensure they are destroyed once the research has been completed.
Autoscaling
Machines created using this project are created in autoscaling groups that ensure at least 1 instance of each machine is running at all times.
Once research is finished, machines should be destroyed by removing them from the project and re-deploying it. Alternatively, if the machine will be required again in a short period of time, the autoscaling group can be manually set to zero instances in the meantime.
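For the scale-to-zero option, one way to do it is sketched below with boto3; the autoscaling group name is a placeholder, and in practice the same change can be made through the AWS console or by adjusting the Terraform project.

import boto3

autoscaling = boto3.client("autoscaling")

# Temporarily stop the instance without removing the machine from the project.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-data-science-example",  # placeholder group name
    MinSize=0,
    MaxSize=1,  # keep MaxSize as configured for the group (1 here is an assumption)
    DesiredCapacity=0,
)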
SSH access
All machines have public IP addresses and can be accessed via SSH. The userdata for each machine is set up to add SSH keys for the people who will be accessing them.
This userdata should be updated when people need to be added or removed. Existing machines will then need to be re-deployed (most quickly by destroying the existing instances and allowing the autoscaling group to re-create them).
SSH access to data science machines is not controlled by Puppet or AWS access. | https://docs.publishing.service.gov.uk/manual/provision-machines-for-data-science-research.html | 2020-11-23T22:38:34 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.publishing.service.gov.uk |
1.0.0.RC1
This project provides support for orchestrating long-running (streaming) and short-lived (task/batch) data microservices to Marathon on Mesos. The Data Flow Server currently runs as a standalone application outside the Mesos cluster. A future version will provide support for the Data Flow Server itself to run on Mesos.
Deploy a Mesos and Marathon cluster.
The Mesosphere getting started guide provides a number of options for you to deploy a cluster. Many of the options listed there need some additional work to get going. For example, many Vagrant provisioned VMs are using deprecated versions of the Docker client. We have included some brief instructions for setting up a single-node cluster with Vagrant in Appendix A, Test Cluster. In addition to this we have also used the Playa Mesos Vagrant setup. For those that want to setup a distributed cluster quickly, there is also an option to spin up a cluster on AWS using Mesosphere’s Datacenter Operation System on Amazon Web Services.
The rest of this getting started guide assumes that you have a working Mesos and Marathon cluster and know the Marathon endpoint URL.
Create a Rabbit MQ service on the Mesos cluster.
The
rabbitmq service will be used for messaging between applications in the stream. There is a sample application JSON file for Rabbit MQ in the
spring-cloud-dataflow-server-mesos repository that you can use as a starting point. The service discovery mechanism is currently disabled so you need to look up the host and port to use for the connection. Depending on how large your cluster is, you may want to tweak the CPU and/or memory values.
Using the above JSON file and an Mesos and Marathon cluster installed you can deploy a Rabbit MQ application instance by issuing the following command
curl -X POST -d @rabbitmq.json -H "Content-type: application/json"
Note the
@ symbol to reference a file and that we are using the Marathon endpoint URL of
192.168.33.10:8080. Your endpoint might be different based on the configuration used for your installation of Mesos and Marathon. Using the Marathon and Mesos UIs you can verify that
rabbitmq service is running on the cluster.
Download the Spring Cloud Data Flow Server for Mesos and Marathon.
$ wget
Using the Marathon GUI, look up the host and port for the
rabbitmq application. In our case it was
192.168.33.10:31916. For the deployed apps to be able to connect to Rabbit MQ we need to provide the following property when we start the server:
--spring.cloud.deployer.mesos.marathon.environmentVariables='SPRING_RABBITMQ_HOST=192.168.33.10,SPRING_RABBITMQ_PORT=31916'
Now, run the Spring Cloud Data Flow Server for Mesos and Marathon passing in this host/port configuration.
$ java -jar spring-cloud-dataflow-server-mesos-1.0.0.RC1.jar --spring.cloud.deployer.mesos.marathon.apiEndpoint=http://192.168.33.10:8080 --spring.cloud.deployer.mesos.marathon.memory=768 --spring.cloud.deployer.mesos.marathon.environmentVariables='SPRING_RABBITMQ_HOST=192.168.33.10,SPRING_RABBITMQ_PORT=31916'
You can pass in properties to set default values for memory and cpu resource request. For example
--spring.cloud.deployer.mesos.marathon.memory=768 will by default allocate additional memory for the application vs. the default value of 512. You can see all the available options in the MarathonAppDeployerProperties.java file.
Download and run the Spring Cloud Data Flow shell.
$ wget $ java -jar spring-cloud-dataflow-shell-1.0.0.RC1.jar
Deploy a simple stream in the shell
dataflow:>stream create --name ticktock --definition "time | log" --deploy
In the Mesos UI you can then look at the logs for the log sink.
2016-04-26 18:13:03.001 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http) 2016-04-26 18:13:03.004 INFO 1 --- [ main] o.s.c.s.a.l.s.r.LogSinkRabbitApplication : Started LogSinkRabbitApplication in 7.766 seconds (JVM running for 8.24) 2016-04-26 18:13:54.443 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet' 2016-04-26 18:13:54.445 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started 2016-04-26 18:13:54.459 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 14 ms 2016-04-26 18:14:09.088 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:09 2016-04-26 18:14:10.077 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:10 2016-04-26 18:14:11.080 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:11 2016-04-26 18:14:12.083 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:12 2016-04-26 18:14:13.090 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:13 2016-04-26 18:14:14.091 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:14 2016-04-26 18:14:15.093 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:15 2016-04-26 18:14:16.095 INFO 1 --- [time.ticktock-1] log.sink : 04/26/16 18:14:16
Destroy the stream
dataflow:>stream destroy --name ticktock.
Here are brief setup instructions for setting up a local Vagrant single-node cluster. The Mesos endpoint will be 192.168.33.10:5050 and the Marathon endpoint will be 192.168.33.10:8080.
First create the
Vagrant file with necessary customizations:
$ vi Vagrantfile
Add the following content and save the file:
# -*- mode: ruby -*- # vi: set ft=ruby : Vagrant.configure(2) do |config| config.vm.box = "ubuntu/trusty64" config.vm.network "private_network", ip: "192.168.33.10" config.vm.hostname = "mesos" config.vm.provider "virtualbox" do |vb| vb.memory = "4096" vb.cpus = 4 end end
Next, update the box to the latest version and start it:
$ vagrant box update $ vagrant up
We can now ssh to the instance to install the necessary bits:
$ vagrant ssh
The rest of these instructions are run from within this ssh shell.
Refresh the apt repo and install Docker:
[email protected]:~$ sudo apt-get -y update [email protected]:~$ wget -qO- | sh [email protected]:~$ sudo usermod -aG docker vagrant
Install needed repos:
[email protected]:~$ echo "deb(lsb_release -is | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/mesosphere.list [email protected]:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF [email protected]:~$ sudo add-apt-repository ppa:webupd8team/java -y [email protected]:~$ sudo apt-get -y update
Install Java:
Install Mesos and Marathon:
Add Docker as a containerizer:
Set the IP address as the hostname used for the slave:
[email protected]:~$ echo $(/sbin/ifconfig eth1 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}') | sudo tee /etc/mesos-slave/hostname
Reboot the server). | https://docs.spring.io/spring-cloud-dataflow-server-mesos/docs/1.0.0.RC1/reference/htmlsingle/ | 2020-11-23T22:53:17 | CC-MAIN-2020-50 | 1606141168074.3 | [] | docs.spring.io |
Mutation¶
To maintain a level of variety in a population and to force the evolutionary algorithm to explore more of the search space, new individuals are mutated immediately after their creation during the crossover process.
The mutation process in EDO is not quite as simple as in a traditional genetic algorithm. This is due to the representation of individuals. An individual is mutated in the following way:
- Mutate the number of rows and columns by adding and/or removing a line from each axis with the same probability. Lines are removed at random. Rows are added by sampling a new value from each current column distribution and adding them to the bottom of the dataset. Columns are added in the same way as in the creation process. Note that the number of rows and columns will not mutate beyond the bounds passed in
col_limits.
- With the dimensions of the dataset mutated, each value in the dataset is mutated using the same mutation probability. A value is mutated by replacing it with a single value sampled from the distribution associated with its column.
Example¶
Consider the following mutation of an individual:
>>> import numpy as np >>> from edo import Family >>> from edo.distributions import Poisson >>> from edo.individual import create_individual >>> from edo.operators import mutation >>> >>> row_limits, col_limits = [3, 5], [2, 5] >>> families = [Family(Poisson)] >>> state = np.random.RandomState(0) >>> >>> individual = create_individual( ... row_limits, col_limits, families, weights=None, random_state=state ... )
The individual looks like this:
>>> individual.dataframe 0 1 2 3 4 0 12 8 4 1 7 1 6 6 5 1 5 2 8 7 7 1 3 >>> individual.metadata [Poisson(lam=7.15), Poisson(lam=7.74), Poisson(lam=6.53), Poisson(lam=2.83), Poisson(lam=6.92)]
Now we can mutate this individual after setting the mutation probability. This is deliberately large to make for a substantial mutation:
>>> mutation_prob = 0.7 >>> mutant = mutation(individual, mutation_prob, row_limits, col_limits, families)
This gives the following individual:
>>> mutant.dataframe 0 1 2 3 0 8 4 1 5 1 11 3 4 5 2 9 7 3 3 >>> mutant.metadata [Poisson(lam=7.74), Poisson(lam=6.53), Poisson(lam=2.83), Poisson(lam=6.92)] | https://edo.readthedocs.io/en/latest/discussion/operators/mutation.html | 2020-11-23T22:16:17 | CC-MAIN-2020-50 | 1606141168074.3 | [] | edo.readthedocs.io |
Meta objects are also very efficient for ray tracing.
Note
Meta objects have a slightly different behavior in Object Mode.
Visualization¶
In Object Mode, the calculated mesh is shown, along with a black “selection ring” (becoming pink when selected).
In Edit Mode (Fig. Meta Ball example.), a meta is drawn S transformation, having the green circle highlighted is equivalent to having the red one. | https://docs.blender.org/manual/en/latest/modeling/metas/introduction.html | 2019-01-16T06:12:01 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.blender.org |
moreOrLessEquals function
Asserts that two doubles are equal, within some tolerated error.
Two values are considered equal if the difference between them is within
1e-10.
Implementation
Matcher moreOrLessEquals(double value, { double epsilon = 1e-10 }) { return _MoreOrLessEquals(value, epsilon); } | https://docs.flutter.io/flutter/flutter_test/moreOrLessEquals.html | 2019-01-16T06:18:18 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.flutter.io |
When multiple graphs are opened in the Maltego client, they will each have their own tab above the main graph window. Graphs that have not been saved yet will be displayed as New Graph (number). Once a graph is saved, the display name on the name tag will change to the name under which it was saved. The graph name will be written in bold when changes are made to graph that have not been saved yet.
The first tab is always the Home screen that includes the Start Page and Transform Hub:
Right-clicking on a graph’s tab will open the dropdown menu described in the image below:
The Shift Left and Shift Right buttons from the drop down menu can be used to change tab ordering. The other items not described in the image above are used to make a graph tab into its own floating window however these options are rarely used.
Graph tabs can also be re-arranged by clicking and dragging the tab to another position:
Tab Bar Buttons
Navigating the display is always an issue of being able to see only what you want to see. For this reason, the Maltego client has been made very versatile and adaptable. As discussed previously graphs are maintained in tabs which can be flipped through. The next section details some of the options available display information windows. On the top right-hand side of the graph the following options are available:
When there are more tabs than can be displayed, the additional tabs will not be shown. The first two buttons in the image above allow you to scroll left and right through the tabs that are not shown.
The third button in the tab bar opens a drop down that shows all the graphs that are currently open. The arrow points to the graph that is currently in view.
The last button in the tab bar will maximize the graph window and minimize all other windows in the Maltego client as shown in the image below. Double clicking the graph tab will also maximize the graph window.
Clicking the button again will restore the windows to their previous state. | https://docs.maltego.com/support/solutions/articles/15000009614-tab-options | 2019-01-16T05:34:05 | CC-MAIN-2019-04 | 1547583656897.10 | [array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838206/original/RfFqf-sxTgVAPqb0It2D7ejkC3d8mPf-zQ.png?1526914350',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838213/original/347vkIqSgjeeJ70tqXXElywAJM6FoMx3Hw.png?1526914383',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838226/original/Tx9YFICEeM4gvOMJSHO7022kybbfsi9v0Q.png?1526914414',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838232/original/PyXCQ3RE4G0u7qfrqY9ekjpJv3wkLdBwiQ.png?1526914466',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838240/original/1HfVAeUO03MVNNR0eXg-qSxg53tBOOsiNQ.png?1526914505',
None], dtype=object)
array(['https://s3-eu-central-1.amazonaws.com/euc-cdn.freshdesk.com/data/helpdesk/attachments/production/15003838257/original/YgraBF49bw2B3AMNibe8p-dCtS8ufXSpdg.png?1526914552',
None], dtype=object) ] | docs.maltego.com |
bool OEMakeBoxMolecule(OEChem::OEMolBase& mol, const OEBoxBase& box)
Creates a molecule (mol) out of box. mol will have 8 carbon atoms, one at each corner of the box and a single bond between appropriate atoms.
Note that this molecule is in no way chemically valid. The purpose of this method is to create a molecule representing box that can be viewed in a molecular visualizer. | https://docs.eyesopen.com/toolkits/cpp/dockingtk/OEDockingFunctions/OEMakeBoxMolecule.html | 2019-01-16T07:08:05 | CC-MAIN-2019-04 | 1547583656897.10 | [] | docs.eyesopen.com |
Use a custom MDX query for a PerformancePoint KPI
Applies to: SharePoint Server 2010 Enterprise
Topic last modified: 2015-03-09.
Adding custom MDX queries to PerformancePoint KPIs.
Important
Before you perform the procedures in this article, make sure that you have an existing KPI to configure. The KPI must use data that is stored in Analysis Services.
Add an MDX query to a KPI’s Actual or Target value.
Tip
To view examples of MDX queries that you can use, see Extend PerformancePoint dashboards by using MDX queries.
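For instance, the Actual or Target value of a KPI might be calculated with a tuple expression such as the following; the measure and dimension member names are illustrative only and must be replaced with names from your own Analysis Services cube:

([Measures].[Sales Amount], [Date].[Calendar Year].&[2010])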
See Also
Concepts
Create and configure a KPI by using Dashboard Designer
Extend a PerformancePoint KPI
Create a scorecard by using Dashboard Designer
Extend PerformancePoint dashboards by using MDX queries
JSON objects do not preserve key order like Godot dictionaries, so you should not rely on keys being in a certain order if a dictionary is constructed from JSON. In contrast, JSON arrays retain the order of their elements:
var p = JSON.parse('["hello", "world", "!"]')
if typeof(p.result) == TYPE_ARRAY:
    print(p.result[0]) # prints 'hello'
else:
    print("unexpected results")
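A similar check applies when the parsed result is a JSON object, which becomes a Godot Dictionary whose iteration order is not guaranteed to match the source text; the keys below are illustrative:

var p = JSON.parse('{"greeting": "hello", "count": 3}')
if typeof(p.result) == TYPE_DICTIONARY:
    print(p.result["greeting"]) # prints 'hello'
else:
    print("unexpected results")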
How to: Create a New Purpose
Applies To: Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012
You can extend the Microsoft Dynamics AX organization model to create a custom purpose. A purpose defines how the organization hierarchy is used in application scenarios. For example, you can assign a hierarchy to a purpose, such as the Expenditure internal control purpose, to define policies for expense reports. The override and defaulting rules for policies on expense reports are based on the hierarchies that have the purpose of Expenditure internal control.
An organization can belong to several hierarchies. By assigning a hierarchy to a purpose you identify to the system how to find the correct hierarchy for applying policies.
In this topic you create a purpose named MyPurpose.
Create a New Base Enumerated Value
You create a new base enumerated value for your custom purpose by following these steps:
Create a project named CustomPurpose. The project can be either private or shared.
For information about how to create a project, see How to: Create a MorphX Development Project.
Drag the AOT > Data Dictionary > Base Enums > HierarchyPurpose node onto the project node.
Add an element named MyPurpose to the HierarchyPurpose enum. Under your project node, right-click HierarchyPurpose, and then click New Element. In the Properties window, set the properties for the new element (at minimum, set the Name property to MyPurpose).
Create a Method for a New Purpose
In this section you will create a method named addMyPurpose that resembles methods created for other types of purposes. For example, the addSecurityPurpose method was created for the security purpose.
Locate OMHierarchyPurposeTableClass in the AOT > Classes node.
Drag the AOT > Classes > OMHierarchyPurposeTableClass node onto the project node.
Duplicate the addSecurityPurpose method by right-clicking the addSecurityPurpose node, and then clicking Duplicate. The AOT will create the CopyOfaddSecurityPurpose node.
Replace the code for the CopyOfaddSecurityPurpose method with the following code. This renames the method.
private static void addMyPurpose()
{
    OMHierPurposeOrgTypeMap omHPOTP;

    select RecId from omHPOTP
        where omHPOTP.HierarchyPurpose == HierarchyPurpose::MyPurpose;

    if (omHPOTP.RecId <= 0)
    {
        omHPOTP.clear();
        omHPOTP.HierarchyPurpose = HierarchyPurpose::MyPurpose;
        omHPOTP.OperatingUnitType = OMOperatingUnitType::OMAnyOU;
        omHPOTP.IsLegalEntityAllowed = NoYes::No;
        omHPOTP.write();

        omHPOTP.clear();
        omHPOTP.HierarchyPurpose = HierarchyPurpose::MyPurpose;
        omHPOTP.OperatingUnitType = 0;
        omHPOTP.IsLegalEntityAllowed = NoYes::Yes;
        omHPOTP.write();
    }
}
The preceding code is similar to the code in most of the methods on the OMHierarchyPurposeTableClass class. The code was changed in only the places where the HierarchyPurpose enum values are referenced. In the code you can see three occurrences of HierarchyPurpose::MyPurpose.
In the OMHierarchyPurposeTableClass class, update the populateHierarchyPurposeTable method to call the new method that you created. The following line of code should be added.
OMHierarchyPurposeTableClass::addMyPurpose();
The following code shows your modification to the populateHierarchyPurposeTable method, near the end of the method.
public static void populateHierarchyPurposeTable()
{
    OMHierPurposeOrgTypeMap omHPOTP;

    if (omHPOTP.RecId <= 0)
    {
        ttsbegin;
        OMHierarchyPurposeTableClass::AddOrganizationChartPurpose();
        OMHierarchyPurposeTableClass::AddInvoiceControlPurpose();
        OMHierarchyPurposeTableClass::AddExpenseControlPurpose();
        OMHierarchyPurposeTableClass::AddPurchaseControlPurpose();
        OMHierarchyPurposeTableClass::AddSigningLimitsPurpose();
        OMHierarchyPurposeTableClass::AddAuditInternalControlPurpose();
        OMHierarchyPurposeTableClass::AddCentralizedPaymentPurpose();
        OMHierarchyPurposeTableClass::addSecurityPurpose();
        // We add the following line.
        OMHierarchyPurposeTableClass::addMyPurpose();
        ttscommit;
    }
}
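Optionally, you can verify that the rows for the new purpose were written to the OMHierPurposeOrgTypeMap table by running a small job such as the following sketch (the job name is arbitrary):

static void CheckMyPurpose(Args _args)
{
    OMHierPurposeOrgTypeMap omHPOTP;

    // Count the rows created for the new purpose by addMyPurpose.
    select count(RecId) from omHPOTP
        where omHPOTP.HierarchyPurpose == HierarchyPurpose::MyPurpose;

    info(strFmt("OMHierPurposeOrgTypeMap rows for MyPurpose: %1", omHPOTP.RecId));
}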
Review the Project
Now you have created all the items essential for this scenario, as shown in the following image.
The project that you have created
Use Your New Purpose Type
In the Workspace window of your Microsoft Dynamics AX client, select DAT > Organization administration > Area page > Setup > Organization > Organization hierarchy purposes. For information about how to use the Organization hierarchy purposes form, see Create or modify an organization hierarchy.
The form displays as shown in the following image.
The new purpose that you create
By using the toolbar on this form you can assign, remove, and view hierarchies for your new purpose.
See also
Extending the Organization Model
What's new: Company and organization framework
Create or modify an organization hierarchy
When you create storage policy protection groups, you must first create storage policies and ensure that your environment meets certain prerequisites.
Prerequisites
Create datastore tags and assign them to datastores to associate with a storage policy:
If your environment does not use Enhanced Linked Mode, create tag categories and tags and assign them to the datastores to protect on the protected site. Create tag categories and tags and assign them to the datastores to which to recover virtual machines on the recovery site. The tag and category names must be identical on both sites.
If your environment uses Enhanced Linked Mode, create tag categories and tags only on the protected site. The tags are replicated to other vCenter Server instances in Enhanced Linked Mode environments.
Create virtual machine storage polices in vCenter Server on both sites, that include the tags that you assigned to the datastores to protect. Create virtual machine policies on both sites even if your environment uses Enhanced Linked Mode. The storage policies can have different names on each site.
Associate virtual machines to protect with the appropriate storage policy on the protected site. You must associate all the virtual machine's disks with the same storage policy.
Configure array-based replication of the datastores from the protected site to the recovery site by using the replication technology that your array vendor provides.
Configure inventory mappings in Site Recovery Manager. If you use storage policy protection groups and you do not configure mappings, planned migration or disaster recovery fail and Site Recovery Manager creates temporary placeholder mappings.
When Site Recovery Manager Server starts, Site Recovery Manager queries the storage policy-based management and tag manager services in vCenter Server to find virtual machines that are associated with a storage policy. These services and vCenter Server must be running when you start or restart Site Recovery Manager Server. If they are not running, Site Recovery Manager Server does not start.
For information about how to create storage policies, see Virtual Machine Storage Policies in the VMware vSphere ESXi and vCenter Server 6.5 Documentation.
For information about how to create inventory mappings, see Configure Inventory Mappings.
For information about temporary placeholder mappings, see Inventory Mappings for Storage Policy Protection Groups.
For information about known limitations of storage policy protection groups, see Limitations of Storage Policy Protection Groups.
Enabling Call Home to collect diagnostics
You use the Call Home function to collect diagnostic information that you can send to BMC Support for troubleshooting purposes. When you enable the Call Home functionality, it collects version information for your Remedy products and the BMC Remedy AR System plug-in server configuration.
You can also use the Call Home functionality to review and analyze the sequence of your upgrade. This information is used for one of the following purposes:
- Identify the exact stage where you might have skipped a step in the upgrade process
- Trace the upgrade sequence you followed to locate incorrect steps that might have caused the upgrade to fail
The AR System Centralized Configuration Setting and SHARE:Application Properties forms contain the following information:
- AR System Configuration Component Setting: contains information related to AR plug-ins and their properties. For example, AR System email messages, Approval Plug-in, and so on.
- SHARE:application_Properties: contains information related to the products installed on the server along with their versions and language information.
Note
BMC Support retains the collected information for only 90 days from the day the information was collected.
To enable Call Home functionality
- In the browser, type http://<midtier>:<port>/arsys/ and log on to your BMC Remedy AR System server.
- On the Applications flyout menu, click AR System Administration > BMC AR System Administration Console > System > General > Server Information.
- Click the Call Home tab.
- Select I am willing to share the following data with BMC.
- Enter your BMC Support credentials.
- Select the desired form name from the table.
- In the Opt-in column of the table, select the Yes check box.
- Click Apply and then click OK.
To resubmit the information
- Open the AR System Upgrade Tracker form.
You can access the form at the following URL:
http://<midtier>:<port>/arsys/forms/<AR serverName>/AR System Upgrade Tracker
- Click Search.
- In the returnRequestId field, delete the ID.
- Save the form.
The information is resent to BMC after two hours.
Category:Installing Providence
From CollectiveAccess Documentation
Creating a custom CollectiveAccess implementation starts with the installation profile, a file that defines lists, fields, user interfaces, relationship types and more for a Providence setup. Most users tailor an out-of-the-box standard, rather than starting a profile from scratch. The documentation in this category explains the components of the profile model and syntax. It is possible to configure a profile using graphical tools (after installing a standard profile) but it is more time consuming than working with the XML file (see Configuration Through the User Interface). Most implementations also require additional customization of all or some of the main configuration files.
Subcategories
This category has only the following subcategory.
A
- Attributes (empty)
Windows.Devices.Sensors Namespace
Classes
Interfaces
Enums
For some samples that demonstrate using various sensors, see Windows Sensor Samples.
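As a brief illustration, the following C# sketch reads data from the default accelerometer, if one is present; the report interval and output format are arbitrary choices:

using System;
using Windows.Devices.Sensors;

public static class SensorDemo
{
    public static void Start()
    {
        Accelerometer accelerometer = Accelerometer.GetDefault();
        if (accelerometer == null)
        {
            return; // No accelerometer is available on this device.
        }

        // Respect the sensor's minimum supported report interval.
        accelerometer.ReportInterval = Math.Max(accelerometer.MinimumReportInterval, 16);

        accelerometer.ReadingChanged += (sender, args) =>
        {
            AccelerometerReading reading = args.Reading;
            System.Diagnostics.Debug.WriteLine(
                $"X={reading.AccelerationX:F2} Y={reading.AccelerationY:F2} Z={reading.AccelerationZ:F2}");
        };
    }
}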
See also
- Sensor data and display orientation
- Windows Sensor Samples
- Background sensors sample (Windows 10)
- Compass sample (Windows 10)
- Inclinometer sample (Windows 10)
- Gyrometer sample (Windows 10)
- Light sensor sample (Windows 10)
- Orientation sensor sample (Windows 10)
- Accelerometer sample (Windows 10)
- Video stabilization sample
- Activity detection sensor sample
- Altimeter sample
- Barometer sample
- Magnetometer sample
- Pedometer sample
- Proximity sensor sample
- Relative inclinometer sample
- Simple orientation sensor sample
- Version adaptive code sample
Container-native Virtualization is an add-on to OpenShift Container Platform that allows virtual machine workloads to run and be managed alongside container workloads. You can create virtual machines from disk images imported using the containerized data importer (CDI) controller, or from scratch within OpenShift Container Platform.
Container-native Virtualization introduces two new objects to OpenShift Container Platform:
Virtual Machine: The virtual machine in OpenShift Container Platform
Virtual Machine Instance: A running instance of the virtual machine
With the Container-native Virtualization add-on, virtual machines run in pods and have the same network and storage capabilities as standard pods.
Existing virtual machine disks are imported into persistent volumes (PVs), which are made accessible to Container-native Virtualization virtual machines using persistent volume claims (PVCs). In OpenShift Container Platform, the virtual machine object can be modified or replaced as needed, without affecting the persistent data stored on the PV.
Operators power the installation of KubeVirt, Containerized Data Importer, and the web-ui in Container-native Virtualization 1.4.
A new version of the KubeVirt API is included in Container-native Virtualization 1.4. Several important changes are reflected in the latest configuration file templates.
The apiVersion has been updated from kubevirt.io/v1alpha2 to kubevirt.io/v1alpha3.
The volumeName attribute no longer exists. Ensure that each disk name matches the corresponding volume name in all configuration files.
All instances of registryDisk must be updated to containerDisk in configuration files.
The runc package version runc-1.0.0-54 contained a bug that caused the virt-launcher to crash if FIPS was disabled. The version of runc containing the fix is now shipped with Red Hat Enterprise Linux 7 Extras. (BZ#1650512)
In the Create Virtual Machine Wizard, using the PXE source option with the Start virtual machine on creation option resulted in the boot order not changing after stopping and starting the virtual machine. This issue has been resolved. (BZ#1648245) (BZ#1647447)
CPU Manager, a feature that provides CPU pinning in OpenShift Container Platform, is currently disabled in Container-native Virtualization due to performance regressions. CPU Manager does not consider the physical CPU topology, resulting in sub-optimal pinning when hyper-threading is enabled. (BZ#1667854)
Red Hat OpenShift Container Storage versions before 3.11.1 are not compatible with Container-native Virtualization. In the incompatible versions, Gluster nodes do not deploy with CRI-O as the container runtime. (BZ#1651270)
When installing Container-native Virtualization 1.4, the ansible-playbook command fails if the multus image and its underlying layers are not pulled within the timeout period. As a workaround, wait a few minutes and try the command again. (BZ#1664274)
The virtctl image-upload command fails if the --uploadproxy-url value ends with a trailing slash. If you use a custom URL, ensure that it does not end with a trailing slash before running the command. (BZ#1660888)
The limit for compute node devices is currently 110. This limit cannot be configured, but scaling up to more than 110 devices will be supported in a future release. (BZ#1673438)
When uploading or importing a disk image to a PVC, the space allocated for the PVC must be at least 2 * actual image size + virtual image size. Otherwise, the virtual machine does not boot successfully. (BZ#1676824)
If you create a new DataVolume while a PVC already exists with the same name, the DataVolume enters an unrecoverable error state. If this DataVolume is associated with a virtual machine, or if it was created with the dataVolumeTemplates section of a virtual machine configuration file, then the virtual machine will fail to start. In these cases, the underlying DataVolume error will not be propagated to the virtual machine. (BZ#1669163)
If a CDI import into a PVC fails, a request to delete the PVC might not work immediately. Instead, the importer pod gets stuck in a CrashLoopBackOff state, causing the PVC to enter a Terminating phase. To resolve this issue, find the importer pod associated with the PVC and delete it. The PVC will then be deleted. (BZ#1673683)
If you use virtctl image-upload to upload a QCOW2 image to a PVC, the operation might fail with the error Unexpected return value 500, resulting in an unusable PVC. This can be caused by a bug where conversion of certain QCOW2 images during an upload operation exceeds predefined process limits. (BZ#1679134)
To confirm that the failure was caused by this bug, check the associated uploadserver pod logs for a message like this:
1 uploadserver.go:203] Saving stream failed: data stream copy failed: Local qcow to raw conversion failed: could not convert local qcow2 image to raw: qemu-img execution failed: signal: killed
As a workaround, locally convert the file to compressed raw format and then upload the result:
$ qemu-img convert -f qcow2 -O raw <failing-image.qcow2> image.raw
$ gzip image.raw
$ virtctl image-upload ... image.raw.gz
When a virtual machine provisioned from a URL source is started for the first time, the virtual machine will be in the Importing state while Container-native Virtualization imports the container from the endpoint URL. Restarting a virtual machine while it is in the Importing state results in an error. (BZ#1673921)
If the kubelet on a node crashes or restarts, this causes the kubelet to incorrectly report 0 KVM devices. Virtual machines are not properly scheduled on affected nodes.
Verify the number of devices that the kubelet reports by running:
$ oc get node $NODE | grep devices.kubevirt
The output on an affected node shows devices.kubevirt.io/kvm: 0. (BZ#1681175)
If a virtual machine is connected to the pod network by using bridge mode, the virtual machine might stop if the kubelet gets restarted. (BZ#1685118)
A custom saga finder implements IFindSagas<TSagaData>.Using<TMessage> and returns the saga data for an incoming message from its FindBy method:

class MySagaFinder :
    IFindSagas<MySagaData>.Using<MyMessage>
{
    public Task<MySagaData> FindBy(MyMessage message, SynchronizedStorageSession storageSession, ReadOnlyContextBag context)
    {
        // SynchronizedStorageSession will have a persistence specific extension method
        // For example GetDbSession is a stub extension method
        var dbSession = storageSession.GetDbSession();
        return dbSession.GetSagaFromDB(message.SomeId, message.SomeData);
        // If a saga can't be found Task.FromResult(null) should be returned
    }
}
Please follow this guide to learn how to migrate to SingleStore tools.
SingleStore Managed Service does not support this command.
Available since MemSQL Ops version 4.0.31.
Create a backup of a database in the cluster.
usage: memsql-ops database-backup [--settings-file SETTINGS_FILE] [--async] -D DATABASE

Create a backup of a database in the cluster.

  -D DATABASE, --database DATABASE  The name of the database to back up.
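For example, the following commands back up a database named my_db, first waiting for the backup to finish and then running the same operation asynchronously (substitute your own database name):

memsql-ops database-backup -D my_db
memsql-ops database-backup --async -D my_db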
JChem supports the search of reactions with substructure or reaction queries. (In addition to the reaction search features described in this section, all query features of non-reaction search can be used.)
Besides reaction structural searches, reaction similarity calculation and search are also available. For more details, see the reaction similarity documentation.
Structures to the left of the reaction arrow are reactants (starting materials), structures to the right of the reaction arrow are products, and those molecules drawn just above or below the arrow are agents (ingredients). Corresponding atoms containing changing bonds (created, destroyed or modified) are marked with map numbers both in the reactants and in the products (Table 1.).
Searching for a substructure in a reaction equation does not differ from the classical substructure search process described above. Any matching in any reaction components (reactants, agents, products) is a hit. This is not the case when the query itself is a reaction.
Table 1. Searching for simple query structure in reaction
Reaction queries are not necessarily complete reactions. Reaction queries sometimes contain reactants only. In this case, the search engine retrieves reactions containing reactants matching to the given structure. When just a product is specified in the query, those reactions will be returned which contain matching products (Table 2).
Table 2. Searching for query structure in reaction components
See also SMARTS component level grouping.
When a reaction query has mapped atoms, the reaction center of a matching reaction is mapped correspondingly. Although, the actual value of the map numbers might be different in the query and the target, the hit atoms have to be paired exactly as they are in the query (Table 3).
Table 3. Searching by mapped reaction queries
Note: If you plan to execute database searches with mapped reaction queries, the target reactions must be mapped before database import. We offer the Standardizer application, built into JChem, for mapping reactions.
You can restrict your reaction search by applying reacting center query features on bonds to express the bond's role in the reaction mechanism. Table 4. describes these query features.
Table 4.
Restrictions:
Our method requires atom maps to identify the above reacting center bond categories. For this reason, it is best if the target (database) reactions are fully mapped or at least the atoms with changing bonds are mapped (ChemAxon-style mapping).
If the atom maps are missing, we try to automap the target (database) reactions, but this may introduce errors. Automappings are taken into account exclusively in the evaluation of reacting center bond query features.
If atom mapping is not unambiguous (e.g. the same atom map appears two or more times on the product side, or alternatively on the reactant side) then only one of the mapped atoms will be used to calculate the reacting center and it is arbitrary which one is used.
Table 5. shows some examples.
Table 5.
Note : Reacting center bond features of reactions in database are not considered.
In case of mapped reactions, reacting center stereo query features - inversion and retention - can be applied on the chiral atom in the reacting center. In case of unmapped reactions, these query features are not applicable.
Inv: inversion of the reacting center stereo bond
Ret: retention of the reacting center stereo bond
Table 6 displays some examples.
Table 6. Effect of reacting center stereo query features during reaction search
A query structure occasionally consists of some disjunct fragments. Since these fragments belong to a single reaction component in the query, their corresponding hits must belong to a single component as well. Two components of a reaction query are matching to two components of a target reaction (Table 7).
Table 7. Component identification during reaction search
<[email protected]>
This Working Draft describes an environment for writing and publishing an OASIS specification using DocBook XML. It is an internal OASIS support document and not the basis of an OASIS specification in and of itself.
This methodology supersedes guidelines for the XML-based authoring and publishing of OASIS specifications described in and supported by the stylesheets in.
This is a work in progress.
This document details an environment and methodology for writing an OASIS specification document using XML markup, and publishing the resulting document to HTML and printed results conforming to OASIS layout conventions.
While this has been prepared before, a new version of this environment and methodology is required to accommodate (a) the inclusion of revised OASIS specification metadata; and (b) the changes in the available stylesheets for the XML authoring environments developed years ago for OASIS specifications.
An important objective of using XML markup when writing content is to separate what you are writing from how it is formatted. The rendered pro forma template instance wd-spectools-docbook-template-0.4.html illustrates the result.
For the purposes of this example the environment has been installed locally in p:/oasis/spec-0.4/, thus allowing the following stylesheet association processing instruction placed at the top of the XML file before the document type declaration to render the document in an XSLT-aware web browser:
<?xml-stylesheet type="text/xsl" href="p:/oasis/spec-0.4/stylesheets/oasis-specification-html.xsl"?>
The online publishing environment for this methodology is found at http://docs.oasis-open.org/templates/DocBook/spec-0.4/, complete with this documentation and directories for CSS stylesheets, XSLT stylesheets, and a pro forma template instance. There is no need to copy any files to your local machine environment in order to use the online publishing environment.
One needs to be connected to the Internet for online publishing to function.
See Appendix A, Publishing choreography and orchestration (Non-Normative) for example choreography and orchestration of these processes.
To confirm that your DocBook instance conforms to the constraints of the DocBook vocabulary of elements and attributes, use any conforming DTD-validating XML processor to process the XML instance, checking it for being well-formed and valid. Ensure first that the document type declaration of your XML instance points the document type declaration's SYSTEM identifier to the online copy of the DocBook document type definition (DTD) as in the following.
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"
 "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">
Note that the PUBLIC identifier is handy in systems that use catalogues for dereferencing SYSTEM identifiers on the fly.
To produce an HTML rendition, run a conforming XSLT processor using your XML document as the source and the online HTML stylesheet (stylesheets/oasis-specification-html.xsl) as the stylesheet to create the final HTML result.
To produce a print rendition, run a conforming XSLT processor using your XML document as the source and the online XSL-FO stylesheet for the desired paper size (stylesheets/oasis-specification-fo-a4.xsl for international A4 paper size (210mm by 297mm), or stylesheets/oasis-specification-fo-us.xsl for US letter size) as the stylesheet to create the intermediate XSL-FO result, and then process that result with an XSL-FO engine to create the final print file.
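As an illustration only, the following commands use the xsltproc and Apache FOP command-line tools with a source file named wd-spec.xml; any conforming XSLT and XSL-FO processors can be substituted, and the stylesheet locations are placeholders for the online or locally-installed copies:

xsltproc -o wd-spec.html oasis-specification-html.xsl wd-spec.xml
xsltproc -o wd-spec-a4.fo oasis-specification-fo-a4.xsl wd-spec.xml
fop wd-spec-a4.fo wd-spec-a4.pdf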
Three packages of files need to be downloaded to your local environment and then configured for offline publishing use.
While the offline publishing environment can be used very effectively in the development and writing of the specification documents, the final publicly-posted documents must be configured for online results. See Section 4.3, “OASIS specification stylesheets” for details on configuring for online results. See Section 5, “Packaging and check list” for reminders regarding publishing the final results.
See Appendix A, Publishing choreography and orchestration (Non-Normative) for example choreography and orchestration of these processes.
An offline version of the DocBook document model is a verbatim copy of the online version, packaged in a single downloadable file available from the DocBook distribution site.
No further configuration of these files is necessary in order to be useful in the offline publishing environment.
For the purposes of this example, the extracted contents of that package are installed in the local "p:\docbook" directory. This generic directory name can be used elsewhere without having to be changed when the document models are updated.
For XML instances to engage the offline environment for validation, point the document type declaration's SYSTEM identifier to the offline copy of the DocBook document type definition (DTD) as in the following.
<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"
 "file:///p:/docbook/docbookx.dtd">
An offline version of the DocBook stylesheets is a verbatim copy of the online version found in the DocBook XSL distribution directory.
No further configuration of these files is necessary in order to be useful in the offline publishing environment.
For the purposes of this example, using docbook-xsl-1.69.1.tar.gz or docbook-xsl-1.69.1.zip (being careful when looking in the long list to get the "xsl" package and not inadvertently download the "xsl-doc" package which does not have stylesheets), the extracted docbook-xsl-1.69.1 directory is renamed "xsl" and moved to be the local "p:\docbook\xsl" directory. This generic directory name can be used elsewhere without having to be changed when the stylesheets are updated.
An offline version of the OASIS specification stylesheets is a slightly modified copy of the online version, where only a single file has to be changed to reflect the offline configuration and the nature of the target results.
The online version of this explanatory document points to the ZIP and TAR/GZ packages containing the OASIS specification stylesheets and related documentation and samples, from which the entire environment can be downloaded in a single package.
Note that if you are reading this document from a complete download of the environment, the stylesheets are already installed in the stylesheets/ directory.
The one file stylesheets/oasis-configuration.ent must be changed to reflect (1) the choices of installation subdirectories of the offline publishing environment; and (2) the nature of the result files being produced. The default configuration is for online publishing to produce results for use online.
To configure the directories file for offline publishing, change the following lines in that file that indicate the locations using URL syntax for the filenames (the p:/ names are merely examples being used by the author and are replaced with wherever you decide to install the files):
<!--locations of offline installation of support software-->
<!ENTITY offline-oasis-spec-directory "">
<!ENTITY offline-docbook-xsl-directory "">
Next, configure the stylesheets for offline use by indicating the offline stylesheets are included as follows (note the file that is mounted online is preconfigured for online use of the stylesheets):
<!--only one of the following two parameter entites can be "INCLUDE"-->
<!ENTITY % offline-stylesheets "INCLUDE">
<!ENTITY % online-stylesheets "IGNORE">
Finally, leave the stylesheets configured to produce online results, or if you want to produce results for use locally on your machine without needing any online access, swap the "INCLUDE" and "IGNORE" strings in the following defaults:
<!--only one of the following two parameter entites can be "INCLUDE"-->
<!ENTITY % offline-results "IGNORE">
<!ENTITY % online-results "INCLUDE">
To produce an HTML rendition, run a conforming XSLT processor using your XML document as the source and your locally-installed equivalent to the example p:\oasis\spec-04\stylesheets\oasis-specification-html.xsl as the stylesheet to create the final HTML result.
To produce a print rendition, run a conforming XSLT processor using your XML document as the source and your locally-installed equivalent to the example p:\oasis\spec-04\stylesheets\oasis-specification-fo-a4.xsl as the stylesheet to create the intermediate XSL-FO result for international A4 paper size (210mm by 297mm), or p:\oasis\spec-04\stylesheets\oasis-specification-fo-us.xsl for US letter paper size.
The package of files for this environment and methodology includes the XML source, three renditions (HTML, PDF for the international A4 paper size and PDF for the US paper size), and a number of support files.
Before packaging the files to be posted it is recommended to consider the following check list:
remove from the XML source any stylesheet association processing instruction
ensure the XML source document type declaration SYSTEM identifier is pointing to the online reference and not an offline reference
create online results when producing the final documents in order to ensure that all pointers (such as the OASIS logo and CSS stylesheet) are pointing to their correct online locations instead of offline locations
All of the required facilities for creating and testing ZIP and TAR/GZ files are available as core tasks in Ant [ant] as illustrated in the
template/package.xml file in the pro forma template directory. In there you will see how to create a subdirectory of the files to be packaged, time stamp the files to a consistent date and time, package the files into both kinds of compressed archive files and uncompress the archive files into subdirectories suitable for testing.
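A reduced sketch of such an Ant build file is shown below; the archive names, directory names, and time stamp are placeholders, and the actual template/package.xml contains additional detail:

<project name="package" default="package">
  <target name="package">
    <!-- Time stamp the files to a consistent date and time. -->
    <touch datetime="01/31/2005 12:00 PM">
      <fileset dir="wd-spec"/>
    </touch>
    <!-- Package the files into both kinds of compressed archive files. -->
    <zip destfile="wd-spec.zip" basedir="wd-spec"/>
    <tar destfile="wd-spec.tar.gz" basedir="wd-spec" compression="gzip"/>
    <!-- Uncompress the archives into subdirectories suitable for testing. -->
    <unzip src="wd-spec.zip" dest="test-zip"/>
    <untar src="wd-spec.tar.gz" dest="test-targz" compression="gzip"/>
  </target>
</project>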
[DocBook] Norm Walsh DocBook XML, The DocBook 4.4 Document type. OASIS January 27, 2005
[RFC Keywords] S. Bradner Key words for use in RFCs to Indicate Requirement Levels Internet Engineering Task Force, March 1997
[XSL-FO] Sharon Adler, et al. Extensible Stylesheet Language (XSL) Version 1.0 W3C Recommendation 15 October 2001
[XSLT] James Clark XSL Transformations (XSLT) Version 1.0 W3C Recommendation 16 November 1999
[ant] Apache Software Foundation Ant Java-based Build Tool (Another Neat Tool).
[FOP] Apache Software Foundation Formatting Objects Processor (FOP).
Accessing the Mid Tier Configuration Tool
You can access the Mid Tier Configuration Tool in any of the following ways:
- Open a browser and enter the following URL:
http://<hostName>:<portNumber>/<contextPath>/shared/config/config.jsp
- If the mid tier is installed on the local computer using the default context path, enter the following URL in your browser: http://<localhost>/arsys/shared/config/config.jsp
For this URL to work, localhost must be correctly entered in the hosts file.
- On a Windows computer where the mid tier is installed on the local computer, select Start > Programs > BMC Software > AR System > BMC Remedy Mid Tier > Configure ARSYSTEM on Localhost.
To log on to the Mid Tier Configuration Tool
The AR System administrator performs the following steps when the Login page appears:
- In the ARServer Name field, enter a valid AR Server name
- (Optional) If you have installed the AR System server on a non-default port, enter the port number in the Port field.
In the User Name and Password fields, enter AR System server credentials for an administrator.
- Click Log in.
After you log on, the Mid Tier Configuration Tool Overview page appears. It displays the current settings for your installation. Use the navigation pane at the left to select configuration tasks.
Troubleshooting an authentication error when logging in to the Mid Tier Configuration Tool
An authentication error occurs and the following message is displayed if you try to login to the Mid Tier while the AR System Server is down:
You are redirected to the config page.
Perform the following steps:
- Click the link given in the message and open the Mid Tier configuration page.
- Enter the details of the AR System Server that is up and running.
You can now login to the Mid Tier.
Changing password of the Mid Tier Configuration Tool
You can change the password for the Mid Tier Configuration Tool by using the Change Password form, if you enable the Force Password Change On Login checkbox on the User form. For more information, see Enabling users to change their passwords at will.
Related topic
Using the Mid Tier Configuration Tool with a load balancer
style component
In Mapbox Studio, a style component (sometimes referred to as a "component") is a collection of related map features that you style as a single unit. Road network and Administrative boundaries are examples of components. Each component can contain one or more style layers.
You can style map features using component properties. A component property is one of a few available options for styling a single component. A single component property can control multiple layer properties across several layers.
Values for component properties are often defined using a toggle (on or off), a dropdown menu with a few options, or a slider with several options along a scale. Label density is an example of a component property for the Natural labels component. It controls several layer properties across several layers including waterway-label, natural-line-label, natural-point-label, water-line-label, and water-point-label.
Component properties are not directly related to the Mapbox Style Specification and cannot be edited outside Mapbox Studio (at runtime).
Related resources:
- Mapbox Studio manual Styles reference
- Create a custom style tutorial
Overview
Resources are the heart of Octave edge devices. The Resources of a device define its services, sensors, and actuators that make up a solution, and are organized as a tree (hierarchy) on a device, similar to that of a file system on your computer. A device's Resources are viewed and managed via the Build > Device > Resource screen in Octave.
Note
For additional information, see the Resources reference guide.
More generally, Resources represent entities that create, receive, or store Events (e.g., a physical input or output pin on an Octave edge device). A Resource is created by Octave after you have configured an entity (e.g., a GPIO output pin pin).
The following video provides an overview of Resources and how to work with them:
Since Resources are organized into a tree hierarchy, each Resource will have a path that identifies its location. For example, a sensor defined as a Resource might be represented as /redSensor/light, an actuator defined by the LCD driver might be called /lcd/txt, and the configuration for a GPIO service might be defined in /io/config.
Resources for built-in features (e.g., onboard light sensors) are predefined in Octave, while Resources for other configurable features (e.g., GPIO pins) are only available after that underlying feature/service (e.g., a GPIO pin) is configured.
Your device will initially broadcast its Resource tree, such that it is available to view and manage in the Octave dashboard as well as through the Device object in the Octave REST API.
All Resources either input or output values, and are stateful (i.e., they hold the last single value sent to them). Inputs are tied to Input Resources and Sensor Resources and generate new events from the underlying application or hardware (e.g., the temperature data from a temperature sensor). Outputs (actuators) forward Events to the application or an asset (e.g., to move a robotic arm).
Inputs move data from the application to the Data Hub and Outputs move data from the Data Hub to the application. However, the direction is not strictly enforced and is used primarily to advertise the expected direction of flow. Applications may read/subscribe to Inputs if they want to and they may also write values to Outputs.
Each Resource is of a particular data type which can be a trigger, boolean, numeric, string, or JSON. Values of the wrong type are sometimes coerced to the right type. For example, if you send any non-zero number to a boolean type, it will be coerced to true.
For additional information see the Resources reference guide.
Initial Configuration
Before a Resource such as a light sensor can be used, you must first configure its enable and period (sub) Resources (see Additional Resources for more information about sub Resources). To do so, you must push a new event to the Resource, either immediately or every time the device powers on. The latter is the preferred approach, and involves adding values to the state attribute of the Octave edge device's Device Object.
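For illustration only, a device state configuration might look like the following sketch, assuming the state attribute maps Resource paths to the values to apply at startup; confirm the exact structure against the Octave API reference:

{
  "state": {
    "/redSensor/light/enable": true,
    "/redSensor/light/period": 10
  }
}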
Any updates to the Resource's configuration, Observations or Edge Actions, will be sent to the device when it next connects to the cellular network. The updated configuration will then persist locally on the device such that the configuration will survive device restarts.
Working with Specific Resource Types
The following subtopics provide tutorials on how to work with Resources:
- Managing Resources via the Octave Dashboard: how to manually work with Resources in Octave's user interface.
- Output (Actuator) Resources: set up a Resource for an output pin (e.g., GPIO Pin to emit voltage.
- Input Resources: set up a Resource for an input pin (e.g., GPIO Pin, that can receive data pushed by an asset to the Octave edge device at an arbitrary time and frequency.
- Sensor Resources: set up a Resource for a sensor which is an entity that gathers data about some physical phenomenon (e.g., light, temperature, etc.) and converts that data into analog or digital signals.
- Virtual Resources: set up and work with user-defined Resources not associated with a physical entity.
- Utilitary Resources: work with Resources which control and monitor several edge system-level processes.
Backing Up and Restoring NSX-T Manager
This topic describes how to back up and restore NSX-T Data Center for TKGI.
NSX-T Data Center Backup and Recover
NSX-T Data Center provides in-product backup and recovery that supports backup and restore of the NSX Manager Nodes. For more information, see Backing Up and Restoring NSX Manager in the NSX-T documentation.
Deployment Assumptions
To backup and restore NSX-T Data Center, it is assumed that 3 NSX Manager Nodes are deployed, and there is an HA VIP configured for access to the NSX Management Plane. In addition, there are at least 2 Edge Nodes deployed with an HA VIP for the Edge Nodes.
For more information, refer to the NSX-T for TKGI installation instructions.
Backup Procedure
Create a backup of the NSX-T Manager Nodes as follows:
Log in to the NSX Manager web console.
Navigate to System > Backup & Restore.
Select Edit and configure the backup location for the NSX Configuration. For more information, refer to Configure Backups in the NSX-T Data Center documentation.
Click Start Backup to begin the backup of the NSX Manager database.
Restore Procedure
To restore NSX-T Data Center, you restore the configuration using the backup and start sending traffic. See Restore a Backup in the NSX-T Data Center documentation.
Note: Configuration changes made between backup and restore will not be saved.
Testing Procedure
The following test scenario assumes TKGI is installed on vSphere with NSX-T 3.0, and that a full backup of NSX-T Manager has been performed. This scenario tests the restoration of NSX-T.
- Verify NSX-T connectivity by testing access to a deployed Kubernetes application that is fronted by a service of type LoadBalancer. This verifies that the NSX-T load balancer is functioning correctly.
- Shut down all 3 NSX Manager VMs, and delete them.
- Deploy a new NSX Manager node. For more information, refer to the NSX-T for TKGI installation documentation.
- Restore the NSX Manager configuration from the backup. See See Restore a Backup in the NSX-T documentation.
- Add 2 additional Managers.
Please send any feedback you have to [email protected].
Rotating Cluster Certificates
This topic describes how to rotate certificates for Kubernetes clusters created by Tanzu Kubernetes Grid Integrated Edition (TKGI).
Overview of Rotating Cluster Certificates
Certificate authority (CA) certificates used by TKGI-created Kubernetes clusters and the leaf-level certificates that they issue include the following:
Cluster-specific certificates: Kubernetes cluster certificates that have values and expiration dates unique to each cluster.
The following table lists the cluster-specific certificates and how to rotate them.
Shared cluster certificates: Certificates that are common to all TKGI-deployed Kubernetes clusters. Shared cluster certificates do not have unique values for each cluster. Some of them are used by the TKGI control plane for communication between the TKGI control plane and the clusters.
To rotate shared cluster certificates, see Rotate Shared Cluster Certificates below.
For how to rotate certificates used only by the TKGI control plane, and not Kubernetes clusters, see Rotating TKGI Control Plane Certificates.
Rotate Cluster-Specific Certificates
This procedure rotates the following CA certificates and their leaf certificates:
kubo_ca_2018:
tls-metrics-server-2018
tls-kubelet-client-2018
tls-kubelet-2018
tls-kube-controller-manager-2018
etcd_ca_2018:
tls-etcdctl-root-2018-2
tls-etcdctl-flanneld-2018-2
tls-etcdctl-2018-2
tls-etcd-2018-2
All TKGI-deployed Kubernetes clusters use kubo_ca_2018, etcd_ca_2018, and their leaf certificates. Their values are unique to each cluster, and their expiration dates depend on the cluster creation date.
The procedure below uses the CredHub Maestro command line interface (CLI) to rotate the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates.
For more information about the CredHub Maestro CLI, see Getting Started with CredHub Maestro in the Ops Manager documentation.
Limitations:
- In deployments that use NSX-T networking, clusters also have unique NSX-T certificates that must be registered with the NSX Manager. To rotate those, see Rotate Cluster-Specific NSX-T Certificates, below.
- This procedure differs from the procedures to rotate shared cluster certificates or TKGI control plane certificates.
Warning: Do not use the instructions in this section to rotate NSX-T certificates or shared Kubernetes cluster certificates.
Prerequisites
To rotate certificates using the CredHub Maestro CLI, you must have the following:
- TKGI v1.9 or later. Earlier versions of TKGI are not compatible with the CredHub Maestro CLI.
- Kubernetes clusters upgraded to TKGI v1.9.
- Ops Manager v2.9 or later.
- The pks.cluster.admin UAA scope.
The certificate rotation procedure with the CredHub Maestro CLI provided below has been tested on Ops Manager v2.9 and TKGI v1.9.
Downtime
Depending on cluster topology, rotating kubo_ca_2018 or etcd_ca_2018 may cause cluster downtime while cluster nodes restart:
- Multiple control plane (master) and worker nodes: No downtime
- Single control plane node: Cluster control plane downtime
- Single worker node: Workload downtime
Prepare to Rotate
Before rotating your certificates, complete the following steps:
- Retrieve the Cluster UUID
- Access CredHub Maestro on the Ops Manager VM
- Identify Your Cluster Deployment Names
- Determine Which Certificates Are Expiring
Retrieve the Cluster UUID
This section describes how to retrieve the universally unique identifier (UUID) of a TKGI-provisioned Kubernetes cluster. You will use this UUID in Identify Your Cluster Deployment Names and Rotate the Certificates below.
To retrieve the UUID of a TKGI-provisioned Kubernetes cluster:
- Log in to TKGI. For instructions, see Logging in to Tanzu Kubernetes Grid Integrated Edition.
To view the list of your deployed clusters, run tkgi clusters.
For example:
$ tkgi clusters
Name  Plan Name     UUID                                  Status     Action
test  multi-master  ae681cd1-7ff4-4661-b12c-49a5b543f16f  succeeded  CREATE
In the output of the tkgi clusters command, locate your target cluster and record its UUID. If you want to rotate the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates for multiple clusters, locate all your target clusters in the output and record the UUIDs.
Proceed to Access CredHub Maestro on the Ops Manager VM below.
Access CredHub Maestro on the Ops Manager VM
To access the CredHub Maestro CLI on the Ops Manager VM:
- SSH into the Ops Manager VM. For instructions, see Log in to the Ops Manager VM with SSH in the Ops Manager documentation.
Set the BOSH command line and CredHub environment variables on the Ops Manager VM.
To set the BOSH environment variables, follow the instructions in Set the BOSH Environment Variables on the Ops Manager VM in the Ops Manager documentation.
For example:
$ export BOSH_CLIENT=ops_manager \
    BOSH_CLIENT_SECRET=example_secret \
    BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
    BOSH_ENVIRONMENT=10.0.16.5 bosh
To set the CredHub environment variables, export the following variables:
CREDHUB_SERVER is the URL of the BOSH Director CredHub server. This should be BOSH_ENVIRONMENT:8844.
CREDHUB_CLIENT is the name of the CredHub client. This is the same as BOSH_CLIENT.
CREDHUB_SECRET is the CredHub client secret. This is the same as BOSH_CLIENT_SECRET.
CREDHUB_CA_CERT is the path or value of the CredHub trusted CA certificate. This is the same as BOSH_CA_CERT.
For example:
$ export CREDHUB_SERVER=10.0.16.5:8844 \
    CREDHUB_CLIENT=ops_manager \
    CREDHUB_SECRET=example_secret \
    CREDHUB_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate
Identify Your Cluster Deployment Names
To identify your Kubernetes cluster deployment names, run:
bosh deployments
Kubernetes cluster deployment names begin with service-instance_. For example, service-instance_ae681cd1-7ff4-4661-b12c-49a5b543f16f, where ae681cd1-7ff4-4661-b12c-49a5b543f16f is the cluster UUID you retrieved in Retrieve the Cluster UUID above.
Determine Which Certificates Are Expiring
To review the expiration dates of the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates for a TKGI-provisioned Kubernetes cluster:
Run the following command:
maestro list --expires-within TIME-PERIOD --deployment-name service-instance_CLUSTER-UUID
Where:
TIME-PERIOD is the expiry window you want to filter. Valid units are d for days, w for weeks, m for months, and y for years. For example, 1y lists the certificates expiring within one year.
CLUSTER-UUID is the cluster UUID you retrieved in Retrieve the Cluster UUID above.
For more information about how to check certificate expiration dates, see maestro list in the Ops Manager documentation.
Locate the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates in the output of the maestro list command.
Rotate the Certificates
This section describes how to rotate the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates for a TKGI-provisioned Kubernetes cluster. To rotate these certificates for multiple clusters, repeat the procedure below for each cluster. You can modify this procedure to rotate kubo_ca_2018 and etcd_ca_2018 separately, rather than at the same time.
To rotate the kubo_ca_2018 and etcd_ca_2018 CA certificates and their leaf certificates for a TKGI-provisioned Kubernetes cluster:
Regenerate the kubo_ca_2018 and etcd_ca_2018 CA certificates:
maestro regenerate ca --name /p-bosh/service-instance_CLUSTER-UUID/etcd_ca_2018
maestro regenerate ca --name /p-bosh/service-instance_CLUSTER-UUID/kubo_ca_2018
Where CLUSTER-UUID is the cluster UUID. You retrieved the UUID in Retrieve the Cluster UUID above.
This step creates a new version of the CA certificates.
If you are running Ops Manager v2.10 or later, skip to the next step. If you are running Ops Manager v2.9, mark the latest version of each CA certificate as transitional:
maestro update-transitional latest --name /p-bosh/service-instance_CLUSTER-UUID/etcd_ca_2018
maestro update-transitional latest --name /p-bosh/service-instance_CLUSTER-UUID/kubo_ca_2018
Redeploy the cluster:
Download the latest cluster deployment manifest:
bosh -d service-instance_CLUSTER-UUID manifest > PATH-TO-DEPLOYMENT-MANIFEST
Where PATH-TO-DEPLOYMENT-MANIFEST is the location where you want to save the cluster deployment manifest. For example, /tmp/manifest.yml.
Deploy the cluster:
bosh -d service-instance_CLUSTER-UUID deploy PATH-TO-DEPLOYMENT-MANIFEST
After the cluster redeployment completes successfully, mark the signing version of each CA certificate as transitional:
maestro update-transitional signing --name /p-bosh/service-instance_CLUSTER-UUID/etcd_ca_2018
maestro update-transitional signing --name /p-bosh/service-instance_CLUSTER-UUID/kubo_ca_2018
This command also removes the transitional flag from the latest version of the CA certificates.
Regenerate all leaf certificates signed by kubo_ca_2018 and etcd_ca_2018:
maestro regenerate leaf --signed-by /p-bosh/service-instance_CLUSTER-UUID/etcd_ca_2018
maestro regenerate leaf --signed-by /p-bosh/service-instance_CLUSTER-UUID/kubo_ca_2018
Redeploy the cluster:
bosh -d service-instance_CLUSTER-UUID manifest > PATH-TO-DEPLOYMENT-MANIFEST
bosh -d service-instance_CLUSTER-UUID deploy PATH-TO-DEPLOYMENT-MANIFEST
After the cluster redeployment completes successfully, remove the transitional flag:
maestro update-transitional remove --name /p-bosh/service-instance_CLUSTER-UUID/etcd_ca_2018
maestro update-transitional remove --name /p-bosh/service-instance_CLUSTER-UUID/kubo_ca_2018
Redeploy the cluster:
bosh -d service-instance_CLUSTER-UUID manifest > PATH-TO-DEPLOYMENT-MANIFEST
bosh -d service-instance_CLUSTER-UUID deploy PATH-TO-DEPLOYMENT-MANIFEST
Rotate Cluster-Specific NSX-T Certificates
To rotate the unique NSX-T certificates used by a TKGI cluster and register them with the NSX Manager, see the Knowledge Base article How to rotate TKGi tls-nsx-t cluster certificate.
Warning: Rotating NSX-T cluster certificates causes TKGI API downtime.
Rotate Shared Cluster Certificates
TKGI rotates shared cluster certificates automatically with selected tile upgrades. Most of these certificates have four- or five-year expiry periods, so users do not ordinarily need to rotate them.
After a shared certificate is rotated, all clusters must be updated. If the certificate is also used by the TKGI control plane, the TKGI control plane must be redeployed as well.
Certificate-specific notes:
kubo_odb_ca_2018: Shared by Kubernetes clusters and the TKGI control plane. If rotation is needed, contact Support.
Please send any feedback you have to [email protected].
JMS Synchronous Invocations: Dual Channel HTTP-to-JMS
A JMS synchronous invocation takes place when a JMS producer receives a response to a JMS request that it produced. The WSO2 Micro Integrator uses an internal JMS correlation ID to correlate the request and the response. See the JMS Request/Reply Example for more information. JMS synchronous invocations are further explained in the example that follows. The SMSSenderProxy proxy service picks the response from the SMSReceiveNotification queue and delivers it to the client as an HTTP message using the internal mediation logic.
Note that the SMSSenderProxy proxy service is able to pick up the message from the SMSReceiveNotification queue because the transport.jms.ReplyDestination parameter of the SMSSenderProxy proxy service is set to the same SMSReceiveNotification queue.
Info
Correlation between request and response:
Note that the message that is passed to the back-end service contains the JMS message ID. However, the back-end service is required to return the response using the JMS correlation ID. Therefore, the back-end service should be configured to copy the message ID from the request (the value of the JMSMessageID header) to the correlation ID of the response (using the JMSCorrelationID header).
Synapse configurations¶
Create two proxy services with the JMS publisher configuration and JMS consumer configuration given below and then deploy the proxy service artifacts in the Micro Integrator.
See the instructions on how to build and run this example.
JMS publisher configuration¶
Shown below is the tail end of the SMSSenderProxy proxy service configuration; note the transport.jms.ReplyDestination property on the endpoint address URI, which sets the reply destination for the proxy.

   ...&transport.jms.ReplyDestination=SMSReceiveNotificationStore"/>
      </endpoint>
   </target>
   <description/>
</proxy>
Listed below are some of the properties that can be used with the Property mediator used in this proxy service:
The endpoint of this proxy service uses the properties listed below to connect the proxy service to the JMS queue in the Message Broker.
JMS consumer configuration¶
Create a proxy service named SMSForwardProxy with the configuration given below (only the beginning of the configuration is shown). This proxy service consumes messages from the SMSStore queue of the Message Broker and forwards them to the back-end service.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SMSForwardProxy" transports="jms" statistics="disable" trace="disable" startOnLoad="true">
   <target>
      <inSequence>
         <header name="Action" value="urn:getQuote"/>
         ...

The transport.jms.ConnectionFactory parameter and the transport.jms.Destination parameter map the proxy service to the SMSStore queue.
Build and run¶
Create the artifacts:
- Set up WSO2 Integration Studio.
- Create an integration project with an ESB Configs module and a Composite Exporter.
- Create the proxy services with the configurations given above.
- Deploy the artifacts in your Micro Integrator.
Set up the broker:
- Configure a broker with your Micro Integrator instance. Let's use Active MQ for this example.
- Start the broker.
Start the Micro Integrator (after starting the broker).
Warning
If you are using a message processor with the ActiveMQ broker, add the -Dorg.apache.activemq.SERIALIZABLE_PACKAGES="*" system property to the startup script before starting the server: for Linux/Mac OS update micro-integrator.sh, and for Windows update micro-integrator.bat.
Set up the back-end service:
- Download the back-end service.
- Extract the downloaded zip file.
- Open a terminal and navigate to the axis2Server/bin/ directory inside the extracted folder.
Execute the following command to start the axis2server with the SimpleStockQuote back-end service:
On Linux/Mac OS: sh axis2server.sh
On Windows: axis2server.bat
To invoke this service, send a POST request to the proxy service's address URI with the following payload:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://services.samples"
                  xmlns:xsd="http://services.samples/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <ser:getQuote>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
      </ser:getQuote>
   </soapenv:Body>
</soapenv:Envelope>
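As a quick check, the request can be sent with curl. This is a hedged sketch: it assumes the Micro Integrator's default HTTP port (8290), the conventional /services/SMSSenderProxy endpoint path, and that the payload above is saved as request.xml — adjust these to match your deployment.

curl -X POST http://localhost:8290/services/SMSSenderProxy \
  -H "Content-Type: text/xml" \
  -H "SOAPAction: urn:getQuote" \
  -d @request.xml

If the dual-channel flow is working, the back-end response travels back through the SMSReceiveNotification queue and is returned to curl as the HTTP response.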
Version 6 of Shield Docs' secure file and data sharing system delivers security and productivity enhancements. As previously reported, the idea behind Shield Docs is to combine secure file-sharing and collaboration, a virtual data room, document management and data protection.
The newly released version 6 of Shield Docs delivers three major enhancements:
"Throughout their history, banks have relied heavily upon perimeter-based security measures to maintain, protect and control sensitive customer information. As financial institutions engage with emerging technologies and concepts such as open banking, the effectiveness of these traditional approaches to security can become significantly compromised." said."
(This article was originally published in IT Wire.)
Emma Davy
Communications Manager
02 9286 9966
Alexandra Drury
Country Manager
Shield Docs
0423 798 999 | https://shielddocs.com/why-shield-docs/news/it-wire-talks-about-how-shield-docs-6-tightens-security/ | 2020-11-23T18:56:45 | CC-MAIN-2020-50 | 1606141164142.1 | [] | shielddocs.com |
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
describe-subscription-filters is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: subscriptionFilters
describe-subscription-filters --log-group-name <value> [--filter-name-prefix <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--log-group-name (string)
The name of the log group.
--filter-name-prefix (string)
The prefix to match. If you don't specify a value, no prefix filter is applied.
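For example, the following command lists the subscription filters for a log group; the log group name and prefix shown here are placeholders.

aws logs describe-subscription-filters --log-group-name my-log-group --filter-name-prefix my-filter

The fields described below are returned for each subscription filter in the response.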
subscriptionFilters -> (list)
The subscription filters.
(structure)
Represents a subscription filter.
filterName -> (string)The name of the subscription filter.
logGroupName -> (string)The name of the log group.
filterPattern -> (string)A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event can contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.
destinationArn -> (string)The Amazon Resource Name (ARN) of the destination.
roleArn -> (string)
distribution -> (string)The method used to distribute log data to the destination, which can be either random or grouped by log stream.
creationTime -> (long)The creation time of the subscription filter, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.
nextToken -> (string)
The token for the next set of items to return. The token expires after 24 hours. | https://docs.aws.amazon.com/cli/latest/reference/logs/describe-subscription-filters.html | 2021-01-16T03:09:31 | CC-MAIN-2021-04 | 1610703499999.6 | [] | docs.aws.amazon.com |
Click TRACKERS from the Project Home menu.
In the List Trackers, Planning Folders and Teams page, click Kanban.
Select the required kanban board from Manage Boards (available only for project administrators). If you are a project member, select your kanban board from the Select Board list. The selected kanban board for the current project context is displayed.
Select the planning folder for which you want to view the status of the artifacts. If your project contains teams, the Select a Team drop-down list appears. From this drop-down, select All artifacts or if you want to view artifacts specific to a team within the selected planning folder, select the relevant team. Depending upon your selection, artifacts pertaining to the mapped trackers are displayed in the appropriate swimlane.
You can view a maximum of 5 swimlanes at a time on your Kanban Board. If there are more, use the carousel scroll to slide across the swimlanes. You can expand or collapse each swimlane. When there are no results to display in a swimlane, the Expand/Collapse button does not appear.
Based on the configuration values, if the constraints are violated, the relevant status headers are highlighted appropriately. A violation of minimum constraint is highlighted in orange indicating that resources are underutilized whereas that of maximum constraint is highlighted in red indicating resources being overloaded and bottlenecks to be fixed. For more information on how a kanban board is configured, see Set up Kanban Board.
Each swimlane header displays the status label, the total number of artifacts within the selected planning folder, and the minimum and maximum constraints that were configured for the kanban state.
In the above scenario, for the ‘In Progress’ status:
‘3’ is the total number of artifacts in the ‘In Progress’ status within the selected planning folder.
‘2’ indicates the minimum constraint and ‘*’ indicates the maximum constraint ‘-1’. (A value of -1 as the minimum constraint translates to 0, whereas -1 as the maximum indicates there is no limit and so is represented by an *.) Important: These values and constraint validation are planning folder specific and not team specific. For example, in the above scenario, the total number of artifacts is the overall count of artifacts in the ‘In Progress’ status within the selected planning folder, not within the selected team.
Each artifact card displays its points (story points) or estimated effort if these fields are enabled in your tracker settings. If both are enabled, only the estimated effort is displayed hovering the mouse on which the artifact’s points is displayed. Similarly, if all the effort fields (estimated, remaining and actuals) are enabled, you can view them by hovering the mouse on the estimated effort.
Cards have a color-coded bar to visually identify the priority. In addition, the open and closed cards’ background is color-coded to uniquely identify the status.
A parent artifact card displays the count of its child artifacts (Show/Hide child artifacts button). Click this button to view or hide the child artifacts. If there are more than 5 child artifacts, when you expand the parent artifact, you will see pagination arrows (‘Previous’/Next’) at the bottom of the parent artifact card.
If the tracker types of both the parent and child artifacts have been mapped with the kanban statuses, the child artifacts appear within their parent artifact card and also as an individual card. In the above scenario, Epic (parent artifact) and Story (child artifact) trackers have been mapped to the kanban status ‘In Progress’. Story ‘artf1014’ is the child artifact of the epic ‘artf1002’. So you will see ‘artf1014’ within the parent artifact card as well as outside of it as an individual artifact card.
When a child artifact is closed, a ‘Closed’ tag appears next to the child artifact ID within the parent artifact card.
Edit artifacts
Use Kanban Board to edit an artifact using the Edit icon on the artifact card, or move artifacts from one status to another as appropriate. Based on the status changes you make, the swimlane headers get updated accordingly. Remember:
You can edit an artifact only if you have the tracker edit permission. Otherwise, you can only view it.
When you drag the artifacts from one swimlane to another, note the following validations:
The changes you see in swimlane headers do not apply to any specific team but takes into account the total count of artifacts within the selected planning folder.
You cannot move an artifact card to a status that is not mapped to that particular tracker type. For example, a ‘Ready for QA’ kanban status may not have been mapped to any of the epic tracker statuses. So when you attempt to move an epic to that status, you will get an ‘Invalid status configuration’ error.
You can move a parent artifact to the ‘Closed’ status only if all its child artifacts are closed. However, a child artifact is independent of its parent with regard to the status change, that is, when you move a parent artifact from one status to another, the status of the child may still remain unchanged. For example, an epic may have many stories, the status of some may change from ‘Not Started’ to ‘In Progress’ whereas some may not. So when the status of an epic as one single unit may change, it need not apply to all of its child artifacts.
When your move violates a constraint (minimum or maximum), a warning is displayed; but you can still move the artifacts.
If a kanban status is mapped to more than one tracker status, when you move an artifact, you have an option to choose a status as shown in the following screen shot.
Create artifacts and child artifacts
Using the Create Artifact icon on the top right of your kanban board, you can quickly create an artifact. The Create Artifact icon appears only when you configure a kanban board and select a planning folder.
Similarly, using the Create Child Artifact icon available on the artifact card, you can quickly create a child artifact for any artifact card displayed on Kanban Board. Note: The Create Child Artifact icon does not appear on closed artifact cards.
You can create an artifact or a child artifact only if you have the required tracker permission.
Click the required icon: Create Artifact or Create Child Artifact.
Enter the required information in the relevant window and click Submit.
Only trackers configured for the kanban board appear in the Trackers drop-down list.
While you quickly add artifacts with data for just three fields such as the tracker type, title and description, the artifacts are, however, saved with default values for other required fields, which you may choose to update later. If the default state of the selected tracker is not mapped for the newly created artifact or child artifact, the artifact card does not show up on the kanban board.
Show / Hide closed cards
When you have a large volume of closed cards in a planning folder, you can restrict the number of closed cards you want to view by toggling between the ‘Show all closed artifacts’ icon and the ‘Hide artifacts older than 60 days’ icon, which hides closed cards older than 60 days and is the default option. Note: The Show/Hide toggle icon appears only on configured kanban boards. Your selection is saved for subsequent sessions as well.
Position artifact cards
If you have the requisite permission, you can drag and drop a card above or below the other cards within a swim lane.
SSH
SSH (secure shell) is the only way to log into the systems at SciNet. It opens a secure, encrypted connection between your computer and those at SciNet, protecting not only your password, but all other data going between the machines. If you have a Linux or Mac OSX machine, you already have SSH installed; if you have a Windows machine, you will have to install additional software before logging into SciNet.
SSH For Linux or Mac OS X Users
Simple Login
To login to the systems at SciNet, you will have to open a terminal in Linux, or Mac OS X, and type
ssh [email protected]
where you will replace USERNAME with your username; you will then be prompted to type your password. Once done, you will be logged into the login nodes at the SciNet data centre, as if you have a terminal from those machines on your destop.
Note that if your username is the same on both the machine you're logging in from and the scinet machines, you can drop the USERNAME@, as SSH by default will try to use the username on the machine you are logging in from.
Copying Files
The SSH protocol can be used for more than logging in remotely; it can also be used to copy files between machines. The advantages are the same; both your password and the data you are sending or receiving are secure.
To copy small files from your home computer to a subdirectory of your /scratch directory at SciNet, you would type from a terminal on your computer
scp filetocopy.txt [email protected]:/scratch/USERNAME/some_subdirectory/
Note that soon the location of your scratch directory will change, and you will have to type:
scp filetocopy.txt [email protected]:/scratch/G/GROUPNAME/USERNAME/some_subdirectory/
Similarly, to copy files back into your current directory, you would type
scp [email protected]:/scratch/G/GROUPNAME/USERNAME/my_dirs/myfile.txt .
The Data Management wiki page has much more information on doing large transfers efficiently.
SSH for Windows Users
To use SSH on Windows, you will have to install SSH software. SciNet recommends, roughly in order of preference:
- Cygwin is an entire linux-like environment for Windows. Using something like Cygwin is highly recommended if you are going to be interacting a lot with linux systems, as it will give you a development environment very similar to that on the systems you'll be using. Download and run setup.exe, and install any packages you think you'll need. Once this is done, you will have icons for terminals, including one saying something like "X11". From either of these, you'll be able to type ssh [email protected] as above; if you think you will need to pop up windows from SciNet machines (e.g., for displaying data or using Profiling Tools), you'll need to use the X11 terminal and type ssh -Y [email protected]. Other ssh tools such as scp will work as above.
- MobaXterm is a tabbed ssh client with some Cygwin tools all wrapped up into one executable.
- OpenSSH For Windows installs only those parts of Cygwin necessary to run SSH. Again, once installed, opening up one of the new terminals allows you to use SSH as in the Linux/Mac OSX section above, but X11 forwarding for displaying windows may not work.
- PuTTY is one of the better stand-alone SSH programs for windows. It is a small download, and is enough to get you logged into the SciNet machines. For advanced use like X11 forwarding however, you are better off using Cygwin. A related program, PSCP, can be used to copy files using a graphical user interface.
WARNING: Make sure you download putty from the official website, because there are "trojanized" versions of putty around that will send your login information to a site in Russia (as reported here).
X11 Forwarding
If during your login session you will only need to be typing and reading text, the techniques described above will suffice. However, if in a session you will need to be displaying graphics — such as plotting data on the scinet machines or using our performance profiling tools — you can use SSH's very powerful ability to forward several different types of data over one connection. To enable "X11 forwarding" over this SSH connection, add the option -Y to your command,
ssh -Y [email protected]
- Both Windows and Mac OS users need to install an additional program, usually referred to as an "X server", to get X forwarding working; it interprets the forwarded graphics data and displays it on the local computer.
- Mac OS users need to install XQuartz.
- Windows users can opt for MobaXterm, an SSH client that already includes an X server.
Advanced SSH Usage
There are a few SSH techniques that are handy to know.
SSH Keys
You can automate the process of logging into SciNet systems by setting up SSH keys. You can read about doing so by visiting this page.
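The linked page has the authoritative instructions; as a rough sketch of the usual workflow (SciNet may have additional requirements, such as mandatory passphrases on keys):

# Generate a key pair on your own computer; protect it with a passphrase.
ssh-keygen -t rsa -b 4096
# Copy the public key to SciNet (or append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys there by hand).
ssh-copy-id USERNAME@login.scinet.utoronto.ca
# Later logins can then authenticate with the key instead of prompting for your password.
ssh USERNAME@login.scinet.utoronto.ca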
SSH Tunnels
A more-obscure technique for setting up SSH communication is the construction of an SSH tunnel. This can be useful if, for example, your code needs to access an external software license server from a Niagara compute node. You can read about setting up SSH tunnels on Niagara here.
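As a generic illustration of the mechanism (the port number and HOST below are placeholders, not the actual license-server settings, which are described on the linked page):

# Forward local port 1717 through the SSH connection, so that connections to
# localhost:1717 on this machine reach HOST:1717 on the far side of the tunnel.
ssh -L 1717:HOST:1717 USERNAME@login.scinet.utoronto.ca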
Two-Factor authentication
As a protection for you and for you data and programs, you may use Two-Factor authentication when connecting to Niagara thru SSH. This is optional.
What is Two-Factor authentication?
According to Wikipedia's article on multi-factor authentication:
Two-step verification or two-step authentication is a method of confirming a user's claimed identity by utilizing something they know (password) and a second factor other than something they have or something they are. An example of a second step is the user repeating back something that was sent to them through an out-of-band mechanism (such as a code sent over SMS), or a number generated by an app that is common to the user and the authentication system.
Benefits of Two-Factor authentication, 2FA:
2FA delivers an extra layer of protection for user accounts that, while not impregnable, significantly decreases the risk of unauthorized access and system breaches. Users benefit from increased security because gaining access to an account requires far more resources from an attacker.
If you already follow basic password security measures, two-factor authentication will make it more difficult for cyber criminals to breach your account because it is hard to get the second authentication factor, they would have to be much closer to you. This drastically reduces their chances to succeed.
A hacker may gain access to your computer; this is not uncommon. They can plant malware on your computer, such as a key logger that transmits all your keyboard activity, or malware that gives the hacker full remote access to your computer. Such a hacker can easily obtain your passwords, but it is virtually impossible for the same hacker to also get access to your second factor.
We encourage all our users to set up two-factor authentication; it's for your own protection. To set it up, follow the instructions here.
Deprecation: #84109 - Deprecate DependencyResolver¶
See Issue #84109
Description¶
The class \TYPO3\CMS\Core\Package\DependencyResolver has been marked as deprecated, as its code has been merged into \TYPO3\CMS\Core\Package\PackageManager.

Additionally, the \TYPO3\CMS\Core\Package\PackageManager method injectDependencyResolver has been marked as deprecated, and \TYPO3\CMS\Core\Package\PackageManager triggers a deprecation warning when \TYPO3\CMS\Core\Service\DependencyOrderingService is not injected through the constructor.
Impact¶
Installations that use \TYPO3\CMS\Core\Package\DependencyResolver or create their own \TYPO3\CMS\Core\Package\PackageManager instance will trigger a deprecation warning.
Affected Installations¶
All installations with custom extensions that use the \TYPO3\CMS\Core\Package\DependencyResolver class or create their own \TYPO3\CMS\Core\Package\PackageManager instance.
Quick Fix Description
CDI Quick Fix proposals now have a detailed description where you can see the code that is going to be added, removed, or modified:
Related Jira
Quick fix for @Named injected methods/parameters
A new quick fix is available for @Named injected methods/parameters that do not specify the value member:
Multiple @Disposes/@Observes
There is also a new quick fix for problems with methods that have more than one parameter annotated @Disposes/@Observes.
@SuppressWarnings
CDI Validator now supports @SuppressWarnings annotation. There is a quick fix available for every validation warning which adds the corresponding @SuppressWarnings.
You can find the full list of all the available warning names in Jira. | http://docs.jboss.org/tools/whatsnew/cdi/cdi-news-3.3.0.Beta1.html | 2018-07-16T04:48:02 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.jboss.org |
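A hedged sketch of how this might look — the bean, the injected type, and the warning name used here are illustrative only; the actual warning names recognized by the CDI validator are the ones listed in Jira:

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class GreeterBean {

    // Hypothetical warning name; use the one reported by the validator.
    @SuppressWarnings("ambiguous-di")
    @Inject
    private GreetingService service; // GreetingService is an illustrative type
}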
New in version 2.4.
The following requirements are needed on the host that executes this module.
Note
- name: Get facts for one load balancer
  azure_rm_loadbalancer_facts:
    name: Testing
    resource_group: TestRG

- name: Get facts for all load balancers
  azure_rm_loadbalancer_facts:

- name: Get facts by tags
  azure_rm_loadbalancer_facts:
    tags:
      - testing
Common return values are documented here; the following are the fields unique to this module:
Events and routed events overview
[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]
We describe the programming concept of events in a Windows Runtime app, when using C#, Visual Basic or Visual C++ component extensions (C++/CX) as your programming language, and XAML for your UI definition. You can assign handlers for events as part of the declarations for UI elements in XAML, or you can add the handlers in code. Windows Runtime supports routed events: certain input events and data events can be handled by objects beyond the object that fired the event. Routed events are useful when you define control templates, or use pages or layout containers.
Events as a programming concept
Generally speaking, event concepts when programming a Windows Runtime app are similar to the event model in most popular programming languages. If you know how to work with Microsoft .NET or C++ events already, you have a head start. But you don't need to know that much about event model concepts to perform some basic tasks, such as attaching handlers.
When you use C#, Visual Basic or C++/CX as your programming language, the UI is defined in markup (XAML). In XAML markup syntax, some of the principles of connecting events between markup elements and runtime code entities are similar to other Web technologies, such as ASP.NET, or HTML5.
Note The code that provides the runtime logic for a XAML-defined UI is often referred to as code-behind or the code-behind file. In the Microsoft Visual Studio solution views, this relationship is shown graphically, with the code-behind file being a dependent and nested file versus the XAML page it refers to.
Button.Click: an introduction to events and XAML
One of the most common programming tasks for a Windows Runtime app is to capture user input to the UI. For example, your UI might have a button that the user must click to submit info or to change state.
You define the UI for your Windows Runtime app by generating XAML. This XAML is usually the output from a design surface in Visual Studio. You can also write the XAML in a plain-text editor or a third-party XAML editor. While generating that XAML, you can wire event handlers for individual UI elements at the same time that you define all the other XAML attributes that establish property values of that UI element.
To wire the events in XAML, you specify the string-form name of the handler method that you've already defined or will define later in your code-behind. For example, this XAML defines a Button object with other properties (x:Name, Content) assigned as attributes, and wires a handler for the button's Click event by referencing a method named
showUpdatesButton_Click:
<Button x:Name="showUpdatesButton" Content="Show updates" Click="showUpdatesButton_Click"/>
Tip Event wiring is a programming term. It refers to the process or code whereby you indicate that occurrences of an event should invoke a named handler method. In most procedural code models, event wiring is implicit or explicit "AddHandler" code that names both the event and method, and usually involves a target object instance. In XAML, the "AddHandler" is implicit, and event wiring consists entirely of naming the event as the attribute name of an object element, and naming the handler as that attribute's value.
You write the actual handler in the programming language that you're using for all your app's code and code-behind. With the attribute
Click="showUpdatesButton_Click", you have created a contract that when the XAML is markup-compiled and parsed, both the XAML markup compile step in your IDE's build action and the eventual XAML parse when the app loads can find a method named
showUpdatesButton_Click as part of the app's code.
showUpdatesButton_Click must be a method that implements a compatible method signature (based on a delegate) for any handler of the Click event. For example, this code defines the
showUpdatesButton_Click handler.
private void showUpdatesButton_Click (object sender, RoutedEventArgs e) { Button b = sender as Button; //more logic to do here... }
Private Sub showUpdatesButton_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) Dim b As Button = CType(sender, Button) ' more logic to do here... End Sub
void MyNamespace::BlankPage::showUpdatesButton_Click(Platform::Object^ sender, Windows::UI::Xaml::Input::RoutedEventArgs^ e) { Button^ b = (Button^) sender; //more logic to do here... }
In this example, the
showUpdatesButton_Click method is based on the RoutedEventHandler delegate. You'd know that this is the delegate to use because you'll see that delegate named in the syntax for the Click method on the MSDN reference page.
Tip Visual Studio provides a convenient way to name the event handler and define the handler method while you're editing XAML. When you provide the attribute name of the event in the XAML text editor, wait a moment until a Microsoft IntelliSense list displays. If you click <New Event Handler> from the list, Microsoft Visual Studio will suggest a method name based on the element's x:Name (or type name), the event name, and a numeric suffix. You can then right-click the selected event handler name and click Navigate to Event Handler. This will navigate directly to the newly inserted event handler definition, as seen in the code editor view of your code-behind file for the XAML page. The event handler already has the correct signature, including the sender parameter and the event data class that the event uses. Also, if a handler method with the correct signature already exists in your code-behind, that method's name appears in the auto-complete drop-down along with the <New Event Handler> option. You can also press the Tab key as a shortcut instead of clicking the IntelliSense list items.
Defining an event handler
For objects that are UI elements and declared in XAML, event handler code is defined in the partial class that serves as the code-behind for a XAML page. Event handlers are methods that you write as part of the partial class that is associated with your XAML. These event handlers are based on the delegates that a particular event uses. Your event handler methods can be public or private. Private access works because the handler and instance created by the XAML are ultimately joined by code generation. In general, we recommend that you make your event handler methods private in the class.
Note Event handlers for C++ don't get defined in partial classes, they are declared in the header as a private class member. The build actions for a C++ project take care of generating code that supports the XAML type system and code-behind model for C++.
The sender parameter and event data
The handler you write for the event can access two values that are available as input for each case where your handler is invoked. The first such value is sender, which is a reference to the object where the handler is attached. The sender parameter is typed as the base Object type. A common technique is to cast sender to a more precise type. This technique is useful if you expect to check or change state on the sender object itself. Based on your own app design, you usually know a type that is safe to cast sender to, based on where the handler is attached or other design specifics.
The second value is event data, which generally appears in syntax definitions as the e parameter. You can discover which properties for event data are available by looking at the e parameter of the delegate that is assigned for the specific event you are handling, and then using IntelliSense or Object Browser in Visual Studio. Or you can use the Windows Runtime reference documentation.
For some events, the event data's specific property values are as important as knowing that the event occurred. This is especially true of the input events. For pointer events, the position of the pointer when the event occurred might be important. For keyboard events, all possible key presses fire a KeyDown and KeyUp event. To determine which key a user pressed, you must access the KeyRoutedEventArgs that is available to the event handler. For more info about handling input events, see Responding to keyboard input and Quickstart: Pointers. Input events and input scenarios often have additional considerations that are not covered in this topic, such as pointer capture for pointer events, and modifier keys and platform key codes for keyboard events.
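For example, a handler for PointerEntered might cast sender and read position details from the event data (the names here mirror the earlier examples; the logic itself is illustrative):

private void textBlock1_PointerEntered(object sender, PointerRoutedEventArgs e)
{
    // sender is the object where the handler is attached.
    TextBlock tb = sender as TextBlock;
    if (tb != null)
    {
        // The event data carries input-specific values, such as the pointer
        // position relative to a given element.
        Windows.Foundation.Point position = e.GetCurrentPoint(tb).Position;
    }
}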
Event handlers that use the async pattern
In some cases you'll want to use APIs that use an async pattern within an event handler. For example, you might use a Button in an AppBar to display a file picker and interact with it. However, many of the file picker APIs are asynchronous. They have to be called within an async/awaitable scope, and the compiler will enforce this. So what you can do is add the async keyword to your event handler such that the handler is now asyncvoid. Now your event handler is permitted to make async/awaitable calls.
For an example of user-interaction event handling using the async pattern, see File access and pickers (part of theCreate your first Windows Runtime app using C# or Visual Basic series). See also Quickstart: Calling asynchronous APIs in C# or Visual Basic.
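A minimal sketch of an async void handler that shows a file picker (error handling omitted; the button name is illustrative):

private async void pickFileButton_Click(object sender, RoutedEventArgs e)
{
    var picker = new Windows.Storage.Pickers.FileOpenPicker();
    picker.ViewMode = Windows.Storage.Pickers.PickerViewMode.List;
    picker.FileTypeFilter.Add(".txt");

    // The await is permitted because the handler is declared async void.
    Windows.Storage.StorageFile file = await picker.PickSingleFileAsync();
    if (file != null)
    {
        // Use the picked file here.
    }
}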
Adding event handlers in code
XAML is not the only way to assign an event handler to an object. To add event handlers to any given object in code, including to objects that are not usable in XAML, you can use the language-specific syntax for adding event handlers.
In C#, the syntax is to use the
+= operator. You register the handler by referencing the event handler method name on the right side of the operator.
If you use code to add event handlers to objects that appear in the run-time UI, a common practice is to add such handlers in response to an object lifetime event or callback, such as Loaded or OnApplyTemplate, so that the event handlers on the relevant object are ready for user-initiated events at run time. This example shows a XAML outline of the page structure and then provides the C# language syntax for adding an event handler to an object.
<Grid x:Name="LayoutRoot" Loaded="LayoutRoot_Loaded">
  <StackPanel>
    <TextBlock Name="textBlock1">Put the pointer over this text</TextBlock>
    ...
  </StackPanel>
</Grid>
void LayoutRoot_Loaded(object sender, RoutedEventArgs e) { textBlock1.PointerEntered += textBlock1_PointerEntered; textBlock1.PointerExited += textBlock1_PointerExited; }
Note A more verbose syntax exists. In 2005, C# added a feature called delegate inference, which enables a compiler to infer the new delegate instance and enables the previous, simpler syntax. The verbose syntax is functionally identical to the previous example, but explicitly creates a new delegate instance before registering it, thus not taking advantage of delegate inference. This explicit syntax is less common, but you might still see it in some code examples.
void LayoutRoot_Loaded(object sender, RoutedEventArgs e) { textBlock1.PointerEntered += new PointerEventHandler(textBlock1_PointerEntered); textBlock1.PointerExited += new MouseEventHandler(textBlock1_PointerExited); } don't need an object lifetime-based event handler to initiate attaching the other event handlers; the Handles connections are created when you compile your XAML page.
Private Sub textBlock1_PointerEntered(ByVal sender As Object, ByVal e As PointerRoutedEventArgs) Handles textBlock1.PointerEntered '... End Sub
Note Visual Studio and its XAML design surface generally promote the instance-handling technique instead of the Handles keyword. This is because establishing the event handler wiring in XAML is part of typical designer-developer workflow, and the Handles keyword technique is incompatible with wiring the event handlers in XAML.
In C++, you also use the += syntax, but there are differences from the basic C# form:
- No delegate inference exists, so you must use ref new for the delegate instance.
- The delegate constructor has two parameters, and requires the target object as the first parameter. Typically you specify this.
- The delegate constructor requires the method address as the second parameter, so the & reference operator precedes the method name.
textBlock1->PointerEntered += ref new PointerEventHandler(this, &BlankPage::textBlock1_PointerEntered);
Removing event handlers in code
It's not usually necessary to remove event handlers in code, even if you added them in code. The object lifetime behavior for most Windows Runtime objects such as pages and controls will destroy the objects when they are disconnected from the main Window and its visual tree, and any delegate references are destroyed too. .NET does this through garbage collection and Windows Runtime with C++/CX uses weak references by default.
There are some rare cases where you do want to remove event handlers explicitly. These include:
- Handlers you added for static events, which can't get garbage-collected in a conventional way. Examples of static events in the Windows Runtime API are the events of the CompositionTarget and Clipboard classes.
- Test code where you want the timing of handler removal to be immediate, or code where you what to swap old/new event handlers for an event at run time.
- The implementation of a custom remove accessor.
- Custom static events.
- Handlers for page navigations.
FrameworkElement.Unloaded or Page.NavigatedFrom are possible event triggers that have appropriate positions in state management and object lifetime such that you can use them for removing handlers for other events.
For example, you can remove an event handler named textBlock1_PointerEntered from the target object textBlock1 using this code.
textBlock1.PointerEntered -= textBlock1_PointerEntered;
RemoveHandler textBlock1.PointerEntered, AddressOf textBlock1_PointerEntered
You can also remove handlers for cases where the event was added through a XAML attribute, which means that the handler was added in generated code. This is easier to do if you provided a Name value for the element where the handler was attached, because that provides an object reference for code later; however, you could also walk the object tree in order to find the necessary object reference in cases where the object has no Name.
If you need to remove an event handler in C++/CX, you'll need a registration token, which you should've received from the return value of the
+= event handler registration. That's because the value you use for the right side of the
-= deregistration in the C++/CX syntax is the token, not the method name. For C++/CX, you can't remove handlers that were added as a XAML attribute because the C++/CX generated code doesn't save a token.
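A short sketch of the token-based pattern (names follow the earlier C++/CX example):

// Store the token returned by += when registering the handler...
Windows::Foundation::EventRegistrationToken pointerEnteredToken =
    textBlock1->PointerEntered += ref new PointerEventHandler(this, &BlankPage::textBlock1_PointerEntered);

// ...and use the token, not the method name, to remove the handler later.
textBlock1->PointerEntered -= pointerEnteredToken;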
Routed events
The Windows Runtime with C#, Microsoft Visual Basic or C++/CX supports the concept of a routed event for a set of events that are present on most UI elements. These events are for input and user interaction scenarios, and they are implemented on the UIElement base class. Here's a list of input events that are routed events:
- KeyDown
- KeyUp
- PointerPressed
- PointerReleased
- PointerEntered
- PointerExited
- PointerMoved
- Tapped
- DoubleTapped
- Holding
- RightTapped
- GotFocus
- LostFocus
A routed event is an event that is potentially passed on (routed) from a child object to each of its successive parent objects in an object tree. The XAML structure of your UI approximates this tree, with the root of that tree being the root element in XAML. The true object tree might vary somewhat from the XAML element nesting, because the object tree doesn't include XAML language features such as property element tags. You can conceive of the routed event as bubbling from any XAML object element child element that fires the event, toward the parent object element that contains it. The event and its event data can be handled on multiple objects along the event route. If no element has handlers, the route potentially keeps going until the root element is reached.
If you know Web technologies such as Dynamic HTML (DHTML) or HTML5, you might already be familiar with the bubbling event concept.
When a routed event bubbles through its event route, any attached event handlers all access a shared instance of event data. Therefore, if any of the event data is writeable by a handler, any changes made to event data will be passed on to the next handler, and may no longer represent the original event data from the event. When an event has a routed event behavior, the reference documentation will include remarks or other notations about the routed behavior.
The OriginalSource property of RoutedEventArgs
When an event bubbles up an event route, sender is no longer the same object as the event-raising object. Instead, sender is the object where the handler that is being invoked is attached.
In some cases, sender is not interesting, and you are instead interested in info such as which of the possible child objects the pointer is over when a pointer event fired, or which object in a larger UI held focus when a user pressed a keyboard key. For these cases, you can use the value of the OriginalSource property. At all points on the route, OriginalSource reports the original object that fired the event, instead of the object where the handler is attached. However, for UIElement input events, that original object is often an object that is not immediately visible in the page-level UI definition XAML. Instead, that original source object might be a templated part of a control. For example, if the user hovers the pointer over the very edge of a Button, for most pointer events the OriginalSource is a Border template part in the Template, not the Button itself.
Tip Input event bubbling is especially useful if you are creating a templated control. Any control that has a template can have a new template applied by its consumer. The consumer that's trying to recreate a working template might unintentionally eliminate some event handling declared in the default template. You can still provide control-level event handling by attaching handlers as part of the OnApplyTemplate override in the class definition. Then you can catch the input events that bubble up to the control's root on instantiation.
The Handled property
Several event data classes for specific routed events contain a property named Handled. For examples, see PointerRoutedEventArgs.Handled, KeyRoutedEventArgs.Handled, DragEventArgs.Handled. In all cases Handled is a settable Boolean property.
Setting the Handled property to true influences the event system behavior. When Handled is true, the routing stops for most event handlers; the event doesn't continue along the route to notify other attached handlers of that particular event case. What "handled" means in the context of the event and how your app responds to it is up to you. Basically, Handled is a simple protocol that enables app code to state that an occurrence of an event doesn't need to bubble to any containers, your app logic has taken care of what needs done. Conversely though, you do have to be careful that you aren't handling events that probably should bubble so that built-in system or control behaviors can act. For example, handling low-level events within the parts or items of a selection control can be detrimental. The selection control might be looking for input events to know that the selection should change.
Not all of the routed events can cancel a route in this way, and you can tell that because they won't have a Handled property. For example, GotFocus and LostFocus do bubble, but they always bubble all the way to the root, and their event data classes don't have a Handled property that can influence that behavior.
Input event handlers in controls
Specific Windows Runtime controls sometimes use the Handled concept for input events internally. This can make it seem like an input event never occurs, because your user code can't handle it. For example, the Button class includes logic that deliberately handles the general input event PointerPressed. It does so because buttons fire a Click event that is initiated by pointer-pressed input, as well as by other input modes such as handling keys like the Enter key that can invoke the button when it's focused. For purposes of the class design of Button, the raw input event is conceptually handled, and class consumers such as your user code can instead interact with the control-relevant Click event. Topics for specific control classes in the Windows Runtime API reference often note the event handling behavior that the class implements. In some cases, you can change the behavior by overriding OnEvent methods. For example, you can change how your TextBox derived class reacts to key input by overriding Control.OnKeyDown.
Registering handlers for already-handled routed events
Earlier we said that setting Handled to true prevents most handlers from being called. But the AddHandler method provides a technique where you can attach a handler that is always invoked for the route, even if some other handler earlier in the route has set Handled to true in the shared event data. This technique is useful if a control you are using has handled the event in its internal compositing or for control-specific logic. but you still want to respond to it from a control instance, or your app UI. But use this technique with caution, because it can contradict the purpose of Handled and possibly break a control's intended interactions.
Only the routed events that have a corresponding routed event identifier can use the AddHandler event handling technique, because the identifier is a required input of the AddHandler method. See the reference documentation for AddHandler for a list of events that have routed event identifiers available. For the most part this is the same list of routed events we showed you earlier. The exception is that the last two in the list: GotFocus and LostFocus don't have a routed event identifier, so you can't use AddHandler for those.
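For example, to attach a handler that is invoked even for already-handled pointer-pressed occurrences (the element and handler names are illustrative):

// The third argument (handledEventsToo = true) asks for this handler to be
// invoked even if an earlier handler on the route has set Handled to true.
myGrid.AddHandler(UIElement.PointerPressedEvent,
    new PointerEventHandler(myGrid_PointerPressed), true);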
Routed events outside the visual tree
Certain objects participate in a relationship with the primary visual tree that is conceptually like having an overlay over the main visuals. These objects are not part of the usual parent-child relationships that connect all tree elements to the visual root. This is the case for any displayed Popup or ToolTip. If you want to handle routed events from a Popup or ToolTip, place the handlers on specific UI elements that are within the Popup or ToolTip and not the Popup or ToolTip elements themselves. Don't rely on routing inside any compositing that is performed for Popup or ToolTip content. This is because event routing for routed events works only along the main visual tree. A Popup or ToolTip is not considered a parent of subsidiary UI elements and never receives the routed event, even if it is trying to use something like the Popup default background as the capture area for input events.
Hit testing and input events
Determining whether and where in UI an element is visible to mouse, touch, and stylus input is called hit testing. For touch actions and also for interaction-specific or manipulation events that are consequences of a touch action, an element must be hit-test visible in order to be the event source and fire the event that is associated with the action. Otherwise, the action passes through the element to any underlying elements or parent elements in the visual tree that could interact with that input. There are several factors that affect hit testing, but you can determine whether a given element can fire input events by checking its IsHitTestVisible property. This property returns true only if the element meets these criteria:
- The element's Visibility property value is Visible.
- The element's Background or Fill property value is not null. A null Brush value results in transparency and hit-test invisibility. (To make an element transparent but also hit testable, use a Transparent brush instead of null.) Note: Background and Fill aren't defined by UIElement; they are instead defined by different derived classes such as Control and Shape. But the implications of the brushes you use for foreground and background properties are the same for hit testing and input events, no matter which subclass implements the properties.
- If the element is a control, its IsEnabled property value must be true.
- The element must have actual dimensions in layout. An element where either ActualHeight and ActualWidth are 0 won't fire input events.
Some controls have special rules for hit testing. For example, TextBlock has no Background property, but is still hit testable within the entire region of its dimensions. Image and MediaElement controls are hit testable over their defined rectangle dimensions, regardless of transparent content such as alpha channel in the media source file being displayed. WebView controls have special hit testing behavior because the input can be handled by the hosted HTML and fire script events.
Most Panel classes and Border are not hit-testable in their own background, but can still handle the user input events that are routed from the elements that they contain. Also, applied transforms and layout changes can adjust the relative coordinate system of an element, and therefore affect which elements are found at a given location.
Commanding
A small number of UI elements support commanding. Commanding uses input-related routed events in its underlying implementation and enables processing of related UI input (a certain pointer action, a specific accelerator key) by invoking a single command handler. If commanding is available for a UI element, consider using its commanding APIs instead of any discrete input events. You typically use a Binding reference into properties of a class that defines the view model for data. The properties hold named commands that implement the language-specific ICommand commanding pattern. For more info, see ButtonBase.Command.
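A minimal sketch of the pattern: a view model exposes a command property (say, SubmitCommand), and a button binds to it with Command="{Binding SubmitCommand}". All names here are illustrative.

public class SubmitCommand : System.Windows.Input.ICommand
{
    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // Decide whether the command is currently available.
        return true;
    }

    public void Execute(object parameter)
    {
        // The work that would otherwise live in a Click handler goes here.
    }
}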
Custom events in the Windows Runtime
For purposes of defining custom events, how you add the event and what that means for your class design is highly dependent on which programming language you are using.
- For C# and Visual Basic, you are defining a CLR event. You can use the standard .NET event pattern, so long as you aren't using custom accessors (add/remove). Additional tips:
- For the event handler it's a good idea to use System.EventHandler<TEventArgs> because it has built-in translation to the Windows Runtime generic event delegate EventHandler<T>.
- Don't base your event data class on System.EventArgs because it doesn't translate to the Windows Runtime. Use an existing event data class or no base class at all.
- If you are using custom accessors, see Custom events and event accessors in Windows Runtime Components.
- If you're not clear on what the standard .NET event pattern is, see Defining Events for Custom Silverlight Classes. This is written for Microsoft Silverlight but it's still a good summation of the code and concepts for the standard .NET event pattern.
- For C++/CX, see Events (C++/CX).
- Use named references even for your own usages of custom events. Don't use lambda for custom events, it can create a circular reference.
You can't declare a custom routed event for Windows Runtime; routed events are limited to the set that comes from the Windows Runtime.
Defining a custom event is usually done as part of the exercise of defining a custom control. It's a common pattern to have a dependency property that has a property-changed callback, and to also define a custom event that's fired by the dependency property callback in some or all cases. Consumers of your control don't have access to the property-changed callback you defined, but having a notification event available is the next best thing. For more info, see Custom dependency properties.
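A hedged sketch of that pattern for a hypothetical control — a dependency property whose property-changed callback raises a custom event, with event data that doesn't derive from System.EventArgs:

public class ValueChangedEventArgs
{
    public double OldValue { get; set; }
    public double NewValue { get; set; }
}

public sealed class TemperatureControl : Control
{
    public static readonly DependencyProperty ValueProperty =
        DependencyProperty.Register("Value", typeof(double), typeof(TemperatureControl),
            new PropertyMetadata(0.0, OnValueChanged));

    public double Value
    {
        get { return (double)GetValue(ValueProperty); }
        set { SetValue(ValueProperty, value); }
    }

    // Custom event; consumers can't see the property-changed callback,
    // but they can handle this notification instead.
    public event EventHandler<ValueChangedEventArgs> ValueChanged;

    private static void OnValueChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var control = (TemperatureControl)d;
        var handler = control.ValueChanged;
        if (handler != null)
        {
            handler(control, new ValueChangedEventArgs
            {
                OldValue = (double)e.OldValue,
                NewValue = (double)e.NewValue
            });
        }
    }
}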
Related topics
Responding to keyboard input
.NET events and delegates
Creating Windows Runtime components | https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh758286(v=win.10) | 2018-07-16T05:27:20 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
After you create users for unauthenticated access, you must enable unauthenticated access in the Connection Server to enable users to connect and access published applications.
Procedure
- In Horizon Administrator, select .
- Click the Connection Servers tab.
- Select the Connection Server instance and click Edit.
- Click the Authentication tab.
- Change Unauthenticated Access to Enabled.
- From the Default unauthenticated access user drop-down menu, select a user as the default user.
The default user must be present on the local pod in a Cloud Pod Architecture environment. If you select a default user from a different pod, Connection Server creates the user on the local pod before it makes the user the default user.
- (Optional) Enter the default session timeout for the user.
The default session timeout is 10 minutes after being idle.
- Click OK.
What to do next
Entitle unauthenticated users to published applications. See Entitle Unauthenticated Access Users to Published Applications. | https://docs.vmware.com/en/VMware-Horizon-7/7.1/com.vmware.horizon-view.administration.doc/GUID-84D87BC7-D1A8-4FE8-AF1D-2D06783676E9.html | 2018-07-16T05:22:56 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
User sign-in with Azure Active Directory Pass-through Authentication
What is Azure Active Directory Pass-through Authentication?
Azure Active Directory (Azure AD) Pass-through Authentication allows your users to sign in to both on-premises and cloud-based applications using the same passwords. This feature provides your users a better experience - one less password to remember, and reduces IT helpdesk costs because your users are less likely to forget how to sign in. When users sign in using Azure AD, this feature validates users' passwords directly against your on-premises Active Directory.
This feature is an alternative to Azure AD Password Hash Synchronization, which provides the same benefit of cloud authentication to organizations. However, security and compliance policies in certain organizations don't permit these organizations to send users' passwords, even in a hashed form, outside their internal boundaries. Pass-through Authentication is the right solution for such organizations.
You can combine Pass-through Authentication with the Seamless Single Sign-On feature. This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in.
Key benefits of using Azure AD Pass-through Authentication:
- The agent only makes outbound connections from within your network. Therefore, there is no requirement to install the agent in a perimeter network, also known as a DMZ.
- Protects your user accounts by working seamlessly with Azure AD Conditional Access policies, including Multi-Factor Authentication (MFA), and by filtering out brute force password attacks.
- Highly available
- Additional agents can be installed on multiple on-premises servers to provide high availability of sign-in requests.
Feature highlights
- Supports user sign-in into all web browser-based applications and into Microsoft Office client applications that use modern authentication.
- Sign-in usernames can be either the on-premises default username (userPrincipalName) or another attribute configured in Azure AD Connect (known as Alternate ID).
- The feature works seamlessly with conditional access features such as Multi-Factor Authentication (MFA) to help secure your users.
- Integrated with cloud-based self-service password management, including password writeback to on-premises Active Directory and password protection by banning commonly used passwords.
- Multi-forest environments are supported if there are forest trusts between your AD forests and if name suffix routing is correctly configured.
- It is a free feature, and you don't need any paid editions of Azure AD to use it.
- It can be enabled via Azure AD Connect.
- It uses a lightweight on-premises agent that listens for and responds to password validation requests.
- Installing multiple agents provides high availability of sign-in requests.
- It protects your on-premises accounts against brute force password attacks in the cloud.
Next steps
- Quick Start - Get up and running Azure AD Pass-through Authentication.
- Smart Lockout - Configure Smart Lockout capability on your tenant to protect user accounts.
- Current limitations - Learn which scenarios are supported and which ones are not.
- Technical Deep Dive - Understand how this feature works.
- Frequently Asked Questions - Answers to frequently asked questions.
- Troubleshoot - Learn how to resolve common issues with the feature.
- Security Deep Dive - Additional deep technical information on the feature.
- Azure AD Seamless SSO - Learn more about this complementary feature.
- UserVoice - For filing new feature requests. | https://docs.microsoft.com/en-au/azure/active-directory/connect/active-directory-aadconnect-pass-through-authentication | 2018-07-16T05:11:48 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['media/active-directory-aadconnect-pass-through-authentication/pta1.png',
'Azure AD Pass-through Authentication'], dtype=object) ] | docs.microsoft.com |
You can use this workflow to configure firewall rules between interfaces.
About this task
Specify the source, destination and action of the firewall rules. Other fields of the firewall tuple default to "any." Source and destination are typically Edge vNIC interfaces indices.
Procedure
- Click the Workflows tab and then navigate to .
- Click the green Start Workflow icon.
- Select the NSX Connection object (NSX endpoint). If not set, select the connection from the NSX inventory from the vRO inventory view.
- Enter the Edge ID.
- Enter the firewall rules, specifying the source, destination and action.
- Click Submit. | https://docs.vmware.com/en/vRealize-Orchestrator/7.1/com.vmware.using.vro.nsx.plugin.doc_11/GUID-AB7E0DBF-D6BC-4E72-B5AB-609D84555E8F.html | 2018-07-16T04:58:52 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
Legacy: Start a one-to-one chat How to start a one-to-one chat in legacy chat. To start a chat with one user in your favorites list, double-click the user's name or right-click and select Send Message. To start a chat with one available user, double-click the user's name on the online users list. See Viewing Online Users. Send a message to start a conversation. | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/use/using-social-it/task/t_StartAOneToOneChat.html | 2018-07-16T04:41:01 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
You can set up filters for Linux log files to explicitly include or exclude log events. Linux event fields and operators see Collect Events from a Log a whitelist or blacklist parameter in the [filelog|] section.
For example
[filelog|apache] directory = path_to_log_directory include = glob_pattern blacklist = filter_expression
- Create a filter expression from Linux events fields and operators.
For example
whitelist = server_name
- Save and close the liagent.ini file.
Filter Configurations
You can configure the agent to collect only Apache logs where the server_name is sample.com and the remote_host is not equal to 127.0.0.1, for example
[filelog|apache] directory=/var/log/httpd include=access_log parser=clf whitelist = server_name == "sample.com" blacklist = remote_host == "127.0.0.1" | https://docs.vmware.com/en/vRealize-Log-Insight/4.5/com.vmware.log-insight.agent.admin.doc/GUID-519C6823-E576-4169-B0F4-6FF097CE4FFE.html | 2018-07-16T05:23:14 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
History panel in main window¶
With version 2017.4.0, the BcdaMenu was transformed from a button window style to a main window style of application. This change gained several features:
- keyboard motion between the various menus and menu items
- a status line to report program information (like the unimenu predecessor of BcdaMenu)
- a main window panel to gather any command output and show history
The history panel (like real history) cannot be changed after it has been written. Both stderr and stdout from any command are combined and reported in the window. There are options (under the Help menu) to control what is written to the history.
Since the original program did not have a history panel, the default is to not display the history panel. Again, the Help menu has an item to show/hide the history.
History panel is shown.
History panel is shown with debugging turned on. This shows when commands are started and stopped. All lines are time stamped with debugging turned on. | http://bcdamenu.readthedocs.io/en/latest/history.html | 2018-07-16T04:17:24 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['_images/history.png', '_images/history.png'], dtype=object)
array(['_images/history_debug.png', '_images/history_debug.png'],
dtype=object) ] | bcdamenu.readthedocs.io |
FlowMeans is method for automated identification of cell populations in flow cytometry data based on K-means clustering.
The FlowMeans algorithm utilizes the R statistical computing environment and is implemented in FlowJo 10.1r7 as a plugin accessible from the Plugins menu (Workspace tab –> Populations band).
Setup
- If you have never used a FlowJo Plugin, please see the Installing Plugins page for detailed information on plugin and R setup before continuing.
- This plugin requires R and the package “flowMeans“. Ensure that you have this R package installed prior to launching the plugin. To install flowMeans, open R and enter the following into the R console:
source("") biocLite("flowMeans")
- To view documentation for the version of this package installed in your system, start R and enter:
browseVignettes("flowMeans")
Basic Operation
- Open FlowJo v10.1r7 or later.
- Load some cytometry data files (ex. FCS or LMD) into your FlowJo workspace, then Save the workspace.
- Select/highlight a sample or gated population node within the samples pane of the FlowJo workspace.
- Initiate the plugin by clicking on the FlowMeans menu item, located in the Plugins menu (Workspace tab –> Populations band–>Plugins).
- A new FlowMeans dialog window will open, prompting you to select the parameters to be used for clustering, and specify the number of cluster populations that will be produced.
- To initiate the FlowMeans calculation, click OK. The algorithm will run and return gated populations containing the events for each cluster.
References
- Aghaeepour, N. et. al. (2011) Rapid cell population identification in flow cytometry data. Cytometry A, DOI: 10.1002/cyto.a.21007. PubMed Link.
- Link to flowMeans package on Bioconductor
For more information on installing and running specific Plugins:
Questions about plugins or FlowJo? Send us an email at TechSupport [at] FlowJo [dot] com | http://docs.flowjo.com/d2/plugins/flowmeans/ | 2018-07-16T04:27:37 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.flowjo.com |
Reporting roles Learn about the different reporting roles and the default abilities of each. Note: Users must have the itil role to see the Reports application, edit, and share reports. report scheduler [report_scheduler] Can Schedule emailing of all reports that they can see, including reports they cannot manage. Users with this role must also have another role that grants permission to create, edit, and share reports. group report user [report_group] Can manage reports that are shared with them (listed in Group). global report user [report_global] Can manage reports that are shared with everyone (listed in Global). report administrator [report_admin] Can manage, share, publish, and schedule all reports. Can access Reports > Administration and manage all report-related objects. The report_admin role inherits all other report roles. Related TasksPublish a reportSchedule a reportRelated ReferenceView the reports list | https://docs.servicenow.com/bundle/istanbul-performance-analytics-and-reporting/page/use/reporting/reference/r_ReportRoles.html | 2018-07-16T04:51:13 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
This tutorial shows you how to get started using our Virtual Alexa emulator with Node.js and Javascript.
Please note - our Virtual Alexa project has replaced the BSTAlexa classes.
They are still available, and you can read about them here but will be deprecated in a future version.
The purpose of the emulator is to enable unit-testing!
Tutorial Prerequisites
- Mocha Test Framework
-
$ npm install mocha --save-dev
- A Node.js, Lambda-based Alexa skill
- If you do not have one and want to follow along at home, try ours here.
- The test used in this tutorial is found here.
- Virtual Alexa added to your project's package.json
$ npm install virtual-alexa --save-dev
- For this example, we make it a "dev" dependency as we will be using it only for testing.
Test Structure
We are using Mocha for tests, and Chai for assertions.
There are lots of other fine testing frameworks out there - for an example that uses Jest, look here.
Adding the Virtual Alexa module
At the top of your test, include:
const vax = require("virtual-alexa");
First Simple Test
it('Launches successfully', function (done) { const alexa = vax.VirtualAlexa.Builder() .handler("index.handler") // Lambda function file and name .intentSchemaFile("./speechAssets/IntentSchema.json") .sampleUtterancesFile("./speechAssets/SampleUtterances.txt") .create(); let reply = await alexa.launch(); assert.include(reply.response.outputSpeech.ssml, "Welcome to guess the price"); });
This test runs through some simple behavior:
- It emulates the Skill being launched
- It confirms the Skill returns the correct outputSpeech after being launched
describe("One player", () => { it("Flow works", async function () { const alexa = vax.VirtualAlexa.Builder() .handler("index.handler") // Lambda function file and name .intentSchemaFile("./speechAssets/IntentSchema.json") .sampleUtterancesFile("./speechAssets/SampleUtterances.txt") .create(); const launchResponse = await alexa.launch(); assert.include(launchResponse.response.outputSpeech.ssml, "Welcome to guess the price"); const singlePlayerResponse = await alexa.utter("1"); assert.include(singlePlayerResponse.response.outputSpeech.ssml, "tell us your name"); const firstProductQuestion = await alexa.utter("juan"); assert.include(firstProductQuestion.response.outputSpeech.ssml, "Guess the price"); const secondProductQuestion = await alexa.utter("200 dollars"); assert.include(secondProductQuestion.response.outputSpeech.ssml, "the actual price was"); assert.include(secondProductQuestion.response.outputSpeech.ssml, "Guess the price"); const thirdProductQuestion = await alexa.utter("200 dollars"); assert.include(thirdProductQuestion.response.outputSpeech.ssml, "the actual price was"); assert.include(thirdProductQuestion.response.outputSpeech.ssml, "Guess the price"); const gameEndQuestion = await alexa.utter("200 dollars"); assert.include(gameEndQuestion.response.outputSpeech.ssml, "Game ended, your final score was"); }); });
This test runs through an entire interaction with the user.
We start with launch, and then work our way through a series of utterances. With each step, we ensure the proper response is received.
Additional tests can be constructed on any part of the payload - cards, video directives, etc.
Going Even Further
We also support testing the AudioPlayer. You can see an example with our Super Simple Audio Player.
And here is an example that uses the Jest testing framework.
Lastly, to see how this is tied into a Continuous Integration/Continuous Delivery process, read our blog post here.. | http://docs.bespoken.io/en/latest/tutorials/tutorial_alexa_unit_testing/ | 2018-07-16T04:19:53 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.bespoken.io |
How to use the Generate Caption feature
You can generate AI optimized captions for your posts using our Generate Caption feature. With this feature, you will no longer need to spend time creating the perfect caption, our AI will create it for you.
Important
The Generate Caption uses credits. Each credit is used upon the following circumstances
- Clicking on the Generate Caption button uses up 1 credit
- Clicking on Re-Generate Caption uses up to 1 credit for each time it is clicked. Clicking on the Re-Generate caption recreates a new caption, hence it is counted as using 1 token.
- Caption Generation credits are shared among team members who have permission to use them and are shared across all workspaces. For example, You have 200 credits and multiple workspaces:
- Workspace A uses Caption Generation 2 times
- Workspace B uses Caption Generation 1 time
Total credits used will be 2 + 1 = 3. Thus, 3 credits will be used leaving you with 200-3 = 197 credits remaining.
The Terms & Conditions of the Caption Generation credits
- A set amount of complimentary Caption Generation credits will be given to account holders.
- Caption Generation credits will reset at the start of every month based on the membership of your account. For example, a Medium account (199/mo) is given 250 Caption Generation credits. If you use 30 credits in one month, you will have 220 credits remaining. When the month starts, your Caption Generation credits will reset from 220 to 250.
- You can increase your monthly limit with payment as shown in section 3. For example, if your complimentary caption generation limit of 100, and you purchase another 100 credits, your monthly complimentary caption generation limit will increase to 200,
The following are restrictions/limitations to the Generate Caption feature:
- ContentStudio can only generate captions for article links
- The caption generation feature is limited to social posts only (Facebook, Twitter, Instagram, LinkedIn, GMB, Tumblr, Youtube, Pinterest)
1: How to start the Generate Caption feature
You can use the Generate Caption feature anywhere a composer pops up. Below are 2 examples of using this feature from different composers:
- Social Media Post composer
You can start this by going from the Dashboard to Publish-> Composer-> Social Media Post. In the text editor section, paste any link or use our assistant to find articles for your post and just drag and drop your selection into the text editor box. Doing that will show a Generate Caption button in the text editor. Check the image below for a visual representation of this process
- Discovery Composer
You can use the caption generation feature from the discovery composer by clicking on the "share icon" on any article in Discovery. There is no need to add any links here as the article is already selected.
2: Generate Caption
The Generate Caption feature brings up 3 options for you to select from. These options allow you to variate between the type of caption you would like:
- Paragraph Type: This option generates a caption in the form of a short, summarized paragraph. ContentStudios' AI uses various articles related to your selection to learn the key points of your article. Thus, the caption it makes lightly touches those points so that the content is not spoiled for readers.
- Listicle Type: This option creates a caption in the form of a summarized list. The AI uses its information to compile a short, inviting list of points of your article for readers.
- Tweet Type: If you need a caption fit for tweets, this option will generate a few short tweets of your article, you can simply scroll through them and select your choice. The AI will also add some hashtags into the caption.
- Re-Generate Caption: You can click on this button to recreate a different caption for your post, it will use up one Generate Caption credit (click here)
- Copy: Copy the caption to your clipboard. You can then simply paste anywhere you want to use the generated caption.
- +Add to Editor: Click on this to directly add your generated caption to the text editor and proceed to compose your post.
3: Generate Caption Credits
You can view and manage your Generate Caption credits by going to Settings-> Billing & Plan
Here, on the right side of the interface, you can see your Caption Generation Credits. Clicking on the Increase Limits button will open the Upgrade Limits popup menu where you can increase the limits of any upgradeable feature of your ContentStudio account. There, you can also increase or add more Caption Generation Credits for your ContentStudio account.
This is the Upgrade Limit popup. The cost per 100 credits is $5 USD. Should you reach your limit, you can top up more credits from here at any time.
That is all. If you would like to check out more on our Composer then click here.
If you would like to learn more about our Discovery feature then click here. | https://docs.contentstudio.io/article/878-generate-caption | 2022-06-25T05:24:36 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.contentstudio.io |
The Input Values property controls the inputs given to the Specification in the Specification Host Control.
Property Type:
Dynamic
Default Value: No default value is applied to this property
Hierarchical Reference: ControlName.InputValues
The default value of the property can be changed by any of the following methods:
The recommended method for passing a dynamic array into the hosted specification is to create a Calculation Table in the project hosting the specification.
Where the table is constructed as follows:
Value can be controlled by a rule. | https://docs.driveworkspro.com/Topic/InputValues | 2022-06-25T04:23:56 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.driveworkspro.com |
Fusion Directory
FusionDirectory is where you can manage your account and password.
The same username and password are used for: Mattermost, Kanboard, Nextcloud.
How to change your password:How to change your password:
- Go to the FusionDirectory sign-in page. ()
- Login to your account page with your existing username and password.
- Select “Edit” on the bottom-right Corner of the page. Edit your Password, Select “OK” when finished.
Click “Sign out” at the top-left side of the page. | https://docs.glia.org/docs/team-tools/fusion-directory/ | 2022-06-25T05:18:17 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.glia.org |
Use API connectors to customize and extend self-service sign-up
Overview
As a developer or IT administrator, you can use API connectors to integrate your self-service sign-up user flows with web APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can:
- Integrate with a custom approval workflow. Connect to a custom approval system for managing and limiting account creation.
- Perform identity verification. Use an identity verification service to add an extra level of security to account creation decisions.
- Validate user input data. Validate against malformed or invalid user data. For example, you can validate user-provided data against existing data in an external data store or list of permitted values. If invalid, you can ask a user to provide valid data or block the user from continuing the sign-up flow.
- Overwrite user attributes. Reformat or assign a value to an attribute collected from the user. For example, if a user enters the first name in all lowercase or all uppercase letters, you can format the name with only the first letter capitalized.
- Run custom business logic. You can trigger downstream events in your cloud systems to send push notifications, update corporate databases, manage permissions, audit databases, and perform other custom actions.
An API connector provides Azure Active Directory with the information needed to call API endpoint by defining the HTTP endpoint URL and authentication for the API call. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to re-enter information, or overwrite and append user attributes.
Where you can enable an API connector in a user flow
There are two places in a user flow where you can enable an API connector:
- After federating with an identity provider during sign-up
- Before creating the user
Important
In both of these cases, the API connectors are invoked during user sign-up, not sign-in.
After federating with an identity provider during sign-up
An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the attribute collection page, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account. The following are examples of API connector scenarios you might enable at this step:
- Use the email or federated identity that the user provided to look up claims in an existing system. Return these claims from the existing system, pre-fill the attribute collection page, and make them available to return in the token.
- Implement an allow or blocklist based on social identity.
Before creating the user
An API connector at this step in the sign-up process is invoked after the attribute collection page, if one is included. This step is always invoked before a user account is created. The following are examples of scenarios you might enable at this point during sign-up:
- Validate user input data and ask a user to resubmit data.
- Block a user sign-up based on data entered by the user.
- Perform identity verification.
- Query external systems for existing data about the user to return it in the application token or store it in Azure AD.
Next steps
- Learn how to add an API connector to a user flow
- Learn how to add a custom approval system to self-service sign-up
Σχόλια
Υποβολή και προβολή σχολίων για | https://docs.microsoft.com/el-GR/azure/active-directory/external-identities/api-connectors-overview | 2022-06-25T05:45:42 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.microsoft.com |
The class represents rotated (i.e. not up-right) rectangles on a plane. More...
#include "types.hpp"
The class represents rotated (i.e. not up-right) rectangles on a plane.
Each rectangle is specified by the center point (mass center), length of each side (represented by Size2f structure) and the rotation angle in degrees.
The sample below demonstrates how to use RotatedRect:
default constructor
full constructor
returns the rotation angle. When the angle is 0, 90, 180, 270 etc., the rectangle becomes an up-right rectangle.
returns the rectangle mass center
returns width and height of the rectangle | https://docs.opencv.org/4.0.0-beta/db/dd6/classcv_1_1RotatedRect.html | 2022-06-25T05:11:00 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.opencv.org |
For Oxxo payment method there aren’t any test data available, but you can see how it works with the payment flow given below.
Oxx voucher. In order to complete the payment, he needs to print the voucher and present it to any Oxxo store in his area to make the payment.
Upon completion of the payment flow the customer is redirected back to your ReturnURL. | https://docs.smart2pay.com/s2p_testdata_1092/ | 2022-06-25T04:56:33 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.smart2pay.com |
Falcon monitor pinboards
Use the Falcon Monitor Pinboards for an overview of Falcon, ThoughtSpot’s in-memory database, and its health, based on query, data load, system stats, and varz metrics.
In ThoughtSpot release 6.2, there are 5 new Pinboards, based on Falcon metrics, that are available to system administrators.
Falcon is ThoughtSpot’s in-memory database. Falcon monitoring functionality pushes different kinds of metrics to Falcon system tables every fifteen minutes. These system tables, when updated, update the 5 new Pinboards that you can use to monitor Falcon’s health. You can see these Pinboards from the Pinboards page, by searching for 360_Overview:
The 5 new database monitoring Pinboards are: Falcon_360_Overview, Falcon_Query_360_Overview, Falcon_Dataload_360_Overview, Falcon_Varz_360_Overview, and System_Stats_360_Overview.
You can use these Pinboards for proactive monitoring, or, with help from ThoughtSpot Support, for debugging.
Falcon_360_Overview
This Pinboard provides basic information regarding Falcon’s performance and health. Visualizations include Interactive query latency(sec) percentiles last 24 hours, Dataload : Avg Ingestion Speed (# Rows Ingested / Load Time) Per Hour - Last 72 Hours, CPU Utilization (System, Idle, User) - Last 72 Hours, Dataload Frequency By Hour Of Day (Aggregated over 7 days), Top 10 frequently changed pinboard vizes, and so on.
Falcon_Query_360_Overview
This Pinboard provides information about Falcon query execution based on traces. Visualizations include Interactive query latency(sec) percentiles last 24 hours, Average duration(sec) by request source last 72 hours, Median latency(sec) by hour of the day last 7 days, Count of trace ids by error status last 72 hours, Max JIT compilation time(sec) last 72 hours, Top 10 vizs based on avg duration(sec), and so on.
Falcon_Dataload_360_Overview
This Pinboard provides information about Falcon data loads based on traces. Visualizations include Failed Dataloads, Tables With Most Frequent Inserts/Upserts, Table Growth (# Rows) Over Time, Load Frequency By Hour Of Day (Aggregated in a time window), Slowest Loads & Corresponding Region Load Time Skew, Loads With Highest Compaction Overhead (# Rows) and so on.
Falcon_Varz_360_Overview
This Pinboard provides information about Falcon services based on metrics in VarZ format. Visualizations include Falcon Worker Execution Metrics, Falcon Query Runtime (Average and Max), Falcon Worker Memory Manager, Daily Data Load Statistics, Falcon Compiler Cache Daily Usage, and so on.
| https://docs.thoughtspot.com/software/latest/falcon-monitor | 2022-06-25T04:15:34 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['_images/falcon-360-pinboard.png',
'Falcon Pinboards on the Pinboards page'], dtype=object)
array(['_images/falcon-360-overview-pinboard.png',
'Falcon_360_Overview Pinboard'], dtype=object)
array(['_images/falcon-query-360-pinboard.png',
'Falcon_Query_360_Overview Pinboard'], dtype=object)
array(['_images/falcon-dataload-360-pinboard.png',
'Falcon_Dataload_360_Overview Pinboard'], dtype=object)
array(['_images/falcon-varz-360-pinboard.png',
'Falcon_Varz_360_Overview Pinboard'], dtype=object)] | docs.thoughtspot.com |
The maximum number of emails the user is allowed to send in a 24-hour interval. A value of -1 signifies an unlimited quota.
The maximum number of emails that Amazon SES can accept from the user's account per second.
The rate at which Amazon SES accepts the user's messages might be less than the maximum send rate.
The number of emails sent during the previous 24 hours.
Metadata pertaining to this request. | https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ses/interfaces/getsendquotacommandoutput.html | 2022-06-25T05:52:23 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.aws.amazon.com |
Class: Aws::S3::BucketRegionCache
- Defined in:
- gems/aws-sdk-s3/lib/aws-sdk-s3/bucket_region_cache.rb
Instance Method Summary collapse
- #bucket_added(&block) ⇒ void
Registers a block as a callback.
- #initialize ⇒ BucketRegionCache constructor
A new instance of BucketRegionCache.
- #to_hash ⇒ Hash (also: #to_h)
Returns a hash of cached bucket names and region names.
Constructor Details
#initialize ⇒ BucketRegionCache
Returns a new instance of BucketRegionCache.
Instance Method Details
#bucket_added(&block) ⇒ void
This method returns an undefined value.
Registers a block as a callback. This listener is called when a new bucket/region pair is added to the cache.
S3::BUCKET_REGIONS.bucket_added do |bucket_name, region_name| # ... end
This happens when a request is made against the classic endpoint, "s3.amazonaws.com" and an error is returned requiring the request to be resent with Signature Version 4. At this point, multiple requests are made to discover the bucket region so that a v4 signature can be generated.
An application can register listeners here to avoid these extra requests in the future. By constructing an Client with the proper region, a proper signature can be generated and redirects avoided.
#to_hash ⇒ Hash Also known as: to_h
Returns a hash of cached bucket names and region names. | https://docs.aws.amazon.com/ja_jp/sdk-for-ruby/v3/api/Aws/S3/BucketRegionCache.html | 2022-06-25T06:01:35 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.aws.amazon.com |
A manifest file is created when you write objects to external object store. The manifest file lists the paths of the objects stored on external object store.
- The files that are specified in the manifest can be in different buckets, but all the buckets must be in the same region.
- Manifest files are in JSON format.
- The storage location for manifest files must start with the following:
- Amazon S3 location must begin with /S3 or /s3, for example /S3/YOUR-BUCKET.s3.amazonaws.com/20180701/ManifestFile2/manifest1.json
- Google Cloud Storage location must begin with /GS or /gs, for example /gs/storage.googleapis.com/YOUR-BUCKET/JSONDATA/manifest1.json
- Azure Blob location (including Azure Data Lake Storage Gen2 in Blob Interop Mode) must begin with /AZ or /az, for example /az/YOUR-STORAGE-ACCOUNT.blob.core.windows.net/td-usgs/JSONDATA/manifest1.json
- Manifest files are not cumulative. If you want to add entries to a manifest, you must create a new manifest that includes the original entries plus the ones you want to add.
- An error is reported if you attempt to write another manifest file in the same location. Use OVERWRITE('TRUE') with MANIFESTONLY('TRUE') keywords to replace a manifest in the same location. | https://docs.teradata.com/r/Teradata-VantageTM-Native-Object-Store-Getting-Started-Guide/July-2021/Writing-Data-to-External-Object-Store/Working-with-Manifest-Files | 2022-06-25T05:34:50 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.teradata.com |
In addition to being able to transmit and receive DMX-over-Ethernet, CueServer also has built-in DMX ports for hard-wired DMX connections to fixtures, dimmers, consoles and virtually any other DMX compatible devices.
The rack-mounted CS-900 has four replaceable DMX module slots, and the miniature CS-920 has two replaceable DMX module slots, each of which can accept any of seven available DMX modules for input or output of DMX. The surface-mounted CS-940 has two DMX input ports and two DMX output ports that are available on the unit’s pluggable terminal block strips. | http://docs.interactive-online.com/cs2/1.0/en/topic/dmx-ports | 2019-06-16T03:10:28 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.interactive-online.com |
Offload supported backups to secondary replicas of an availability group
SQL Server
Azure SQL Database
Azure SQL Data Warehouse
Parallel Data Warehouse
The Always.
Note
RESTORE statements are not allowed on either the primary or secondary databases of an availability group.
Backup Types Supported on Secondary Replicas.
In a distributed availability group, backups can be performed on secondary replicas in the same availability group as the active primary replica, or on the primary replica of any secondary availability groups. Backups cannot be performed on a secondary replica in a secondary availability group because secondary replicas only communicate with the primary replica in their own availability group. Only replicas that communicate directly with the global primary replica can perform backup operations.
Configuring Where Backup Jobs Run).
Related Tasks
To configure backup on secondary replicas
To determine whether the current replica is the preferred backup replica
To create a backup job
See Also
Overview of Always On Availability Groups (SQL Server)
Copy-Only Backups (SQL Server)
CREATE AVAILABILITY GROUP (Transact-SQL)
ALTER AVAILABILITY GROUP (Transact-SQL)
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/active-secondaries-backup-on-secondary-replicas-always-on-availability-groups?view=sql-server-2017 | 2019-06-16T03:05:17 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.microsoft.com |
Released on:
Thursday, April 19, 2018 - 16:00
This release has been unpublished. Use version 8.1.712.0 or higher.
This version introduced a bug that could cause crashes for .NET Framework applications manually loading types from assemblies in the application domain. This includes enumerating custom attributes. The encountered error will appear like the following:.
Upgrading
- Follow standard procedures to update the .NET agent.
- If you are upgrading from a particularly old agent, review the list of major changes and procedures to upgrade legacy .NET agents. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/net-release-notes/net-agent-817090 | 2019-06-16T03:24:51 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.newrelic.com |
Contents Now Platform Capabilities Previous Topic Next Topic Add personal subscriptions in UI15 and earlier Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Add personal subscriptions in UI15 and earlier After setting up your devices, you can subscribe to notifications that are configured as subscribable. Before you begin The Subscription Based Notifications 2.0 plugin must be active. About this task If you have subscribed to messages, your list of notification messages can build over time. You can create preferences for how and when these messages are delivered, or unsubscribe to messages that are not configured as mandatory. Note: Conditions that you apply to personal subscriptions do not override the filters that the administrator creates for the subscribable notifications. Your conditions are evaluated after the conditions on the subscribable notification are met. If the notification filter set by the administrator fails, the filter conditions on your personal subscription are not evaluated. Procedure Navigate to Self-Service > My Profile to open your user profile. Click the Notification Preferences related link. The Notification Preferences page opens. You can see your personal subscriptions and the general notifications that you are subscribed to. Click Subscriptions. Click Add Personal Subscriptions. Fill in the fields as described in the table. Figure 1. Add personal subscriptions Table 1. Field Description Name A descriptive name for the subscription. Notification The notification to subscribe to. You can only subscribe to notifications that are configured to allow subscriptions. Table The table that the incident is configured to run on. You cannot modify the table from this form. To select another table, configure the notification. See Create an email notification. Active Check box indicating whether the subscription is active. Users can receive notifications for subscriptions only if the subscription is active. If it is not active, the on-off switch for the subscription is set to off and is read-only. Send to The devices that this subscription is sent to. Selecting the devices in this field is the same as turning on the switch for the subscription on the Subscriptions page. Affected record The specific record that the subscription is based on. Click the lookup icon, and then select the table and the specific record in that table. Send when Another condition that must be met to send the notification. For example, you might select a filter whose conditions send notifications when an incident with a priority of 1 - Critical is opened for a network issue. The system evaluates the conditions in this filter after the conditions set in the notification filter by the administrator. Click Submit. You can turn the subscription for active subscribable notifications on or off using the switch on the Subscriptions management section of the Notification Preferences page. Figure 2. Personal subscriptions Personal subscriptions are saved in the Notification Subscriptions [sys_notif_subscription] table. The records in this table are made active or inactive when you slick the switch to subscribe or unsubscribe from the notification. You can edit the subscription at any time by clicking Edit next to it. 
On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/use/email-and-notification-preferences/task/t_SubscribeToANotificationMessage.html | 2019-06-16T03:23:22 | CC-MAIN-2019-26 | 1560627997533.62 | [] | docs.servicenow.com |
SmartAnthill 2.0 Overall Architecture.
SmartAnthill 2.0 represents further work on SmartAnthill 1.0, which was designed solely by Ivan Kravets, an author of PlatformIO. Improvements in SmartAnthill 2.0 cover several areas, from introducing security, to support of protocols such as ZigBee and improvements aimed at reducing energy consumption. SmartAnthill 2.0 is not intended to be compatible with SmartAnthill 1.0.
Contents
- SmartAnthill 2.0 Overall Architecture
- Sales pitch (not to be taken seriously)
- Aims
- Requirements
- SmartAnthill Architecture
- Simple Topology
- System Control Software
- SmartAnthill Core
- SmartAnthill Devices
- Life Cycle of SmartAnthill Device
- SmartAnthill protocol stack
Sales pitch (not to be taken seriously)¶
This ... SmartAnthill thing... What does it do?
Sir, better you should ask “What doesn’t it do?”It crypts, flips, scripts, and strips,It loops, groups, hooks, and schnooks,It chases, races, faces, and places!
No home or maximum security prison should be without one!
With SmartAnthill, each of your devices will get their very own personal IP address (whatever that is)! And for those low-income devices which were able to save only limited amount of RAM and cannot afford running an IP stack, SmartAnthill will simulate an IP address, so nobody from outside world will be able to tell the difference! It is an ultimate tool in keeping your devices’ self-esteem, even when they cannot afford the latest greatest technology through no fault of their own!
In addition to being an indispensable motivation vehicle to keep your devices from becoming apathetic and irresponsive, SmartAnthill is also an ultimate engine to keep your devices in line. Yes, your very own SmartAnthill keeps a comprehensive secret dossier on each and every of your devices, monitors their behaviour, and takes corrective measures whenever necessary! And yes, you can plausibly deny any knowledge of SmartAnthill’s actions if you feel like it, too!
But that’s not all! SmartAnthill will go to great lengths to make sure that your devices don’t misuse any watts and milliamp-hours you give them! It will enable suitable devices to run a year or even more when being fed just once! No energy-saving trick in the book is left without SmartAnthill’s attention - from sleep to hibernation, from minimizing data being transmitted to minimizing time when RF oscillator is on (whatever that is)!
What else you, a true manager of your house, can possibly want? You’ll get your devices motivated, under tight control, using only very minimum amount of food, and with plausible deniability on top! What are you thinking about? Download your very own SmartAnthill today, and we’ll provide 50% discount from your DIY setup price! That’s right, if you install SmartAnthill today, you’ll be able to pay yourself twice less for setting SmartAnthill up!
Aims¶
SmartAnthill aims to create a viable system of control for the Internet of Things (IoT) devices in home and office environments. More secure and more risky environments (such as industrial control, military, etc.) are currently out of scope. Due to SmartAnthill roots in hardware hobbyist movement, special attention is to be paid for hobbyist needs.
Requirements¶
SmartAnthill is built around the following requirements. They follow from the aims and generally are not negotiable.
- Low Cost. In home/office environments SmartAnthill should aim for a single device (such as sensor) to be in the range of $10-$20. Rationale: higher costs will deter acceptance greatly.
- Support for Devices with Limited Resources. Many of devices and MPUs aimed to be used with SmartAnthill are very limited in their resources (which is closely related to their low cost). Currently, minimal MPU configuration which minimal SmartAnthill aims to run on, is as low as 512 bytes RAM, 16K bytes PROM (to store the program) and 256 bytes EEPROM. [TODO: think about number of rewrites in EEPROM, incl. optimization]
- Wireless Support. SmartAnthill needs to support wireless technologies. Wired support is optional. Rationale: without wireless devices, acceptance in home environments is expected to be very low.
- Support for Heterogeneous Systems. SmartAnthill should allow to create systems consisting of devices connected via different means. ZigBee and RF technologies are of the particular interest.
- System Integration should not require asm or C programming. Most MPUs require C or asm programming. This is ok, as long as such programming can be done once per device type and doesn’t need to be repeated when the system integrator needs to adjust system behavior. To achieve it, SmartAnthill should provide clear separation between device developer and system integrator, and system integration should not require C or asm programming skills.
- Energy Efficiency. SmartAnthill should aim to achieve best energy efficiency possible. In particular, a wide range of SmartAnthill sensors should be able to run from a single ‘tablet’-size battery for at least a year (more is better).
- Security. SmartAnthill should provide adequate protection given the home/office environment. In other words, SmartAnthill as such doesn’t aim to protect from NSA (or any other government agency) or from somebody who’s already obtained physical access to the system. However:
- protection from remote attackers (both over the Internet and present within the reach of wireless communications) is the must
- level of protection should be sufficient to control home/office physical security systems
- protection from local attackers trying to obtain physical entry requires additional physical security measures, which can be aided by SmartAnthill. For example, if the attacker gets entrance to the hardware of SmartAnthill Central Controller, SmartAnthill becomes vulnerable. However, SmartAnthill-enabled sensors may be installed to detect unauthorized entrance to the room where SmartAnthill is installed, and/or to detect unauthorized opening of the SmartAnthill Central Controller physical box, with an appropriate action taken by Central Controller before it becomes vulnerable (for example, notifying authorities).
- Openness. All core SmartAnthill technologies should be open. SmartAnthill protocols are intended to be published, and any device compliant with these protocols should be able to interoperate with other compliant devices. SmartAnthill project will provide a reference software stack as an open source code, which will be distributed under GPL v2 [TODO:decide] license.
- Openness of SmartAnthill does not mean that all SmartAnthill devices should use open-source software. Any device, whether using open- or closed-source software, is welcome as long as it complies with published SmartAnthill protocols.
- Openness of SmartAnthill does not mean that SmartAnthill devices are not allowed to use existing proprietary protocols as a transport.
- Position on patents. SmartAnthill Core MUST use patent-free technologies wherever possible. Support for patented technologies as a transport is allowed. All SmartAnthill contributors MUST fill a form with a statement on their knowledge on patents related to their contribution.
- Vendor and Technology Neutrality. SmartAnthill should not rely on any single technology/platform (leave alone any single vendor). All kinds of suitable technologies and platforms are welcome. Any references to a specific technology should be considered only as an example.
- Extensibility. Closely related to technology neutrality is extensibility. SmartAnthill should expect new technologies to emerge, and should allow them to be embraced in a non-intrusive manner. It is especially important to allow easy addition of new communication protocols, and of new devices/MPUs.
- Ability to Utilize Resources of More Capable Devices. Non-withstanding Requirement #2 above, it is recognized that there are some devices out there which have better capabilities than minimal capabilities. Moreover, it is recognized that share of such more capable devices is expected to grow. Therefore, as long as it is helpful to achieve any of the goals above, SmartAnthill should allow to utilize capabilities of more sophisticated devices. One example is to utilize device’s ability to sleep and wake up on timer, allowing to improve battery life greatly. Another example is to allow combining several commands into one wireless transmission, allowing to reduce amount of time wireless module needs to be turned on, which should also help improving battery life.
- It doesn’t mean that SmartAnthill is going to increase minimal requirements. However, if minimal requirements are exceeded by any particular device, SmartAnthill should allow to utilize those improved capabilities to improve other user-observable characteristics.
- Support both for mass-market devices and for hobbyist devices. While SmartAnthill is not limited to hobbyists and aims to become a widely-accepted network for controlling IoT and smart homes, it should consider hobbyists as a first-class citizens and pay attention to their needs. In particular, compatibility with existing devices and practices is to be taken seriously, as well as any feedback.
SmartAnthill Architecture¶
Simple Topology¶
Simple SmartAnthill system consists of one SmartAnthill Central Controller and one or more SmartAnthill Devices (also known as “Ants”) controlled by it (see Sample SmartAnthill Single-Node System diagram above for an example topology).
SmartAnthill Central Controller is a relatively complex device (such as PC or credit-card sized computer Raspberry Pi, BeagleBoard or CubieBoard) which normally runs several pieces of software, including operating system TCP/IP stack, 3rd-party System Control Software, and SmartAnthill Core.
System Control Software¶
System Control Software is intended to be easily customizable according to customer needs. It can be very different, but we aim to support OpenHAB, and to support DYI programming with pretty much any programming language which can support one of the REST, WebSockets or Sockets. SmartAnthill project as such doesn’t provide control software, it is rather a service which can be used by a control software.
SmartAnthill Core¶
SmartAnthill Core represents a cross-platform software which is written in Python language and supports all the popular server/desktop operation systems: Mac OS X, Linux (x86 or ARM), and Windows. System requirements of SmartAnthill Core are very low for a modern server-side application:
- < 1% CPU in IDLE mode
- < 20Mb RAM for service/daemon
- < 20Mb of free disk space (cross-compilers, tool chains, and firmware upload software are not included here)
More detailed information on SmartAnthill Core is provided in a separate document, SmartAnthill 2.0 Core Architecture.
API Service¶
API Service is responsible for supporting multiple protocols (such as REST, Websocket, or plain socket) and converting them into requests to the other parts of SmartAnthill.
Dashboard Service¶
Dashboard Service is responsible for providing UI for the SmartAnthill administrator. It allows to:
- administer SmartAnthill Core (control services running, view logs etc.)
- configure and program/”pair” SmartAnthill Devices so they can be used with specific SmartAnthill system (see Life Cycle of SmartAnthill Device below for details on configuring, programming, and “pairing”)
Device Service¶
Device Service provides device abstraction to the rest of SmartAnthill Core, allowing to handle different devices in a consistent manner.
Device Firmware Module¶
Device Firmware Module is used for SmartAnthill Hobbyist Devices (see on them below). Device Firmware Module is responsible for generating device firmware (for specific device, based on configuration entered via Dashboard), and for programming it. Device Firmware Module is implemented on top of PlatformIO.
SmartAnthill Router¶
SmartAnthill Router is responsible for handling so-called SmartAnthill Simple Devices (see below; in a nutshell - SmartAnthill Simple Device is not able to run it’s own IP stack).
SmartAnthill Router provides SmartAnthill Simple Devices with a virtual IP address (or more precisely - either with a separate IP address, or with a dedicated port on one of SmartAnthill Central Controller’s IP addresses). While SmartAnthill Simple Device itself knows nothing about IP, SmartAnthill Router completely encapsulates all connected SmartAnthill Simple Devices, so from the point of view of the outside world, these SmartAnthill Simple Devices are completely indistinguishable from fully-fledged SmartAnthill IP-Enabled Devices.
SmartAnthill Database (SA DB)¶
SmartAnthill Database (SA DB) is a database which stores all the information about SmartAnthill Devices within specific SmartAnthill System. SA DB is used by most of SmartAnthill Core components.
SmartAnthill Database is specific to the Central Controller and SHOULD NOT be shared. In SA DB, at least the following information is stored:
- device addresses (bus-specific for Simple Devices and IPs for IP-enabled devices)
- credentials (i.e. symmetric keys)
- configuration (i.e. which device is connected to which pins)
- device capabilities (i.e. amount of RAM/PROM/EEPROM available, MPU capabilities etc.)
SmartAnthill Devices¶
TODO: Master-Slave topology!
Each SmartAnthill Device (also known as ‘Ant’) is either SmartAnthill Hobbyist Device, or a SmartAnthill Mass-Market Device. While these devices are similar, there are some differences as outlined below. In addition, in a completely different and independent dimension each SmartAnthill Device is either a Simple Device, or an IP-enabled Device.
These properties are independent of each other, so it is possible to have all four different types of devices: SmartAnthill Hobbyist Simple Device, SmartAnthill Hobbyist IP-enabled Device, SmartAnthill Mass-Market Simple Device, and SmartAnthill Mass-Market IP-enabled Device.
SmartAnthill Hobbyist Device¶
A diagram of a typical SmartAnthill Hobbyist Device is provided in section SmartAnthill Devices. SmartAnthill Hobbyist Device consists of an MCU, persistent storage (such as EEPROM or Flash), communication module, and one or more sensors and/or actuators (which are also known as ‘ant body parts’). TODO: add persistent storage to the diagram. MCU on SmartAnthill Hobbyist Device runs several layers of software:
- SmartAnthill-Generated Software it is system-specific, i.e. it is generated for each system
- Device-Specific Plugins for each type of sensor or actuator present
- SmartAnthill 2.0 Protocol Stack; it is generic, i.e. it is intended to be pretty much the same for all SmartAnthill Devices. SmartAnthill 2.0 Protocol Stack uses persistent storage, in particular, to provide security guarantees.
An important part of SmartAnthill Hobbyist Device (which is absent on SmartAnthill Mass-Market Devices) is programming interface; for example, it can be some kind of SPI, UART or USB.
SmartAnthill Mass-Market Device¶
A diagram of a typical SmartAnthill Mass Market Device is also provided in the section SmartAnthill Devices. In addition to the components available on SmartAnthill Hobbyist Device, SmartAnthill Mass-Market Device MAY additionally include:
- an additional LED to support Single-LED Pairing. In practice, an existing LED MAY be re-used for this purpose.
In addition, Persistent Storage on Mass-Market Devices stores System-specific Data. System-specific Data contains information such as bus-specific addresses and security keys; it is obtained during “pairing” process which is described below
MCU on SmartAnthill Mass-Market Device runs several layers of software (note the differences from SmartAnthill Hobbyist Device):
- SmartAnthill Configurator, which is responsible for handling “pairing” process and populating system-specific data. SmartAnthill Configurator is generic.
- Device-Specific Plugins for each type of sensor or actuator present
- SmartAnthill 2.0 Protocol Stack as noted above, protocol stack is generic.
SmartAnthill Simple Device¶
Many of SmartAnthill Devices are expected to have very little resources, and might be unable to implement IP stack. Such devices are known as SmartAnthill Simple Devices; they implement a portion of SmartAnthill 2.0 Protocol Stack, with SmartAnthill Router providing interface to the outside world and conversion between IP-based requests/replies and Simple Device requests/replies.
SmartAnthill IP-enabled Device¶
SmartAnthill IP-enabled Device is a device which is able to handle IP requests itself. For example, if SmartAnthill IP-enabled Device uses IEEE 802.15.4 for communication, it may implement 6LoWPAN and IP stack with at least UDP support (TCP stack, which is more resource-intensive than UDP/IP stack, is optional for SmartAnthill IP-enabled Devices). SmartAnthill IP-enabled Devices can and should be accessed without the assistance of SmartAnthill Router.
Life Cycle of SmartAnthill Device¶
Let’s consider how new devices are added and used within a SmartAnthill. Life cycle is a bit different for SmartAnthill Hobbyist Device and SmartAnthill Mass-Market Device.
Life Cycle of SmartAnthill Hobbyist-Oriented Device¶
During it’s life within SmartAnthill, a hobbyist-oriented device goes through the following stages:
- Initial State. Initially (when shipped to the customer), Hobbyist-oriented SmartAnthill Device doesn’t need to contain any program. Program will be generated and device will be programmed as a part of ‘Program Generation and Programming’ stage. Therefore, programming connector is a must for hobbyist-oriented devices.
- Specifying Configuration. Configuration is specified by a user (hobbyist) using a Dashboard Service. User selects board type and then specifies connections of sensors or actuators to different pins of the board. For example, one hobbyist might specify that she has [TODO] board and has a LED connected to pin 1, a temperature sensor connected to pins 2 through 5, and a DAC connected to pins 7 to 10.
- Program Generation and Programming. Program generation and programming is performed by SmartAnthill Firmware Builder and Uploader automagically based on configuration specified in a previous step. Generated program includes a SmartAnthill stack, credentials necessary to authenticate the device to the network and vice versa (as described in SATP section below, authentication is done via symmetric keys), and subprograms necessary to handle devices specified in a previous step. Currently SmartAnthill supports either UART-programmed devices, or SIP-programmed devices [TODO:check]
After the device is programmed, it is automatically added to a SmartAnthill Database of available devices.
- Operation. After the device is programmed, it can start operation. Device operation involves receiving and executing commands from Central Controller. Operations can be either device-specific (such as “measure temperature and report”), or generic (such as “wait for XXXX seconds and come back for further instructions”).
Life Cycle of SmartAnthill Mass-Market-Oriented Device¶
Mass-market devices are expected to be shipped in already programmed state, with a pre-defined configuration. Expected life cycle of a SmartAnthill Mass-market-oriented Device can be described as follows:
- Initial State. Initially (when shipped to the customer), SmartAnthill mass-market-oriented device contains a program which ensures it’s operation. Re-programming capability and connector are optional for SmartAnthill mass-market-oriented devices.
- “Pairing” with Central Controller. “Pairing” includes Central Controller (controlled via SmartAnthill Dashboard) generating and exchanging credentials with device, querying device configuration and capabilities, and entering credentials, configuration and capabilities into SmartAnthill Database. “Pairing” is described in detail in SmartAnthill Pairing document.
- Physically, “pairing” can be done in one of two different ways:
- OtA Single-LED Pairing. Requires user to point a webcam of Central Controller (or a phone camera with SmartAnthill app running - TODO) to the Device intended to be paired. On the Device side, requires only one single LED (existing LED MAY be re-used for “pairing”)
- Zero Paper Pairing. Requires user to enter 26-symbol key into Central Controller. On the Device side, requires printed key (unique to the Device); additionally requires Device to fullfil Reprogramming Requirements as specified in SmartAnthill Pairing.
- Special considerations: SmartAnthill Device MUST NOT allow to extract keys; the only action allowed is to re-pair device with a different Central Controller, destroying previously existing credentials in the process. In other words, while it is possible to steal device to use with a different Central Controller, it should not be possible to impersonate device without access to Central Controller. In addition, re-pairing MUST be initiated on the Device itself (and Devices MUST NOT allow initiating re-pairing remotely); this is necessary to ensure that to hijack Device, attacker needs to be in physical possession of the Device.
- Operation. Operation of Mass-market-oriented device is the same as operation of Hobbyist-oriented device.
SmartAnthill protocol stack
The SmartAnthill protocol stack is described in detail in a separate document, SmartAnthill 2.0 Protocol Stack.
[Figure: SmartAnthill Overall Architecture diagram]
[Figure: SmartAnthill Devices diagram]
Describes the Dedicated Host reservations that are available to purchase.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
describe-host-reservation-offerings is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: OfferingSet
describe-host-reservation-offerings [--filter <value>] [--max-duration <value>] [--min-duration <value>] [--offering-id <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--filter (list)
The filters.
- instance-family - The instance family of the offering (for example, m4).
- payment-option - The payment option (NoUpfront | PartialUpfront | AllUpfront).
Shorthand Syntax:
Name=string,Values=string,string ...
JSON Syntax:
[ { "Name": "string", "Values": ["string", ...] } ... ]
--max-duration (integer)
This is the maximum duration of the reservation to purchase, specified in seconds. Reservations are available in one-year and three-year terms. The number of seconds specified must be the number of seconds in a year (365x24x60x60) times one of the supported durations (1 or 3). For example, specify 94608000 for three years.
--min-duration (integer)
This is the minimum duration of the reservation you'd like to purchase, specified in seconds. Reservations are available in one-year and three-year terms. The number of seconds specified must be the number of seconds in a year (365x24x60x60) times one of the supported durations (1 or 3). For example, specify 31536000 for one year.
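As an illustrative sketch (not part of the AWS CLI reference itself), the same duration arithmetic can be applied programmatically; this assumes the standard boto3 EC2 client, whose parameter names mirror the CLI options on this page:

import boto3

ec2 = boto3.client("ec2")

# One year = 365 * 24 * 60 * 60 = 31536000 seconds; using the same value for the
# minimum and maximum duration restricts results to one-year offerings only.
ONE_YEAR = 365 * 24 * 60 * 60

response = ec2.describe_host_reservation_offerings(
    Filter=[{"Name": "instance-family", "Values": ["m4"]}],
    MinDuration=ONE_YEAR,
    MaxDuration=ONE_YEAR,
)
for offering in response["OfferingSet"]:
    print(offering["OfferingId"], offering["PaymentOption"], offering["HourlyPrice"])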
--offering-id (string)
The ID of the reservation offering.
Examples
This example describes the Dedicated Host Reservations for the M4 instance family that are available to purchase.
Command:
aws ec2 describe-host-reservation-offerings --filter Name=instance-family,Values=m4
Output:
{
  "OfferingSet": [
    {"HourlyPrice": "1.499", "OfferingId": "hro-03f707bf363b6b324", "InstanceFamily": "m4", "PaymentOption": "NoUpfront", "UpfrontPrice": "0.000", "Duration": 31536000},
    {"HourlyPrice": "1.045", "OfferingId": "hro-0ef9181cabdef7a02", "InstanceFamily": "m4", "PaymentOption": "NoUpfront", "UpfrontPrice": "0.000", "Duration": 94608000},
    {"HourlyPrice": "0.714", "OfferingId": "hro-04567a15500b92a51", "InstanceFamily": "m4", "PaymentOption": "PartialUpfront", "UpfrontPrice": "6254.000", "Duration": 31536000},
    {"HourlyPrice": "0.484", "OfferingId": "hro-0d5d7a9d23ed7fbfe", "InstanceFamily": "m4", "PaymentOption": "PartialUpfront", "UpfrontPrice": "12720.000", "Duration": 94608000},
    {"HourlyPrice": "0.000", "OfferingId": "hro-05da4108ca998c2e5", "InstanceFamily": "m4", "PaymentOption": "AllUpfront", "UpfrontPrice": "23913.000", "Duration": 94608000},
    {"HourlyPrice": "0.000", "OfferingId": "hro-0a9f9be3b95a3dc8f", "InstanceFamily": "m4", "PaymentOption": "AllUpfront", "UpfrontPrice": "12257.000", "Duration": 31536000}
  ]
}
NextToken -> (string)
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
OfferingSet -> (list)
Information about the offerings.
(structure)
Details about the Dedicated Host Reservation offering.
CurrencyCode -> (string) The currency of the offering.
Duration -> (integer) The duration of the offering (in seconds).
HourlyPrice -> (string) The hourly price of the offering.
InstanceFamily -> (string) The instance family of the offering.
OfferingId -> (string) The ID of the offering.
PaymentOption -> (string) The available payment option.
UpfrontPrice -> (string) The upfront price of the offering. Does not apply to No Upfront offerings.
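Because HourlyPrice and UpfrontPrice are returned as strings, a little post-processing is needed to compare offerings numerically. A minimal Python sketch, assuming response is the parsed output shown in the example above:

def cheapest_all_upfront(response):
    # Keep only the All Upfront offerings and compare them by their one-off price.
    offerings = [o for o in response.get("OfferingSet", [])
                 if o["PaymentOption"] == "AllUpfront"]
    # UpfrontPrice is a string, so convert it before comparing.
    return min(offerings, key=lambda o: float(o["UpfrontPrice"]), default=None)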
API Gateway 7.5.3 Policy Developer Filter Reference
CA SOA Security Manager authorization
Overview
API Gateway acts as a Policy Enforcement Point (PEP) in this situation, enforcing the authorization decisions made by the CA SOA Security Manager, which acts as a Policy Decision Point (PDP).
Note: A CA SOA Security Manager authentication filter must be invoked before a CA SOA Security Manager authorization filter in a given policy. In other words, the end user must authenticate to CA SOA Security Manager before they can be authorized for a protected resource.
Prerequisites
Integration with CA SOA Security Manager requires CA TransactionMinder SDK version 6.0 or later. You must add the required third-party binaries to your API Gateway and Policy Studio installations.
Add third-party binaries to API Gateway
To add third-party binaries to API Gateway, perform the following steps:
- Add the binary files as follows:
  - Add .jar files to the INSTALL_DIR/apigateway/ext/lib directory.
  - Add .dll files to the INSTALL_DIR\apigateway\Win32\lib directory.
  - Add .so files to the INSTALL_DIR/apigateway/<platform>/lib directory.
- Restart API Gateway.
Add third-party binaries to Policy Studio
To add third-party binaries to Policy Studio, perform the following steps:
- Select Windows > Preferences > Runtime Dependencies in the Policy Studio main menu.
- Click Add to select a JAR file to add to the list of dependencies.
- Click Apply when finished. A copy of the JAR file is added to the plugins directory in your Policy Studio installation.
- Click OK.
- Restart Policy Studio with the -clean option. For example:
  > cd INSTALL_DIR/policystudio/
  > policystudio -clean
Configuration
Configure the following fields on the CA SOA Security Manager authorization filter:
- Name: Enter an appropriate name for the filter to display in a policy.
- […] box, and click the Add button to specify an attribute to fetch from CA SOA Security Manager.
API Gateway 7.6.2 Policy Developer Guide
Packet sniffers
Note: On Linux platforms, […].
[…] is to monitor. The default entry is any.
Note: This setting is only valid on Linux.
On Linux-based systems, network interfaces are usually identified using names like eth0, eth1, and so on. On Windows, these names are more complicated (for example, \Device\NPF_{00B756E0-518A-4144 ... }).
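The packet sniffer itself is configured in Policy Studio, but the interface-naming convention described above is the same one used by other libpcap-based tools. As an unrelated-but-illustrative Python sketch (assuming scapy is installed and the script runs with sufficient privileges), an interface name and a pcap-style capture filter can be sanity-checked like this:

from scapy.all import sniff

# Capture five TCP packets on eth0; on Windows, the iface value would be the
# longer \Device\NPF_{...} style name mentioned above.
packets = sniff(iface="eth0", filter="tcp port 8080", count=5)
packets.summary()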