content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
When you select a custom configuration, the New Virtual Machine wizard prompts you to select the I/O controller type for the virtual machine.
Workstation Pro installs an IDE controller and a SCSI controller in the virtual machine. SATA controllers are supported for some guest operating systems. The IDE controller is always ATAPI. For the SCSI controller, you can choose BusLogic, LSI Logic, or LSI Logic SAS. If you are creating a remote virtual machine on an ESX host, you can also select a VMware Paravirtual SCSI (PVSCSI) adapter.
BusLogic and LSI Logic adapters have parallel interfaces. The LSI Logic SAS adapter has a serial interface. The LSI Logic adapter has improved performance and works better with generic SCSI devices. The LSI Logic adapter is also supported by ESX Server 2.0 and later.
PVSCSI adapters are high-performance storage adapters that can provide greater throughput and lower CPU utilization. They are best suited for environments where hardware or applications drive a very high amount of I/O throughput, such as SAN environments. PVSCSI adapters are not suited for DAS environments.
The choice of SCSI controller does not affect whether the virtual disk can be an IDE, SCSI, or SATA disk.
Some guest operating systems, such as Windows XP, do not include a driver for the LSI Logic or LSI Logic SAS adapter. You must download the driver from the LSI Logic Web site. driver support information. For guest operating system support information and known issues, as well as SATA support, see the online Compatibility Guide on the VMware Web site. | https://docs.vmware.com/en/VMware-Workstation-Pro/12.0/com.vmware.ws.using.doc/GUID-A0438F6C-6651-4A38-853A-0A7A494E23DF.html | 2018-12-10T02:44:02 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.vmware.com |
The following procedure applies to the setup using the WorkflowGen PowerShell installation, which is only compatible with:
Azure SQL Database
MS SQL Server with SQL Server authentication enabled
Windows Server 2012 R2, Windows Server 2016, and Windows 10 x64
For other versions of Windows, use the manual installation procedure.
You'll need an active internet connection to perform this installation unless all of the dependencies have been downloaded by running the script with the
-DownloadOnly script flag.
If you're using Azure SQL database, you'll need to create and configure the database manually; see the Azure SQL database configuration section in the WorkflowGen for Azure guide for instructions on how to do this.
If you're using MS SQL Server with the WorkflowGen database creation, the installation will require the SQL Server PowerShell module.
If your Windows has SQL Server and SQL Server Management Studio installed, the SQL Server PowerShell module comes pre-installed.
If your Windows has PowerShell version 5 or later installed (e.g. Windows Server 2016/Windows 10), the installation script will auto-detect and install the SQL Server module from the PowerShell Gallery.
If your Windows has PowerShell version 4 or earlier installed (e.g. Windows Server 2012 R2), you'll need to manually install the PowerShell Extensions from the SQL Server Feature Pack according to your SQL Server version below:
Download and install the following packages from the feature pack:
ENU\x64\SQLSysClrTypes.msi
ENU\x64\SharedManagementObjects.msi
ENU\x64\PowerShellTools.msi
Note: If the PowerShell Extensions aren't available for your SQL Server version or the installation script still doesn't detect the SQL Server PowerShell module, then try installing the PowerShell Extensions from the SQL Server 2016 Feature Pack, or try installing SQL Server 2016 Management Studio.
Ensure that the PowerShell Execution Policy is correctly set (see). To do this, run
Set-ExecutionPolicy Unrestricted in the PowerShell command window.
Note: If you want to avoid modifying the Execution Policy, you can bypass it by running the WorkflowGen installation script as follows:
PowerShell.exe -ExecutionPolicy Bypass -File .\install.ps1.
Clicking on the shell while it is running will pause the output; you can change this option in the PowerShell options, or press
ENTER to resume the output (this will not pause the script, which will continue to run in the background).
In JSON format, backslashes (
\) must be escaped as follows:
{"param" : "C:\\valid\\windows\\path"}
You can abort the script at any point by pressing
CTRL+C. If this is done during a download or extraction process, the folders created might need to be deleted (e.g.
\package\); otherwise, the script will detect their presence and assume that they are complete.
The PowerShell installation will also install Node.js v10.16.3, iisnode, and IIS URL Rewrite.
Note: Visual C++ Redistributable is required in some particular Windows Server versions and/or configurations, otherwise you might encounter the error
The specified module could not be found regarding the
edge and
edge-js libraries when accessing the
/wfgen/graphql,
/wfgen/hooks, or
/wfgen/scim web apps. You'll need to download and install this manually.
To install a previous version of WorkflowGen, use that version's PowerShell installation, available in the Release Notes & Downloads section of the WorkflowGen Forum & Knowledge Base.
Open
config.json in a text editor and configure the parameters for your installation of WorkflowGen (see PowerShell installation configuration parameters below for information on each parameter).
Open a PowerShell instance as Administrator.
Run
.\install.ps1 (with the optional script flags listed below, if desired). | https://docs.advantys.com/workflowgen-technical-reference-guide/v/7.16.0/setup/powershell-installation | 2020-07-02T08:25:30 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.advantys.com |
Anypoint Studio 6.0 Beta with Mule Runtime Engine 3.8.0-M1 Release Notes
January 28, 2016
Build ID:
Compatibility
Mule Runtime
Version: 3.8.0 EE - M1
Anypoint Studio
Version: 6.6.0
Build Id: TBD
APIkit
Versions: 3.8 - 1.7.3 - 1.6.3 - 1.5.4
SAP Connector
Versions: TBD
DataWeave
Version: 1.1.0
This new release features the following improvements:
Discover APIs to implement directly from API Manager, and easily synchronize API designs across Studio and API Designer projects.
Support for reading and writing fixed width and other flat files using DataWeave.
Users can now browse for WSDLs in the exchange and when configuring the WS Consumer.
Support for Mule 3.8, which unifies the API Gateway runtime with the core Mule runtime, adds additional TLS capabilities, additional tuning parameters for batch, and improved error messages.
What’s New
Synchronize API designs
Discover APIs to implement directly from API Manager, and easily synchronize API designs across Studio and API Designer projects.
Improved debugging and error messages.
And more
Support for reading and writing fixed width and other flat files using DataWeave.
Users can now browse for WSDLs in the exchange and when configuring the WS Consumer.
Support for Mule 3.8, which creates a single unified runtime for API gateways and integrations, adds additional TLS capabilities, additional tuning parameters for batch, and improved error messages.
Hardware and Software System Requirements
For most use cases, Anypoint Studio with the 3.8 Runtime does not change hardware or software requirements.
MuleSoft recommends a minimum of 4GB RAM on a developer workstation. As applications become complex, consider adding more RAM.
This version of Anypoint Studio requires Java 7.
Known Issues
To use the DataWeave preview with Flat Files, a full path must be configured for the schema and sample data files.
Support
Jira Ticket List for Anypoint Studio
New Features
[STUDIO-6862] - [DW-UI] Change Editor Layout
[STUDIO-7144] - Support batch configurable block size
[STUDIO-7145] - Support for configurable job instance id in Batch
[STUDIO-7153] - Add support for batch history configuration
[STUDIO-7404] - Use MULE_HOME and MULE_BASE to launch Mule from Studio using the wrapper
[STUDIO-7466] - Add support for WSDL discovery from Exchange in WSC connector
[STUDIO-7468] - Add flat type in mule common
[STUDIO-7480] - [Xeptember project] Merge the "Mule components contribute to debugger" feature
Bug Fixes
[STUDIO-3229] - __MacOS directory created when exporting Studio documentation
[STUDIO-5550] - Open in Studio button does not work while maven is running
[STUDIO-6328] - Running with a different runtime version that the project disables auto redeploy on save
[STUDIO-6359] - Exception when closing files in editor
[STUDIO-6495] - Null Pointer: "Close unrelated projects" tab issue
[STUDIO-6739] - NPE when deleting a project and after adding a custom metadata type
[STUDIO-7170] - [SE] Zoom is not working
[STUDIO-7183] - [DW-UI] mapObject is not adding the fx icon in some particular cases cases
[STUDIO-7187] - [DW-UI] Problem with highlight in right tree when mapping more than one element
[STUDIO-7208] - DW-UI Performance Issues with DW editor when file are big
[STUDIO-7228] - DW: drag and drop deletes my previous script
[STUDIO-7309] - DW: Generating Sample Data for XML text is creating an invalida XML content
[STUDIO-7372] - DW: When changing the target my layout should not be changed
[STUDIO-7381] - Error Markers are not correctly shown in the WS Consumer Global Element Properties
[STUDIO-7407] - Studio fails to open configs when the editor contains a nested element that contains it self
[STUDIO-7409] - CLONE - Import maven project does not copy source control files
[STUDIO-7410] - Profile attrs in bean elements are being deleted by Studio
[STUDIO-7427] - [DW-UI] There is no line render when using inboundProperties."http.query.params" or inboundProperties."http.uri.params"
[STUDIO-7440] - DataWeave preferences menu does not have a default acceptable value for "Levels of recursion" field.
[STUDIO-7457] - Projects with Gateway runtime do not run in Studio
[STUDIO-7463] - Spring bean is wrongly assigned (by default) as a reference in SAP extended properties
[STUDIO-7465] - Scaffolder from APIKit 1.7.3 does not work in Studio
[STUDIO-7473] - [DW] Migrator: In some cases doesn’t choose the "default" operation when it should
[STUDIO-7489] - Payload dropdown menu does not appear in Windows.
[STUDIO-7490] - Editing current target to inline or file, it erases current script.
[STUDIO-7492] - WSDL location attribute is added as a child element in the WSDL configuration.
[STUDIO-7501] - HTTP Request: When using a path with parameters, Studio does not generate all of them automatically.
[STUDIO-7514] - [DW-UI] Descendant selector is not working properly when two flowVars has the same structure
[STUDIO-7521] - 'Load CSV files from file' dialog doesn’t recognise "\t" as tab for delimiter
[STUDIO-7522] - Cannot generate flows from RAML
[STUDIO-7528] - [DW-UI] Autocomplete doesn’t work for Xml complex lists
[STUDIO-7529] - [DW-UI] User does not have any clue to set sample data when is trying to run previewd
[STUDIO-7532] - General configurations of uninstalled MPs are being populated in others MPs by default.
[STUDIO-7533] - Define Sample Data: Flat File missing from combo list
[STUDIO-7535] - Problem with Layout when setting the sample data from the preview link
[STUDIO-7542] - DataWeave: My original Sample Data file is deleted when closing the sample data editor
[STUDIO-7545] - FlatFiles: schema files inside the project are not being parsed
[STUDIO-7557] - [DW-UI] Regenerate sample data does not work
[STUDIO-7558] - [DW-UI] The Preview must be read-only
[STUDIO-7564] - [D2I] Show deprecated checkbox is not working
[STUDIO-7568] - [D2I] Default api.raml file in AP is not generated in Studio.
[STUDIO-7571] - [D2I] List of apis should show the version name not version id.
Improvements
[STUDIO-1333] - There is no specification when there is a global endpoint or a connector created in the global elements table, they are just called the same and it is confusing
[STUDIO-5576] - Improve Canvas watermark to give better first instructions to the User
[STUDIO-5929] - Improve New Flow layout
[STUDIO-5936] - Update blank canvas message
[STUDIO-7126] - When selecting JSON example the file filter is .schema instead of. json
[STUDIO-7220] - DW: Improve Change target experience
[STUDIO-7354] - Update message in the canvas when it is empty
[STUDIO-7435] - Support TLS context ciphers and protocols
[STUDIO-7451] - [DW-UI] Add Shortcuts
[STUDIO-7452] - Use sample file from metadata definition for input sample data in DataWeave
[STUDIO-7455] - [DW-UI] Change target experience
[STUDIO-7456] - We need to support weave grammar for 3.8.0
[STUDIO-7483] - [DW-UI] Change data type label for lists in flat files
[STUDIO-7496] - [DW-UI] Remove defined metadata button should be added.
[STUDIO-7519] - Metadata: list of types should be alpha sorted
[STUDIO-7555] - Add a highlight effect to apikit button when creating a new mule project.
Tasks
[STUDIO-7355] - Update to Eclipse 4.5
[STUDIO-7383] - Support TLS context trust-store "insecure" attribute
[STUDIO-7387] - Update/sign mars compatible jeeeyul features
[STUDIO-7398] - Create APIKIT 1.7.3 build (Nightly)
[STUDIO-7401] - Unified runtime: migrate features contributed from API Gateways into Studio
[STUDIO-7403] - Define strategy and implementation roadmap for API to Implementation initiative
[STUDIO-7420] - Review and improve Studio update mechanism and inter-plugin versions dependencies
[STUDIO-7470] - Remove XML/XSD Template Viewer from SAP Connector
[STUDIO-7488] - Support "encodeCredentialsInBody" attribute in token request element
[STUDIO-7536] - Brand Studio 6.0 beta
[STUDIO-7537] - Make the new UI the default one in Studio 6.0 beta
Jira Ticket List for DataWeave
Bug Fixes
[MDF-155] - Mapping using Java Map with String key is failing when input has numeric chars
[MDF-158] - First element of an array cannot have a condition
[MDF-162] - [DW] Attributes definition in key should start with a blank space after the key
[MDF-163] - [SE] DataWeave not processing property placeholder in reader properties
[MDF-164] - Weave not working with a 10K lines json
[MDF-168] - vars with arrays are consumed on first iteration
[MDF-170] - Range selector not working correctly on strings
[MDF-173] - joinBy throws exception with empty array
[MDF-174] - Avg Min Max Reduce Not Working with empty arrays
[MDF-178] - Json Parser not parsing correct numbers
[MDF-179] - CSV Not parsing
[MDF-177] - Inconsistency between distinctBy, equals and contains | https://docs.mulesoft.com/release-notes/studio/anypoint-studio-6.0-beta-with-3.8-m-1-runtime-release-notes | 2020-07-02T08:50:10 | CC-MAIN-2020-29 | 1593655878639.9 | [array(['../_images/studio/studio-apiplat-integration.png', 'api'],
dtype=object)
array(['../_images/studio/studio-apiplat-integration2.png', 'api'],
dtype=object)
array(['../_images/studio/studio-new-console.png', 'console'],
dtype=object) ] | docs.mulesoft.com |
- PROTOTYPED : API is deployed and published in the API Store as a prototype. A prototyped API is usually a mock implementation made public in order to get feedback about its usability. Users cannot subscribe to a prototyped API. They can only try out its functionality.
- PUBLISHED : API is visible in the API Store and available for subscription.
-
<ApplicationAcces.
Starting the API Manager
- Download WSO2 API Manager from.
- Install Oracle Java SE Development Kit (JDK) version 1.6.24 or later or 1.7.*.
- Set the JAVA_HOME environment variable.
- Using the command line, go to <Installation directory>/bin and execute wso2server.bat (for Windows) or wso2server.sh (for Linux).
- Wait until you see the message "WSO2 Carbon started in 'n' seconds."
It indicates that the server started successfully. To stop the API Manager, simply hit Ctrl-C in the command window. user interface (
https://<hostname>:9443/carbon) of the API Manager://<hostname>:9443/publisher) and log in as
apicreator.
Click the Add link and provide the information given in the table below.
Click Implement.
- It asks you to create a resource with wildcard characters (/*). Click Yes.
- Note that a resource by the name
defaultgets created as follows.
Click Implement again to go to the
Implementtab and provide the following information.
Click Manage to go to the
Managetab and provide the following information.
API Resources
An API is made up of one or more resources. Each resource handles a particular type of request and is analogous to a method (function) in a larger API. API resources accept following optional attributes:
- verbs : Specifies the HTTP verbs a particular resource accepts. Allowed values are GET, POST, PUT, DELETE. Multiple values can be specified.
-. For more information, see Managing Throttling Tiers.
- Auth-Type: Specifies the Resource level authentication along HTTP verbs. Auth-type can be None, Application or Application User.
- None : Can access the particular API resource without any access tokens
- Application: Application access token required to access the API resource
- Application User: User access token required to access the API resource example, in the
PhoneVerification API, we have changed the path for all the HTTP methods of API definition from
/phoneverify/1.0.0/
to
/phoneverify/1.0.0/CheckPhoneNumber as follows: in version.major.minor format (e.g., 1.1.0) version 1.1.0 that you created before. Note that you can now see a tab as API Lifecycle in the API Publisher UI.
- Go to the Lifecycle tab and select the state as
PUBLISHEDfrom the drop-down list.
- will be set to the DEPRECATED state automatically.
- Require Re-Subscription: Invalidates current user subscriptions, forcing users to subscribe again.
Subscribing to the API
You subscribe to APIs using the API Store Web application.
- Open the API Store (
https://<hostname>:9443/store) using your browser. Using the API Store, you can,
- Search and browse APIs
- Read documentation
- Comment on, rate and share/advertize APIs
- Take part in forums and request features etc.
The API you published earlier is available in the API Store. Self sign up to the API Store using the Sign-up link.
After subscription, log in to the API Store and click the API you published earlier (PhoneVerification 1.1.0).
- Note that you can see the subscription option in the right hand side of the UI after logging in. Select the default application,
Bronzetier and click Subscribe.
Applications
An application is a logical collection of one or more APIs, and is required when subscribing to an API. You can subscribe to multiple APIs using the same application. Instead of using the default application, you can also create your own by selecting the New Application... option in the above drop-down list or by going to the My Applications menu in the top menu bar.
- Once the subscription is successful, go to My Subscriptions page.
- In the My Subscriptions page, click the Generate buttons to generate production and sandbox access tokens and consumer key/secret pairs for the API. For more information on access tokens, see Working with Access Tokens.
Invoking the API
To invoke an API, you can use the integrated Swagger interactive documentation support (or any other simple REST client application or curl).
- Log in to the API Store (
https:/
/<YourHostName>:9443/store).
- Click the
PhoneVerification 1.1.0API that you published earlier.
- Click the API Console tab associated with the API.
Provide the necessary parameters and click Try it out to call the API. For example, the
PhoneVerificationAPI takes two parameters: the phone number and a license key, which is set to 0 for testing purposes.
Note the following in the above UI:
- The response for the API invocation
Configuring statistics
Steps below explain how to configure WSO2 BAM 2.4.1 with the API Manager.
Do the following changes in
<APIM_HOME>/repository/conf/api-manager.xmlfile:
- Enable API usage tracking by setting the
<APIUsageTracking>element to true
- Set the Thrift port to 7614
->
Next, prepare BAM to collect and analyze statistics from API manager.
- Download WSO2 BAM 2.4.1.
- Do the following changes in
<BAM_HOME>/repository/conf/datasources/bam_datasources.xmlfile:
- Copy/paste
WSO2_AMSTATS_DBdefinition from API Manager's
master-datasources.xmlfile. You edited it in step 2.
Replace the port of
WSO2BAM_CASSANDRA_DATASOURCEin URL (
jdbc:cassandra://localhost:9163/EVENT_KS). Note that localhost is used here; not the machine IP.
- Do not edit the
WSO2BAM_UTIL_DATASOURCE, which is using the offset
- Cassandra is bound by default on localhost, unless you change the data-bridge/data-bridge-config.xml file
-.xml file and change the port to localhost:9163. You must add the other nodes too when configuring a clustered setup.
<Nodes>localhost:9163</Nodes>
Restart the BAM server by running
<BAM_HOME>/bin/wso2server.[sh/bat].
Viewing statistics
To see statistics, you first generate some traffic via the API Gateway (invoke the Cdyne API we use in this guide) and wait a few seconds. Then, follow these steps:
- Connect to the API Publisher as a creator or publisher.:
For more information, see Viewing API Statistics.
This concludes the API Manager quick start. You have set up the API Manager and taken a look at its common usecases. For more advanced usecases, please see the User Guide and the Admin Guide of the API Manager documentation.
- Configure the
WSO2AM_STATS_DBdatabase used to store analytical data (BAM tooling analyzes and writes data to this database). | https://docs.wso2.com/pages/viewpage.action?pageId=45945514 | 2020-07-02T08:49:05 | CC-MAIN-2020-29 | 1593655878639.9 | [] | docs.wso2.com |
Enabling and Configuration
In this section, we will show you how to setup OCWP after installation.
Link to the WordPress admin panel
The OCWP modification adds a new link to your OpenCart admin panel main menu: WordPress.
The link will be visible after you link OCWP with WordPress (explained later in this document), and after you give your admin panel user access permissions for module/ocwp. Here is how it will look like:
Link to the WordPress front-end
OCWP gives you the option to add a link to your blog in the front-end main menu of your OpenCart store. This can be done from Extensions > Modules > OCWP > Main Settings > Display link to WordPress blog in front-end main menu. Here is how it will look like:
For developersThe modified controller is catalog/controller/common/header.php - keep this in mind in case of unexpected clashes with other third-party modifications.
Linking OCWP with WordPress
After you finish with the initial installation of OCWP and visit the module page in Extensions > Modules > OCWP you will see the WordPress installation screen. OCWP will automatically search for existing WordPress installations in your OpenCart root folder. Depending on the results from this query, the welcome screen will look like one of two possible ways.
Scenario 1 - You have no existing WordPress blogs
In this case you will see the following screen:
The initial installation of WordPress cannot get simpler - you simply need to enter the name of your WordPress blog, the URL, your admin credentials and click Proceed.
NoteThe new WordPress installation will be placed in the following directory, relative to your OpenCart root directory: vendors/ocwp/wordpress/
The installation input fields are the following:
If you wish to fine-tune your installation you can click the Show Advanced Options link. You will be presented with the following options:
Scenario 2 - you already have an existing WordPress blog
In this case you are greeted with the following screen:
As you can see, OCWP has already traversed your OpenCart root folder and it has listed all existing WordPress installations. All you need to do is select the installation which you need to link and click Proceed.
OCWP plugins for WordPress
During the linking of OCWP with a WordPress installation, a few new WordPress plugins are installed in your WordPress:
OCWP Cache
This plugin clears OpenCart's OCWP cache on any POST requests made in WordPress (for example, whenever you click Save as an administrator). If you disable this, the content of the OpenCart OCWP widgets will not get automatically updated whenever you change WordPress content, meaning it will get served from the OpenCart cache if any exists.
OCWP Login/Logout
Provides an automatic WordPress admin panel login for admins who are already logged in OpenCart. Also, if you keep it enabled, whenever you logout of the WordPress admin panel, you will also be logged out of the OpenCart admin panel.
In order for this plugin to work for a specific user (for example [email protected]), the following conditions must be met:
- [email protected] must be an existing admin panel user in your WordPress blog;
- [email protected] must be the e-mail of an existing admin panel user in your OpenCart store;
- In OpenCart [email protected] must have access rights for the OpenCart path module/ocwp.
If you disable this plugin, automatic login into the WordPress admin panel from OCWP will no longer be possible.
Creating a backup of the WordPress database
OCWP allows you to create backups of your WordPress database directly from your OpenCart admin panel. This will be possible only if your OpenCart and WordPress are using the same database. If they are both configured with different databases, then you will not be able to do backups of the WordPress tables from the OpenCart admin panel.
To do a backup of the WordPress database tables, follow these steps:
- Go to Admin > Tools > Backup/Restore
- Select the following tables and click on the Backup icon on the top right:
{wp_table_prefix}commentmeta
{wp_table_prefix}comments
{wp_table_prefix}links
{wp_table_prefix}options
{wp_table_prefix}postmeta
{wp_table_prefix}posts
{wp_table_prefix}term_relationships
{wp_table_prefix}term_taxonomy
{wp_table_prefix}terms
{wp_table_prefix}usermeta
{wp_table_prefix}users
Note{wp_table_prefix} stands for the prefix of your WordPress tables. In most cases this is oc_wp_, however it may be different depending on the settings you have chosen during the linking described in the Linking OCWP with WordPress section in this document.
The generated .SQL file can be used to restore your WordPress database. You can either insert it into PHPMyAdmin, or the OpenCart Backup/Restore form. | http://docs.isenselabs.com/ocwp/enabling_and_configuration | 2017-12-11T04:01:39 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['/doc/ocwp/img/image12.png', 'image12'], dtype=object)
array(['/doc/ocwp/img/image33.png', 'image33'], dtype=object)
array(['/doc/ocwp/img/image11.png', 'image11'], dtype=object)
array(['/doc/ocwp/img/image15.png', 'image15'], dtype=object)
array(['/doc/ocwp/img/image14.png', 'image14'], dtype=object)] | docs.isenselabs.com |
Linux supports a number of different solutions for installing MySQL. The recommended method is to use one of the distributions from Oracle. If you choose this method, there are three options available:
Installing from a generic binary package in
.tar.gzformat. See Section 2.2, “Installing MySQL from Generic Binaries on Unix/Linux” for more information.
Extracting and compiling MySQL from a source distribution. For detailed instructions, see Section 2.11, “Installing MySQL from Source”.
Installing using a pre-compiled RPM package. For more information on using the RPM solution, see Section 2.5.1, “Installing MySQL from RPM Packages on Linux”.
As an alternative, you can use the native package manager within your Linux distribution to automatically download and install MySQL for you. Native package installations can take of the download and depdendencies 2.5.2, “Installing MySQL on Linux using Native Package Manager”. 2.12.1.2, “Starting and Stopping MySQL Automatically”. | http://doc.docs.sk/mysql-refman-5.5/linux-installation.html | 2017-12-11T04:05:41 | CC-MAIN-2017-51 | 1512948512121.15 | [] | doc.docs.sk |
Cumulus VX appliance
Cumulus VX is a community-supported virtual appliance that enables cloud admins!
More informations on
Default username is cumulus and password is CumulusLinux!
RAM: 512 MB
You need KVM enable on your machine or in the GNS3 VM.
Documentation for using the appliance is available on | http://docs.gns3.com/appliances/cumulus-vx.html | 2017-12-11T04:07:03 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.gns3.com |
Legendre1D¶
- class
astropy.modeling.polynomial.
Legendre1D(degree, domain=None, window=[-1, 1], n_models=None, model_set_axis=None, name=None, meta=None, **params)[source] [edit on github]¶
Bases:
astropy.modeling.polynomial.PolynomialModel
Univariate Legendre series.
It is defined as:\[P(x) = \sum_{i=0}^{i=n}C_{i} * L_{i}(x)\]
where
L_i(x)is the corresponding Legendre polynomial.
Notes
This model does not support the use of units/quantities, because each term in the sum of Legendre polynomials is a polynomial in x - since the coefficients within each Legendre polynomial are fixed, we can’t use quantities for x since the units would not be compatible. For example, the third Legendre polynomial (P2) is 1.5x^2-0.5, but if x was specified with units, 1.5x^2 and -0.5. | http://docs.astropy.org/en/stable/api/astropy.modeling.polynomial.Legendre1D.html | 2017-12-11T03:42:42 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.astropy.org |
This section will show you what to do when the scene becomes dark during placing the assets.
It may be caused by the deletion of the Directional Light which acts as the light source in the scene.
To solve this problem, we have to add a Directional Light into the scene again.
Go to Assets > 3D Model > Directional Light.
Then light got into the objects in the space.
Look at the hierarchy and confirm if the Directional Light is added or not.
| http://docs.styly.cc/faq/the-assets-in-the-scene-go-dark/ | 2017-12-11T03:58:13 | CC-MAIN-2017-51 | 1512948512121.15 | [array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object)
array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif',
None], dtype=object) ] | docs.styly.cc |
Deployment groups
VSTS | TFS 2018 definition,. | https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/deployment-groups/?WT.mc_id=blog-twitter-abewan | 2017-12-11T03:54:07 | CC-MAIN-2017-51 | 1512948512121.15 | [] | docs.microsoft.com |
Getting Started
This section walks you through the basics of MonoGame and helps you create your first game.
First, select the toolset and operating system you will be working with to create your first MonoGame project and then continue your journey to understand the basic layout of a MonoGame project.
By the end of this tutorial set, you will have a working project to start building from for your target platform and be ready to tackle your next steps. | https://docs.monogame.net/articles/getting_started/0_getting_started.html | 2020-09-18T17:05:48 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.monogame.net |
Mailing List¶
General questions, potential contributors, and ideas should be directed to the developer mailing list. It is an open Google Group, so feel free to join anytime! If you are unsure about where to ask or post something, the mailing list is a good place to ask as well.
Issues¶
Bug reports and feature requests should be directed to the issues page of the Modin GitHub repo. | https://modin.readthedocs.io/en/latest/contact.html | 2020-09-18T16:32:14 | CC-MAIN-2020-40 | 1600400188049.8 | [] | modin.readthedocs.io |
2020-08-04
In SDK 3.3, we have added support for some rather exciting new features:
Core Location Fusion (iOS). In venues with Apple indoor positioning, we can improve the positioning performance compared to Apple’s WiFi-based system without any added infrastructure. As always, Android positioning is also supported in such venues without extra infrastructure using IndoorAtlas technology.
Encrypted iBeacons are now supported. They reduce the risk of 3rd parties being able to use your beacon deployments for their purposes.
Completely revised indoor-outdoor detection algorithm: Using the new algorithm requires some outdoor fingerprinting. When enabled, it can deliver very good performance.
The positioning services are again fully usable also from Mainland China
To use the new features, please contact IndoorAtlas sales & support to learn where they are available (Core Location Fusion) and to obtain instructions on how to use them.
SDK 3.3 has enabled us to clear a lot of technical debt from the SDK 2.x era, especially in our iOS codebase, which allows us to move a bit faster with the development again. | https://docs.indooratlas.com/technical/release-notes/sdk-33-release-information/ | 2020-09-18T17:16:08 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.indooratlas.com |
How to Comply with PDF/A Standard
PDF/A is an ISO-standardized version of the PDF (Portable Document Format) specialized for the digital preservation of electronic documents.
PDF/A standard is designed to use the PDF format for achieving documents. This means that the compliant documents should contain all the information necessary for displaying the document embedded in the file. This includes all content, fonts, and color information. A PDF/A document is not permitted to rely on information from external sources. Other key elements to PDF/A conformance include:
- Audio and video content is forbidden.
- JS and executable file launches are forbidden.
- All fonts must be embedded. This applies to the Standard 14 fonts as well.
- Color spaces should be specified in a device-independent manner.
- Encryption is forbidden.
- Use of standards-based metadata.
- Transparent objects and layers are forbidden.
- LZW and JPEG2000 image compression models are forbidden.
Compliance Levels
There are three major versions of the standard – PDF/A-1 (2005), PDF/A-2 (2011), PDF/A-3 (2013).
PDF/A-1
PDF/A-1 standard uses the PDF Reference 1.4 and specifies two levels of compliance.
PDF/A-1b
Its goal is to ensure reliable reproduction of the visual appearance of the document.
PDF/A-1a
Its objective is to ensure that documents content can be searched and re-purposed. This compliance level has some additional requirements:
- Document structure must be included.
- Tagged PDF.
- Unicode character maps
- Language specification.
RadPdfProcessing does not support PDF/A-1a level of compliance.
PDF/A-2
Pdf/A-2 standard uses the PDF Reference 1.7. In addition, it has the following features:
- Support for JPEG2000 image compression.
- Support for transparency effects and layers.
It defines three conformance levels.
PDF/A-2a
Corresponding to the PDF/A-1a
RadPdfProcessing does not support PDF/A-2a level of compliance.
PDF/A-2b
This level corresponds to the PDF/A-1b.
PDF/A-2u
Similar to PDF/A-2b level with the additional requirement that all text in the document has Unicode mapping.
PDF/A-3
PDF/A-3 differs from PDF/A-2 in only one regard – it allows embedding of arbitrary file formats into the PDF file.
How to Conform to PDF/A Standard
The PdfFormatProvider class allows to export a RadFixedDocument to PDF and specify some specific settings when doing so. More information about the available export settings can be found in the Settings article.
To comply with one of the versions of the standard, you need to specify ComplianceLevel different from None. The snippet in Example 1 shows how this can be achieved.
Example 1: Export PDF/A compliant document
PdfFormatProvider provider = new PdfFormatProvider(); PdfExportSettings settings = new PdfExportSettings(); settings.ComplianceLevel = PdfComplianceLevel.PdfA2B; provider.ExportSettings = settings; return provider.Export(document);
PDF/A standard requires documents to contain all fonts used in them. RadPdfProcessing does not support embedding of the standard 14 fonts used in PDF documents, so using them will prevent the document from complying with the standard. More information about font embedding is available in the Fonts article. | https://docs.telerik.com/devtools/document-processing/libraries/radpdfprocessing/howto/comply-with-pdfa-standard | 2020-09-18T16:18:27 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.telerik.com |
A tail-recursive function that does not need to reverse the list at the end is faster than a body-recursive function, as are tail-recursive functions that do not construct any terms at all (for example, a function that sums all integers in a list).
Some truths seem to live on well beyond their best-before date, perhaps because "information" spreads faster from person-to-person than a single release note that says, for example, that body-recursive calls have become faster.
This section tries to kill the old truths (or semi-truths) that have become myths.
According to the myth, using a tail-recursive function that builds a list in reverse followed by a call to
lists:reverse/1 is faster than a body-recursive function that builds the list in correct order; the reason being that body-recursive functions use more memory than tail-recursive functions.
That was true to some extent before R12B. It was even more true before R7B. Today, not so much. A body-recursive function generally uses the same amount of memory as a tail-recursive function. It is generally not possible to predict whether the tail-recursive or the body-recursive version will be faster. Therefore, use the version that makes your code cleaner (hint: it is usually the body-recursive version).
For a more thorough discussion about tail and body recursion, see
Erlang's Tail Recursion is Not a Silver Bullet.
A tail-recursive function that does not need to reverse the list at the end is faster than a body-recursive function, as are tail-recursive functions that do not construct any terms at all (for example, a function that sums all integers in a list).].)
String handling can be slow if done improperly. In Erlang, you need to think a little more about how the strings are used and choose an appropriate representation. If you use regular expressions, use the
re module in STDLIB instead of the obsolete
regexp module.
The repair time is still proportional to the number of records in the file, but Dets repairs used to be much slower in the past. Dets has been massively rewritten and improved..
That was once true, but from R6B the BEAM compiler can see that a variable is not used.
Similarly, trivial transformations on the source-code level such as converting a
case statement to clauses at the top-level of the function seldom makes any difference to the generated code.
Rewriting Erlang code to a NIF to make it faster should be seen as a last resort. It is only guaranteed to be dangerous, but not guaranteed to speed up the program.
Doing too much work in each NIF call will
degrade responsiveness of the VM. Doing too little work may mean that the gain of the faster processing in the NIF is eaten up by the overhead of calling the NIF and checking the arguments.
Be sure to read about
Long-running NIFs before writing a NIF.
© 2010–2017 Ericsson AB
Licensed under the Apache License, Version 2.0. | https://docs.w3cub.com/erlang~21/doc/efficiency_guide/myths/ | 2020-09-18T17:04:57 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.w3cub.com |
Windows Symbol Packages
Symbol files make it easier to debug your code. The easiest way to get Windows symbols is to use the Microsoft public symbol server. The symbol server makes symbols available to your debugging tools as needed. After a symbol file is downloaded from the symbol server it is cached on the local computer for quick access.
Symbol package deprecation
Important
We are no longer publishing the offline symbol packages for Windows.
With the cadence that we release updates for Windows, the Windows debugging symbols we publish via the packages on this page are quickly made out of date. We have made significant improvements to the online Microsoft Symbol Server by moving this to be an Azure-based symbol store, and symbols for all Windows versions and updates are available there. You can find more about this in this blog entry.
For information on how to retrieve symbols for a machine that is not connected to the Internet, see Using a Manifest File with SymChk.
Symbol Resources and Feedback
To learn more about using symbols and debugging, see Symbols and Symbol Files.
For help with debugging issues, see Debugging Resources.
We are interested in your feedback about symbols. Please mail suggestions or bug reports to [email protected]. Technical support is not available from this address, but your feedback will help us to plan future changes for symbols and will make them more useful to you in the future. | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-symbols | 2020-09-18T18:01:33 | CC-MAIN-2020-40 | 1600400188049.8 | [] | docs.microsoft.com |
Release Note 20171201
This is a summary of new features and improvements introduced in the December 1st, 2017 release. If you have any product feature requests, submit them at feedback.treasuredata.com.
Table of Contents
- Presto: DELETE statement support
- Presto: Version Upgrade to v0.188
- Presto: Continued Performance Improvements
- Workflow: Enhanced User Interface (Private Beta)
- Hive: TD_NUMERIC_RANGE() and TD_ARRAY_INDEX()
- Collection: Hubspot Custom Properties
- Collection: Google Doubleclick for Publishers (DFP)
- Collection: JavaScript SDK v1.9.1
- Output: Result Output to Salesforce Marketing Cloud (ExactTarget)
- Private Availability
Presto: DELETE statement support
Presto DELETE is now available for all TD customers, superseding our previous partial delete support. You can now delete rows from a table with the familiar SQL DELETE FROM table_name WHERE… with any condition. For details, click the following link:
Presto: Version Upgrade to v0.188
We’ve finished upgrading all customers to Presto v0.188 from v0.178. The major changes include:
Geospatial Functions Support
Presto 0.188 adds industry compliant geospatial functions. The new geospatial functions are a contribution from the ride-sharing company, Uber, to Presto project.
Lambda and Other New Expressions and Functions
Detailed Changelogs
The detailed changelogs are available here.
- Presto version 0.188 release note
- Presto version 0.187 release note
- Presto version 0.186 release note
- Presto version 0.185 release note
- Presto version 0.184 release note
- Presto version 0.183 release note
- Presto version 0.182 release note
- Presto version 0.181 release note
- Presto version 0.180 release note
- Presto version 0.179 release note
Presto: Continued Performance Improvements
A series of internal improvements released during October and November mean that Presto can now process, on average, at least 25% more data per Compute Unit than previously. No changes to your code are required to take advantage of this improvement.
Workflow: Enhanced User Interface (Private Beta)
We continue our private availability for our new Treasure Workflow web user interface experience. We are currently in Private Beta.
With the new experience, you can create and edit of workflows directly from your web browser, utilize quickstart templates, more easily find and debug workflow errors, and view history of edits to a workflow project.
In the last month we have further updated our Workflow UI edit experience, based on feedback, to enhance ease of use.
If you are interested in joining the private access of this new UI, and have not already requested access, complete this sign-up form, or contact your account representative.
Hive: TD_NUMERIC_RANGE() and TD_ARRAY_INDEX()
These two functions were added to Hive engine.
Collection: Hubspot Custom Properties
The Data Connector for Hubspot now enables users to import custom properties for objects.
Collection: Google Doubleclick for Publishers (DFP)
The Data Connector for Google Doubleclick for Publishers (DFP) enables users to import data from DFP, including the company, creative, inventory_adunit, line_item, order, placement, and report objects.
Collection: JavaScript SDK v1.9.1
JavaScript SDK v1.9.1 was released, which includes minor bug fixes and improvements. This version supports multi-token & key column for Personalization API.
Output: Result Output to Salesforce Marketing Cloud (ExactTarget)
Now you can publish user segments from Treasure Data into Salesforce Marketing Cloud (ExactTarget). Run data-driven email campaigns, by using your first party data from a variety of sources.
Private Availability
Console: Improved load time for table list & query editor pages (Private Access)
We have enabled support for a pagination based experience, for quicker page load time on these frequently accessed pages. If you’re interested in trying out this support, please let support or your account representative know, and we can turn it on for your account.
We will also be enabling support for faster load time on the saved queries page soon as well.
Last modified: Nov 30 2017 19:57:50 UTC
If this article is incorrect or outdated, or omits critical information, let us know. For all other issues, access our support channels. | https://docs.treasuredata.com/articles/releasenote-20171201 | 2018-02-18T04:33:57 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.treasuredata.com |
Create new Form Builder Meta Box
A form builder meta box is a normal post meta box and can be added with add_meta_box().
add_meta_box( $id, $title, $callback, $screen, $context, $priority, $callback_args );
WordPress Codex:
Just make sure you use "buddyforms" as post type to make sure the meta box is only loaded in the BuddyForms edit screen.
In the above example we add a checkbox to the meta box.
You can use any kind of form element. As long as you make sure you add it to the "buddyforms_options" array all the rest like saving and validating will be done automatically. Just make suer you use the structure "buddyforms_options[YOUR OPTION]" in your form element name. | https://docs.buddyforms.com/article/308-create-new-form-builder-metabox | 2018-02-18T04:37:32 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.buddyforms.com |
Working Remotely
If you access Windows SBS remotely, you can read e-mail, access your computer at work, view your company’s internal Web site, and synchronize offline files.
- You can use Remote Web Workplace to access the Windows SBS network remotely. For more information, see Using the Remote Web Workplace.
- When you are traveling or working from home, you can check your work e-mail remotely over the Internet by using a Web-based version of Outlook, called Outlook Web Access. For more information, see Checking work e-mail remotely.
- You can use Windows SBS to work offline by sharing network files and programs, even when you are not connected to the network. For more information, see Working offline. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-essentials/cc747283(v=ws.10) | 2018-02-18T05:44:14 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.microsoft.com |
How to: Stop Code Changes
While Edit and Continue is in the process of applying code changes, you can stop the operation.
Caution
Stopping code changes in managed code can produce unexpected results. Applying changes to managed code is normally a quick process, so there is seldom a need to stop code changes in managed code.
To stop applying code changes
Choose Stop Applying Code Changes from the Debug menu.
This menu item is visible only when code changes are being applied.
If you choose this option, none of the code changes are committed.
See Also
Edit and Continue
Edit and Continue, Debugging, Options Dialog Box | https://docs.microsoft.com/en-gb/visualstudio/debugger/how-to-stop-code-changes | 2018-02-18T04:52:32 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.microsoft.com |
New-SPVisio
Safe Data Provider
Syntax
New-SPVisioSafeDataProvider -DataProviderId <String> -DataProviderType <Int32> -VisioServiceApplication <SPVisioServiceApplicationPipeBind> [-AssignmentCollection <SPAssignmentCollection>] [-Description <String>] [<CommonParameters>]
Description
The
New-SPVisioSafeDataProvider cmdlet adds a new data provider to the list of safe data providers for a Visio Services application.
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation at ().
Examples
-------------------EXAMPLE------------------------
C:\PS>New-SPVisioSafeDataProvider -VisioServiceApplication "VGS1" -DataProviderID "CustomProvider" -DataProviderType 5 -Description "Custom Data Provider"
This example creates a new safe data provider for a specified Visio Services application.
Required Parameters
Specifies the name of the data provider to create. The combination of DataProviderID and DataProviderType uniquely identify a data provider for a Visio Services application. The string that identifies the data provider can be a maximum of 255 alphanumeric characters.
The type must be a valid string that identifies the data provider; for example, VisioDataProvider1.
The type must be a valid identity of a data provider type.
Specifies the supported type of the data provider to add. Custom data types are supported; for example, Excel Services.
Specifies the Visio Services application in which to add the new safe data provider. description of the new safe data provider.
The type must be a string with a maximum of 4096 characters. | https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/New-SPVisioSafeDataProvider?view=sharepoint-ps | 2018-02-18T06:09:22 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.microsoft.com |
High availability and data protection for availability group configurations
This article presents supported deployment configurations for SQL Server Always On availability groups on Linux servers. An availability group supports high availability and data protection. Automatic failure detection, automatic failover, and transparent reconnection after failover provide high availability. Synchronized replicas provide data protection.
On a Windows Server Failover Cluster (WSFC), a common configuration for high availability uses two synchronous replicas and a third server or file share to provide quorum. The file-share witness validates the availability group configuration - status of synchronization, and the role of the replica, for example. This configuration ensures that the secondary replica chosen as the failover target has the latest data and availability group configuration changes.
The WSFC synchronizes configuration metadata for failover arbitration between the availability group replicas and the file-share witness. When an availability group is not on a WSFC, the SQL Server instances store configuration metadata in the master database.
For example, an availability group on a Linux cluster has
CLUSTER_TYPE = EXTERNAL. There is no WSFC to arbitrate failover. In this case the configuration metadata is managed and maintained by the SQL Server instances. Because there is no witness server in this cluster, a third SQL Server instance is required to store configuration state metadata. All three SQL Server instances together provide distributed metadata storage for the cluster.
The cluster manager can query the instances of SQL Server in the availability group, and orchestrate failover to maintain high availability. In a Linux cluster, Pacemaker is the cluster manager.
SQL Server 2017 CU 1 enables high availability for an availability group with
CLUSTER_TYPE = EXTERNAL for two synchronous replicas plus a configuration only replica. The configuration only replica can be hosted on any edition of SQL Server 2017 CU1 or later - including SQL Server Express edition. The configuration only replica maintains configuration information about the availability group in the master database but does not contain the user databases in the availability group.
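To check which cluster technology an availability group uses and how each replica is configured, you can query the availability group catalog views. The query below is a minimal sketch; the group and server names it returns depend on your environment.

```sql
-- List each availability group, its cluster type, and its replicas.
-- An availability group created with CLUSTER_TYPE = EXTERNAL reports 'EXTERNAL'
-- in cluster_type_desc; a configuration only replica reports 'CONFIGURATION_ONLY'
-- in availability_mode_desc.
SELECT ag.name                   AS availability_group,
       ag.cluster_type_desc      AS cluster_type,
       ar.replica_server_name    AS replica,
       ar.availability_mode_desc AS availability_mode,
       ar.failover_mode_desc     AS failover_mode
FROM sys.availability_groups AS ag
JOIN sys.availability_replicas AS ar
    ON ar.group_id = ag.group_id
ORDER BY ag.name, ar.replica_server_name;
```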
How the configuration affects default resource settings
SQL Server 2017 introduces the
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT cluster resource setting. This setting guarantees the specified number of secondary replicas write the transaction data to log before the primary replica commits each transaction. When you use an external cluster manager, this setting affects both high availability and data protection. The default value for the setting depends on the architecture at the time the cluster resource is created. When you install the SQL Server resource agent -
mssql-server-ha - and create a cluster resource for the availability group, the cluster manager detects the availability group configuration and sets
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT accordingly.
If supported by the configuration, the resource agent parameter
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is set to the value that provides high availability and data protection. For more information, see Understand SQL Server resource agent for pacemaker.
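For reference, the current value of this setting is exposed in sys.availability_groups, and it can be changed with ALTER AVAILABILITY GROUP. The statements below are a sketch that assumes an availability group named ag1; when an external cluster manager is in use, the resource agent recomputes the value during monitoring, so a manual change may be overwritten.

```sql
-- Inspect the current setting for each availability group.
SELECT name,
       required_synchronized_secondaries_to_commit
FROM sys.availability_groups;

-- Change the setting; run on the instance that hosts the primary replica.
-- The availability group name ag1 is a placeholder.
ALTER AVAILABILITY GROUP [ag1]
    SET (REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT = 1);
```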
The following sections explain the default behavior for the cluster resource.
Choose an availability group design to meet specific business requirements for high availability, data protection, and read-scale.
The following configurations describe the availability group design patterns and the capabilities of each pattern. These design patterns apply to availability groups with
CLUSTER_TYPE = EXTERNAL for high availability solutions.
- Three synchronous replicas
- Two synchronous replicas
- Two synchronous replicas and a configuration only replica
Three synchronous replicas
This configuration consists of three synchronous replicas. By default, it provides high availability and data protection. It can also provide read-scale.
An availability group with three synchronous replicas can provide read-scale, high availability, and data protection. For this configuration, the resource agent sets REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT to 1 by default, so automatic failover can target an up-to-date synchronous secondary replica without data loss.
Two synchronous replicas
This configuration enables data protection. Like the other availability group configurations, it can enable read-scale. The two synchronous replicas configuration does not provide automatic high availability.
An availability group with two synchronous replicas provides read-scale and data protection.
Note
The preceding scenario is the behavior prior to SQL Server 2017 CU 1.
Two synchronous replicas and a configuration only replica
An availability group with two (or more) synchronous replicas and a configuration only replica provides data protection and may also provide high availability. The following diagram represents this architecture:
- Synchronous replication of user data to the secondary replica. It also includes availability group configuration metadata.
- Synchronous replication of availability group configuration metadata. It does not include user data.
In the availability group diagram, a primary replica pushes configuration data to both the secondary replica and the configuration only replica. The secondary replica also receives user data. The configuration only replica does not receive user data. The secondary replica is in synchronous availability mode. The configuration only replica does not contain the databases in the availability group - only metadata about the availability group. Configuration data on the configuration only replica is committed synchronously.
Note
An availability group with a configuration only replica is new for SQL Server 2017 CU1. All instances of SQL Server in the availability group must be SQL Server 2017 CU1 or later.
The default value for REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is 0 for this configuration. Setting the value to 1 requires the secondary replica to write each transaction to its log before the primary replica commits, which provides stronger data protection.
Note
The instance of SQL Server that hosts the configuration only replica can also host other databases. It can also participate as a configuration only replica for more than one availability group.
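The following T-SQL is a minimal sketch of creating this configuration. The availability group name ag1, the server names node1 through node3, and the endpoint port 5022 are placeholders; creating the database mirroring endpoints, configuring certificates, and joining the secondary replicas (for example, with ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL)) are separate steps that are not shown.

```sql
-- Two synchronous replicas plus a configuration only replica,
-- for an availability group managed by an external cluster manager (Pacemaker).
CREATE AVAILABILITY GROUP [ag1]
    WITH (CLUSTER_TYPE = EXTERNAL)
    FOR REPLICA ON
        N'node1' WITH (
            ENDPOINT_URL = N'tcp://node1:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC
            ),
        N'node2' WITH (
            ENDPOINT_URL = N'tcp://node2:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = EXTERNAL,
            SEEDING_MODE = AUTOMATIC
            ),
        N'node3' WITH (
            -- The configuration only replica stores only availability group
            -- metadata; it receives no user data.
            ENDPOINT_URL = N'tcp://node3:5022',
            AVAILABILITY_MODE = CONFIGURATION_ONLY
            );
```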
Requirements
- All replicas in an availability group with a configuration only replica must be SQL Server 2017 CU 1 or later.
- Any edition of SQL Server can host a configuration only replica, including SQL Server Express.
- The availability group needs at least one secondary replica - in addition to the primary replica.
- Configuration only replicas do not count towards the maximum number of replicas per instance of SQL Server. SQL Server standard edition allows up to three replicas, SQL Server Enterprise Edition allows up to 9.
Considerations
- No more than one configuration only replica per availability group.
- A configuration only replica cannot be a primary replica.
- You cannot modify the availability mode of a configuration only replica. To change from a configuration only replica to a synchronous or asynchronous secondary replica, remove the configuration only replica, and add a secondary replica with the required availability mode.
- A configuration only replica is synchronous with the availability group metadata. There is no user data.
- An availability group with one primary replica and one configuration only replica, but no secondary replica is not valid.
- You cannot create an availability group on an instance of SQL Server Express edition.
Understand SQL Server resource agent for pacemaker
SQL Server 2017 CTP 1.4 added
sequence_number to
sys.availability_groups to allow Pacemaker to identify how up-to-date secondary replicas are with the primary replica.
sequence_number is a monotonically increasing BIGINT that represents how up-to-date the local availability group replica is. Pacemaker updates the
sequence_number with each availability group configuration change. Examples of configuration changes include failover, replica addition, or removal. The number is updated on the primary, then replicated to secondary replicas. Thus a secondary replica that has up-to-date configuration has the same sequence number as the primary.
When Pacemaker decides to promote a replica to primary, it first sends a pre-promote notification to all replicas. The replicas return the sequence number. Next, when Pacemaker actually tries to promote a replica to primary, the replica only promotes itself if its sequence number is the highest of all the sequence numbers. If its own sequence number does not match the highest sequence number, the replica rejects the promote operation. In this way only the replica with the highest sequence number can be promoted to primary, ensuring no data loss.
This process requires at least one replica available for promotion with the same sequence number as the previous primary. The Pacemaker resource agent sets
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT such that at least one synchronous secondary replica is up-to-date and available to be the target of an automatic failover by default. With each monitoring action, the value of
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is computed (and updated if necessary). The
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT value is 'number of synchronous replicas' divided by 2. At failover time, the resource agent requires (
total number of replicas -
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT replicas) to respond to the pre-promote notification. The replica with the highest
sequence_number is promoted to primary.
For example, An availability group with three synchronous replicas - one primary replica and two synchronous secondary replicas.
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMITis 1; (3 / 2 -> 1).
The required number of replicas to respond to pre-promote action is 2; (3 - 1 = 2).
In this scenario, two replicas have to respond for the failover to be triggered. For successful automatic failover after a primary replica outage, both secondary replicas need to be up-to-date and respond to the pre-promote notification. If they are online and synchronous, they have the same sequence number. The availability group promotes one of them. If only one of the secondary replicas responds to the pre-promote action, the resource agent cannot guarantee that the secondary that responded has the highest sequence_number, and a failover is not triggered.
Important
When
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is 0 there is risk of data loss. During a primary replica outage, the resource agent does not automatically trigger a failover. You can either wait for primary to recover, or manually fail over using
FORCE_FAILOVER_ALLOW_DATA_LOSS.
You can choose to override the default behavior, and prevent the availability group resource from setting
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT automatically.
The following script sets
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT to 0 on an availability group named
<**ag1**>. Before you run replace
<**ag1**> with the name of your availability group.
sudo pcs resource update <**ag1**> required_synchronized_secondaries_to_commit=0
To revert to default value, based on the availability group configuration run:
sudo pcs resource update <**ag1**> required_synchronized_secondaries_to_commit=
Note
When you run the preceding commands, the primary is temporarily demoted to secondary, then promoted again. The resource update causes all replicas to stop and restart. The new value for
REQUIRED_SYNCHRONIZED_SECONDARIES_TO_COMMIT is only set once replicas are restarted, not instantaneously.
See also
Availability groups on Linux | https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-availability-group-ha | 2018-02-18T05:00:12 | CC-MAIN-2018-09 | 1518891811655.65 | [array(['../includes/media/yes.png', 'yes'], dtype=object)
array(['../includes/media/no.png', 'no'], dtype=object)
array(['../includes/media/no.png', 'no'], dtype=object)
array(['../includes/media/no.png', 'no'], dtype=object)
array(['media/sql-server-linux-availability-group-ha/3-three-replica.png',
'Three replicas'], dtype=object)
array(['media/sql-server-linux-availability-group-ha/1-read-scale-out.png',
'Two synchronous replicas'], dtype=object)
array(['media/sql-server-linux-availability-group-ha/2-configuration-only.png',
'Configuration only availability group'], dtype=object) ] | docs.microsoft.com |
Cloning cluster data from a defined other.
This procedure steps you through the basic required selections in each of the three restore dialogs presented during the workflow.
Prerequisites
Procedure
- Click the target .
- Click the Details link for the Backup Service.
- In the Activity tab, click Restore Backup.
- Click the Other Location tab.The Step 1 of 3: Select Backup Restore from Backup dialog appears.
- Required: Select the Location. Available options are:
- If the location is Amazon S3:
- Required: Enter the S3 Bucket name.
- Required: Enter your AWS credentials in AWS Key and AWS Secret.
- Required: If the location is Local FS, enter the Path to the backups.
- Click Next.The Step 2 of 3: Select Backup Version dialog appears populated with the available backups at the selected location.
- Required: Select the backup to restore and click Next.The Step 3 of 3: Configure and Restore dialog appears.
- Required: Select the keyspaces or tables from the available Keyspaces.
To select only specific tables, expand the keyspace name and select the tables.
- Required: In the Location list, select the cluster to clone the data to.
- Required: Click Restore Backup.The Confirm Restore dialog appears.Warning: If a value was not set for throttling stream output, a warning message indicates the consequences of unthrottled restores. The throttle warning only appears for versions of DSE from 4.8.7 and later. Either click Cancel and set the throttle value in the Restore from Backup dialog, set the values in cassandra.yaml (
stream_throughput_outbound_megabits_per_secand
inter_dc_stream_throughput_outbound_megabits_per_sec), or proceed anyway at risk of network bottlenecks.Tip: If you are using LCM to manage DSE cluster configuration, update Cluster Communication settings in cassandra.yaml in the config profile for the cluster and run a configuration job. Stream throughput (not inter-dc) is already set to 200 in LCM defaults.
- Review the information to determine what adjustments if any need to be made to the current schema:
- To rectify the schema issues and try the restore again afterward, click Cancel.
- To proceed despite the schema mismatch, click Continue Restore.Warning: Attempting to restore a backup with an incompatible schema might result in corrupt or inaccessible data. Before forcing the restore, you might want to back up your current data. | https://docs.datastax.com/en/opscenter/6.0/opsc/online_help/services/cloneClusterOtherLocation.html | 2018-02-18T04:35:08 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.datastax.com |
e-Conomic Integration
redSHOP provides a seamless integration with the accounting system e-conomic.
With the integration you can:
- Sync your entire product catalogue to e-conomic
- Create full products of all variants in e-conomic
- Sync redSHOP users as debtors in e-conomic
- Automatically or manually generate invoices to send to the customer (and store owner, if desired)
- Include shipping costs in invoices
- Integrate with your redSHOP payment processor for accepting payment of invoices.
Setting up the integration requires configuration in the redSHOP Configuration, in the Integration tab, as well as installation of the e-conomic plugin for redSHOP.
To get it all set up follow these steps:
- 1
- Download and install the e-Conomic Plugin for redSHOP.
- 2
- Go to Joomla Admin -> Extensions -> Plugin Manager, search for e-conomic to open the plugin and add your e-conomic account details:
- 3
- Go to your redSHOP Admin and click on Configuration:
- Go to the Integration tab:
- Set your preferences:
e-conomic Setting Options
- e-conomic Integration
This setting will set the integration on, or off. Default: Off
- Choice of Book Invoice
Select how you would like to book invoices. Options are:
- Directly Book Invoice
Book the invoice as soon as the order is placed.
- Manually Book Invoice
Allow a redSHOP Admin to manually book all invoices.
- Book Invoice on Selected Order Status
Select a status to book the invoices when an order reaches the defined status.
- e-conomic Book Invoice Number
Set how you would like the invoice numbers to be done in e-conomic. Options are:
- Same as Order Number
- Sequentially in e-conomic (No match up with order number)
- Default e-conomic Account Group
Select the default account group
- Store Attributes as Product in e-conomic
Select whether you want product attributes to be added / synced in e-conomic as products. Option are:
- No
- Store Attributes as Products in e-conomic
- Store Attributes and Products in e-conomic
- Short error messages
If set to Yes a SOAP exception message will be shown directly, otherwise it will show simple short error messages. | https://docs.redcomponent.com/article/97-e-conomic-integration | 2018-02-18T04:58:28 | CC-MAIN-2018-09 | 1518891811655.65 | [] | docs.redcomponent.com |
The Agile Release Plan Template allows you to show your Agile Programme Plan featuring EPICs and THEMEs, with Release Dates. Especially useful for Scrum teams in a portfolio.
9 Useful Agile Release Plan Slides in 1 Pack!
- Release Plan Title Slide
- “What is a Release Plan” Explanation Slide
- “Guidance” Slide, explaining how to use the Release Plan Template
- “Monthly Theme Summary” Release Plan
- Timeline with Milestones
- Programme-level summary workstream
- Five Workstreams
- Theme markers within each workstream
- Labels for the Themes
- Legend for Risk Level
- “Quarterly Theme Summary” Release Plan
- Same as Monthly Release Plan
- but featuring months instead of weeks to allow Quarterly visibility
- “Iteration Summary” Release Plan (4 Workstreams)
- Release Plan Template showing Iterations
- Displaying 1, 2, 3 and 4-week Iterations
- Showing Themes and Epics within Iterations
- “Iteration Summary” Release Plan (2 Workstreams)
- Same as 4 Workstream Iteration Summary
- but with only 2 Workstreams
- “High Level Features” Release Plan
- Two Workstreams
- Timeline with Milestones
- Two Themes
- Epics within the Themes on the Timeline
- Epic Description
- Release Plan RAID Summary
- Risks, Assumptions, Issues & Dependencies | https://business-docs.co.uk/downloads/powerpoint-agile-release-plan-template/ | 2020-03-28T19:49:56 | CC-MAIN-2020-16 | 1585370493120.15 | [] | business-docs.co.uk |
When collaborating in an asynchronous way, it may happen that 2 members edit the same section of content, leading to a conflict.
When trying to merge a draft, GitBook will automatically detect the changes that conflict with the changes of an other writer, you will be asked to resolve those conflicts.
Note: You will not be able to save or merge your content until all conflicts are solved.
You will be able to see the paragraphs that created conflicts, showing both content versions created. You can then pick the version you'd like to keep in order to resolve the conflicts. Once all conflicts have been resolved you will be able to save and merge your content.
🧙♂ Tips: See how the button on the left disappears when conflicts are resolved! | https://docs.gitbook.com/collaboration/conflict-resolution | 2020-03-28T19:58:28 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.gitbook.com |
Introduction to MongoDB¶
On this page
Welcome to the MongoDB 4.2.)..
In addition, MongoDB provides pluggable storage engine API that allows third parties to develop storage engines for MongoDB. | https://docs.mongodb.com/manual/introduction/ | 2020-03-28T21:25:47 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.mongodb.com |
What is Small Basic - Featured Video
What is Small Basic?
[embed][/embed]
This introduction video comes from Zero One Resources. It is 8 minutes and 47 seconds long.
This video is a short explanation of what Small Basic is and also contains a demonstration of some of its features.
If you want to learn how to program, then this programming language by Microsoft is a great place to start!
So what is Small Basic?
- A programming environment created by Microsoft that is simple, social, gradual, and extendable.
- It uses IntelliSense to suggest code and to provide hints and tips (as well as a real-time Help pane).
- It provides more detailed error messages.
- It allows for programs to be converted into the more industry used .NET Framework, so users can continue learning for free on Visual Studio Code or Visual Studio Community, with languages like Java, JavaScript, C#, Python, and more.
Have a Small and Basic week!
- Ninja Ed | https://docs.microsoft.com/en-us/archive/blogs/smallbasic/what-is-small-basic-featured-video | 2020-03-28T22:19:15 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
Defining ServiceNow instances
Before performing these steps, make sure you have the following information:
- login credentials to the Quality Clouds portal -
Set up your instances
In order to be able to scan your instances, you need to point Quality Clouds to them.
→ To create an instance in Quality Clouds
- Login to the Quality Clouds portal at.
- Go to Instances section.
- Select the ServiceNow tab (if you use several SaaS with Quality Clouds).
- Click + New instance.
- Fill in the following information:
URL - URL of the ServiceNow instance. If your ServiceNow instance uses SSO or any other external authentication method, make sure that the account you provide has local authentication in ServiceNow (does not need to be authenticated by external providers) and is able to use the REST API.
- Description -Short and meaningful description of the instance. For example: MyCompany UAT
- Username and Password - ServiceNow access credentials to the instance.
Do not persist credentials - Set this flag ON if you do not want to persist the credentials in the Quality Clouds environment. If you activate this option, a popup form will appear when launching a scan and you will have to provide the credentials manually. In this mode, all scans must be interactive (only manual scans allowed, no schedule can be completed).
Environment - Environment type of the instance. This is used to add descriptive context to your instance.
NoteServiceNow production (PROD) instances are subject to an extra check: Debugging properties enabled in production environments.
Go-live Date - Date on which the instance was first started, serves as a baseline for Org changes.
- Click Save.
You can now continue with the following steps:
What's here | https://docs.qualityclouds.com/qcd/set-up-your-qc-environment-for-servicenow-7012372.html | 2020-03-28T20:05:22 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.qualityclouds.com |
Optimizing web service performance
Logging payloads, monetizing, and throttling service invocations all at the same time greatly impacts Martini's performance. This is because as requests are received and responses are sent by Martini, data is also being processed and saved by Tracker and the invoke monitor. For every transaction, Martini must create multiple entries in a database and asynchronously add new documents to both search indices, all whilst using a broker to channel these events back and forth to ensure the distribution and execution of these events.
The throughput impact of Tracker and Monitor on web services is evident in TORO's test on web service performance. In this test, TORO compared Martini's REST web service throughput when Tracker and Monitor are both on versus when only Tracker is on. The results went as expected; turning off Tracker resulted in a major throughput boost by ~91.89%.
Can the invoke monitor be turned off?
If you're using Martini Online, invoke monitor logging cannot be turned off.
Turning off Tracker, whilst may improve performance, however, forfeits the ability to audit and troubleshoot transactions, resubmit failed requests, and create reports from Tracker data. With everything said, TORO recommends turning off Tracker if you don't plan on using any of its features. | https://docs.torocloud.com/martini/latest/setup-and-administration/performance-tuning/web-service/ | 2020-03-28T20:48:29 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.torocloud.com |
The Physiome Model Repository and the link to bioinformatics¶
The Physiome Model Repository (PMR) [LCPF08] is the main online repository for the IUPS Physiome Project, providing version and access controlled repositories, called workspaces, for users to store their data. Currently there are over 700 public workspaces and many private workspaces in the repository..
More complete documentation describing how to use PMR is available in the PMR documentation:.
The CellML models on models.physiomeproject.org are listed under 20 categories, shown below: (numbers of exposures in each category are given besides the bar graph, correct as at early 2016)
Browse by category
Note that searching of models can be done anywhere on the site using the search box on the upper right hand corner. An important benefit of ensuring that the models on the PMR are annotated is that models can then be retrieved by a web-search using any of the annotated terms in the models.
To illustrate the features of PMR, click on the Hund, Rudy 2004 (Basic) model in the alphabetic listing of models under the Electrophysiology category.
Fig. 39 The Physiome Model Repository exposure page for the basic Hund-Rudy 2004 model.¶
The section labelled ‘Model Structure’ contains the journal paper abstract and often a diagram of the model1. This is shown for the Hund-Rudy 2004 model in Fig. 40. This model, with over 22 separate protein model components, is also a good example of why it is important to build models from modular components [CMEJ08], and in particular the individual ion channels for electrophysiology models.
There is a list of ‘Views Available’ for the CellML model on the right hand side of the exposure page. The function of each of these views is as follows:
Views Available
Documentation - Takes you to the main exposure page.
Model Metadata - Lists metadata including authors, title, journal, Pubmed ID and model annotations.
Model Curation - Provides the curation status of the model. Note: this is soon to be updated.
Mathematics - Displays all the mathematical equations contained in the model.
Generated Code - Various codes (C, C-IDA, F77, MATLAB or Python) generated from the model.
Cite this model - Provides details on how to cite use of the CellML model.
Source view - Gives a full listing of the XML code for the model.
Launch with OpenCOR - Opens the model (or simulation experiment) in OpenCOR.
Note that CellML models are available under a Creative Commons Attribution 3.0 Unported License2. This means that you are free to:
-
Share — copy and redistribute the material in any medium or format
-
Adapt — remix, transform, and build upon the material
for any purpose, including commercial use.
The next stage of content development for PMR is to provide a list of the modular components of these models each with their own exposure. For example, models for each of the individual ion channels used in the publication-based electrophysiological models will be available as standalone models that can then be imported as appropriate into a new composite model. Similarly for enzymes in metabolic pathways and signalling complexes in signalling pathways, etc. Some examples of these protein modules are:
Sodium/hydrogen exchanger 3
Thiazide-sensitive Na-Cl cotransporter
Sodium/glucose cotransporter 1
Sodium/glucose cotransporter 2
Note that in each case, as well as the CellML-encoded mathematical model, links are provided (see Fig. 41) to the UniProt Knowledgebase for that protein, and to the Foundational Model of Anatomy (FMA) ontology (via the EMBLE-EBI Ontology Lookup Service) for information about tissue regions relevant to the expression of that protein (e.g. Proximal convoluted tubule, Apical plasma membrane; Epithelial cell of proximal tubule; Proximal straight tubule). Similar facilities are available for SMBL-encoded biochemical reaction models through the Biomodels database [AYY].
Fig. 41 The PMR workspace for the Thiazide-sensitive Na-Cl cotransporter. Bioinformatic data for this model is accessed via the links under the headings highlight by the arrows and include Protein (labelled A) and the model Location (labelled B). Other information is as already described for the Hund-Rudy 2004 model.¶
Footnotes
- 1
These are currently hand drawn SVG diagrams but the plan is to automatically generate them from the model annotation and also (at some stage!) to animate them as the model is executed.
- 2 | https://tutorial-on-cellml-opencor-and-pmr.readthedocs.io/en/latest/pmr.html | 2020-03-28T20:46:34 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['_images/pmr_website_exp_hund_rudy.png',
'PMR exposure page for the Hund-Rudy 2004 model'], dtype=object)
array(['_images/pmr_wsp_thiazide.png',
'Thiazide-sensitive Na-Cl cotransporter workspace'], dtype=object)] | tutorial-on-cellml-opencor-and-pmr.readthedocs.io |
The topics in this section guide you through the basic steps to prepare your environment and install Konvoy in an air-gapped environment. IMPORTANT air-gapped installation is still in Beta and the process may change in the future.
Before you beginBefore you begin.0,)
There may be additional dependencies that need to be installed that can be found in the standard CentOS/RHEL repositories.
The following example illustrates the configuration if the reserved virtual IP address is
10.0.50.20:
kind: ClusterConfiguration apiVersion: konvoy.mesosphere.io/v1beta1 spec: kubernetes: controlPlane: controlPlaneEndpointOverride: "10.0.50.20:6443" keepalived: interface: ens20f0 # optional vrid: 51 # optional, use
spec.kubernetes.controlPlane.keepalived: {} to enable it with.2 addonsList: ... - configRepository: /opt/konvoy/artifacts/kubeaddons-dispatch configVersion: stable-1.16-1.0.0 helmRepository: image: mesosphere/konvoy-addons-chart-repo:v1.4.2 addonsList: - name: dispatch # Dispatch is currently in Beta enabled: false - configRepository: /opt/konvoy/artifacts/kubeaddons-kommander configVersion: stable-1.16-1.0.0 helmRepository: image: mesosphere/konvoy-addons-chart-repo:v1.4.2 addonsList: - name: kommander enabled: false.
MetalLB can be configured in two modes -
layer2 and
bgp.
The following example illustrates the layer2 configuration in the
cluster.yaml configuration file:
kind: ClusterConfiguration apiVersion: konvoy.mesosphere.io/v1beta1 spec: addons: addonsList: - name: metallb enabled: true values: |- configInline: address-pools: - name: default protocol: layer2 addresses: - 10.0.50.25-10.0.50.50
The following example illustrates the BGP configuration in the
cluster.yaml configuration file:
kind: ClusterConfiguration apiVersion: konvoy.mesosphere.io/v1beta1 spec: addons: addonsList: - name: metallb enabled: true values: |- configInline: peers: - my-asn: 64500 peer-asn: 64500 peer-address: 172.17.0.4 address-pools: - name: my-ip-space protocol: bgp addresses: - 172.40.100.0/24
The number of virtual IP addresses in the reserved range determines the maximum number of services with a type of
LoadBalancer that you can create in the cluster., the dashboard and services may take a few minutes to be accessible.
Checking the files installedChecking the files installed
When the
konvoy up completes its setup operations, the following files are generated:
cluster.yaml- defines the Konvoy configuration for the cluster, where you customize [your cluster configuration][cluster_configuration].
admin.conf- is a kubeconfig file, which contains credentials to connect to the
kube-apiserverof your cluster through
kubectl.
inventory.yaml- is an Ansible Inventory file.
runsfolder - which contains logging information. | https://docs.d2iq.com/ksphere/konvoy/latest/install/install-airgapped/ | 2020-03-28T19:56:05 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.d2iq.com |
Introduction
J.
| https://docs.eginnovations.com/JRun_Application_Server/Introduction_to_JRun_Application_Server_Monitoring.htm | 2020-03-28T21:23:15 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['../Resources/Images/start-free-trial.jpg', None], dtype=object)] | docs.eginnovations.com |
✨ This feature is included in the Business plan.
SAML-based Single Sign-On (SSO) gives members access to GitBook through an identity provider (IdP) of your choice.
GitBook easily integrates with your existing identity provider (IdP) so you can provide your employees with single sign-on to GitBook using the same credentials and login experience as your other service providers (such as Slack and Dropbox).
By using SSO, your employees will be able to log into GitBook using the familiar identity provider interface, instead of the GitBook login page. The employee’s browser will then forward them to GitBook. The IdP grants access to GitBook when SSO is enabled and GitBook's own login mechanism is deactivated. In this way, authentication security is shifted to your IdP and coordinated with your other service providers.
SAML SSO on GitBook is supported for all Identity providers, and works well with:
Azure
Google for Work / G Suite
OneLogin
Your company’s identity provider (IdP) must support the SAML 2.0 standard.
You must have administrative permission on the IdP.
You must be an organization admin to enable SSO for your GitBook Organization.
After configuring SSO on your IdP, you will be able to upload or enter metadata manually. When the setup is successful, administrators will see a confirmation dialog and the URL of the SSO login for end-users will be displayed. GitBook does not send announcement emails when set up is complete. It is the responsibility of the administrator to notify company employees (and convey the login URL to them) so they can access GitBook via SSO.
You will need to find three pieces of information to your organization settings:
A sign-in page URL (also called a login URL)
Identity Provider Issuer
An X.509 certificate
🧙♂ Tips: You can enter anything as a Provider Label, it'll be displayed on the login page for your company employees.
Most SAML 2.0 compliant identity providers require the same information about the service provider for set up. (GitBook is the service provider.) These values are specific to your GitBook organization and are available in the
settings tab of the GitBook team where you want to enable SSO.
GitBook requires that the NameID contain the user’s email address. Technically we are looking for:
urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
GitBook also requires some attributes mapping:
To add end-users, create accounts for these users in the IdP. The first time each new user logs in to GitBook via the IdP, a GitBook account will be created for each of them via automatic IdP provisioning. The user will have access to organization resources as an organization member.
🧙♂ Tips: Set-up requires lower case email addresses. Do not use mixed case email addresses.
In the SSO settings, you can set a default role. This role will be applied to every new organization's members:
👀 Reader: read-only access to all spaces
✍ Writer: read and write access to all spaces
🎩 Admin: read, write and admin access
🧙♂ Tips: You can find more details about permissions 👉 here. Keep in mind that you can change the role of a user any time in the
Team settings.
Removing an end-user from the IdP will prevent the user from being able to login to the corresponding GitBook account, but will not remove the account from GitBook. We advise removing the end-users account from the GitBook org associated with IdP as well.
🧠 Note: Automated de-provisioning is not yet supported, but we'll soon release a SCIM API.
For security reasons, users who signed up to an organization before the SSO was set up have to continue to log in normally. SSO will only benefit users who log in to an organization after the setup is complete. Admins could also ask prior SSO users to delete their account (or change their email) and then they should be able to login with SSO. | https://docs.gitbook.com/features/saml | 2020-03-28T20:40:24 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.gitbook.com |
Arithmetic
Exception Class
Definition
The exception that is thrown for errors in an arithmetic, casting, or conversion operation.
public ref class ArithmeticException : Exception
public ref class ArithmeticException : SystemException
public class ArithmeticException : Exception
public class ArithmeticException : SystemException
[System.Serializable] public class ArithmeticException : SystemException
[System.Runtime.InteropServices.ComVisible(true)] [System.Serializable] public class ArithmeticException : SystemException
type ArithmeticException = class inherit Exception
type ArithmeticException = class inherit SystemException
Public Class ArithmeticException Inherits Exception
Public Class ArithmeticException Inherits SystemException
- Inheritance
-
- Inheritance
- ArithmeticException
- Derived
-
- Attributes
-
Remarksproperty or greater than its
MaxValueproperty.. | https://docs.microsoft.com/en-au/dotnet/api/system.arithmeticexception?view=netstandard-2.1 | 2020-03-28T21:35:10 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
Difference between revisions of "Beat CHOP"
Revision as of 22:46, 22 July 2019
Summary[edit]
The Beat CHOP generates a variety of ramps, pulses and counters that are timed to the beats per minute and the sync of a piece of music, and automatically continues the beats. The Beat CHOP converts these beats into a repeating ramp or pulse that continues to keep time with the music after the taps stop.
The Beat CHOP's timing is defined by the Component Time of the Reference Node. If the Reference Node parameter is left blank, then the time defined at the Beat CHOP's location is used. restarts the ramps from zero. The ramp is also zero when the Beat CHOP's input is above 0.
Reset Pulse
resetpulse - Time Slice is the time from the last cook frame to the current cook frame. In CHOPs it is the set of short channels that only contain the CHOP channels' samples between the last and the current cook frame.'. | https://docs.derivative.ca/index.php?title=Beat_CHOP&diff=next&oldid=16432 | 2020-03-28T21:08:54 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.derivative.ca |
[−][src]Module plotlib::
style
Manage how elements should be drawn
All style structs follows the 'optional builder' pattern:
Each field is a
Option which start as
None.
They can all be set with setter methods, and instances
can be overlaid with another one to set many at once.
Settings will be cloned in and out of it. | https://docs.rs/plotlib/0.5.0/plotlib/style/index.html | 2020-03-28T21:24:51 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.rs |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Represents the output of a
DescribeSnapshots operation.
Namespace: Amazon.ElastiCache.Model
Assembly: AWSSDK.ElastiCache.dll
Version: 3.x.y.z
The DescribeSnapshotsResponse type exposes the following members
Returns information about the snapshot mysnapshot. By default.
var response = client.DescribeSnapshots(new DescribeSnapshotsRequest { SnapshotName = "snapshot-20161212" }); string marker = response.Marker; List
snapshots = response.Snapshots;
| https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ElastiCache/TDescribeSnapshotsResponse.html | 2018-08-14T09:02:02 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Container for the parameters to the CreateLoggerDefinition operation. Creates a logger definition. You may provide the initial version of the logger definition now or use ''CreateLoggerDefinitionVersion'' at a later time.
Namespace: Amazon.Greengrass.Model
Assembly: AWSSDK.Greengrass.dll
Version: 3.x.y.z
The CreateLoggerDefinition | https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Greengrass/TCreateLoggerDefinitionRequest.html | 2018-08-14T08:50:40 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.aws.amazon.com |
Transfer CFT 3.2.2 Users Guide Add entire publication Select collection Cancel Create a New Collection Set as my default collection Cancel This topic and sub-topics have been added to MyDocs. Ok No Collection has been selected. --> nidf CFTIDF [NIDF = string ] IDF network identifier. This value is transferred in the network. SEND, RECV [NIDF = string ] The network identifier. Return to Command index Related Links | https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_allOS_en_HTML5/page/Content/CFTUTIL/Parameter_index/nidf.htm | 2018-08-14T09:12:30 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.axway.com |
Format of spatialos_worker_packages.json
Example
An example of the C# worker package file:
{ "targets": [ { "path": "improbable/dependencies/managed", "type": "worker_sdk", "packages": [ { "name": "csharp" } ] }, { "path": "improbable/dependencies/native", "type": "worker_sdk", "packages": [ { "name": "core-dynamic-x86_64-win32" }, { "name": "core-dynamic-x86_64-macos" }, { "name": "core-dynamic-x86_64-linux" } ] } ] },
macos. | https://docs.improbable.io/reference/13.0/shared/reference/file-formats/spatial-worker-packages | 2018-08-14T08:58:22 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.improbable.io |
Preperations
- [x] Establishing a Build Environment
- [x] Downloading the Android Source
- [x] Install toolchains for Amlogic platform
Building
Note: Before you start to build, make sure you have done all the
Preperations listed above.
Build U-Boot:
Gernerated images in this step:
- fip/u-boot.bin: for onboard EMMC storage booting
- fip/u-boot.bin.sd.bin: for external TF card booting
Build Android:
Note:
- Replace ‘N’ as the number you want when you run ‘make -jN’
- Replace ‘TARGET_LUNCH’ to your lunch select.
For Android Marshmallow(6.0), it’s kvim-userdebug-32.
For Android Nougat(7.1), it’s kvim-userdebug-64.
Gernerated images in this step:
- out/target/product/kvim/update.img
Build Linux kernel:
When you build Android aboved, will build Linux kernel at the same time.
In some case, you might want to build Linux kernel separately, you can run the script below to do that: | https://docs.khadas.com/vim1/BuildAndroid.html | 2018-08-14T08:38:42 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.khadas.com |
Generic Collections in .NET
The .NET class library provides a number of generic collection classes in the System.Collections.Generic and System.Collections.ObjectModel namespaces. For more detailed information about these classes, see Commonly Used Collection Types.
System.Collections.Generic. It has no nongeneric counterpart.
System.Collections.ObjectModel
The Collection<T> generic class provides a base class for deriving your own generic collection types. The ReadOnlyCollection<T> class provides an easy way to produce a read-only collection from any type that implements the IList<T> generic interface. The KeyedCollection<TKey,TItem> generic class provides a way to store objects that contain their own keys.
Other Generic Types
The Nullable<T> generic structure allows you to use value types as if they could be assigned
null. This can be useful when working with database queries, where fields that contain value types can be missing. The generic type parameter can be any value type.
Note
In C# and Visual Basic, it is not necessary to use Nullable<T> explicitly because the language has syntax for nullable types. See Nullable types (C# Programming Guide) and Nullable value types (Visual Basic).
The ArraySegment<T> generic structure provides a way to delimit a range of elements within a one-dimensional, zero-based array of any type. The generic type parameter is the type of the array's elements.
The EventHandler<TEventArgs> generic delegate eliminates the need to declare a delegate type to handle events, if your event follows the event-handling pattern used by the .NET Framework. For example, suppose you have created a
MyEventArgs class, derived from EventArgs, to hold the data for your event. You can then declare the event as follows:
public: event EventHandler<MyEventArgs^>^ MyEvent;
public event EventHandler<MyEventArgs> MyEvent;
Public Event MyEvent As EventHandler(Of MyEventArgs)
See Also
System.Collections.Generic
System.Collections.ObjectModel
Generics
Generic Delegates for Manipulating Arrays and Lists
Generic Interfaces | https://docs.microsoft.com/en-us/dotnet/standard/generics/collections | 2018-08-14T09:09:22 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
Cost categories used in Production control and Project management accounting
Some types of production work can apply to project time estimates and reporting. This article provides information about the cost categories that you must define for these types of production work for production and project purposes.
Some types of production work can apply to project time estimates and reporting. In this case, a cost category is required for production and project purposes. When a cost category is used in production and projects, additional project-related information must be defined. For example, the hourly costs that are associated with projects can differ from the hourly costs that are associated with production. You can use the Cost categories page to define a cost category that is used in Production control and Project management accounting.
Note: Cost accounting has a Project categories page, but this page has no relationship to the functionality that is described in this topic. When you use a cost category in projects, the Cost categories page has additional tabs that show additional project-related information. This information includes the category group, a line property, and ledger accounts that are assigned to the cost category.
- The cost category must be assigned to a category group that supports a transaction type of Hours.
- The line property indicates default information about how reported time can be charged to a project.
- Typically, the ledger accounts that are related to costs and sales are defined for the category group that is assigned to the cost category. However, specific accounts can be defined for an individual cost category.
Additional buttons on the Cost categories page let you access project-related information about a selected cost category. For example, you can view project-related transactions, define employees or projects, define hourly costs and sales prices, and view reports. | https://docs.microsoft.com/en-us/dynamics365/unified-operations/supply-chain/cost-management/cost-categories-used-production-control-project-management-accounting | 2018-08-14T09:10:07 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
routine squish
Documentation for routine
squish assembled from the following types:
class Any
(Any) method squish
Defined as:
method squish(:, : --> Seq)
Coerces the invocant to a
list by applying its
.list method and uses
List.squish on it.
say Any.squish; # OUTPUT: «((Any))»
class List
(List) routine squish
Defined as:
multi sub squish(*, :, : --> Seq)multi method squish(List: :, : --> Seq); # OUTPUT: «(a b c)»say <a b b c c b a>.squish; # OUTPUT: «(a b c b a)»
The optional
:as parameter, just like with
unique, allows values to be temporarily transformed before comparison.
The optional
:with parameter is used to set an appropriate comparison operator:
say [42, "42"].squish; # OUTPUT: «(42 42)»# Note that the second item in the result is still Strsay [42, "42"].squish(with => :<eq>); # OUTPUT: «(42)»# The resulting item is Int
class Supply
(Supply) method squish
method squish(Supply: :, : --> Supply)
Creates a supply that only provides unique values, as defined by the optional
:as and
:with parameters (same as with List.squish). | https://docs.perl6.org/routine/squish | 2018-08-14T08:26:15 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.perl6.org |
How to Grant the Send As Permission for a.
Before You"
Procedure
Exchange 2007 SP1
To use the Exchange Management Console to grant a user the Send As permission for another user's mailbox.
To use the Exchange Management Shell to grant a user the Send As permission for another user's mailbox
Run the following command.
Add-ADPermission "Mailbox" -User "Domain\User" -Extendedrights "Send As"
For detailed syntax and parameter information, see the Add-ADPermission reference topic.
Exchange 2007 RTM
To use Active Directory Users and Computers to grant a user the Send As permission for another user's mailbox.
To use the Exchange Management Shell to grant a user the Send As permission for another user's mailbox
Run the following command.
Add-ADPermission "Mailbox" -User "Domain\User" -Extendedrights "Send As"
For detailed syntax and parameter information, see the Add-ADPermission (RTM) reference topic.
For More Information
For more information about granting Microsoft Outlook permissions, see Delegate Access Permissions in Outlook Help. | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/aa998291(v=exchg.80) | 2018-08-14T08:41:09 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.microsoft.com |
How Can I 'Try Before I Buy' a Premium Plugin Extension?
Popup Maker now provides access to a free, demo site on which to try out all of our premium plugins before you buy a license. The site also includes the 3 form plugins that we directly integrate with:
( Note: Popup Maker is a Ninja Forms affiliate. We support each other's products. )
Related article: Close/Open Popup and Create Cookie After Ninja Forms Submission
Related article: Close/Open Popup and Create Cookie After GravityForms Submission
Related article: Close/Open Popup and Create Cookie After Contact Form 7 Submission
On the demo site landing page, provide us with your first name and email address. You'll then be redirected to the front end of the demo site. The WordPress Admin toolbar is active at the top of the browser, which provides access to the demo site's Admin area.
The theme in use on the site is very plain. The point of the site is to provide interested users with access to the functionality of our base plugin and extensions. You provide the content, we provide the functionality.
Please contact our support team at any time if you have questions about one or more of our premium plugin extensions. We are happy to answer any questions you may have.
Need a refresher on how to create a new popup?
Related article: Create Your First Popup. | https://docs.wppopupmaker.com/article/181-how-can-i-try-before-i-buy | 2018-08-14T09:12:06 | CC-MAIN-2018-34 | 1534221208750.9 | [] | docs.wppopupmaker.com |
DeleteBucketReplication
Deletes the replication configuration from the bucket.
To use this operation, you must have permissions to perform the
s3:PutReplicationConfiguration action. The bucket owner has these permissions by default and can grant it to others.
For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
Note
It can take a while for the deletion of a replication configuration to fully propagate.
For information about replication configuration, see Replication in the Amazon S3 Developer Guide.
The following operations are related to
DeleteBucketReplication
Request Syntax
DELETE /?replication HTTP
replication subresource from the specified bucket. This removes the replication configuration
that is set for the bucket.
DELETE /?replication HTTP/1.1 Host: examplebucket.s3.amazonaws.com Date: Wed, 11 Feb 2015 05:37:16 GMT 20150211T171320Z Authorization: authorization string
Sample Response
When the
replication subresource has been deleted, Amazon S3 returns a
204 No Content response. It will not replicate new objects that are stored in the
examplebucket bucket.
HTTP/1.1 204 No Content x-amz-id-2: Uuag1LuByRx9e6j5OnimrSAMPLEtRPfTaOAa== x-amz-request-id: 656c76696e672example Date: Wed, 11 Feb 2015 05:37:16 GMT Connection: keep-alive Server: AmazonS3
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketReplication.html | 2019-11-12T05:22:52 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.aws.amazon.com |
How to use custom fields (Content + URL+hashtags) in Revive Old Post
In this tutorial, we are going to go through the process of using custom fields as a way to grab data for sharing. To start, navigate to WordPress Dashboard-> Revive Old Post -> Post Format. The below image shows how to enable the custom field option for Post Content, post link, and hashtags
Now we've seen how to activate and enter your custom fields, let's see how to actually create them. If you are using a plugin to create custom fields on the posts screen you most likely already know how to get your custom field name. If you've never used a custom posts type plugin then this tutorial is more targeted at you.
Start by going to your post and enable the custom fields option if it's not already enabled. To do so click "Screen options" at the top and check the "Custom fields" option:
Now if you scroll down the page you will see a new area appear:
We are going to create 3 custom fields:
- rop_post_content
- rop_post_url
- rop_post_hashtags
Click " Enter New" then in the "Name" area enter the name of the custom field you want to create, in this case, the first one we will create is "rop_post_content". In the "Value" area add the content you wish:
Click " Add Custom Field" to add your newly created custom field. Repeat this step for the two other custom fields and we should have an outcome like this:
Insert a space before the first hashtag so it doesn't appear stuck to your post content after it's shared.
Update/Publish your post and that's it for creating custom fields! Things to note:
- You could edit the Value content anytime you please, just be sure to click "Update" on the custom field (next to delete) after doing so or your changes won't take effect!
- After creating these custom fields there is no need to recreate them for each post, the custom field name will already be in the list of custom fields, you will just have to select it and enter the value you wish.
Now we are going to create another post, but this time we want the same custom fields but just with a different value. After creating your new post, scroll to the bottom of the page and this time instead of adding a new custom field we will select it from the drop-down menu:
Select the field name and enter your text in the Value field then click "Add Custom Field". Repeat the Process again until all your custom fields have been added:
Great, now you've seen how to create your custom fields we just need to see them in action!
Go to WordPress Dashboard-> Revive Old Post -> Post Format and enter your custom fields into their respective areas:
Save your changes and that should be it! When posted you would have something like this(if post with image was not set or there was no featured image):
You now know how to use custom fields for retrieving share content :) | https://docs.revive.social/article/480-how-to-use-custom-fields-content-url-hashtags-in-revive-old-post | 2019-11-12T06:04:50 | CC-MAIN-2019-47 | 1573496664752.70 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5af2001b2c7d3a3f981f6083/file-3awO3mozBx.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5901676c2c7d3a057f889a5f/file-TYN1ueTmux.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5af201150428631126f1d5a6/file-lWwvLi7EUB.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5902b7272c7d3a057f88a277/file-sCcLSzgfOk.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/55192029e4b0221aadf23f55/images/5902b6850428634b4a32affb/file-oLtaQsOJlB.png',
None], dtype=object) ] | docs.revive.social |
The conditional branching sample (
LoanApprovalProcess.bar) can be found in the
<BPS-HOME>/repository/samples/bpmn directory.
Flow of the sample
In this sample scenario, a loan approval process is displayed.
- The clerk user fills the required details (income and loan amount), which is then sent for confirmation.
- At the exclusive gateway, if the loan amount is higher than 50,000, a request is sent to the clerk user to revise the loan amount. In this case, the user can revise and resubmit the loan application.
- If the loan amount does not exceed 50,000, the "review application" task is triggered and the manager user can approve the loan application.
The following XML code snippet is the
LoanProcess.bpmn definition of the exclusive gateway that declares the condition mentioned above.
<sequenceFlow id="flow4" sourceRef="exclusivegateway1" targetRef="usertask3"> <conditionExpression xsi:<![CDATA[${loanAmount>50000}]]></conditionExpression> </sequenceFlow>
Note:
usertask3 is the id of the "revise amount" task. At the declaration of the exclusive gateway,
flow3 is defined as the default. This makes "review application" the default task that will be executed after the condition.
<exclusiveGateway id="exclusivegateway1" name="Exclusive Gateway" default="flow3"></exclusiveGateway>
Running the sample
- Log in to the BPMN Explorer using the clerk/clerk credentials.
- Select the PROCESSES tab to view the task in the task list.
- Click the Start button next to the LoanApprovalProcess sample.
- Fill in the required details (i.e., income and loan amount) and click Start.
- If the loan amount is less than 50,000, select the MY TASKS tab. You will see the that another task has appeared on the list.
If the loan amount is more than 50,000, logout and login to the bpmn-explorer using the manager/manager credentials and select the MY TASKS tab. You will see the that another task has appeared on the list.
- Click on the task and you will see the following screen where you (the manager) can either accept/reject it. Click on Complete Task to finish. | https://docs.wso2.com/display/BPS351/Conditional+Branching | 2019-11-12T06:03:27 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.wso2.com |
Arcus Server
Overview
Workload Manager allows you to define your own parameters or use the Workload Manager-defined parameters. While Workload Manager provides the Arcus integration, it is up to the customer using this feature to address the following dependencies:
Send requests to the device's API URL
Call the correct device-specific webservice method
Convert the webservice response to the format expected by Workload Manager
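The following is a minimal sketch of that pattern only; the device URL, credentials, endpoint path, and jq filter are placeholders rather than anything defined by Workload Manager or Arcus. It shows a request to a device API followed by a transformation of the JSON response into whatever shape your Workload Manager parameter callout expects:

# Hypothetical example: query a device API and reshape the JSON response.
# Replace the URL, credentials, endpoint, and jq filter with values for your
# device and for the output format your Workload Manager callout requires.
curl -sk -u apiuser:apipassword "https://<device-ip>/api/v1/pools" \
  | jq '[.items[] | {name: .name, displayName: .name}]'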
Arcus installer packages are available as a standalone component and can be downloaded along with other Workload Manager components from the Cisco CloudCenter Suite download location.
Requirements
To use the Arcus integration, verify the following requirements:
OS with BASH installed
Docker v1.12.0 or later installed and accessible to the user running the installer
If using SSL, the certificate chain (arcus.crt) and key (arcus.key) in PEM format – the self-signed certificates are available in the arcus/certs folder from the same authority as the CCM and thus work by default when you install the CCM.
An Arcus API account
CloudCenter Legacy 4.9.0 or later releases
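Before running the installer, you can confirm these prerequisites from a shell on the target VM. The checks below are a sanity check only, and the certificate file names assume the arcus.crt/arcus.key pair described above:

# Verify BASH and Docker are present and meet the minimum version
bash --version | head -1
docker --version        # expect 1.12.0 or later
# Confirm Docker is accessible to the user running the installer
docker info > /dev/null && echo "Docker is accessible"
# If using SSL, confirm the certificate chain and key are valid PEM files
openssl x509 -in arcus.crt -noout -subject -enddate
openssl rsa -in arcus.key -check -noout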
Installation Process
To configure an Arcus server, an Arcus administrator who is also a Workload Manager administrator must follow this procedure.
Download the core_installer.bin package files:
SSH into the VM instance designated for this component by using the key pair that you used to launch the VM.
Along with the key pair, you may need to use your login credentials for sudo or root access based on your environment.
Download the following required files for this component from software.cisco.com. Be aware that these files are contained in a file whose name uses the following syntax:
Use the defaults or override defaults for the environment variables that the following table describes.
Run the core installer to set up core system components using the following commands.
sudo -i
cd /tmp
chmod 755 core_installer.bin
# Set the following only if a local package store is set up
export CUSTOM_REPO=<ip>
./core_installer.bin <ostype> <cloudtype> arcus
For example:
./core_installer.bin centos7 amazon arcus
Syntax:
<ostype>= centos7, rhel7
<cloudtype>= amazon, azurerm, azurepack, azurestack, google, kubernetes, opsource, openstack, softlayer, vmware, or vcd
(run the ./core_installer.bin help command for a complete list)
Remove the core_installer.bin file.
rm core_installer.bin
Reboot the Arcus VM.
You have successfully installed the Arcus server! You must now configure the Arcus server to integrate with Workload Manager.
Arcus API Account Access
The Arcus API Account is required to authorize access to the Arcus web service. The credentials for the Arcus API account must be set in Cisco Workload Manager when configuring a call through Arcus to gather information from your infrastructure device.
Create an Arcus API account.
Log in to Arcus. The following screenshot shows information for Arcus API accounts.
Select Arcus API Accounts from the left navigation menu to view a list of all Arcus API Accounts. From this list of accounts, you can view, edit, or remove existing Arcus API Accounts.
Click the New Arcus API Account button.
Enter a descriptive name for the account.
Optionally, enter a longer description for the account.
Enter a Username.
Enter a Password and confirm the password.
If you change the Username or Password for an Arcus API Account, you will have to make the corresponding changes to the automation created in Workload Manager.
Click the Create Arcus API Account button.
Installing a Trusted Certificate Authority
To integrate Workload Manager with an Arcus server, your client must trust the HTTPS endpoint. If the client is not using an SSL certificate signed by the standard Java JRE's trusted CAs, you must add a trusted certificate.
Be sure to import the certificate from the CCM and update the certificates as specified in the Certificate Authentication > Update the certs.zip File on the Arcus Server section.
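For example, if the client calling the Arcus endpoint is Java-based, the certificate can be imported into the JRE trust store with keytool. This is a sketch only; the alias, certificate file name, keystore path, and store password are assumptions that depend on your JRE layout:

# Import the Arcus certificate into the JRE trust store
# (adjust JAVA_HOME, the keystore path, and the store password for your setup)
keytool -importcert -trustcacerts \
  -alias arcus \
  -file arcus.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit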
User Configuration
An Arcus user who is not an Arcus administrator is called a Member. Members cannot create additional Arcus users. Members can create and manage device types, devices, templates, and service accounts.
In addition to all of the capabilities of a Member, Admin users can create and manage Member users and other Admin users on the Arcus server. Only Admin users can create, modify, and remove other user accounts.
To configure a Member or Admin user, follow this procedure:
Log in to the Arcus server as an Admin user.
Select Admin Users from the left navigation menu. The list of configured users is displayed. From this list of users, you can view, edit, or remove existing users.
Click the New Admin User button to add a new user.
Enter the user’s email address.
Enter a password and confirm the password.
Choose either a Member or Admin for the role.
Click the Create Admin User button.
Click the Edit button for a specific user to change the password: Changing the password of the user you are logged in as will require you to sign in again.
Enter a new password and confirm the new password.
Click the Update Admin User button.
Click the Delete button for a specific user to delete this user: You cannot delete the user you are logged in as.
Verify the user name.
Confirm that you wish to delete the user.
Reset Admin Password from the Command Line
If any user has forgotten their password, then any Admin user can reset the user's password. If all admins have forgotten their passwords, you can reset the password for one of the Admins from the command line.
Log onto the host system for Arcus as a user who has Docker permission
Run the following command:
docker exec -it arcus_web_1 rake reset_admin {email of user to reset}
The system prompts you to enter the new password twice.
Once accepted, the system confirms that the password has been set and you can log in using the web interface.
Device Type Configuration
A Device Type represents the make and model of a brand or class of device existing in your infrastructure. As an example, if you have a number of F5 BIG-IP LTM 7050 load balancers in use, you would create a Device Type representing this type of infrastructure device. By creating this Device Type, you will be able to create individual devices for each of the 7050s deployed to your infrastructure and you will, further, be able to create templates that you can use to retrieve information from this Device Type.
Both Devices and Templates belong to a Device Type.
A Template returns data for any Device which shares its Device Type.
It is important to use the appropriate Device Type so Templates return meaningful data for all Devices belonging to the same Device Type.
To configure a Device Type, follow this procedure.
Login to the Arcus server as an Admin user. The following screenshot highlights the Device Types > New Device Type button.
Select Device Types from the left navigation menu. The list of configured devices is displayed! From this list of devices types, you can view, edit, or remove existing devices.
Click the New Device Type button to add a new device type:
Enter a unique name to describe the device type.
Click the Add New Step button.
Provide a step name that describes it.
If the device type should also apply the template settings to this step, check the Apply template box.
If different settings are configured in both the template setting and the step setting, be aware that the template setting overrides the step setting. The template's transformation is applied to the response body.
Configure the step to make the appropriate HTTP request.
If the device type should also include the basic authentication header using the device credentials in this step, check the Basic auth box.
Optional. Click Add New Step if you need to add another step.
Click the Create Device Type button to save all changes.
Click the Edit button for a specific device type: Changing the authentication details affects all devices associated with this device type
Click the Delete button for a specific device type: Device Types associated with one or more devices and/or templates cannot be removed. The Delete button will only be available for device types that are not associated with a device and/or template.
Device Configuration
A Device represents an individual and uniquely addressable device from your infrastructure. For example, you could have a F5 BIG-IP LTM 7050 load balancer with the IP address 12.18.1.1 represented by a device in Arcus. The device contains the information required to send requests to the device and collect information from the device’s APIs, including the username and password for the device’s APIs and the base URL or IP address to use when contacting the device’s APIs. Using a combination of a unique device and a template for the appropriate device type, you can retrieve information from the device using APIs.
To configure a device, follow this procedure.
Login to the Arcus server as an Admin user. The following screenshot highlights the Device Types > New Device button.
Select Devices from the left navigation menu. The list of configured devices is displayed! From this list of devices, you can view, edit, or remove existing devices.
Click the New Device button to add a new device:
Select the appropriate device type for the device (If the appropriate device type does not exist for this device, create a new device type for this class of device).
Enter a unique name to describe the device.
Enter the base URL or IP address assigned to the device.
When available and required, enter the username and password necessary to authenticate to the device.
If the device allows or requires SSL validation, check the Ssl validation box.
Click the Create Device button.
Click the Edit button for a specific device: Changing the authentication details affects all devices associated with this device type
Click the Delete button for a specific device: Device Types associated with one or more devices and/or templates cannot be removed. The Delete button will only be available for device types that are not associated with a device and/or template.
Template Configuration
Templates contain instructions specific to the detailed API endpoint you are trying to access. This includes the relative path to the endpoint, any payload that needs to be included with the request, and how to parse the data that is returned from the endpoint.
To configure a Device Type, follow this procedure.
Login to the Arcus server as an Admin user. The following screenshot highlights the Device Types > New Template button and a relative URL of the endpoint from which to access the data.
Select Templates from the left navigation menu.
Click the New Template button.
Select the appropriate Device Type for the device (If the appropriate device type does not exist for this device, create a new device type for this class of device).
Enter a unique name to describe the template.
Enter a description (optional). This is used to help other users of the system know the purpose of the template.
Enter the relative URL of the endpoint to from which to access the data.
Select the HTTP method to use to retrieve the data (get or post).
Enter the body that should be passed to the service during the request (mainly used when retrieving data with POST).
Add additional headers to pass the request, if needed.
Enter a valid XSLT in the Transformation section. For details on how to create a transformation, see the XSLT Transformation below.
Click the Create Template button.
XSLT Transformation
Arcus uses XSLT to retrieve results from various types of endpoints and return them in a common format. XSLT uses XPath to locate the required data inside the source XML document. See the following resources for more information on XSLT:
Hands-On XSL (from IBM)
Workload Manager's XSLT format is as follows:
<?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet <xsl:template <data> <xsl:for-each <xsl:sort <results> <name><xsl:value-of</name> <displayName><xsl:value-of </displayName> </results> </xsl:for-each> </data> </xsl:template> </xsl:stylesheet>
The components for the XSLT transformation is explained in the following table.
Arcus accepts structured data in both XML and JSON formats. The returned information is parsed and transformed based on the template.
Example 1 (XML Data)
Data returned as XML is available to be parsed using the existing structure with which the endpoint returns the data.
<?xml version="1.0" encoding="UTF-8"?> <dataset> <hosts> <host> <name>Bins-Dicki</name> <internal> <account-id>e34667de-baad-45f3-b0c3-bcf954af93ba</account-id> </internal> </host> <host> <name>Corwin, Runte and Schumm</name> <internal> <account-id>0b0cefa7-6786-4add-a7e4-21f6b99f1d60</account-id> </internal> </host> <host> <name>Braun, Steuber and Kuphal</name> <internal> <account-id>8e63ec0a-38b6-407c-9652-0fe75dc2329e</account-id> </internal> </host> <host> <name>Lind LLC</name> <internal> <account-id>6f10eddd-b9bc-4347-8258-d1c1c7d539ab</account-id> </internal> </host> <host> <name>Ernser Group</name> <internal> <account-id>f74c899e-6324-4ffb-9241-cdc97cb45884</account-id> </internal> </host> </hosts> </dataset>
The following XSLT:
<?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns: <xsl:template <data> <xsl:for-each <xsl:sort <results> <name> <xsl:value-of </name> <displayName> <xsl:value-of </displayName> </results> </xsl:for-each> </data> </xsl:template> </xsl:stylesheet>
Returns this data:
[ {"name":"e34667de-baad-45f3-b0c3-bcf954af93ba","displayName":"Bins-Dicki"}, {"name":"8e63ec0a-38b6-407c-9652-0fe75dc2329e","displayName":"Braun, Steuber and Kuphal"}, {"name":"0b0cefa7-6786-4add-a7e4-21f6b99f1d60","displayName":"Corwin, Runte and Schumm"}, {"name":"f74c899e-6324-4ffb-9241-cdc97cb45884","displayName":"Ernser Group"}, {"name":"6f10eddd-b9bc-4347-8258-d1c1c7d539ab","displayName":"Lind LLC"} ]
Example 2 (JSON Data)
The JSON spec does not require a top-level key to be valid. Consequently, Workload Manager wraps the JSON response in a root element before attempting to transform the data. Hence, the XSLT written to consume JSON data must contain root as the first part of the select participle.
Arcus converts underscores to dashes in keys (so account_id is converted to account-id).
{ "accounts":[ { "name":"Langosh, Pfeffer and Kutch", "internal":{ "account_id":"26a44c79-1627-4485-9393-a88e49655481", "datacenter":"GB-LDN" } }, { "name":"Stamm-Zboncak", "internal":{ "account_id":"26a990fb-88db-43f0-b3cf-e89267864072", "datacenter":"US-ARL" } }, { "name":"Hirthe-Braun", "internal":{ "account_id":"cf37947a-f8e9-4c0d-bdfe-50b4f0b04798", "datacenter":"GB-LDN" } }, { "name":"Sanford Group", "internal":{ "account_id":"327e73b6-8ae7-45dc-aa7a-92341d39c55e", "datacenter":"GB-LDN" } }, { "name":"Medhurst-Keebler", "internal":{ "account_id":"94f91032-0f01-49a9-9002-c912ef124605", "datacenter":"GB-LDN" } } ] }
The following XSLT:
<?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns: <xsl:template <data> <xsl:for-each <xsl:sort <xsl:sort <results> <name> <xsl:value-of </name> <displayName> <xsl:value-of - <xsl:value-of </displayName> </results> </xsl:for-each> </data> </xsl:template> </xsl:stylesheet>
Returns this data:
[ {"name":"cf37947a-f8e9-4c0d-bdfe-50b4f0b04798","displayName":"GB-LDN - Hirthe-Braun"}, {"name":"26a44c79-1627-4485-9393-a88e49655481","displayName":"GB-LDN - Langosh, Pfeffer and Kutch"}, {"name":"94f91032-0f01-49a9-9002-c912ef124605","displayName":"GB-LDN - Medhurst-Keebler"}, {"name":"327e73b6-8ae7-45dc-aa7a-92341d39c55e","displayName":"GB-LDN - Sanford Group"}, {"name":"26a990fb-88db-43f0-b3cf-e89267864072","displayName":"US-ARL - Stamm-Zboncak"} ]
When converting arrays to XML, Arcus attempts to use the singular form of keys.
{ "data":{ "items":[ {"name":"Host 1"}, {"name":"Host 2"}, {"name":"Host 3"} ] } }
To loop over individual names, use the for-each string of root/data/items/item.
However, given this structure:
{ "data":{ "host":[ {"name":"Host 1"}, {"name":"Host 2"}, {"name":"Host 3"} ] } }
You would need to use the for-each string of root/data/host/host as host is already singular.
A key ending in “a” is a special case, as Arcus interprets the “a” ending as the plural form of the key.
{ "data":{ "imdata":[ {"name":"Host 1"}, {"name":"Host 2"}, {"name":"Host 3"} ] } }
To loop over the individual names, use the for-each string of root/data/imdata/imdatum.
- No labels | https://docs.cloudcenter.cisco.com/display/WORKLOADMANAGER/Arcus+Server | 2019-11-12T05:17:45 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.cloudcenter.cisco.com |
Functions dedicated to widgets.
this function takes a DOM node defining a widget and instantiates / builds the appropriate widget class
This function is called on load and is in charge to build JS widgets according to DOM nodes found in the page
hiddenInputHandlers defines all methods specific to handle the hidden input created along the standard text input. An hiddenInput is necessary when displayed suggestions are different from actual values to submit. Imagine an autocompletion widget to choose among a list of CWusers. Suggestions would be the list of logins, but actual values would be the corresponding eids. To handle such cases, suggestions list should be a list of JS objects with two label and value properties.
inspects textarea with id areaId and replaces the current selected text with text. Cursor is then set at the end of the inserted text. | https://docs.cubicweb.org/js_api/cubicweb.widgets.html | 2017-03-23T00:14:43 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.cubicweb.org |
Acceptus
Responsive OpenCart Fashion Template
By Kulerthemes !
Compatible with OpenCart 1.5.4.x, OpenCart 1.5.5.x and OpenCart 1.5.6.x
How to install theme ?
You don’t have to overwrite or modify OpenCart core files when installing and upgrading Acceptus as the theme is located in a separate folder (named Acceptus) from the OpenCart core structure.
In order to install Acceptus, follow these steps:
- Download acceptus-pro.zip to your computer (If you purchased via Themeforest, the folder contains 4 folders: Licensing, Documentation, Design and Installation)
- Installation / 1.5.4.x - use this folder if the version of your OpenCart installation is 1.5.4 or 1.5.4.1
- Installation / 1.5.5.x - use this folder if the version of your OpenCart installation is 1.5.5 or 1.5.5.1
- Installation / 1.5.6.x - use this folder if the version of your OpenCart installation is 1.5.6 or 1.5.6.1
- In each folder, you will see folders modules and acceptus-pro
- Upload acceptus-pro folders to catalog/view/theme on your web hosting via FTP (put it beside folder called default which is the default theme).
Now you've uploaded Accept accept CSS3 Slideshow
- configure module Kuler CSS3 SlideShow?
Each time you take a look at our demo site, you would see a nice image slideshow which is called Kuler CSS3 Slideshow. This module is built based on CSS3 technique which provides the same effect as jQuery version while increasing performance. Now we're going to show you how to create and configure it.
- Step 1: Log in OpenCart admin
- Step 2: Choose Extensions >> Module
- Step 3: Install Kuler CSS3 slideshow then click " Edit"
- Step 4: Click " Add module"
- Step 5: Choose module title and all necessary options
- There are 2 options in Image Source the first one is to take all images from built-in OpenCart Banners.Second options is to choose image for your slideshow manually.
- Dimesion: Type in width and height value of this module, then choose transition type.
- Split value: Split show how many parts the slideshow will be divided.
- Step 6: Choose all images that you want to displayed in your image slideshow. Click " Add image" >> "Browse" find the image you want to upload.
- (You can type in your image Title and Link if you want, users will be redirected to this link whenever they click to appropriate image.)
- Step 7: Click " Save" and see the result.
- This image slideshow works flawlessly with table/ mobile service while loading faster than the build-in OpenCart slideshow module.
- You can choose from 1 to 4 transition types and disable Auto Start if you want by turning into Auto Start to Off then click " Save".
An OpenCart slider that captures visitors’ look right the moment they enter your store. Built based on CSS3 tech and also responsive, the slider is the best choice if you’re looking for a lighweight and stable image slider.
How to configure module Kuler Accordion? cofigure module Kuler Filter? help your website becomes more lifeslike with slideshow which have a lot of eyes catching effects and many layers like image, text and video. Acceptus Pro, please download the package Acceptus Acceptus | http://docs.kulerthemes.com/acceptus/documentation.html | 2014-04-16T15:59:33 | CC-MAIN-2014-15 | 1397609524259.30 | [array(['images/doc-images/0-intro.jpg',
'Premium Responsive OpenCart Theme'], dtype=object)
array(['images/doc-images/1-fullscreen.jpg',
'Premium Responsive OpenCart Theme'], dtype=object)
array(['images/doc-images/2-install-theme.jpg', 'Install Acceptus 2'],
dtype=object)
array(['images/doc-images/3-active-theme.jpg', 'Active theme Acceptus'],
dtype=object)
array(['images/doc-images/4-sample-data.jpg', 'Install Sample Data'],
dtype=object)
array(['images/doc-images/5-upload-module.jpg',
'Install Opencart Modules'], dtype=object)
array(['images/doc-images/6-install-module.jpg',
'Active Opencart Modules'], dtype=object)
array(['images/doc-images/7-kulercp-general.jpg',
'KulerCP General Features'], dtype=object)
array(['images/doc-images/8-kulercp-design.jpg', 'KulerCP Design'],
dtype=object)
array(['images/doc-images/9-kulercp-bottom.jpg', 'KulerCP Bottom'],
dtype=object)
array(['images/doc-images/10-kulercp-utilities.jpg', 'KulerCP Utilities'],
dtype=object)
array(['images/doc-images/11-kuler-css3-slideshow.jpg',
'Kuler CSS3 Slideshow'], dtype=object)
array(['images/doc-images/12-kuler-accordion.jpg', 'Kuler Accordion'],
dtype=object)
array(['images/doc-images/13-kuler-tabs.jpg', 'Kuler Tabs'], dtype=object)
array(['images/doc-images/14-kuler-slides.jpg', 'Kuler Slides'],
dtype=object)
array(['images/doc-images/16-kuler-advanced-html.jpg',
'Kuler Advanced HTML'], dtype=object)
array(['images/doc-images/17-kuler-filter.jpg', 'Kuler Filter'],
dtype=object)
array(['images/doc-images/18-kuler-finder.jpg', 'Kuler Finder'],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Installation.png',
'Install Kuler Layer Slider'], dtype=object)
array(['images/doc-images/kuler_layer_slider/actions.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Slider Management.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Slider Setting.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Appearance Slider Setting.png',
None], dtype=object)
array(['images/doc-images/kuler_layer_slider/Slider Setting.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Thumbnails Slider Setting.png',
None], dtype=object)
array(['images/doc-images/kuler_layer_slider/Mobile Visibility Setting.png',
None], dtype=object)
array(['images/doc-images/kuler_layer_slider/List of Slides.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Slide Setting.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Layer.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Layer Setting.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Modules.png', None],
dtype=object)
array(['images/doc-images/kuler_layer_slider/Import.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/folder.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Install KBM.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Menu.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Blog Layout.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Setting.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Blog Author.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Blog Category.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Blog Article.png', None],
dtype=object)
array(['images/doc-images/kuler_blog_manager/Blog Comment.png', None],
dtype=object)
array(['images/doc-images/20-update-step-1.jpg',
'update kulerthemes products 1'], dtype=object)
array(['images/doc-images/21-update-step-2.jpg',
'update kulerthemes products 2'], dtype=object)
array(['images/doc-images/22-update-step-3.jpg',
'updates kulerthemes products 3'], dtype=object)] | docs.kulerthemes.com |
User Guide
Local Navigation
Change, move, or delete a saved Wi-Fi network
- On the home screen, click the connections area at the top of the screen, or click the Manage Connections icon.
- Click Wi-Fi Network > Saved Wi-Fi Networks.
- Highlight a saved Wi-Fi network.
- Press the
key.
Related reference
Previous topic: Saved Wi-Fi networks
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/37425/Change_move_or_delete_a_saved_Wi-Fi_network_61_1570009_11.jsp | 2014-09-15T04:19:21 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.blackberry.com |
In order to install and configure the Archiva application, you should follow these steps:
At the time of the writing, Archiva is not yet released.
You should therefore download the Archiva sources and build it.
Or you can download a patched version of MRM-212 from Arnaud Heritier.
To create the MySQL database for Archiva, log as root, and execute the following commands:):
Using the Tomcat admin application, configure the database binding.
Select the Archiva's datasources.
Choose the "Create New Data Source" action.
Add the Values:
Save the changes, commit and restart Tomcat.: | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=66508 | 2014-09-15T04:22:25 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.codehaus.org |
WPD:discuss
Discuss
There are many ways for you to get in touch with fellow Webplatform.org community members, whether you have questions about contributing, feedback about the site itself, or just want to discuss relevant topics with like-minded individuals. Below you'll find information on our IRC/chat channels, mailing list and Q&A.
General note on discussion topics
In general, the Webplatform.org community is very welcoming and friendly, but we would like to gently remind you to keep your conversations on topic for the community. Relevant topics for Webplatform.org include questions about contributing to Webplatform.org (whether that is helping to moderate chat, write or edit articles, create or modify WPD templates, help with site graphics or styling, or general questions about writing style), feedback about aspects of Webplatform.org, and requests for new content.
We are not a tech support line for "please help me fix my code" type questions; such questions are better suited to code-specific IRC channels, or stackoverflow.
You can find more information on discussion topics and conduct at WPD:Conduct.
Public mailing list
Our public mailing list, [email protected], is for article organization, changes to common templates or forms, soliciting feedback, and setting new norms. It's also where we announce things like upcoming Doc Sprints (which everyone is welcome to join). You can subscribe to the list here. You do not need a W3C account to join or send messages.
Q&A
Please visit our Q&A page.
This is a Q&A forum where you can post questions and get them answered by other members of the community. You can get questions voted up and down so they will appear higher or lower in the answer list depending on how popular they are, so keep discussions high quality and on topic.
Web Platform IRC Chat
We have three IRC channels available for talking about different aspects of Webplatform.org, available on freenode.net IRC:
- #webplatform: The main channel, and the best place for general questions and discussion
- #webplatform-site: A channel reserved for discussions about updating the site, for example modifying styles and templates or creating new functionality
- #webplatform-offtopic: A channel specifically created for any off topic discussion that arises
To join the live chat, you first need to download an IRC client, such as
More can be found via Wikipedia as well.
Next, join one of the above IRC channels using the following details:
- Server: Freenode (irc.freenode.net)
- Channel: #webplatform, #webplatform-site, or #webplatform-offtopic
- URL to join a channel: irc: //chat.freenode.net[chosen chatroom name], for example irc://chat.freenode.net#webplatform
We will be providing a Web-based chat client in the future. | http://docs.webplatform.org/wiki/TEST:discuss | 2014-09-15T04:01:43 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.webplatform.org |
...
WingSBuilder uses Maven2 as its build tool, which means that if you want to
build your own version of WingSBuilder from source you'll need to have it
installed. Follow the instructions at
Once Maven2 is installed you will also need to install 4 2 files from the wingS
distribution (but it wouldn't hurt to check at
if they are already there). The files are:
... | http://docs.codehaus.org/pages/diffpages.action?pageId=115474498&originalId=34701375 | 2014-09-15T04:16:55 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.codehaus.org |
Recent Development
The brewer extension was conceived on the 2.2.x branch and is still undergoing major redesign. For the 2.34.x branch graph has been very stable, not very much development going on at allmuch of the StyleGenerator class is being rewritten to make better use of classification functions.
If you are a volunteer we could use help improving documentation of this module. Contributing a tutorial would be a great help.
... | http://docs.codehaus.org/pages/diffpages.action?pageId=68608&originalId=68607 | 2014-09-15T04:04:40 | CC-MAIN-2014-41 | 1410657104119.19 | [] | docs.codehaus.org |
Walkthrough of the Grid Features and Behavior
The
To
First
- field: The field in the data set that this column should be bound to.
- template: You can specify a template for the grid column to display instead of plain text.
- width: The desired width of the column.
Grid Creation From An HTML Table
Add
The next step is to bind the grid to data. The grid can be bound to local data very simply by setting the
dataSource option of the kendoGrid object. });
Data Binding – Remote
The
The
By
This.
If Grid virtual scrolling is used, then execute the following instead of
resize.
$("#GridID").data("kendoGrid").dataSource.fetch();
The above statements will take care of measuring the total height of the Grid and adjusting the height of the scrollable data area.
If locked (frozen) columns are used, executing
resize is not necessary.
The
resize method will work for Kendo UI versions Q3 2013 or later. For older versions, the following Javascript code must be used instead or
resize,
If a scrollable Grid with a set height is initialized while inside a hidden container, the Grid will not be able to adjust its vertical layout correctly,
because Javascript size calculations do not work for elements with a
display:none style. Depending on the exact configuration, the widget will appear smaller than expected or the scrollable data area will overflow.s
When
The
- float the Grid
DIVand clear the float right after the widget. Floated elements expand and shrink automatically to enclose their content, when needed.
Features
Scrolling
Grid
Virtual
When. At least one column should be locked initially. The Grid should have a height set.
Selection
The
The.
Grouping
Setting
By definition, the row template defines the row markup explicitly, while grouping requires changing the row markup. As a result, the two features can be used at the same time only if the row template includes a script, which adds additional cells, depending on the number>" }); });
Sorting
Sorting
Keyboard tabindex="-1" attribute, so that they are inaccessible via tabbing.
If needed, the described procedure can be avoided. The custom hyperlinks can be accessed via tabbing and activated via ENTER by hacking and bypassing the Grid keyboard navigation. This is achieved by preventing event bubbling of the custom hyperlinks' keydown event, so that the Grid never finds out about their ENTER keypresses.
Retrieving a Grid row by a model ID
In order to get a Grid table row by the data item ID can be achieved in the following way.
First, the ID field should be defined in the model configuration of the Grid datasource.
Then, the row model, the model UID and the Grid table row can be retrieved consecutively in the following way:
var rowModel = gridObject.dataSource.get(10249); // get method of the Kendo UI DataSource object var modelUID = rowModel.get("uid"); // get method of the Kendo UI Model object var tableRow = $("[data-uid='" + modelUID + "']"); // the data-uid attribute is applied to the desired table row element. This UID is rendered by the Grid automatically.
Applying Templates To Cells
Using templates within either a script tag, or the template option on the column object if the grid is being initialized from a div can format each cell in the grid.
In this example, a template is used to format the email address as a hyperlink by using a template declared in a script block.
<script id="template" type="text/x-kendo-tmpl"> <tr> <td> #= firstName # </td> <td> #= lastName # </td> <td> <a href="mailto:#= email #">#= email #</a> </td> </tr> </script>
This is then specified as the template for each row by passing it in to the
rowTemplate option on the grid and initializing it with the
kendo.template function.
$("#grid").kendoGrid({ rowTemplate: kendo.template($("#template").html()), // other configuration });
Now the email address is an interactive hyperlink that will open a new email message.
Printing the Grid
The following example shows how to inject the Grid HTML output in a new browser window and trigger printing.
When the Grid is scrollable (by default, except for the MVC wrapper), it renders a separate table for the header area. Since the browser cannot understand the relationship between the two Grid tables, it will not repeat the header row on top of every printed page. The code below addresses this issue by cloning the header row and prepending it to the printable Grid. Another option is to disable Grid scrolling.
HTML
<div id="grid"></div> <script type="text/x-kendo-template" id="toolbar-template"> <button type="button" class="k-button" id="printGrid">Print Grid</button> </script>
Javascript
function printGrid() { var gridElement = $('#grid'), printableContent = '', win = window.open('', '', 'width=800, height=500'), doc = win.document.open(); var htmlStart = '<!DOCTYPE html>' + '<html>' + '<head>' + '<meta charset="utf-8" />' + '<title>Kendo UI Grid</title>' + '<link href="' + kendo.version + '/styles/kendo.common.min.css" rel="stylesheet" /> ' + '<style>' + 'html { font: 11pt sans-serif; }' + '.k-grid { border-top-width: 0; }' + '.k-grid, .k-grid-content { height: auto !important; }' + '.k-grid-content { overflow: visible !important; }' + '.k-grid .k-grid-header th { border-top: 1px solid; }' + '.k-grid-toolbar, .k-grid-pager > .k-link { display: none; }' + '</style>' + '</head>' + '<body>'; var htmlEnd = '</body>' + '</html>'; var gridHeader = gridElement.children('.k-grid-header'); if (gridHeader[0]) { var thead = gridHeader.find('thead').clone().addClass('k-grid-header'); printableContent = gridElement .clone() .children('.k-grid-header').remove() .end() .children('.k-grid-content') .find('table') .first() .children('tbody').before(thead) .end() .end() .end() .end()[0].outerHTML; } else { printableContent = gridElement.clone()[0].outerHTML; } doc.write(htmlStart + printableContent +
When the datasource does not return any data (e.g. as a result of filtering) a table row with some user-friendly message can be added manually:
Example - adding a table row in the Grid's dataBound event handler
function onGridDataBound(e) { if (!e.sender.dataSource.view().length) { var colspan = e.sender.thead.find("th:visible").length, emptyRow = '<tr><td colspan="' + colspan + '">... no records ...</td></tr>'; e.sender.tbody.parent().width(e.sender.thead.width()).end().html(emptyRow); } } | http://docs.telerik.com/kendo-ui/web/grid/walkthrough | 2014-09-15T04:01:21 | CC-MAIN-2014-41 | 1410657104119.19 | [array(['/kendo-ui/web/grid/grid2_1.png', None], dtype=object)
array(['/kendo-ui/web/grid/grid3_1.png',
'Grid With Fixed Height And Scrolling'], dtype=object)
array(['/kendo-ui/web/grid/grid4_1.png',
'Grid With Row Selection Enabled'], dtype=object)
array(['/kendo-ui/web/grid/grid5_1.png', 'Grid With Grouping Enabled'],
dtype=object)
array(['/kendo-ui/web/grid/grid6_1.png', 'Grid Grouped By Last Name'],
dtype=object)
array(['/kendo-ui/web/grid/grid7_1.png', 'Grid With Sorting Enabled'],
dtype=object)
array(['/kendo-ui/web/grid/grid8_1.png', 'Grid With Row Template'],
dtype=object) ] | docs.telerik.com |
Cleanse Data Using DQS (Internal) Knowledge
This topic describes how to cleanse your data by using a data quality project in Data Quality Services (DQS). Data cleansing is performed on your source data using a knowledge base that has been built in DQS against a high-quality data set. For more information, see Building a Knowledge Base.
Data cleansing is performed in four stages: a mapping stage in which you identify the data source to be cleansed, and map it to required domains in a knowledge base, a computer-assisted cleansing stage where DQS applies the knowledge base to the data to be cleansed, and proposes/makes changes to the source data, an interactive cleansing stage where data stewards can analyze the data changes, and accept/reject the data changes, and finally the export stage that lets you export the cleansed data. Each of these processes is performed on a separate page of the cleansing activity wizard, enabling you to move back and forth to different pages, to re-run the process, and to close out of a specific cleansing process and then return to the same stage of the process. DQS provides you with statistics about the source data and the cleansing results that enable you to make informed decisions about data cleansing.
Before You Begin
Prerequisites
You must have specified appropriate threshold values for the cleansing activity. For information about doing so, see Configure Threshold Values for Cleansing and Matching.
A DQS knowledge base must be available on Data Quality Server against which you want to compare, and cleanse your source data. Additionally, the knowledge base must contain knowledge about the type of data that you want to cleanse. For example, if you want to cleanse your source data that contains US addresses, you must have a knowledge base that was created against a “high-quality” sample data for US addresses.
Microsoft Excel must be installed on the Data Quality Client computer if the source data to be cleansed
Permissions
You must have the dqs_kb_editor or dqs_kb_operator role on the DQS_MAIN database to perform data cleansing.
Create a Cleansing Data Quality Project
You must use a data quality project to perform data cleansing operation. To create a cleansing data quality project:
Follow steps 1-3 in the topic Create a Data Quality Project.
In step 3.d, select the Cleansing activity.
Click Create to create a cleansing data quality project.
This creates a cleansing data quality project, and opens up the Map page of the cleansing data quality wizard.
Mapping Stage
In the mapping stage, you specify the connection to the source data to be cleansed, and map the columns in the source data with the appropriate domains in the selected knowledge base.
On the Map page of the cleansing data quality wizard, select your source data to be cleansed: SQL Server or Excel File:
SQL Server: Select DQS_STAGING_DATA as the source database if you have copied your source data to this database, and then select appropriate table/view that contains your source data. Otherwise, select your source database and appropriate table/view. Your source database must be present in the same SQL Server instance as Data Quality Server to be available in the Database drop-down list.
Excel File: Click Browse, and select the Excel file that contains the data to be cleansed. Microsoft Excel must be installed on the Data Quality Client computer to select an Excel file. Otherwise, the Browse button will not be available, and you will be notified beneath this text box that Microsoft Excel is not installed. Also, leave the Use first row as header check box selected if the first row of the Excel file contains header data.
Under Mappings, map the data columns in your source data with appropriate domains in the knowledge base by selecting a source column from the drop-down list in the Source Column column, and then selecting a domain from the drop-down list in the Domain column in the same row. Repeat this step to map all the columns in your source data with appropriate domains in the knowledge base. If required, you can click the Add a column mapping icon to add rows to the mapping table.
Note
You can map your source data to a DQS domain for performing data cleansing only if the source data type is supported in DQS, and matches with the DQS domain data type. For information about supported source data types, see Supported SQL Server and SSIS Data Types for DQS Domains.
Click the Preview data source icon to see the data in the SQL Server table or view that you selected, or the Excel worksheet that you selected.
Click View/Select Composite Domains to view a list of the composite domains that are mapped to a source column. This button is available only if you have at least one composite domain mapped to a source column.
Click Next to proceed to the computer-assisted cleansing stage (Cleanse page).
Computer-Assisted Cleansing Stage
In the computer-assisted cleansing stage, you run an automated data cleansing process that analyzes source data against the mapped domains in the knowledge base, and makes/proposes data changes.
On the Cleanse page of the data quality wizard, click Start to run the computer-assisted cleansing process. DQS uses advanced algorithms and confidence levels based on the threshold levels specified to analyze your data against the selected knowledge base, and then cleanse it. For detailed information about how computer-assisted cleansing happens in DQS, see Computer-assisted Cleansing in Data Cleansing.
Important
After the data analysis has completed, the Start button turns into a Restart button. If the results from the previous analysis have not been saved as yet, clicking Restart will cause that previous data to be lost. As the analysis is running, do not leave the page or the analysis process will be terminated.
- If the knowledge base used for the cleansing project was updated and published after the time that the cleansing project was created, clicking Start prompts you whether to use the latest knowledge base for cleansing. This can typically happen if you created a data quality project using a knowledge base, closed the cleansing project mid-way by clicking Close, and then reopened the data quality project at a later point to perform cleansing. In the meantime, the knowledge base used in the cleansing project was updated and published.
Similarly, if the knowledge base used for the cleansing project was updated and published after the last time you ran the computer-assisted cleansing, clicking Restart prompts you whether to use the latest knowledge base for cleansing.
In both the cases, click Yes to use the updated knowledge base for the computer-assisted cleansing. Additionally, if there are any conflicts between current mappings and the updated knowledge base (such as domains were deleted or domain data type was changed), the message also prompts you to fix the current mappings to use the updated knowledge base. Clicking Yes takes you to the Map page where you can fix the mappings before continuing with the computer-assisted cleansing.
During the computer-assisted cleansing stage, you can switch on the profiler by clicking the Profiler tab to view real-time data profiling and notifications. For more information, see Profiler Statistics.
If you are not satisfied with the results, then click Back to return to the Map page, modify one or more mappings as necessary, return to the Cleanse page, and then click Restart.
After the computer-assisted cleansing process is complete, click Next to proceed to the interactive cleansing stage (Manage and View Results page).
Interactive Cleansing Stage
In the interactive cleansing stage, you can see the changes that DQS has proposed and decide whether to implement them or not by approving or rejecting the changes. On the left pane of the Manage and view results page, DQS displays a list of all the domains that you mapped earlier in the mapping stage along with the number of values in the source data analyzed against each domain during the computer-assisted cleansing stage. On the right pane of the Manage and view results page, based on adherence to the domain rules, syntax error rules, and advanced algorithms, DQS categorizes the data under five tabs using the confidence level. The confidence level indicates the extent of certainty of DQS for the correction or suggestion, and is based on the following threshold values:
Auto Correction threshold: Any value that has a confidence level above this threshold is automatically corrected by DQS. However, the data steward can override the change during interactive cleansing. You can specify the auto correction threshold value in the General Settings tab in the Configuration screen. For more information, see Configure Threshold Values for Cleansing and Matching.
Auto Suggestion threshold: Any value that has a confidence level above this threshold, but below the auto correction threshold, is suggested as a replacement value. DQS will make the change only if the data steward approves it. You can specify the auto suggestion threshold value in the General Settings tab in the Configuration screen. For more information, see Configure Threshold Values for Cleansing and Matching.
Other: Any value below the auto suggestion threshold value is left unchanged by DQS.
Based on the confidence level, the values are displayed under the following five tabs:
To interactively cleanse the data:
On the Manage and view results page of the cleansing data quality wizard, click on a domain name in the left pane.
Review the domain values under the five tabs, and take appropriate action as explained earlier.
The right-upper pane displays the following information for each value in the selected domain: original value, number of instances (records), a box to specify another (correct) value, the confidence level (not available for the values under the Correct tab), the reason for the DQS action on the value, and the option to approve and reject the corrections and suggestions for the value.
Tip
You can approve or reject all the values in the selected domain in the upper-right pane by clicking Approve all terms or Reject all terms icon respectively. Alternately, you can right-click a value in the selected domain, and click Accept all or Reject all in the shortcut menu.
The lower pane displays individual occurrences of the domain value selected in the right-upper pane. The following information is displayed: a box to specify another (correct) value, the confidence level (not available for the values under the Correct tab), the reason for the DQS action on the value, option to approve and reject the corrections and suggestions for the value, and the original value.
If you enabled the Speller feature for a domain while creating it, wavy red underscores are displayed against such domain values that are identified as potential error. The underscore is displayed for the entire value. For example, if “New York” is incorrectly spelled as “Neu York”, the speller will display red underscore under “Neu York”, and not just “Neu”. If you right-click the value, you will see suggested corrections. If there are more than 5 suggestions, you can click More suggestions in the context menu to view the rest of them. As with the error display, the suggestions are replacements for the whole value. For example, “New York” will be displayed as a suggestion in the previous example, and not just “New”. You can pick one of the suggestions or add a value to the dictionary to be displayed for that value. Values are stored in dictionary at a user account level. When you select a suggestion from the speller context menu, the selected suggestion will be added to the Correct To column. However, if you select a suggestion in the Correct To column, the value in the column is replaced by the selected suggestion.
The speller feature is enabled by default in the interactive cleansing stage. You can disable speller in the interactive cleansing stage by clicking the Enable/Disable Speller icon, or right-clicking in the domain values area, and then clicking Speller in the shortcut menu. To enable it back again, do the same.
Note
The speller feature is only available in the upper pane (domain values). Moreover, you cannot enable or disable speller for composite domains. The child domains in a composite domain that are of string type, and are enabled for the speller feature, will have the speller functionality enabled in the interactive cleansing stage, by default.
During the interactive cleansing stage, you can switch on the profiler by clicking the Profiler tab to view real-time data profiling and notifications. For more information, see Profiler Statistics.
After you have reviewed all the domain values, click Next to proceed to the export stage.
Export Stage
In the export stage, you specify the parameters for exporting your cleansed data: what and where to export.
On the Export page of the cleansing data quality wizard, select the destination type for exporting your cleansed data: SQL Server, CSV File, or Excel File.
Important
If you are using 64-bit version of Excel, you cannot export your cleansed data to an Excel file; you can export only to a SQL Server database or to a .csv file.
SQL Server: Select DQS_STAGING_DATA as the destination database if you want to export your data here, and then specify a table name that will be created to store your exported data. Otherwise, select another database if you want to export data to a different database, and then specify a table name that will be created to store your exported data. Your destination database must be present in the same SQL Server instance as Data Quality Server to be available in the Database drop-down list.
CSV File: Click Browse, and specify the name and location of the .csv file where you want to export the cleansed data. You can also type the file name for the .csv file along with the full path where you want to export the cleansed data. For example, “c:\ExportedData.csv”. The file is saved on the computer where Data Quality Server is installed.
Excel File: Click Browse, and specify the name and location of the Excel file where you want to export the cleansed data. You can also type the file name for the Excel file along with the full path where you want to export the cleansed data. For example, “c:\ExportedData.xlsx”. The file is saved on the computer where Data Quality Server is installed.
Select the Standardize Output check box to standardize the output based on the output format selected for the domain. For example, change the string value to upper case or capitalize the first letter of the word. For information about specifying the output format of a domain, see the Format Output to list in Set Domain Properties.
Next, select the data output: export just the cleansed data or export cleansed data along with the cleansing information.
Data Only: Click the radio button to export just the cleansed data.
Data and Cleansing Info: Click the radio button to export the following data for each domain:
<Domain>_Source: The original value in the domain.
<Domain>_Output: The cleansed values in the domain.
<Domain>_Reason: The reason specified for the correction of the value.
<Domain>_Confidence: The confidence level for all the terms that were corrected. It is displayed as the decimal value equivalent to the corresponding percentage value. For example, a confidence level of 95% will be displayed as .9500000.
<Domain>_Status: The status of the domain value after data cleansing. For example, Suggested, New, Invalid, Corrected, or Correct.
Record Status: Apart from having a status field for each mapped domain (<DomainName>_Status), the Record Status field displays the status for a record. If any of the domain’s status in the record is New or Correct, the Record Status is set to Correct. If any of the domain’s status in the record is Suggested, Invalid, or Corrected, the Record Status is set to the respective value. For example, if any of the domain’s status in the record is Suggested, the Record Status is set to Suggested.
Note
If you use reference data service for the cleansing operation, some additional data about the domain value is also available for exporting. For more information, see Cleanse Data Using Reference Data (External) Knowledge.
Click Export to export data to the selected data destination. If you selected:
SQL Server as the data destination, a new table with the specified name will be created in the selected database.
CSV File as the data destination, a .csv file will be created at the location on the Data Quality Server computer with the file name that you specified earlier in the CSV File name box.
Excel File as the data destination, an Excel file will be created at the location on the Data Quality Server computer with the file name that you specified earlier in the Excel file name box.
Click Finish to close the data quality project.
Profiler Statistics
The Profiler tab provides statistics that indicate the quality of the source data. Profiling helps you assess the effectiveness of the data cleansing activity, and you can potentially determine the extent to which data cleansing was able to improve the quality of the data.
The Profiler tab provides the following statistics for the source data, by field and domain:
Records: How many records in the data sample were analyzed for the data cleansing activity
Correct Records: How many records were found to be correct
Corrected Records: How many records were corrected
Suggested Records: How many records were suggested
Invalid Records: How many records were invalid
The field statistics include the following:
Field: Name of the field in the source data
Domain: Name of the domain that maps to the field
Corrected Values: The number of domain values that were corrected
Suggested Values: The number of domain values that were suggested
Completeness: The completeness of each source field that is mapped for the cleansing activity
Accuracy: The accuracy of each source field that is mapped for the cleansing activity
DQS profiling provides two data quality dimensions: completeness (the extent to which data is present) and accuracy (the extent to which data can be used for its intended use). If profiling is telling you that a field is relatively incomplete, you might want to remove it from the knowledge base of a data quality project. process..
Accuracy statistics will likely require more interpretation if you are not using a reference data service. If you are using a reference data service for data cleansing, you will have a level of trust in accuracy statistics. For more information about data cleansing using reference data service, see Cleanse Data Using Reference Data (External) Knowledge.
Cleansing Notifications
The following conditions result in notifications:
There are no corrections or suggestions for a field. You might want to remove it from mapping, run knowledge discovery first, or use another knowledge base.
There are relatively few corrections or suggestions for a field. You might want to remove it from mapping, run knowledge discovery first, or use another knowledge base.
The accuracy level of the field is very low. You might want to verify the mapping, or consider running knowledge discovery first.
For more information about profiling, see Data Profiling and Notifications in DQS. | https://docs.microsoft.com/en-us/sql/data-quality-services/cleanse-data-using-dqs-internal-knowledge | 2017-11-18T02:03:18 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.microsoft.com |
Data Binding Support Overview
Data binding allows you to establish a link between the UI and the underlying business logic and to keep them synchronized.ContextMenu involves the following property:
- RadContextMenu.ItemsSource - gets or sets the data source (IEnumerable) used to generate the content of the RadContextMenu control. It can be bound to data from a variety of data sources in the form of common language runtime (CLR) objects.
Supported Data Sources
You can bind RadContextContextMenu to a collection of business objects, you should use its ItemsSource property. If you want the changes to the collection to be automatically reflected to the RadMenuItems, the collection should implement the INotifyCollectionChanged interface. There is a buildContextMenuContextMenu. | https://docs.telerik.com/devtools/wpf/controls/radcontextmenu/populating-with-data/data-binding-support-overview | 2017-11-18T00:41:04 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.telerik.com |
Empty Cells in Combobox column
PROBLEM
When you use the GridViewComboBoxColumn you might encounter empty cells in that column:
CAUSE
First you need to check:
The Output for Binding exceptions
If the types are of the same type
If you do not encounter any of the above mentioned problems, then you probably use ElementName binding for that column, e.g.
Example 1: Binding with ElementName
<telerik:GridViewComboBoxColumn
This will not work, as the DataContext of the cell would not be the ViewModel, but the business object related to the row instead. We do not recommend such approach.
SOLUTION
There are two ways of solving the issue :
Setting the ItemsSource of GridViewComboBoxColumn
- Expose the ViewModel as a static resource on the page so that it can be easily accessible by the binding:
Example 2: Exposing the ViewModel as a Static Resource
<UserControl.Resources> <local:MyViewModel x: </UserControl.Resources>
- Set the ItemsSource of the ComboBox column:
Example 3: Setting the ItemsSource of GridViewComboBox declaratively
<telerik:GridViewComboBoxColumn
Example 4: Setting the ItemsSource of GridViewComboBoxColumn programmatically
private void gridView_DataLoaded(object sender, EventArgs e) { (this.radGridView.Columns["Category"] as GridViewComboBoxColumn).ItemsSource = GetCategories(); }
Private Sub gridView_DataLoaded(ByVal sender As Object, ByVal e As EventArgs) TryCast(Me.radGridView.Columns("Category"), GridViewComboBoxColumn).ItemsSource = GetCategories() End Sub
Setting the IsLightWeightModeEnabled property
As of R2 2016 GridViewComboBoxColumn exposes the IsLightWeightModeEnabled. When set to True, a completely new lookup logic is used which improves the performance of the column and could be a solution for a scenario when having empty cells in it. More information can be found in the ComboBoxColumn topic. | https://docs.telerik.com/devtools/wpf/controls/radgridview/troubleshooting/blank-cells | 2017-11-18T00:52:10 | CC-MAIN-2017-47 | 1510934804125.49 | [array(['images/gridview_troubleshoot_blank_cells.png',
'GridView Troubleshooting Blank Cells'], dtype=object)] | docs.telerik.com |
Gets and sets the property ClusterSecurityGroups.AWSSDK (Module: AWSSDK) Version: 1.5.60.0 (1.5.60.0) | http://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/P_Amazon_Redshift_Model_ModifyClusterRequest_ClusterSecurityGroups.htm | 2017-11-18T01:28:38 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.aws.amazon.com |
If you are split testing and want to delete a page variation, first click the gear icon for the variation you would like to delete.
Then, click the Delete Page Variation button.
Then, click OK.
If you have any questions regarding Split Testing or your Variation pages, click the ClickFunnels support blue button.
Did this answer your question? | http://docs.clickfunnels.com/funnels/funnel-tips-and-tricks/how-to-delete-a-page-variation | 2017-11-18T00:36:44 | CC-MAIN-2017-47 | 1510934804125.49 | [array(['https://downloads.intercomcdn.com/i/o/30685754/02c1292f4845e570abcc36ca/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/30685805/83062658644dfecc8a7f15c7/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/30685829/28c7024508f1e3a7f936a873/image.png',
None], dtype=object) ] | docs.clickfunnels.com |
Shinken Manual
About
About Shinken
Feature comparison between Shinken and Nagios
Shinken's notable innovations
The project Vision
Feature selection and release cycle
Getting Started
Advice for Beginners
Quickstart Installation Guides
Installations
Upgrading Shinken
Configuring Shinken
Configuration Overview
Main Configuration File (shinken.cfg) Options
Object Configuration Overview
Object Definitions
Custom Object Variables
Main advanced configuration
Running Shinken
Verifying Your Configuration
Starting and Stopping Shinken
The Basics
Setting up a basic Shinken Configuration
Monitoring Plugins
Understanding Macros and How They Work
Standard Macros in Shinken
Host Checks
Service Checks
Active Checks
Passive Checks
State Types
Time Periods
Determining Status and Reachability of Network Hosts
Notifications
Active data acquisition modules
Setup Network and logical dependencies in Shinken
Update Shinken
Medium
Business rules
Monitoring a DMZ
Shinken High Availability
Mixed GNU/linux AND Windows pollers
Notifications and escalations
The Notification Ways, AKA mail 24x7, SMS only the night for a same contact
Passive data acquisition
Snapshots
Advanced Topics
External Commands
Event Handlers
Volatile Services
Service and Host Freshness Checks
Distributed Monitoring
Redundant and Failover Network Monitoring
Detection and Handling of State Flapping
Notification Escalations
On-Call Rotations
Monitoring Service and Host Clusters
Host and Service Dependencies
State Stalking
Performance Data
Scheduled Downtime
Adaptive Monitoring
Predictive Dependency Checks
Cached Checks
Passive Host State Translation
Service and Host Check Scheduling
Object Inheritance
Advanced tricks
Migrating from Nagios to Shinken
Multi layer discovery
Multiple action urls
Aggregation rule
Scaling Shinken for large deployments
Defining advanced service dependencies
Shinken’s distributed architecture
Shinken’s distributed architecture with realms
Macro modulations
Shinken and Android
Send sms by gateway
Triggers
Unused nagios parameters
Advanced discovery with Shinken
Discovery with Shinken
Config
Host Definition
Host Group Definition
Service Definition
Service Group Definition
Contact Definition
Contact Group Definition
Time Period Definition
Command Definition
Service Dependency Definition
Service Escalation Definition
Host Dependency Definition
Host Escalation Definition
Extended Host Information Definition
Extended Service Information Definition
Notification Way Definition
Realm Definition
Arbiter Definition
Scheduler Definition
Poller Definition
Reactionner Definition
Broker Definition
Shinken Architecture
Arbiter supervision of Shinken processes
Advanced architectures
How are commands and configurations managed in Shinken
Problems and impacts correlation management
Shinken Architecture
Troubleshooting
FAQ - Shinken troubleshooting
Integration With Other Software
Integration Overview
SNMP Trap Integration
TCP Wrappers Integration
Use Shinken with Thruk
Nagios CGI UI
Thruk interface
Use Shinken with ...
Use Shinken with Centreon
Use Shinken with Graphite
Use Shinken with Multisite
Use Shinken with Nagvis
Use Shinken with Old CGI and VShell
Use Shinken with PNP4Nagios
Use Shinken with WebUI
Security and Performance Tuning
Security Considerations
Tuning Shinken For Maximum Performance
Scaling a Shinken installation
Shinken performance statistics
Graphing Performance Info With MRTG and nagiostats
How to monitor ...
Monitoring Active Directory
Monitoring Asterisk servers
Monitoring DHCP servers
Monitoring IIS servers
Monitoring Linux devices
Monitoring Linux devices
Monitoring Linux devices via a Local Agent
Monitoring Linux devices via SNMP
Monitoring Microsoft Exchange
Monitoring Microsoft SQL databases
Monitoring MySQL databases
Monitoring Routers and Switches
Monitoring Network devices
Monitoring Oracle databases
Monitoring Printers
Monitoring Publicly Available Services
Monitoring VMware hosts and machines
Monitoring Windows devices
Monitoring Windows devices via NSClient++
Monitoring Windows devices via WMI
How to contribute
Shinken packs
Shinken modules and Shinken packs
Help the Shinken project
Getting Help and Ways to Contribute
Shinken Package Manager
Development
Shinken Programming Guidelines
Test Driven Development
Shinken Plugin API
Developing Shinken Daemon Modules
Hacking the Shinken Code
Shinken documentation
Deprecated
Review of script’s option and parameters
Review of variable used in the script
Shinken on RedHat 6 with Thruk and PNP4Nagios HOWTO
Shinken modules
Reference
Exceptions
shinken
shinken Package
clients Package
daemons Package
discovery Package
misc Package
objects Package
webui Package
Shinken Manual
Docs
»
Troubleshooting
Edit on GitHub
Troubleshooting
¶
FAQ - Shinken troubleshooting
FAQ Summary
Frequently asked questions
General Shinken troubleshooting steps to resolve common issue
FAQ Answers
Review the daemon logs
Changing the log level during runtime
Changing the log level in the configuration
OSError read-only filesystem error
OSError too many files open
Notification emails have generic-host instead of host_name
Read the Docs
v: latest
Versions
latest
documentation
Downloads
PDF
HTML
Epub
On Read the Docs
Project Home
Builds
Free document hosting provided by
Read the Docs
. | http://testdocshinken.readthedocs.io/en/latest/10_troubleshooting/index.html | 2017-11-18T00:44:45 | CC-MAIN-2017-47 | 1510934804125.49 | [] | testdocshinken.readthedocs.io |
Row Details
Each RadTreeList.. | https://docs.telerik.com/devtools/silverlight/controls/radtreelistview/features/row-details | 2017-11-18T01:09:24 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.telerik.com |
Investor Guide¶
Note
This guide is still under construction. Please excuse if what you are searching for is not yet available
The investor guide serves as an entry point for existing and potential investors in the BitShares ecosystem. We here merely discuss the BTS token as well as investment opportunities available within BitShares itself and deliberately do not advertise 3rd party businesses. Please be reminded that this is an information platform and thus we do not give investment advice. | http://docs.bitshares.org/bitshares/investor/index.html | 2017-11-18T00:32:28 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.bitshares.org |
After you connect to a server, you can use the remote desktops and applications that you are authorized to use.
About this task
Before you have end users access remote desktops and applications, test that you can connect to remote desktop or application from the client system. use mycompany rather than mycompany.com.
Perform the administrative tasks described in Preparing Connection Server for Horizon Client.
If you are outside the corporate network and are not using a security server to access the remote desktop, verify that your client device is set up to use a VPN connection and turn that connection on.Important:
VMware recommends using a security server rather than a VPN. the RDP display protocol to connect to a remote desktop, verify that the AllowDirectRDP agent group policy setting is enabled.
If your administrator has allowed it, you can configure the certificate checking mode for the SSL certificate that the server presents. See Certificate Checking Modes for Horizon Client.
If you are using smart card authentication, you can configure Horizon Client to automatically use a local certificate or the certificate on your smart card. See Configure Horizon Client to Select a Smart Card Certificate.
If end users are allowed to use the Microsoft RDP display protocol, verify that the client system has Remote Desktop Connection Client for Mac from Microsoft, version 2.0 or later. You can download this client from the Microsoft Web site.
Procedure
- If a VPN connection is required, turn on the VPN.
- In the Applications folder, double-click VMware Horizon View Client (Horizon Client 3.0) or VMware Horizon Client (Horizon Client 3.1 and later).
- Click Continue to start remote desktop USB and printing services, or click Cancel to use Horizon Client without remote desktop USB and printing services.
If you click Continue, you must provide system credentials. If you click Cancel, you can enable remote desktop USB and printing services later.Note:
The prompt to start remote desktop USB and printing services appears the first time you launch Horizon Client. It does not appear again, regardless of whether you click Cancel or Continue.
- Connect to a server.
- If you are prompted for RSA SecurID credentials or RADIUS authentication credentials, type the user name and passcode and click Login.
- Type your user name and password, select a domain, and click Login.) : If multiple display protocols are configured for a remote desktop or application, select the.
- Double-click a remote desktop or application to connect..Note:
In Horizon Client 3.2 and later, if you are entitled to only one remote desktop on the server, Horizon Client automatically connects you to that desktop.
Results
After you are connected, the client window appears.
If you have Horizon Client 3.4 or later, the Sharing dialog box might appear. From the Sharing dialog box, you can allow or deny access to files on your local system. For more information, see Share Access to Local Folders and Drives.
If Horizon Client cannot connect to the remote desktop or application, View Agent or Horizon.
If you are using the RDP display protocol to connect to a remote desktop, verify that the client computer allows remote desktop connections. | https://docs.vmware.com/en/VMware-Horizon-Client-for-Mac/4.0/com.vmware.horizon.mac-client-doc/GUID-FDB963B9-1441-4ADA-94B1-2689BA673A4F.html | 2017-11-18T00:59:01 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.vmware.com |
You can use Active Directory (AD) to add a dynamic CVD collection. You can add CVDs to the collection by Active Directory group, organizational unit, or domain. You can create a filter for multiple Active Directory elements.
About this task
The Active Directory is updated whenever a device is authenticated. Active Directory information might change if the Active Directory is updated for that user or device.
Procedure
- In the Mirage Management console tree, expand the Inventory node, right-click Collections, and select Add a Collection.
- Type the name and description for this dynamic collection.
- Select Dynamic Collection.
- In the Column drop-down menu, set the filter to define the dynamic collection by Active Directory group, Active Directory organizational unit, or Active Directory domain.
You can select additional filters from the Column drop-down menu.
- Click Apply to view the CVDs filtered to the collection. These CVDs appear in the lower pane.
- Click OK. | https://docs.vmware.com/en/VMware-Mirage/5.8.1/com.vmware.mirage.admin/GUID-F1CCE286-1684-4612-9704-F0D1386D2BDE.html | 2017-11-18T00:59:36 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.vmware.com |
If you prefer an interactive NSX Edge installation, you can use a UI-based VM management tool, such as the vSphere Client connected to vCenter Server.
About this task
In this release of NSX-T, IPv6 is not supported.
Prerequisites
Verify that the system requirements are met. See System Requirements.
Verify that the required ports are open. See Ports and Protocols.
If you don't already have one, create the target VM port group network. Most deployments place NSX appliances on a management VM network.
If you have multiple management networks, you can add static routes to the other networks from the NSX appliance. Prepare management VM port group on which NSX appliances will communicate.
Plan your IPv4 IP address scheme. In this release of NSX-T, IPv6 is not supported.
Privileges to deploy an OVF template on the ESXi host.
Choose hostnames that datacenter.
The name you type will appear in the inventory.
The folder you select will.
- Set the NSX Edge password and IP settings.
For example, this screen shows the final review screen after all the options are configured.
- For optimal performance, reserve memory for the NSX component.
A memory reservation is a guaranteed lower bound on the amount of physical memory that the host reserves for a virtual machine, even when memory is overcommitted. Set the reservation to a level that ensures the NSX component has sufficient memory to run efficiently. See System Requirements.
Results
Open the console of the NSX Edge to track the boot process. If the window doesn’t open, make sure that pop-ups are allowed.
After the NSX Edge is completely booted, log in to the CLI and.
Ensure that your NSX Edge appliance has the required connectivity.
Make sure that you can ping your NSX Edge.
Make sure that the NSX Edge can ping its default gateway.
Make sure that your NSX Edge can ping the hypervisor hosts that are in the same network as the NSX Edge.
Make sure that the NSX Edge can ping its DNS server and its NTP server.
If you enabled SSH, make sure that you can SSH to your NSX Edge., you can correct this as follows:
stop service dataplane
set interface eth0 dhcp plane mgmt
Place eth0 into the DHCP network and wait for an IP address to be assigned to eth0.
start service dataplane. | https://docs.vmware.com/en/VMware-NSX-T/1.1/com.vmware.nsxt.install.doc/GUID-AECC66D0-C968-4EF2-9CAD-7772B0245BF6.html | 2017-11-18T00:59:40 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.vmware.com |
Catalog items are published blueprints for machines, software components, and other objects. Actions in the catalog management area are published actions that you can run on the provisioned catalog items. You can use the lists to determine what blueprints and actions are published so that you can make them available to service catalog users.
Published Catalog Items
A catalog item is a published blueprint. Published blueprints can also be used in other blueprints. The reuse of blueprints in other blueprints is not displayed in the catalog items list.
The published catalog items can also include items that are only components of blueprints. For example, published software components are listed as catalog items, but they are available only as part of a deployment.
Deployment catalog items must be associated with a service so that you can make them available in the service catalog to entitled users. Only active items appear in the service catalog. You can configure catalog items to a different service, disable it if you want to temporarily remove it from the service catalog, and add a custom icon that appears in the catalog.
Published Actions
Actions are changes that you can make to provisioned catalog items. For example, you can reboot a virtual machine.
Actions can include built-in actions or actions created using XaaS. Built-in actions are added when you add a machine or other provided blueprint. XaaS actions must be created and published.
Actions are not associated with services. You must include an action in the entitlement that contains the catalog item on which the action runs. Actions that are entitled to users do not appear in the service catalog. The actions are available for the provisioned item on the service catalog user's Items tab based whether they are applicable to the item and to the current state of the item.
You can add a custom icon to the action that appears on the Items tab. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-A493CEF6-C3DA-465F-966F-92E6285752C6.html | 2017-11-18T01:01:15 | CC-MAIN-2017-47 | 1510934804125.49 | [] | docs.vmware.com |
log.handlers¶
Module:
log.handlers¶
pyzmq logging handlers.
This mainly defines the PUBHandler object for publishing logging messages over a zmq.PUB socket.
The PUBHandler can be used with the regular logging module, as in:
>>> import logging >>> handler = PUBHandler('tcp://127.0.0.1:12345') >>> handler.>> logger = logging.getLogger('foobar') >>> logger.setLevel(logging.DEBUG) >>> logger.addHandler(handler)
After this point, all messages logged by
logger will be published on the
PUB socket.
Code adapted from StarCluster:
Classes¶
PUBHandler¶
- class
zmq.log.handlers.
PUBHandler(interface_or_socket, context=None)¶
A basic logging handler that emits log messages through a PUB socket.
Takes a PUB socket already bound to interfaces or an interface to bind to.
Example:
sock = context.socket(zmq.PUB) sock.bind('inproc://log') handler = PUBHandler(sock)
Or:
handler = PUBHandler('inproc://loc')
These are equivalent.
Log messages handled by this handler are broadcast with ZMQ topics
this.root_topiccomes first, followed by the log level (DEBUG,INFO,etc.), followed by any additional subtopics specified in the message by: log.debug(“subtopic.subsub::the real message”)
Tidy up any resources used by the handler.
This version removes the handler from an internal map of handlers, _handlers, which is used for handler lookup by name. Subclasses should ensure that this gets called from overridden close() methods..
flush()¶
Ensure all logging output has been flushed.
This version does nothing and is intended to be implemented by subclasses.
formatters= {40: <logging.Formatter object>, 10: <logging.Formatter object>, 20: <logging.Formatter object>, 50: <logging.Formatter object>, 30: <logging.Formatter object>}¶.
TopicLogger¶
- class
zmq.log.handlers.
TopicLogger(name, level=0)¶
A simple wrapper that takes an additional argument to log methods.
All the regular methods exist, but instead of one msg argument, two arguments: topic, msg are passed.
That is:
logger.debug('msg')
Would become:
logger.debug('topic.sub', 'msg')
callHandlers(record)¶.
exception(msg, *args, exc_info=True, **kwargs)¶
Convenience method for logging an ERROR with exception information..
findCaller(stack_info=False)¶
Find the stack frame of the caller so that we can note the source file name, line number and function name.
getChild(suffix)¶
Get a logger which is a descendant to this one.
This is a convenience method, such that
logging.getLogger(‘abc’).getChild(‘def.ghi’)
is the same as
logging.getLogger(‘abc.def.ghi’)
It’s useful, for example, when the parent logger is named using __name__ rather than a literal string.
getEffectiveLevel()¶
Get the effective level for this logger.
Loop through this logger and its parents in the logger hierarchy, looking for a non-zero logging level. Return the first one found.
handle(record)¶
Call the handlers for the specified record.
This method is used for unpickled records received from a socket, as well as those created locally. Logger-level filtering is applied.
hasHandlers()¶
See if this logger has any handlers configured.
Loop through all handlers for this logger and its parents in the logger hierarchy. Return True if a handler was found, else False. Stop searching up the hierarchy whenever a logger with the “propagate” attribute set to zero is found - that will be the last logger which is checked for the existence of handlers.
info(msg, *args, **kwargs)¶
Log ‘msg % args’ with severity ‘INFO’.
To pass exception information, use the keyword argument exc_info with a true value, e.g.
logger.info(“Houston, we have a %s”, “interesting problem”, exc_info=1)
log(level, topic, msg, *args, **kwargs)¶
Log ‘msg % args’ with level and topic.
To pass exception information, use the keyword argument exc_info with a True value:
logger.log(level, "zmq.fun", "We have a %s", "mysterious problem", exc_info=1)
makeRecord(name, level, fn, lno, msg, args, exc_info, func=None, extra=None, sinfo=None)¶
A factory method which can be overridden in subclasses to create specialized LogRecords. | http://pyzmq.readthedocs.io/en/latest/api/zmq.log.handlers.html | 2017-11-18T00:53:19 | CC-MAIN-2017-47 | 1510934804125.49 | [] | pyzmq.readthedocs.io |
Shinken provides a discovery mecanism in several steps. There are on a side the runners (cf Runners description) which are script that output in formatted way properties list of scanned host and on another side discovery rules which use properties list to tag hosts when some of these properties are meaningful.
There are two kinds of rules, those which generate a host definition and those which launch another runners more specific to the scanned object. Better an image than a long speech :
Filesystems
To make this plugin works you must have snmp activated on targeted hosts. Take care to activate it and make HOST-RESSOURCES MIB OID available to it. Beginning OID of HOST-RESSOURCES MIB is : .1.3.6.1.2.1.25. The default discovery runner rule trigger this runner on unix host with port 161 open.
FS discovery runner provides two modes : __macros__ and __tags__ modes. First one, __macros__ mode, will output a comma-separated list of filesystems under host macro ‘_fs’, the other one will output tags with filesystems mountpoint.
Important
All filesystems will output with character / replaced by an underscore _.
It is the easiest mode. It will add a line into host definition with host macro ‘_fs’ with comma-separated list of filesystems. Then it is only needed to write a service definition using that macro with shinken directive “duplicate_foreach”. Here is an example :
define service{ service_description Disks$KEY$ use generic-service register 0 host_name linux check_command check_linux_disks!$KEY$ duplicate_foreach _fs }
$KEY$ will be replaced by ‘_fs’ host macros value.
This mode will let you more flexibility to monitor filesystems. Each filesystems will be a tag named with filesystem mountpoint then you need discovery rules to tag scanned host with filesystem name. Example if you want to monitor “/var” filesystem on a host with following filesystems “/usr”, “/var”, “/opt”, “/home”, “/”. You will need a discovery rules to match “/var”, then a host template materializing the tag and a service applied to host template :
define discoveryrule { discoveryrule_name fs_var creation_type host fs var$ +use fs_var }
will match “/var” filesystem and tell to tag with “fs_var”.
define host{ name fs_var register 0 }
Host template used be scanned host.
define service{ host_name fs_var use 10min_short service_description Usage_var check_command check_snmp_storage!"var$$"!50!25 icon_set disk register 0 }
and service applied to “fs_var” host template, itself applied to scanned host.
Now, if you want to apply same treatment to several filesystems, like “/var” and “/home” by example :
define discoveryrule { discoveryrule_name fs_var_home creation_type host fs var$|home$ +use fs_var_home }
define host{ name fs_var_home register 0 }
define service{ host_name fs_var_home use 10min_short service_description Usage_var_and_home check_command check_snmp_storage!"var$$|home$$"!50!25 icon_set disk register 0 }
Pay attention to double “$$”, it is needed cause macros interpretation. When more than one “$” is used just double them else in this example we gotten Shinken trying to interprate macro ‘$|home$’.
Cluster
SNMP needed to make this runner works. You have to activate SNMP daemon on host discovered and make OID of clustering solution available to read. OID beginning for HACMP-MIB is : .1.3.6.1.4.1.2.3.1.2.1.5.1 and for Safekit is : .1.3.6.1.4.1.107.175.10.
Runner does only detects HACMP/PowerHA and Safekit clustering solutions for the moment. It will scan OID and return cluster name or module name list, depends on Safekit or HACMP. For an host with two Safekit modules test and prod, runner will output :
# ./cluster_discovery_runnner.py -H sydlrtsm1 -O linux -C public sydlrtsm1::safekit=Test,Prod | http://testdocshinken.readthedocs.io/en/latest/07_advanced/multi-layer-discovery.html | 2017-11-18T00:48:06 | CC-MAIN-2017-47 | 1510934804125.49 | [] | testdocshinken.readthedocs.io |
Set multiple startup projects
Visual Studio for Mac allows you to specify that more than one project should be started when you debug or run your solution.
To set multiple startup projects
In the Solution Window, select the solution (the top node).
Right-click the solution node and then select Set Startup Projects:
The Create Solution Run Configuration dialog box opens. This dialog box allows you to create a new named Solution Run Configuration for your solution. You can use any name you like. The default name is
Multiple Projects.
Select Create Run Configuration. The Solution Options dialog box opens with the new Solution Run Configuration selected:
Select the projects that you want to start when you debug or run your app from Visual Studio for Mac:
Select OK. The new Solution Run Configuration is set as the active run configuration:
Now the two projects are configured to start, which is represented by both projects appearing bold in the Solution Window. In the toolbar, the new run configuration is set as the current Solution Run Configuration. | https://docs.microsoft.com/en-us/visualstudio/mac/set-startup-projects?view=vsmac-2019 | 2022-01-17T02:52:31 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.microsoft.com |
Required Parameter settings on the storage array for ONTAP systems
Contributors
Download PDF of this page
Certain parameter settings are required on the storage array for the storage array to work successfully with ONTAP systems.
Required host channel director port configuration parameters
The host channel director port configuration parameters that must be set on the storage array are shown in the following table:
The
Volume Set Addressing parameter must be set the same way on all channel director ports to which the LUN is mapped. If the settings are different, ONTAP reports this as a LUN ID mismatch in
storage errors show output and in an EMS message.
Related information | https://docs.netapp.com/us-en/ontap-flexarray/implement-third-party/reference_required_parameters_for_emc_symmetrix_storage_arrays_with_data_ontap_systems.html | 2022-01-17T01:55:01 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.netapp.com |
You can install the Cloud Link Service on the following host types:
Perform the tasks in the sequence shown in the diagram.
Prerequisites
Note: Data will be fetched from only the host (Example: Log Decoder) on which the Cloud Link Service is installed.
To install Cloud Link Service:
Log in to the NetWitness Platform as an administrator and go to
Admin > Hosts.
The Hosts view is displayed.
Select a host (Example: Log Decoder) and click
.
The Install Services dialog is displayed.
Select the Cloud Link Service from the Category drop-down menu, and click Install.
Log in to the NetWitness Platform, and go to
Admin > Services to verify successful Cloud Link Service installation. | https://docs.netwitness.rsa.com/admin/installation/ | 2022-01-17T00:49:33 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.netwitness.rsa.com |
/MessageEndpointMappingsconfiguration section, that message cannot participate in sender-side distribution. The endpoint address specified by a message endpoint mapping is a physical address (
QueueName@MachineName, where
MachineName MSMQ Routing.
Message distribution
Every message is always delivered to a single physical instance of the logical endpoint. When scaling out, there are multiple instances of a single logical endpoint registered in the routing system. Each outgoing message must));
class RandomStrategy : DistributionStrategy { static Random random = new Random(); public RandomStrategy(string endpoint, DistributionStrategyScope scope) : base(endpoint, scope) { } public override string SelectDestination(DistributionContext context) { // access to headers, payload... return context.ReceiverAddresses[random.Next(context.ReceiverAddresses.Length)]; } }
To learn more about creating custom distribution strategies, see the fair distribution sample.
Events and subscriptions
Subscription requests:
Each subscriber endpoint instance will at start send a subscription message to each configured publisher instance. Each publisher instance receives a subscription requests and stores this. In most cases the subscription storage is shared.
Events:
When an event is published the event will be send to only one of the endpoint instances. Which instance depends on the distribution strategy
Limitations
Sender-side distribution does not use message processing confirmations (the distributor approach). Therefore the sender has no feedback on the availability of workers and, by default, sends the messages in a round-robin behavior. Should one of the nodes stop processing, the messages will pile. | https://docs.particular.net/transports/msmq/sender-side-distribution | 2022-01-17T01:34:01 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.particular.net |
Prepare for removal of third-party software
You can remove third-party software using the Sophos installer.
If you want the Sophos installer to remove any previously installed security software:
- On computers that are running another vendor’s anti-virus software, ensure that the anti-virus software user interface is closed.Note HitmanPro.Alert may already be installed either as a standalone product or from Sophos Central. You should remove HitmanPro.Alert before applying on-premise management from Sophos Enterprise Console.
- On computers that are running another vendor’s firewall or HIPS product, ensure that the firewall or HIPS product is turned off or configured to allow the Sophos installer to run.
If computers are running another vendor’s update tool, you may want to remove it. For more information, see the Sophos Enterprise Console help. | https://docs.sophos.com/esg/enterprise-console/5-5-2/help/en-us/esg/Enterprise-Console/tasks/Prepare_for_removal_of_third-party_software.html | 2022-01-17T00:34:24 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.sophos.com |
Decision Insight 20210329 Save PDF Selected topic Selected topic and subtopics All content Manual libraries About manual libraries Manual libraries is the way to manually install Apache Camel components and custom components. This is the recommended way to install custom or proprietary jars that are not available in managed libraries. For all other basic uses of Apache Camel components, we recommend to use managed libraries. Manual libraries dependencies One manual library can require one or more other libraries to work. In that case a library that depends on another library is called a library dependency. . Manual libraries workflow Creation To create a manual library, click the New Library button. Enter the following information: Name – Name of the library you are creating. Dependencies – Any library dependency necessary for the library you are currently creating to work. Library jar files – The jar files containing your custom code. Here is an example of a manual library created to provide the mail Camel component: To manually upload a jar file, click the Upload library jar button. You can get jar files from Apache Camel components or from in the the official Apache Camel documentation or in other proprietary Camel documentation. You must ensure the jar files that you use match the Camel version used in the product. For more information , see How to retrieve the Camel version of Decision Insight. Security best practices You must never upload untrusted files to your installation. Make sure to have every file scanned by an anti-malware software before it can be used in the product. For new manual libraries to be taken in account in existing routes, restart the routes. Update To modify any existing manual library in DI, select the library and edit the relevant fields in the Details area. For your changes to apply, you must do one of the following: Quick but could impact performance – Restart the node. Thorough method: Update the associated data integration connectors (save the connectors again). Restart the associated Data Integration routes (stop and start). Delete To delete a manual libraries, use the trash button in the libraries list. If you are unable to delete a manual library , it can be because the library is still being used: As a dependency by another library. In a connector. In a route. Related Links | https://docs.axway.com/bundle/DecisionInsight_20210329_allOS_en_HTML5/page/manual_libraries.html | 2022-01-17T01:55:14 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.axway.com |
Tau Compressor Plus¶
Controls¶
Mode selector knob is used to choose compression mode.
Threshold knob controls the level that the compressor starts reduce gain.
Makeup knob controls the gain applied after reduction.
Ratio knob controls how much the compressor reduce signal level. Larger value brings more reduction.
Mix knob controls blending ratio between unprocessed (dry) and processed (wet) signals. | https://docs.aom-factory.jp/plugins/tau_compressor_plus/index.html | 2022-01-17T00:14:26 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['../_images/screenshot8.png', '../_images/screenshot8.png'],
dtype=object)
array(['../_images/mode1.png', '../_images/mode1.png'], dtype=object)
array(['../_images/threshold1.png', '../_images/threshold1.png'],
dtype=object)
array(['../_images/makeup.png', '../_images/makeup.png'], dtype=object)
array(['../_images/ratio1.png', '../_images/ratio1.png'], dtype=object)
array(['../_images/mix1.png', '../_images/mix1.png'], dtype=object)] | docs.aom-factory.jp |
EFR Connect Mobile App
Introduction
Silicon Labs EFR Connect is a generic BLE mobile app for testing and debugging Bluetooth® Low Energy applications. With EFR Connect, you can quickly troubleshoot your BLE embedded application code, Over-the-Air (OTA) firmware update, data throughput, and interoperability with Android and iOS mobiles, among the many other features. You can use the EFR Connect app with all Silicon Labs Bluetooth development kits, Systems on Chip (SoC), and modules..
EFR Connect includes many demos to test sample apps in the Silicon Labs GSDK quickly. The following are the demo examples:
- Blinky: The ”Hello World” of BLE – Toggling a LED is only one tap away.
- Throughput: Measure application data throughput between the BLE hardware and your mobile device in both directions
- Health Thermometer: Connect to a BLE hardware kit and receive the temperature data from the on-board sensor.
- Connected Lighting DMP: Leverage the dynamic multi-protocol (DMP) sample apps to control a DMP light node from a mobile and protocol-specific switch node (Zigbee, proprietary) while keeping the light status in sync across all devices.
- Range Test: Visualize the RSSI and other RF performance data on the mobile phone while running the Range Test sample application on a pair of Silicon Labs radio boards.
EFR Connect helps developers create and troubleshoot Bluetooth applications running on Silicon Labs’ BLE hardware. The following is a rundown of some example functionalities.
Bluetooth Browser - A powerful tool to explore the BLE devices around you. Key features include the following:
- Scan and sort results with a rich data set
- Label favorite devices to surface on the top of scanning results
- Advanced filtering to identify the types of devices you want to find
- Save filters for later use
- Multiple connections
- Bluetooth 5 advertising extensions
- Rename services and characteristics with 128-bit UUIDs (mappings dictionary)
- Over-the-air (OTA) device firmware upgrade (DFU) in reliable and fast modes
- Configurable MTU and connection interval
- All GATT operations
Bluetooth Advertiser – Create and enable multiple parallel advertisement sets:
- Legacy and extended advertising
- Configurable advertisement interval, TX Power, primary/secondary PHYs
- Manual advertisement start/stop and stop based on a time/event limit
-.3.2 used to quickly test and demo sample apps in the Silicon Labs Bluetooth SDK.
Demo View
Health Thermometer
This demo scans for devices that advertise the Health Thermometer service (UUID 0x1809). Once connected, it subscribes Bluetooth - SoC Thermometer sample application from the Silicon Labs Bluetooth SDK. For more information, see the Getting Started section of the Bluetooth documentation.
Connected Lighting
The Connected Lighting demo demonstrates the Dynamic Multiprotocol (DMP) capabilities of Silicon Labs' devices and software. It requires two kits running dedicated sample apps which are documented in QSG155: Using the Silicon Labs Dynamic Multiprotocol Demonstration Applications in GSDK v2.x.
When launched, the demo will scan for devices which are advertising themselves as a DMP node, running BLE and a second wireless protocol, for example Zigbee or Proprietary.
Searching for Light DMP devices
After establishing the connection with the DMP node, the demo allows controlling the light on the DMP board by tapping the lightbulb icon. If the user changes the light status using the physical push button on the kits, the light status on the app will be updated accordingly.
Toggling the lights
Furthermore, the source of the last event is shown on the demo. For example, "Bluetooth" means it came from the mobile app, "Proprietary" means that the light was toggled by the push button on the Switch board which communicates with the Light board (DMP node) via proprietary protocol.
Range Test
The Range Test demo is fully documented in UG471: Flex SDK v3.x Range Test Demo User's Guide. The mobile app is first used to set up the TX and RX node parameters. Then, it is used to visualize all the meaningful RF data as the test is on-going, which is relayed to the mobile app by the RX node.
When launched, the demo will scan for devices which are advertising themselves as a Range Test nodes.
Searching for Range Test device
After the nodes start performing the range test, the mobile app displays the performance data from the RX node.
Executing the range test
Blinky
The Blinky demo is the "Hello World" of BLE, allowing to toggle an LED and receive button press indications from the BLE device. For more information, see the readme of the Bluetooth - SoC Blinky sample application from the Silicon Labs Bluetooth SDK.
When launched, the demo will scan for devices which are advertising themselves as a Blinky Example.
Searching for Blinky devices
Once connected, the user can control the LED on the Silicon Labs kit and receive button state change indications when pressing or releasing the push button.
Toggling the light and receiving button state change indications
Throughput
The Throughput demo allows measuring throughput between EFR32 and the mobile, on both directions. For more information, see the readme of the Bluetooth - SoC Throughput sample application from the Silicon Labs Bluetooth SDK.
When launched, the demo will scan for devices which are advertising themselves as a Throughput Example.
Searching for Throughput devices
Once connected, the user can visualize the data throughput between the devices, which includes receiving data from EFR32 (triggered by button press on the kit) or sending data to the kit (triggered from within the demo view).
Executing the Throughput test and Release Notes for the mobile app.
Information view
Develop View
The develop view contains a collection of tools focused on Bluetooth firmware application developers. They contain a number of advanced features that will be covered in the subsequent sections.
Develop View
Browser.
Browser scan list (star.
- Bonding.
Browser Advertising Details
If the advertisement is a known beacon format, the details are also parsed according to that specific beacon format. The image below shows the details from an iBeacon advertisement (Bluetooth - SoC iBeacon sample application running on an EFR32BG22 kit)
Browser Advertising Details from iBeacon
EFR Connect also supports extended advertising, however, that must also be supported by the specific mobile device where the app is running. If the phone supports it and devices are sending extended advertising, that will be visible at the top of the advertisement details.
Browser Extended Advertising
Start and Stop Scanning
The user can control when scanning starts and stops. There are only four situations where the app does this automatically:
- When entering the browser from the Develop main view, scanning automatically starts
- When connecting to a device scanning automatically stops
- Refreshing the list automatically starts scanning if it was stopped
- Locking the screen will automatically stop scanning.
Log
The log keeps a record of all the Bluetooth activity. It can be shared via email or other methods for later analysis, filtered via text based search, and cleared.
Browser Log
Connections
The connection drop-down lists all active connections, which allows the user to go into each of those connection views or disconnect from those devices. Devices can be disconnected individually through each device's Disconnect button or altogether with the Disconnect all button at the bottom.
Browser Connections list
You can also disconnect devices from the main browser view, if they are listed in the scan list. Connected devices have a Disconnect button in red color instead of the blue Connect button.
Browser Connections
If the scan list is refreshed, those devices will disappear from the list unless they continue advertising after a connection has been established.
Filter
The filter allows narrowing down your search to a list of devices that fulfills a specific set of parameters. To reach the filter tap the Filter icon on the top which will reveal the filter options on a pull-down.
Browser Filter above a given value so that a user can limit results by signal strength threshold.
- Beacon type: iBeacon, Altbeacon, Eddystone or Unspecified (none of the other 3).
- Only favorites
- Only connectable
- Only bonded
Touching the Search option causes the new filter parameters to take effect, which also automatically collapses the filter. An active filter can be visible in the filter icon where a small check mark is added. The filter will be applied to the list of devices regardless of whether scanning is on-going or not.
Tapping Reset clears all the filter parameters, disables the filter and collapses it.
You can also save an existing set of filter parameters for later use. To save filter selections, touch the Save button. Saved filter parameters are listed in the Saved Searches at the bottom of the filter pull-down.
Browser Filter Saved Searches
Loading filtering parameters can be done by touching a saved search. The selected saved search is marked with blue font. To delete a saved search, tap the garbage bin on the right hand side. To take the newly loaded filter parameters into effect, tap the Search button.
When a filter is active in the browser a bar shows up at the top listing the filtering criteria.
Browser filtered scan list
Sort
The sort feature allows sorting scan results by ascending/descending RRSI as well as A->Z or Z->A device name. To disable sorting simply tap on the enabled sorting option. To collapse the sorting options tap on the sort icon or anywhere outside the drop-down.
Browser Sort.
Browser Mapping Dictionary
Device
When either a connection with a device is established or the user goes to an existing connection via the connections pull-down, the app will go to the device view where it displays the GATT database for both the remote device as well as the local one.
Remote (Cliente) vs Local (Server)
At the bottom of the device view, the user can switch between the GATT of the remote device (where the app is in the client role) or the GATT of the local mobile device (where the app acts as server role). The representation of both GATT databases is the same, as well as the associated controls.
The local GATT database can be modified through the GATT Configurator feature..
Device Main View
GATT Characteristics
Tapping anywhere on the card causes it to expand, which reveals the characteristics within that service as well as the supported properties for each characteristic (e.g., read, indicate).
Device View Characteristics.
Device View Characteristics).
Device View Characteristics pasted or the app will throw an incorrect format error.
Device View Characteristics
GATT Descriptors
If a characteristic has descriptors, they are listed underneath the characteristic UUID.
Device View Characteristics.
Device View Additional Options
- Create Bond: Bond with this device
- prompt
-.
OTA in progress
For additional guidance on OTA process using EFR Connect, see Using EFR Connect Mobile App for OTA DFU.
Advertiser
The advertiser allows you to use the mobile phone as a BLE peripheral by creating and customizing advertisement sets, both in terms of their configurations as well as payload.
This functionality comes in handy when you have a single Silicon Labs kit for a Bluetooth product, but you still want to test/evaluate/develop applications that leverage the Silicon Labs Bluetooth stack functionalities as a central device.
Creating New Advertisement Sets
When entering the Advertiser for the first time, the user sees an empty list of advertisement sets because none have been created yet.
Advertising Main
By tapping the menu icon on the top-right corner of the main advertiser view, the user is presented with the following options:
- Device name
- Create new
- Switch all OFF
Advertiser Main Menu
Selecting Device name brings up a dialog to change the device name, which is a global setting and cannot be set individually for each advertisement set.
Advertiser Device Name
Create new adds a new advertisement set that only includes the flags AD Type in its payload. The advertisement set is represented by a card, similar to the device representation in the Browser.
Advertiser Main
The image above shows the main elements of the device card which are as follows:
- Name of the advertisement set. This is not the device name but simply a way for the user to identify each advertisement set.
- TX Power in dBm
- Advertisement interval in milliseconds
- Copy control. This creates an exact copy of the advertisement set which is added at the end of the list of advertisement sets.
- Edit control. Takes the user to the edit view of a specific advertisement set.
- Delete control. Deletes the specific advertisement set.
- Enable/disable advertisement set.
If the user taps the delete control, there is a confirmation prompt which can be dismissed on further delete actions.
Advertiser Delete Set
When tapping the device card, it expands with further advertisement details in the same way as device cards expand on the Browser when tapped.
Advertiser Main
Editing Advertisement Sets
When entering the edit view for a specific advertisement set, the user is shown a list of customization options. Those options are divided into three areas:
- Advertising Type
- Advertising Data and Scan Response Data
- Advertising Parameters
The view header displays the name of the advertisement set name, which can be changed via the very first text box at the top.
This is followed by the advertisement type, which can be selected between Legacy Advertising and Extended Advertising (introduced in Bluetooth 5.0 specification). Support for Extended Advertising depends on the underlying mobile phone and OS stack. If not supported, a note is displayed and the Extended Advertising option will be grayed out.
Advertising Settings
The following Legacy Advertising types are supported:
- Connectable, scannable
- Non-connectable, scannable
- Non-connectable, non-scannable
Advertiser Settings
Extended Advertising supports the following types:
- Connectable, non-scannable
- Non-connectable, scannable
- Non-connectable, non-scannable
Advertiser Settings
TX power can be optionally included as part of extended advertising, and, if the type is non-connectable, non-scannable, it can optionally be set as anonymous advertising.
Depending on the selected advertisement type, the Advertising Data and Scan Response data will be available for editing. For example, a connectable, non-scannable type does not have scan response so the Add Data Type button for Scan Response Data will be grayed out.
Advertiser Settings
Data is added by tapping Add Data Type and selecting which data type to add.
Advertiser Settings
The following advertising data types are currently supported:
- 0x09: Complete Local Name
- 0x03: Complete List of 16-bit Service Class UUIDs
- 0x07: Complete List of 128-bit Service Class UUIDs
- 0x0A: TX Power Level
- 0xFF: Manufacturer Specific Data
The selected data type will be added to the advertisement data and below Add Data Type button the amount of bytes still available is shown.
Advertiser Settings
When adding AD types 0x03 or 0x07, each service needs to be added individually by tapping Add 16-bit service or Add 128-bit service buttons.
Advertiser Settings
Services are added via a dialog box where the user can input the service UUID or name. For 16-bit services (so called adopted services), an auto-complete functionality suggests services as they are being typed into the text box. Below the box, a link Bluetooth GATT Services takes the user to the Bluetooth SIG Web page that lists all adopted services.
Advertiser Settings
Multiple services can be added under the 0x03 and 0x07 AD Types and can be individually deleted by tapping the trash bin icon. If the trash bin icon for the AD type is tapped, all services are removed together with the AD Type.
Advertiser Settings
Advertising Parameters are below the advertising data and scan response data options. These allow the user to select the advertising PHYs, advertising interval, and TX Power. Lastly, an advertising limit can also be selected for the advertising set, which automatically disables the advertisements after a set time period of a number of advertisement events.
Advertiser Settings
The advertising PHYs for the primary and secondary channel depend on support from the underlying smartphone and OS. They are only available if extended advertising is supported. The alternative PHYs for each of the primary and secondary channels will be available in the respective drop-down menus.
Advertiser Settings
Interoperability Test (IOP)
The Interoperability Test executes a number of BLE test cases against the Bluetooth - SoC Interoperability Test sample app running on selected EFR32 radio boards.
IOP test
Once the test sequence is finalized, you have the option to share the results via a cloud drive, email or other standard mediums.
IOP test finalized
More detailed information about IOP can be found in AN1346: Running the BLE Interoperability (IOP) Test Application Note and AN1309: Bluetooth Low Energy Interoperability Testing Report.
GATT Configurator
The GATT Configurator is a powerful feature which allows you to create and modify the local GATT database on the mobile device where EFR Connect has been installed. Furthermore, it allows you to import and export the GATT database definition between GATT Configurator on EFR Connect mobile app and GATT Configurator in Simplicity Studio 5.
This feature comes in handy when you are developing a Bluetooth device which has GATT client feature, as you can quickly emulate the GATT server side with EFR Connect without having to write a single line of code.
Creating a New GATT Server
When entering the GATT Configurator for the first time, the user sees an empty list of servers sets because none have been created yet.
GATT Configurator main view
By tapping the menu icon on the top-right corner of the main GATT Configurator view, the user is presented with the following options:
- Create new
- Import
- Export
GATT Configurator main menu
Selecting Create new will create a new empty GATT database.
GATT Configrator new server
Each GATT database is represented through a card which has the following elements:
- GATT database name.
- Number of services within the database.
- Enable/disable database (Note: only one GATT database can be enabled at any given time. Enabling a new database will automatically disable the existing one)
- Copy control. This creates an exact copy of the database which is added at the end of the list.
- Edit control. Takes the user to the edit view of a specific database.
- Delete control. Deletes the specific database.
Import allows bringing in a GATT database file from Simplicity Studio GATT Configurator, starting from GSDK 3.0 and newer. The file can be stored on standard OS mediums, such as local drive or cloud storage, for example Dropbox or Google Drive. Before moving the file from Simplicity Studio, the extension must be changed from btconf to xml.
Export allows converting one (or more) of the existing GATT databases into the an xml file, which can be directly imported into GATT Configurator in Simplicity Studio.
GATT Configurator export
The xml filename is taken from the GATT database name. When selecting more than one database, they will be exported as individual xml files, and if there are duplicate GATT database names then the filename will be appended with a number. In the above example, the exported xml files would be new_gatt_server.xml and new_gatt_server_2.xml.
Tapping on the database card expands it with a list of each included services and how many characteristics are within each of them.
GATT Configurator server details
Adding Services
Tapping the edit control on the GATT database takes the user to the edit view where one can add services, characteristics and descriptors.
GATT Configurator server edit view
A newly created database will show an empty list of services and option to start adding a service. This will bring a pop-up where both standard as well as custom services can be added.
GATT Configurator adding service
When typing a name, there is an auto-complete feature that matches the name against Bluetooth SIG standard services.
GATT Configurator adding service auto-complete
When a standard service is selected, there is the option to also add all mandatory service requirements (mandatory characteristics and descriptors within that service).
GATT Configurator adding a standard service
When adding a custom service, it is necessary to write both the service name and 128-bit UUID. The uuid will be automatically formatted and capped to the right size as it gets typed-in.
GATT Configurator adding a custom service
When the service gets added, it will be listed on the GATT server view.
GATT Configurator new service added
Tapping on More Info will expand the service to reveal its characteristics and descriptors.
GATT Configurator service details
At the service level, it is possible to make a copy of the service and delete using the respective controls which have the same icons as elsewhere on the mobile app. Editing a service is not possible.
For characteristics and descriptors the copy, edit and delete controls are available. The allowed operations on each characteristic and descriptor (read, write, write without response, notify, indicate) are represented through the same icons used on the Device View.
Adding Characteristics and Descriptors
At the bottom of each service, there is the option to add characteristics, and at the bottom of each characteristic the option to add descriptors.
GATT Configurator adding characteristics
Adding characteristics, brings a pop-up where the characteristics parameters can be configured. Similarly to how services are added, there is an auto-complete feature for Bluetooth SIG standard characteristics, but also custom characteristics can be added with their respective 128-bit UUIDs.
GATT Configurator adding characteristics
Below the characteristic, name and UUID the properties can be set as well as the access parameters. The access parameters may vary between Android and iOS platforms.
GATT Configurator adding characteristics
The last option is to set the initial value, which can be done in hex or ascii representation. It's also possible to simply leave it empty.
GATT Configurator adding characteristics
Adding descriptors is similar to adding characteristics but with less properties available.
GATT Configurator adding descriptors | https://docs.silabs.com/bluetooth/3.3/miscellaneous/mobile/efr-connect-mobile-app | 2022-01-17T01:22:42 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.silabs.com |
You are viewing documentation for version: 2.0 (latest) | 1.12 | Version History BootloaderStorageSlot_t Struct ReferenceApplication Interface > Application Storage Interface Description Information about a storage slot. Definition at line 52 of file btl_interface_storage.h. #include <btl_interface_storage.h> Data Fields uint32_t address Address of the slot. uint32_t length Size of the slot. The documentation for this struct was generated from the following file: btl_interface_storage.h | https://docs.silabs.com/mcu-bootloader/2.0/structBootloaderStorageSlot-t | 2022-01-17T00:36:13 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.silabs.com |
As a Service Broker cloud administrator or catalog consumer, you can use the resources node to manage your cloud resources.
You can locate and manage your resources using the different views. You can filter the lists, view resource details, and then run actions on the individual items. The available actions depend on the resource origin, for example, discovered compared to deployed, and the state of the resources.
If you are a Cloud Assembly administrator, you can also view and manage discovered machines.
To view your resources, select.
Working with the resource list
You can use the resource list to manage the machines, storage volumes, and networks that make up your deployments. In the resource list you can manage them in resource type groups rather than by deployments.
Similar to the deployment list view, you can filter the list, select a resource type, search , sort, and run actions.
If you click the resource name, you can work with the resource in the context of the deployment details.
You can locate and manage your deployments using the card list. You can filter or search for specific deployments, and then run actions on those deployments.
- Filter your list based on resource attributes.
For example, you can filter based on project, cloud types, origin, or other attributes.
- Search for resources based on name, account regions, or other values.
- Run available day 2 actions that are specific to the resources type and the resource state.
For example, you might power on a discovered machine if it is off. Or you might resize an onboarded machine.
List of managed resources by origin
You can use the Resources tab to manage the following types of resources.
What is the resource details view
You can use the resource details view to get a deeper look at the selected resource. Depending on the resource, the details can include networks, ports, and other information collected about the machine. The depth of the information varies depending on cloud account type and origin.
To open the details pane, click the resource name or the double arrows.
What day 2 actions can I run on resources
The available day 2 actions depend on the resource origin, cloud account, resource type, and state. | https://docs.vmware.com/en/vRealize-Automation/services/Using-and-Managing-Service-Broker/GUID-701D3D7C-3DDC-4733-8B5F-AC59ADBE1C34.html | 2022-01-17T00:24:44 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['images/GUID-31F69D3C-C80D-4E9B-9A5A-14ADAA75A561-low.png',
'Screenshot of a resources page with the actions menu showing the available actions to run on the selected machine.'],
dtype=object)
array(['images/GUID-FDA14CF2-5153-405D-9F78-02017EE2EBB8-low.png',
'Screenshot of a resources page with the detail pane showing the information collected about the selected machine.'],
dtype=object) ] | docs.vmware.com |
Case study—using a recursive CTE to compute Bacon Numbers for actors listed in the IMDb
The Bacon Numbers problem, sometimes referred to as "The Six_Degrees of Kevin Bacon" (see this Wikipedia article), is a specific formulation of the general problem of tracing paths in an undirected cyclic graph. It is a well-known set-piece exercise in graph analysis and is a popular assignment task in computer science courses. Most frequently, solutions are implemented in an "if-then-else" language like Java. Interestingly, solutions can be implemented in SQL and, as this section will show, the amount of SQL needed is remarkably small.
Representing actors and movies data
The Bacon Numbers problem is conventionally formulated in the context of the data represented in the IMDb—an acronym for Internet Movie Database. See this Wikipedia article. The data are freely available from IMDb but it's better, for the purposes of this section's pedagogy, to use sufficient subsets of the total IMDb content. These subsets restrict the population to the movies in which Kevin Bacon has acted and project the facts about the actors and movies to just their names. This entity relationship diagram (a.k.a. ERD) depicts the sufficient subset of the IMDb:
- each actor must act in at least one movie
- each movie's cast must list at least one actor
The actors are the nodes in an undirected cyclic graph. There is an edge between two actors when they both have acted together in at least one movie.
The ERD implies the conventional three-table representation with an "actors table, a "movies_table", and a "cast_members" intersection table. Create them with this script:
cr-actors-movies-cast-members-tables.sql
drop table if exists actors cascade; drop table if exists movies cascade; drop table if exists cast_members cascade; create table actors( actor text primary key); create table movies(movie text primary key); create table cast_members( actor text not null, movie text not null, constraint cast_members_pk primary key(actor, movie), constraint cast_members_fk1 foreign key(actor) references actors(actor) match full on delete cascade on update restrict, constraint cast_members_fk2 foreign key(movie) references movies(movie) match full on delete cascade on update restrict );
Of course, the IMDb has facts like date of birth, nationality, and so on for the actors and like release date, language and so on for the movies. The information would doubtless allow the "cast_members" table to have columns like "character_name". The data that this case study uses happen to include the movie release date, in parentheses, after the movie name in a single text field. The pedagogy is sufficiently served without parsing out these two facts into separate columns in the "movies" table.
Notice that the notion of a graph is so far only implied. A derived "edges" table makes the graph explicit. An edge exists between a pair of actors if they are both on the cast list of the same one or more movies. The SQL needed to populate the "edges" table from the "cast_members" table is straightforward.
When the paths have been found, it's useful to be able to annotate each edge with the list of movies that are responsible for its existence. The annotation code could, of course, derive this information dynamically. But it simplifies the overall coding scheme if a denormalization is adopted to annotate the paths at the time that they are discovered. Another departure from strict purity simplifies the overall coding scheme further. If the row for the edge between a particular pair of actors records the list of movies that brought it (rather than recording many edges, each with a single-valued "movie" attribute), then the path-tracing code that the section Using a recursive CTE to traverse graphs of all kinds presented can be used "as is". To this end, the columns that represent the actor pair in the "edges" table are called "node_1" and "node_2" rather than the more natural "actor_1" and "actor_2".
Note: The previous paragraph was stated as something of a sketch. In fact, each edge between a pair of actors is recorded twice—once in each direction, as is described in the section Graph traversal using the denormalized "edges" table design. Each of the edges in such a pair is annotated with the same list of movies.
This code creates the "edges" table and the procedure that populates it.
cr-actors-movies-edges-table-and-proc.sql
drop table if exists edges cascade; create table edges( node_1 text, node_2 text, movies text[], constraint edges_pk primary key(node_1, node_2), constraint edges_fk_1 foreign key(node_1) references actors(actor), constraint edges_fk_2 foreign key(node_2) references actors(actor)); drop procedure if exists insert_edges() cascade; create or replace procedure insert_edges() language plpgsql as $body$ begin delete from edges; with v1(node_1, movie) as ( select actor, movie from cast_members), v2(node_2, movie) as ( select actor, movie from cast_members) insert into edges(node_1, node_2, movies) select node_1, node_2, array_agg(movie order by movie) from v1 inner join v2 using (movie) where node_1 < node_2 group by node_1, node_2; insert into edges(node_1, node_2, movies) select node_2 as node_1, node_1 as node_2, movies from edges; end; $body$;
Notice the second
INSERT statement that re-inserts all the discovered directed edges in the reverse direction. The value of this denormalization is explained in the section Finding the paths in a general undirected cyclic graph.
Create a stored procedure to decorate path edges with the list of movies that brought each edge
The stored procedure (actually a table function) will annotate each successive edge along each path in the specified table with the list of movies that brought that edge.
When there are relatively few paths in all, as there are with the synthetic data that the section Computing Bacon Numbers for a small set of synthetic actors and movies data uses, it's convenient simply to show all the decorated paths.
However, with a data set as big as the IMDb (even the imdb.small.txt subset has 160 shortest paths), it's useful to be able to name a candidate actor and to annotate just the shortest path (more carefully stated, one of the shortest paths) from Kevin Bacon to the candidate. The site The Oracle of Bacon exposes this functionality.
The first formal parameter of the function "decorated_paths_report()" is mandatory and specifies the table in which the paths are represented. The second optional formal parameter, "terminal", lets you specify the last node along a path. If you omit it, the meaning is "report all the paths"; and it you supply it, the meaning is "report the path to the specified actor".
Dynamic SQL is therefore needed for two reasons, each of which alone is a sufficient reason:
The table name isn't known until run-time.
There may, or may not, be a
WHEREclause.
cr-decorated-paths-report.sql
drop function if exists decorated_paths_report(text, text) cascade; -- This procedure is more elaborate than you'd expect because of GitHub Issue 3286. -- It says this in the report: -- -- Commit 9d66392 added support for cursor. Our next releases will have this work. -- However, there are a few pending issues. -- -- Meanwhile, this code works around the issue by using a single-row SELECT... INTO. -- This is made possible by using array_agg(). But you cannot aggregate arrays of -- different cardinalities. So a second-level workaround is used. Each array in -- the result set is cast to "text" for aggregation and then cast back to the array -- that it represents in the body of the FOREACH loop that steps through the text -- values that have been aggregated. -- -- When a "stable" release supports the use of a cursor variable, this implementation -- will be replaced by a more straightforward version. create function decorated_paths_report(tab in text, terminal in text default null) returns table(t text) language plpgsql as $body$ <<b>>declare indent constant int := 3; q constant text := ''''; stmt_start constant text := 'select array_agg((path::text) '|| 'order by cardinality(path), terminal(path), path) '|| 'from ?'; where_ constant text := ' where terminal(path) = $1'; all_terminals_stmt constant text := replace(stmt_start, '?', tab); one_terminal_stmt constant text := replace(stmt_start, '?', tab)||where_; paths text[] not null := '{}'; p text not null := ''; path text[] not null := '{}'; distance int not null := -1; match text not null := ''; prev_match text not null := ''; movies text[] not null := '{}'; movie text not null := ''; pad int not null := 0; begin case terminal is null when true then execute all_terminals_stmt into paths; else execute one_terminal_stmt into paths using terminal; end case; foreach p in array paths loop path := p::text[]; distance := cardinality(path) - 1; match := terminal(path); -- Rule off before each new match. case match = prev_match when false then t := rpad('-', 50, '-'); return next; end case; prev_match := match; pad := 0; t := rpad(' ', pad)||path[1]; return next; <<step_loop>> for j in 2..cardinality(path) loop select e.movies into strict b.movies from edges e where e.node_1 = path[j - 1] and e.node_2 = path[j]; pad := pad + indent; <<movies_loop>> foreach movie in array movies loop t := rpad(' ', pad)||movie::text; return next; end loop movies_loop; pad := pad + indent; t := rpad(' ', pad)||path[j]; return next; end loop step_loop; end loop; t := rpad('-', 50, '-'); return next; end b; $body$;
Computing Bacon Numbers for synthetic data and the real IMDb data
The section Computing Bacon Numbers for a small set of synthetic actors and movies data demonstrates the approach using a small data set.
The section Computing Bacon Numbers for real IMDb data shows how to ingest the raw
imdb.small.txt file into the same representation that was used for the synthetic data. (The subsection Download and ingest some IMDb data explains how to download the IMDb subset that this case study uses.)
While a straightforward use of a recursive CTE can be used to produce the solution for the small synthetic data set quickly, it fails to complete before crashing (see the section Stress testing different find_paths() implementations on maximally connected graphs) when it's applied to the ingested
imdb.small.txt data. The approach described in the How to implement early path pruning section comes to the rescue. | https://docs.yugabyte.com/latest/api/ysql/the-sql-language/with-clause/bacon-numbers/ | 2022-01-17T01:34:52 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['/images/section_icons/api/ysql.png',
'Case study—using a recursive CTE to compute Bacon Numbers on IMDb data Case study—using a recursive CTE to compute Bacon Numbers on IMDb data'],
dtype=object)
array(['/images/api/ysql/the-sql-language/with-clause/bacon-numbers/imdb-erd.jpg',
'imdb-erd'], dtype=object) ] | docs.yugabyte.com |
Get-Configservice¶
Gets the service record entries for the Configuration Service.
Syntax¶
Get-ConfigService [-Metadata <String>] [-Property <String[]>] [-ReturnTotalRecordCount] [-MaxRecordCount <Int32>] [-Skip <Int32>] [-SortBy <String>] [-Filter <String>] [-FilterScope <Guid>] [-BearerToken <String>] [-TraceParent <String>] [-TraceState <String>] [-VirtualSiteId <String>] [-AdminAddress <String>] [<CommonParameters>]
Detailed Description¶
Returns instances of the Configuration Service that the service publishes. The service records contain account security identifier information that can be used to remove each service from the database.
A database connection for the service is required to use this command.
Related Commands¶
Parameters¶
Input Type¶
¶
Return Values¶
Citrix.Configuration.Sdk.Service¶
The Get-Config.-Config...} ZoneName : Primary ZoneUid : 46fefe15-3b89-4a1f-9df6-a0f7c19956c0
Description¶
Get all the instances of the Configuration Service running in the current service group. | https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/latest/CentralConfig/Get-ConfigService/ | 2022-01-17T00:27:37 | CC-MAIN-2022-05 | 1642320300253.51 | [] | developer-docs.citrix.com |
Date: Mon, 17 Jan 2022 00:46:31 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_14674_1247053436.1642380391390" ------=_Part_14674_1247053436.1642380391390 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The Support Manager is a support plugin included with Blesta. It may be = installed under [Settings] > [Company] > [Plugins], but is = not installed by default. It integrates a ticket system and knowledgebase t= o allow for clients to request support.
MailParse
The Support Manager requires the MailParse and Iconv PHP extensions in o= rder to parse tickets sent in through email. If tickets will be accepted vi= a email, these PHP extensions must be installed. MailParse can typically be= installed via root SSH access by running "pecl install mailparse". If usin= g cPanel/WHM, PECL extensions can be installed through WHM, see<= /a>
* Plugins are powerful and are not limited to the tie-ins listed abo= ve, these are just some of the most common
Support related email templates can be found under [Settings] >&= nbsp;[Company] > [Emails] > Email Templates, in the section labe= led f= ollowing tags are supported:
Due to the nature of tag objects containing several fields, many of whic= h are likely irrelevant for use in email templates, but may be useful to yo= u in certain circumstances, an example dump of the {staff} tag object is sh= own below.
stdClass Object ( [id] =3D> 2 [user_id] =3D> 3 [first_name] =3D> First [last_name] =3D> Last [email] =3D> [email protected] [email_mobile] =3D>=20 [status] =3D> active [username] =3D> [email protected] [two_factor_mode] =3D> none [two_factor_key] =3D> 6017d177a590b9cf0c04806e3634566a8f00190f [two_factor_pin] =3D>=20 [groups] =3D> Array ( [0] =3D> stdClass Object ( [id] =3D> 1 [company_id] =3D> 1 [name] =3D> Administrators ) ) [notices] =3D> Array ( [0] =3D> stdClass Object ( [staff_group_id] =3D> 1 [staff_id] =3D> 1 [action] =3D> payment_ach_approved ) ) )=20
The ticket received email template allows for the following tags:
The ticket received (mobile) email template allows for the following tag= s:
Support related message templates can be found under [Settings] >= ; [Company] > [Messengers] > Message Templates, in the sect= ion labeled "Plugin Templates".
The ticket received message template allows for the following tags:
POP/IMAP Settings
When selecting POP or IMAP for email handling, be sure to select the pro= per Security option for the port you are using. None, TLS, or SSL may be re= quired depending upon the port number you are connecting to. See mlc/list-of-smtp-and-pop3-servers-mailserver-list.html for a list of co= mmon configurations.
Importing Email
When importing email via piping or POP/IMAP, choose your department emai= l address carefully. The address should not be used with PayPal or any othe= r third-party service, which could be considered a security risk.
A support department may be created under [Support] -> [Departments] = -> [Add Department].
If your department is set up to receive tickets via email piping, your s= erver must be configured to pipe those messages into the support manager.= p>
Addon Companies
If you have any addon companies, you will need to copy pipe.php to somet= hing like pipe2.php, and edit the $company_id variable in the top of the fi= le to reference the proper company ID that email should be piped to. The pr= imary company is 1, a second company would be 2, etc. Go to [Settings] >= [System Settings] > Companies, and you can determine the company ID by = the "Edit" link. A link with a URL of "/admin/settings/system/companies/edi= t/2/" has a company ID of 2.
In your /etc/aliases file, it mi= ght look something like this..
support: "|/usr/bin/php /home/user/public_html/plugins/support_manager= /pipe.php"=20
If you experience any trouble with that, you can alternatively pipe mail= to "index.php plugin/support_manager/ticket_pipe/index/1" assuming "1" in = the company ID.
support: "|/usr/bin/php /home/user/public_html/index.php plugin/suppor= t_manager/ticket_pipe/index/1"=20
Hashbang
The first line of the file should begin with the hashbang, for example:<= /p>
#!/usr/bin/php -q=20
#!/usr/local/bin/php -q=20
On some systems you may need to create a symlink to php and pipe.php in = /etc/smrsh..
cd /etc/smrsh ln -s /usr/bin/php ln -s /home/user/public_html/plugins/support_manager/pipe.php=20
If you are having trouble with piping, it may be useful to try piping a = sample email manually via SSH. This will bypass your mail server and help d= etermine where the issue exists.
To enable error reporting, edit /config/blesta.php and change Co= nfigure::errorReporting(0); to Configure::errorReport= ing(-1); You may also wish to enable System Debug (Version 4.= 0+). To do so, change Configure::set("System.debug", false); to Configure::set("System.debug", true);
Disable System.debug
When you're done, be sure to disable System.debug. If left enabled, it m= ay cause licensing issues.
You should change these settings back when you are done.
Copy the following content into a text file called email.txt, changing a= ll instances of [email protected] to your department emai= l. You may also need to update the from address [email protected]= to a valid customer email if your department only allows customer= s to create tickets. Then upload to your web server at ~/plugins/support_ma= nager/
From - Thu Jun= 1 10:15:17 2017 Return-Path: <[email protected]> Delivered-To: [email protected] Received: from localhost (localhost.localdomain [127.0.0.1]) =09by mail.blesta.com (iRedMail) with ESMTP id 2E05935C4CC =09for <[email protected]>; Thu, 1 Jun 2017 13:14:48 -0400 (EDT) To: Support Department <[email protected]> From: Test Testerson <[email protected]> Subject: Test Ticket Subject Message-ID: <[email protected]>=3Dutf-8 Content-Transfer-Encoding: 7bit Test ticket body. Please disregard.=20
Via SSH, change directories to ~/plugins/support_manager/ i.e cd= /path/to/plugins/support_manager/
Run the following command:
./pipe.php < email.txt
Did that work? Did you get any errors?
The Markdown syntax is supported for ticket replies. Adding links, makin= g text bold, italic and more can be all be done by using Markdown. See = ; own-here/wiki/Markdown-Cheatsheet to learn more.
Tickets may be created by admins, clients, or emailed in to the system d= epending on the support department settings. As of version 2.5.0 of the Sup= port Plugin, client contact's may also create and reply to tickets on behal= f of the client, and will be included in any email correspondence for ticke= ts they are directly involved in. As of version 2.14.0 tickets can be= deleted. This can be done by changing a ticket to the 'Trash' status= , selecting it in the ticket list for that status, and using the delete opt= ion in the action window that appears.
The Knowledge Base was added in version 2.6.0, and included in Blesta v3= .4.0. It allows for the creation of a directory structure and articles to s= upplement client support. The public-facing knowledge base pages are viewab= le from your Blesta installation at "/client/plugin/support_manager/knowled= gebase/" (e.g. If Blesta is installed under the "billing" directory on your= domain, the URL to the knowledge base would be illing/client/plugin/support_manager/knowledgebase/).
To enable messengers for the Support System, you need to go to Support &= gt; Staff. Simply tick the boxes that you would like to get a notification = when a ticket has been created.
= p>= p>
Issue: Blesta shows a number of tickets for the Stat= us, but the tickets are not displayed.
Solution: Tickets that have no replies are not displaye= d. It should not be possible to create a ticket without any reply. If you i= mported from another system, or there was an issue that prevented ticket re= plies from being created, then you may wish to delete tickets that do not h= ave any replies. To do so, backup your database first= and then run the following query in its entirety.
DELETE support= _tickets.* FROM support_tickets LEFT JOIN ( SELECT `st`.`id` FROM `support_tickets` `st` INNER JOIN `support_replies` `sr` ON `sr`.`ticket_id` =3D `st`.`id` GROUP BY `st`.`id` ) t ON t.id =3D support_tickets.id WHERE t.id IS NULL=20 | https://docs.blesta.com/exportword?pageId=2163362 | 2022-01-17T00:55:04 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.blesta.com |
Function
GLibconvert
Declaration
gchar* g_convert ( const gchar* str, gssize len, const gchar* to_codeset, const gchar* from_codeset, gsize* bytes_read, gsize* bytes_written, GError** error )
Description
Converts a string from one character set to another..)
Using extensions such as “//TRANSLIT” may not work (or may not work
well) on many platforms. Consider using
g_str_to_ascii() instead. | https://docs.gtk.org/glib/func.convert.html | 2022-01-17T02:09:23 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.gtk.org |
Azure identity and access management considerations
Most architectures have shared services that are hosted and accessed across networks. Those services share common infrastructure and users need to access resources and data from anywhere. For such architectures, a common way to secure resources is to use network controls. However, that isn't enough.
Provide security assurance through identity management: the process of authenticating and authorizing security principals. Use identity management services to authenticate and grant permission to users, partners, customers, applications, services, and other entities.
Checklist
How are you managing the identity for your workload?
- Define clear lines of responsibility and separation of duties for each function. Restrict access based on a need-to-know basis and least privilege security principles.
- Assign permissions to users, groups, and applications at a certain scope through Azure RBAC. Use built-in roles when possible.
- Prevent deletion or modification of a resource, resource group, or subscription through management locks.
- Use Managed Identities to access resources in Azure.
- Support a single enterprise directory. Keep the cloud and on-premises directories synchronized, except for critical-impact accounts.
- Set up Azure AD Conditional Access. Enforce and measure key security attributes when authenticating all users, especially for critical-impact accounts.
- Have a separate identity source for non-employees.
- Preferably use passwordless methods or opt for modern password methods.
- Block legacy protocols and authentication methods.
Azure security benchmark
The Azure Security Benchmark includes a collection of high-impact security recommendations you can use to help secure the services you use in Azure:
The questions in this section are aligned to the Azure Security Benchmarks Identity and Access Control.The questions in this section are aligned to the Azure Security Benchmarks Identity and Access Control.
Azure services for identity
The considerations and best practices in this section are based on these Azure services: crossing segmentation boundaries.
Related links
Five steps to securing your identity infrastructure
Go back to the main article: Security | https://docs.microsoft.com/en-us/azure/architecture/framework/security/design-identity | 2022-01-17T02:54:07 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.microsoft.com |
Create an Asset
To share an API, API Group, policy, example, template, or connector in either an Anypoint Exchange private instance or an Exchange public portal, create an asset of that asset type.
The way to create an asset depends on its type. The type is set when the asset is created and cannot be changed.
For OAS, RAML, RAML fragments, AsyncAPI, HTTP, WSDL, and Custom assets, create the asset directly using the Exchange Publish new asset menu.
For an example or template, create each using the Mavenize feature in Anypoint Studio, and publish each to Exchange.
For a connector, policy, example, or template, see Publish and Deploy Exchange Assets Using Maven.
You can also publish a RAML, or OAS or RAML fragments, using Design Center, and publish the API to Exchange.
Another way to publish these type of assets is using the Exchange Experience API.
RAML and OAS API specifications can be uploaded to Exchange with the Anypoint Platform Command Line Interface (CLI).
For example:
exchange asset upload --classifier raml --apiVersion v1 --name HelloWorld --mainFile helloworld.raml helloword/1.0.0 /Users/nmouso/Downloads/helloworld.raml.zip
API groups are published to Exchange from Anypoint API Manager.
Mule applications are published to Exchange from Anypoint Runtime Fabric and managed in Runtime Fabric, and are not visible in the Exchange user interface.
External libraries are published to Exchange from the Exchange Maven Facade API and managed with this API, and are not visible in the Exchange user interface.
Each asset in Exchange is versioned. You can manage which versions are visible by deprecating a version to hide it, and you can delete versions if needed. All versions of an asset always have the same type.
In addition to asset versions, APIs have a consumer facing API version that is shown at the top of an asset’s detail screen. This API version is defined by API providers.
Exchange asset versions follow the Semantic Versioning model of major, minor, and patch releases. For example, if an asset is version 2.4.6, then its major version is 2.x.x, its minor version is 2.4.x, and its patch version is 2.4.6.
To select a minor version of an asset, use the menu next to the asset name at the top of the asset portal. To select a patch version of an asset, use the version list at the right.
RAML versions have been automatically generated for all OAS assets published since January 12, 2019, and OAS 2.0 versions have been automatically generated for all RAML assets published since that date.
Exchange fully supports RAML fragments. Currently, Exchange does not support importing and managing OAS 3.0 components as Exchange dependencies inside a specification.
This illustration summarizes how each asset type (in green) appears in Exchange:
Asset Limits
An asset’s version format must use semantic versioning. Asset fields have these length limits:
Exchange prevents resource exhaustion attacks by limiting the number of asset versions that can be published.
The limit is 500 assets for root organizations with trial accounts, and 100,000 (one hundred thousand) assets for other root organizations.
The limit of dependencies for an asset is 100. For example, in Anypoint API Designer a RAML is limited to 100 RAML Fragment dependencies, and in Anypoint API Manager an API Group is limited to 100 REST API dependencies.
This asset count does not include deleted assets or assets generated by Exchange such as Mule 3 and Mule 4 connectors generated automatically from APIs.
When a root organization reaches 80% of its asset limit, Exchange shows a warning.
When a root organization reaches its asset limit, Exchange shows an error. Also, using the Publish new asset button on the home page or the Add new version button on the asset detail page shows an error explaining that the limit is reached and no more assets can be added.
Check the limit of your root organization with a curl command like this:
curl -X GET \ \ -H 'Authorization: bearer ANYPOINT_TOKEN'
Replace
ROOT_ORGANIZATION_ID with your root organization ID, and replace
ANYPOINT_TOKEN with the authorization token that has permissions for the root organization.
Get lists of assets with a curl command like this:
curl -X POST \ \ -H 'content-type: application/json' \ -H 'Authorization: bearer ANYPOINT_TOKEN' \ -d '{"query":"{assets(query: { masterOrganizationId:\"ROOT_ORGANIZATION_ID\", limit: 20, offset: 10 }) {groupId assetId version}}"}'
Replace
ROOT_ORGANIZATION_ID with your root organization ID, and replace
ANYPOINT_TOKEN with the authorization token that has permissions for the root organization. Vary the
limit and
offset values as needed.
See Search Using the Graph API for more information about searching for assets.
Create an API Asset
An API asset specifies an interface completely, including its functions, descriptions, how to handle return codes, and dependencies.
Creating an asset sets the asset type, which cannot be changed. All versions of an asset always have the same type.
To create an API asset:
In Exchange, select Publish new asset.
Enter the asset portal name.
Select the asset type from the drop-down list:
REST API - RAML: Provide a RAML API specification file. RAML specifications must be a RAML file (.raml).
REST API - OAS: Provide an OAS API specification file. OAS specifications can be either a YAML (.yaml) or JSON (.json) file. Exchange supports OAS 2.0 and OAS 3.0 specifications.
SOAP API - WSDL: Provide a WSDL API specification file. SOAP specifications file can be either a WSDL (.wsdl) or XML (.xml) file.
API Spec Fragment - RAML : Provide an API Fragment RAML specification file. Fragment specifications must be a RAML file (.raml).
HTTP API - This asset does not require a file. This type of asset provides an API endpoint that is defined by API Manager.
For RAML, API Spec Fragment, OAS, and WSDL assets:
Select Choose File to locate the API specification file.
Select the main file of the API.
If the file is a ZIP, the selected main file must be in the root directory of the ZIP file. If the file is not a ZIP or if the file is a ZIP file with only one main file, then the main file is selected automatically.
(Optional) Select Advanced and edit the advanced settings: GroupId, AssetId, Version, and API version. Exchange generates the group ID, asset ID, and version for you, and you can change these values here. You can change an API asset’s version (asset version) and API version separately. The advanced settings are most often used to change the asset version.
Select Publish.
To create a ZIP file and put the items in a folder into the root directory of the ZIP file, use a command like this. Replace
myfolder with the name of your folder and
name.zip with the name for the new ZIP file.
cd myfolder; zip -r name.zip .
Do not use a command such as
zip -r name.zip myfolder, which puts the folder into the root directory of the ZIP file. This causes an error message such as
The zip file does not contain a .raml file in the root directory.
Create an API Group Asset
An API Group is an asset that enables organizations to publish a group of API instances as a single unit, so that developer client applications can access the APIs as a group, using one client ID and, optionally, client secret.
API Groups are created in Anypoint API Manager and published to Exchange.
API Groups have major versions, such as 1.0.0, 2.0.0, or 3.0.0, but no minor or patch versions. Every version of an API Group has one or more API Group instances. Each API Group instance is a group of API instances that all have the same identity provider and the same environment type, either production or sandbox. An API Group instance can contain API instances from multiple business groups.
Creating an API Group requires having the Group Administrator permission, as well as the Asset Administrator permission for each of the APIs in the group. This ensures that a group creator can change the underlying APIs between private and public visibility levels.
If an API Group has any private APIs, you see a warning when publishing the API Group to your public portal. The public portal never shows private content. To ensure that all content in the API Group is published to the public portal, go to each API’s page in Exchange and make the API public, and then publish the API Group.
Warnings are also shown when publishing an API Group to your public portal if all of its API Group instances are private, and when making an API Group instance public if all of its API instances are private.
Create a Custom Asset
A custom asset lets you share information about any aspect of your organization such as announcements, documentation, videos, and sharing files. You can add an optional file to your Custom asset that users can download. The file is stored in Exchange.
Note: Exchange only permits the following file types as the optional file in a Custom asset:
Images:
.jpg, .jpeg, .png, .gif, .svg
Documents:
.docx, .pdf, .pptx, .rtf, .vsdx, .vssx
Compressed files:
.zip, .tgz, .jar, .gz, .7z
Text files:
.txt, .json, .raml, .yaml, .yml, .md, .csv, .xml, .xsd, .wsdl, .html, .pom, .log, .sql
A file without a file type is not allowed. All file types are case insensitive.
SVG files are limited to 100 KB or less.
MuleSoft recommends deleting all old Custom assets containing files of types that are not permitted. For any Custom asset containing a non-supported file, delete the asset to remove the file.
To create a custom asset:
In Exchange, select Publish new asset.
Enter the asset portal name.
Select the asset type Custom from the drop-down list.
(Optional) To share a file with users, choose the file.
(Optional) Select Advanced and edit the advanced settings: GroupId, AssetId, and Version. Exchange generates the group ID, asset ID, and version for you, and you can change these values here. The advanced settings are most often used to change the asset version.
Select Publish.
Asset Name, Icon, and Description Properties
You can create assets in API Designer or Exchange. After the asset is created, the name, icon, and description properties can only be changed in the Exchange asset details page as described in Describe an Asset.
Properties of Assets Created in API Designer
API Designer reads and uses the name of the asset from the RAML specification.
Before the asset is published in Exchange, you can edit the name in API Designer.
After the asset is published in Exchange, the name is used as the asset portal name. Any publication from API Designer to update the version of an asset in Exchange has the name field disabled, and shows the name from Exchange. The name, icon, and description properties can only be changed in the Exchange asset details page.
Create a New Version of an Existing Asset
If you have contributor or admin access to an asset, you can add a new version from the asset portal:
In Exchange, open the asset list and select the asset.
In the sidebar Versions table, select Add New Version.
Enter the Version (asset version).
(Optional) Enter the API version.
(Optional) Choose a file to upload.
Select Publish.
The new version of the asset has the same name, icon, and description as the previous version. Any changes to these properties apply to all versions of the asset.
Help us improve with your feedback. | https://docs.mulesoft.com/exchange/to-create-an-asset | 2022-01-17T01:29:06 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['_images/ex2-exchange-assets.png', 'ex2 exchange assets'],
dtype=object)
array(['_images/ex2-exchange-assets.png', 'ex2 exchange assets'],
dtype=object) ] | docs.mulesoft.com |
Removing destination array configuration
Contributors
The following steps show how to remove the destination array configuration from the source array after FLI migration is complete.
Steps
Log in to Hitachi Storage Navigator Modular as system.
Select AMS 2100 array and click Show and Configure Array.
Log in using root.
Expand Groups and select Host Groups.
Select cDOT_FLI host group and click Delete Host Group.
Confirm the host group deletion. | https://docs.netapp.com/us-en/ontap-fli/san-migration/task_removing_destination_array_configuration.html | 2022-01-17T02:05:08 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.netapp.com |
SecureTransport 5.5 Administrator Guide Save PDF Selected topic Selected topic and subtopics All content Subscriptions A subscription provides a functional connection between a user account and an application. For each subscription, SecureTransport creates a subscription folder and stores and manages all files that are transferred or transformed as a result of the application activity. A single application can have subscriptions from multiple accounts and a single account can subscribe to multiple applications. An account can subscribe to new instances of the same application as long as each instance has a unique subscription folder name. Additional transfer configurations are possible for subscriptions. Use subscriptions to trigger the execution of specific actions, defined in the respective application, when a subscription event occurs, such as an incoming file transfer in the dedicated subscription folder. Note The application is not triggered if the file is uploaded into another folder first and is then moved or copied into the subscription folder. The following topics describe how to manage subscriptions: Encryption options - Lists the subscription encryption options. Post-transmission actions - Lists the subscription post-transmission actions. Manage subscriptions - Provides how-to instructions for managing subscriptions. Related Links | https://docs.axway.com/bundle/SecureTransport_55_AdministratorGuide_allOS_en_HTML5/page/Content/AdministratorsGuide/accounts/c_st_subscriptions.htm | 2022-01-17T01:53:35 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.axway.com |
Amplify Subscription Usage With Edge Agent 1.0 Save PDF Selected topic Selected topic and subtopics All content Secure the connection with the agent This topic describes establishing security on the connection between the connected product and the agent. This is applicable when the connected product uses the Lumberjack protocol and not QLT. If your product uses QLT as the connection protocol with the agent, see the user documentation for the product for configuration details. Authentication and SSL The agent uses certificate-based mutual authentication for event ingestion over Lumberjack. Sample certificates are included in the agent image file you downloaded when completing Deploy the agent. These are intended for testing, but you should replace them with your own. See Manage certificates to do so. To configure the server for Secure Sockets Layer (SSL) authentication, you must obtain server certificates and: Copy the truststore to the agent's capture_truststore shared file Copy the keystore to the agent's capture_keystore shared file In the agent's service_configuration shared file, update: Property Value com.systar.mercury.keystorePassword <keystore_password> To configure the Lumberjack client for SSL authentication, obtain the client certificates and add them to your client of choice. Manage certificates The agent uses certificate-based mutual authentication for data reception via the Lumberjack protocol. This following is what you need. We recommend using KeyStore Manager to manage certificates. The commands documented in this topic assume that you do, but you can use your own commands to generate certificates without KeyStore Manager. Certificate authority A trusted certificate authority (CA) is needed to generate certificates. You can use a commercial CA or create your own for self-signed certificates. Create a CA for self-signed certificates Execute the following command to generate the CA and generate a private key for it: $ ksm createCA <ca_name> -password <ca password> The generated CA keystore is created under <keystore manager installation directory>/var/data/<ca_name>.p12 Set up an external CA for certificate generation with KeyStore Manager To import a CA for use with KeyStore Manager: PKCS #12: Copy CA under <keystore manager installation directory>/var/data/<ca_name>.p12 Java KeyStore (JKS): Update the keystore type for CA files in KeyStore Manager configuration to JKS, following its documentation Copy CA under <keystore manager installation directory>/var/data/<ca_name>.jks Server certificates You need the following: A JKS truststore containing the CA. A JKS keystore containing the host, on which the container is deployed, private key and signed by the CA. 
Generate server certificates Create a new host key in the CA keystore by doman-name system (DNS) or Internet protocol (IP): By DNS $ ksm createHostKey <host_key_name> -ca <ca_name> -password <ca password> -dns <host dns entry> By IP $ ksm createHostKey <host_key_name> -ca <ca_name> -password <ca password> -ip <host ip> Export the host key and certificate in JKS format: $ ksm exportHostKey <host_key_name> -ca <ca_name> -password <ca password> -format JKS -exportpassword <key password> The generated files are created on disk: The trust store that contains the CA certificate: <keystore manager installation directory>/var/work/<ca_name>/<host_key_name>/jks/<ca_name>_truststore.jks The trust store that contains the host private key and its certificate signed by the CA: <keystore manager installation directory>/var/work/<ca_name>/<host_key_name>/jks/<host_key_name>_keystore.jks Client certificates You need the following: An OpenSSL CA corresponding to the CA used for server certificate signing An OpenSSL private key with: Alias tenant_1 dname CN=tenant_1 An OpenSSL certificate signed with the CA for the key Generate client certificates Create a host key in the CA keystore. $ ksm createHostKey tenant_1 -ca <ca_name> -password <ca password> -dns unchecked Note that the DNS is purposely invalid. It does not matter since it is not validated by either side in the SSL communication. Export the key and certificates in OpenSSL format. $ ksm exportHostKey tenant_1 -ca <ca_name> -password <ca password> -format openssl -exportpassword <client key password> The following generated files are create on disk: The OpenSSL CA: <keystore manager installation directory>/var/work/<ca_name>/tenant_1/openssl/<ca_name>.crt The OpenSSL private key: <keystore manager installation directory>/var/work/<ca_name>/tenant_1/openssl/tenant_1.key The OpenSSL key certificate: <keystore manager installation directory>/var/work/<ca_name>/tenant_1/openssl/tenant_1.crt Related Links | https://docs.axway.com/bundle/subusage_en/page/secure_the_connection_with_the_agent.html | 2022-01-17T02:08:53 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.axway.com |
jax.numpy.argmax¶
- jax.numpy.argmax(a, axis=None, out=None)[source]¶
Returns the indices of the maximum values along an axis.
LAX-backend implementation of
argmax().
Original docstring below.
- Parameters
-
- Returns
index_array – Array of indices into the array. It has the same shape as a.shape with the dimension along axis removed. If keepdims is set to True, then the size of axis will be 1 with the resulting array having same shape as a.shape.
- Return type
ndarray of ints | https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.argmax.html | 2022-01-17T00:11:51 | CC-MAIN-2022-05 | 1642320300253.51 | [] | jax.readthedocs.io |
The statistics for pools and volume groups are calculated by aggregating all volumes, including reserved capacity volumes.
Reserved capacity is used internally by the storage system to support thin volumes, snapshots, and asynchronous mirroring, and are not visible to I/O hosts. As a result, the pool, controller, and storage array statistics may not add up to be the sum of the viewable volumes.
However, for application and workload statistics, only the visible volumes are aggregated. | http://docs.netapp.com/ess-11/topic/com.netapp.doc.ssm-sam-116/GUID-6A3110D3-9CD0-478F-B9C3-AEB0AC384B30.html | 2022-01-17T01:03:36 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.netapp.com |
In recent times, the need to provide advanced remote support to field-based workers has come into sharp focus. Many new tasks in our work life can benefit from mixed reality technology. Devices like the Microsoft HoloLens 2 and the Dynamics 365 Remote Assist platform offer a game-changing solution to this problem.
However, in many environments where this technology could be most useful, being able to provide the ubiquitous Wi-Fi connectivity for real-time voice and video communication is a serious challenge.
Enter Celona: capable of delivering the enhanced wireless coverage on a clean RF spectrum and predictable application performance with our private LTE solution. In this document we will share how to:
Connect the Microsoft HoloLens 2 to a Celona network using a USB modem, and
Configure a Celona MicroSlicing policy to enforce application-level performance metrics for throughput, latency and packet error rate.
Connecting Hololens 2 to a Celona private LTE network
The HoloLens 2 headset runs a specialized version of the Windows OS which includes support for USB modems that work with
Microsoft RNDIS drivers.
In our lab testing, we have found that not all RNDIS drivers are equal, with only one of the USB adapters we tested being properly recognized by the HoloLens 2, so far.
The USB-C adapter by Quanta has support for private spectrum for cellular wireless, such as CBRS (LTE band 48) in the United States. It also provides support for the 5G n48 band, which is fantastic news. At just 41 grams or 1.5oz, this USB-C modem from Quanta can be easily attached to the HoloLens headset without affecting important ergonomics and user comfort.
For additional details on this specific adapter, please ping us at [email protected]. Here is a quick picture on how it can be installed with a Microsoft HoloLens - in this case with a 3D printed accessory created by one of our partners, attached to the headset.
Creating Application Policy using Celona MicroSlicing
The built-in Dynamics 365 Remote Assist application within HoloLens 2 connects the field-based engineers to remote support operators using Microsoft Teams. In our testing, we confirmed that the ports used for voice and data traffic for the Remote Assist application - enabling us to create the relevant Celona MicroSlicing policy in order to enforce specific service levels for the application at hand.
To create the MicroSlicing policy required within Celona Orchestrator, face of the Celona platform, we will:
Create a
Device Groupfor HoloLens 2 users,
Define the
Applicationfor Remote Assist, and
Assign the new
MicroSlicing policyto both.
Log in to your Celona Orchestrator account, select
Device Groups from the left-hand menu, and click
Create Device Group.
Once you have named the group and selected which devices will be included click
Add to save. Now select
Applications from the left-hand menu and click
Create Application. The Remote Assist application uses UDP ports 3478 to 3481 for the voice and video data, these are entered as below;
Remote Start Port: 3478
Remote End Port: 3481
Click
Add to save your new application. We can now move on to creating the MicroSlicing policy by choosing
MicroSlicing in the left-hand menu.
Because the multimedia data has variable bit rate, we have chosen
Non-GBR with
Interactive Multimedia QoS class. Next, choose the
Device Group and
Application you have previously created and click
Save to apply your Celona MicroSlicing policy to the network in real-time.
After this configuration, any new HoloLens 2 device that's onboarded to the network would be applied the relevant QoS policy when interacting with the Dynamics 365 Remote Assist application.
Thanks to this integration, field-based teams can now enjoy an interference cellular wireless network indoor and outdoors - private to the enterprise - at levels of reliability that has not so far been observed on private enterprise wireless networks.
To see the Celona solution in action, check out our getting started guide. | https://docs.celona.io/en/articles/5132236-microsoft-hololens-2-headset-on-celona | 2022-01-17T01:10:11 | CC-MAIN-2022-05 | 1642320300253.51 | [array(['https://celona-df2d49dbca1b.intercom-attachments-1.com/i/o/322425525/8a2a67ff3847b91de5327ea5/microsoft-hololens-2-demo.jpg?expires=1619355600&signature=0d3ccbc1d9e91b5eba7e5475dea21f561bcdbb25d72c48c91a06fdbde813f4eb',
'<a href='], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/417631749/9949f5da75115bf0befa15da/USB-C+5G+Adapter+for+Microsoft+Hololens.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/322420292/9667f1660aa36b68f5d29d44/image.png?expires=1619355600&signature=fadfccf086a1b5a45297f2ffe9aff7b8b05b0d09511ec90134dcb9f6e9ea7a79',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/322420098/93e4b1ead634a6fe2269912c/image.png?expires=1619355600&signature=0a26a471b4a37987627ea34d7c66f362d1cea3e5887971ed76539082869a1375',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/322421457/0083340469bbebbf95dee6ff/image.png?expires=1619355600&signature=477e825146dfd593fc2f0c658f150ce957d534c92057e7096e39fe9b4629a391',
None], dtype=object) ] | docs.celona.io |
Get Docker
Update to the Docker Desktop terms
Professional use of Docker Desktop in large organizations (more than 250 employees or more than $10 million in annual revenue) requires users to have a paid Docker subscription. While the effective date of these terms is August 31, 2021, there is a grace period until January 31, 2022, for those that require a paid subscription. For more information, see the blog Docker is Updating and Extending Our Product Subscriptions.. | https://docs.docker.com/get-docker/ | 2022-01-17T00:41:39 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.docker.com |
Installing v6.11.0
Overview
This document covers how to install a cluster of HYPR servers. To perform the installation, you will need to do the following:
- Review the prerequisites
- Perform any needed customizations via the env vars in envOverrides.sh
- Run the Installer (bash script)
There is one main HYPR service installed in this process:
- Control Center (CC) - UAF and FIDO2 services, plus the administrative console for the HYPR platform
The installation script will also install the following services; alternatively, you can provide these yourself before the installation:
- MySQL DB 8.0.15 – persistent storage
- Redis server 4.0.13 – caching layer
- Hashicorp Vault – safe storage provider
- Nginx 1.14.0 – provides SSL termination at the application server
Prerequisites
Install mode
Decide on one of the following install modes:
- Single node: 1 server node running HYPR and dependencies. This is a good option for exploring the product but not recommended for production.
- Cluster: You will need a minimum of 3 nodes/servers. These servers will be used, exclusively, to run HYPR and required dependencies.
You may add additional nodes at a later time.
Check Requirements for HYPR Servers
- Clean install of RHEL 7.5.
- RHEL 8 is NOT supported at the moment
- SSH access to the server
- Ability to create a user account ('hypr' by default) and grant it ownership of the installation directory ('/opt/hypr' by default)
- Servers must be able to access the external HYPR license server
License key for HYPR
- This will need to be provided to the Control Center for it to serve API requests
Installation Steps
Install required packages
Three external packages are required:
- Python 3. Needed for install only. Not used at runtime.
- libaio, needed for MySQL to run. See MySQL docs for details
- numactl-libs (numa), also needed by MySQL
Run the following as 'root' or another appropriately permissioned user:
# Python 3
yum install -y python3

# MySQL dependency - libaio
yum -y install libaio

# MySQL dependency - numa
yum -y install numactl-libs
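If you want to confirm the packages landed before continuing, a quick sanity check such as the following can help (these are generic RHEL commands, not part of the HYPR installer):

python3 --version
ldconfig -p | grep -E 'libaio|libnuma'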
Creating user and install dir
- Create the 'hypr' user and grant ownership of the install dir:
mkdir /opt/hypr -p
groupadd hypr
useradd hypr -g hypr
chown hypr:hypr /opt/hypr -R

# Switch to the 'hypr' user
su hypr
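To double-check the account and ownership before copying the install package, something like the following can be used (illustrative only; the expected owner and group are both 'hypr'):

id hypr
ls -ld /opt/hypr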
- Copy the .tar.gz (Install pkg) to the server onto which you are installing HYPR
- Extract the .tar.gz install pkg to the /opt/hypr dir
cd /opt/hypr

# Install pkg is the .tar.gz file
cp <install pkg> .

# Unarchive
tar -xvf <install pkg>
The contents of the /opt/hypr dir should look similar to the listing below.
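The exact contents depend on the package version; the names below are a representative sketch assembled from the scripts and directories referenced elsewhere in this guide, not an authoritative listing:

ls /opt/hypr
# env.sh
# envOverrides.sh
# generateMySQLInitScript.sh
# startHyprDependencies.sh
# startHyprServices.sh
# mysql/
# nginx/
# redis/
# vault/
# ...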
Installing a Single Node
Ensure that you have installed the required packages and created the 'hypr' user before proceeding.
If you are installing a cluster, skip this section and jump to Installing a cluster.
There are two (2) steps to installing a single node.
- Installing HYPR dependencies
- Installing HYPR services
Step 0: Set two required env vars
HYPR_MASTER_FQDN and HYPR_MASTER_IP_ADDRESS. For example, in bash:
$ export HYPR_MASTER_FQDN=hypr.example.com
$ export HYPR_MASTER_IP_ADDRESS=192.168.100.111
Step 1: Install HYPR dependencies (MySQL, Redis, Vault, Nginx). Run the following:
cd /opt/hypr
./startHyprDependencies.sh --single --all --enc <encryption key>
What does this script do?
Installs and starts:
- prepackaged MySQL 8 DB in /opt/hypr/mysql/mysql-8.0.15
- adds required database and users to MySQL
- adds required metadata to MySQL
- prepackaged Redis master server in /opt/hypr/redis/hypr-redis-4.0.13
- prepackaged Vault in /opt/hypr/vault/vault-0.10.3
- prepackaged Nginx in /opt/hypr/nginx/nginx-1.16.1
Generates Vault, Redis, CC keys. Stores them in:
- /opt/hypr/.install
To view the .install file
- cat /opt/hypr/.install
The dependencies can also be started individually. For example, if you are troubleshooting, you can choose to restart the individual component. To see usage instructions:
cd /opt/hypr
./startHyprServices.sh --help

Usage
=================================================================
Specify the mode to install in. One of the following:
 -c, --cluster        Install in Cluster config
 -s, --single         Install in Single node config

Specify the services to install. One or more of the following:
 -a, --all            Start all required services (CC)
 -r, --rp             (CC) Start RP/CC service

Encryption key. One of the following:
 -e, --enc            Enc key for encrypting install metadata
 -f, --enc-file       File containing enc key. Only one line with the enc key

 -v, --reinit-vault   Re-write contents of Vault. Use this if config changes and Vault needs updating
=================================================================
Options for providing the encryption key (illustrated in the example after this list):
- Via command line: using the --enc flag. Do not type the key on the screen; the script will present an encryption key prompt
- Via file: using the --enc-file parameter. The file is a text file containing the encryption key on the first line. This option is useful for integrating with infrastructure-as-code tools.
- Via env variables: using the ENC_KEY variable
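A sketch of what each option looks like in practice (the file path, its permissions, and the exact interaction between ENC_KEY and the --enc flags are assumptions here — confirm against the usage output above for your version):

# 1) Command line: pass --enc and enter the key at the prompt
./startHyprDependencies.sh --single --all --enc

# 2) File: the first (and only) line of the file holds the key
echo '<your-encryption-key>' > /opt/hypr/enc.key
chmod 600 /opt/hypr/enc.key
./startHyprDependencies.sh --single --all --enc-file /opt/hypr/enc.key

# 3) Environment variable
export ENC_KEY='<your-encryption-key>'
./startHyprDependencies.sh --single --all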
Step 2: Install and start HYPR services. Run the following:
cd /opt/hypr
./startHyprServices.sh --single --all --enc
# You will be prompted for the encryption key
At this point, you should have a running HYPR instance. See the post-install steps.
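For a quick, generic health check of the node (these are standard Linux commands rather than HYPR-specific tooling, and the ports shown are the upstream defaults for Nginx, MySQL, Redis, and Vault — your deployment may differ):

ps -ef | grep -i '[h]ypr'
ss -tlnp | grep -E ':443|:3306|:6379|:8200'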
Installing a cluster
A cluster must have a minimum of three (3) nodes. This is recommended for typical workloads. An odd number of cluster members prevents a split-brain condition in the network.
Steps to installing a cluster:
- Create user and install dir on all nodes (see the previous section)
- Install required packages on all nodes (see the previous section)
- On the Master node:
  - Install HYPR dependencies
  - Install HYPR services
- On each Worker node:
  - Install HYPR dependencies
  - Install HYPR services
The first node the installation is performed on is designated as the 'MASTER'. The subsequent nodes are designated as Worker nodes.
Install dependencies on Master
You will need to set some env variables to guide the install process. All the relevant env variables are in
/opt/hypr/env.sh. Do not modify this file; it is read-only.
Copy the variables you want to modify into
/opt/hypr/envOverrides.sh
and set them to your custom values. Example entry in envOverrides.sh:
# env.sh file can change between releases
# Put your env var overrides in envOverrides.sh instead of modifying env.sh directly
# This insulates you from changes in env.sh during upgrades
export HYPR_MASTER_FQDN=rp.mycompany.com
Populate the following mandatory env vars:
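As a hedged example (the variable names are the ones referenced elsewhere in this guide; the values and the MASTER default for HYPR_NODE_ROLE are assumptions), an envOverrides.sh on the master might contain:

# Example envOverrides.sh entries on the MASTER node (values are placeholders)
export HYPR_MASTER_FQDN=rp.mycompany.com
export HYPR_MASTER_IP_ADDRESS=192.168.100.111
export HYPR_NODE_ROLE=MASTER   # change to WORKER on worker nodes (see below)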
Additional configuration
To install HYPR dependencies, run the following:
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc
# You will be prompted for the encryption key
Installing dependencies on a WORKER node
Before you start installing dependencies on the Worker nodes, ensure that the dependencies have started successfully on the Master node.
To install dependencies on a WORKER node:
- Copy the /opt/hypr/.install.enc file from the master to the same location on the worker node
- Copy the /opt/hypr/envOverrides.sh file from the master to the same location on the worker node
- Change the HYPR_NODE_ROLE in envOverrides.sh to be WORKER
- In /opt/hypr run: ./startHyprDependencies.sh --cluster --all --enc
Repeat the above steps for each Worker node you are installing.
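A minimal sketch of preparing a worker node from the master (the hostname, the hypr user, and the use of scp/ssh/sed are assumptions; any equivalent copy-and-edit mechanism works):

# From the MASTER node (worker1.example.com and the hypr user are placeholders)
scp /opt/hypr/.install.enc hypr@worker1.example.com:/opt/hypr/.install.enc
scp /opt/hypr/envOverrides.sh hypr@worker1.example.com:/opt/hypr/envOverrides.sh

# On the worker node: set the role to WORKER and start the dependencies
ssh hypr@worker1.example.com
sed -i 's/^export HYPR_NODE_ROLE=.*/export HYPR_NODE_ROLE=WORKER/' /opt/hypr/envOverrides.sh
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc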
Installing HYPR services on Master
Run the following:
cd /opt/hypr
./startHyprServices.sh --cluster --all --enc
# You will be prompted for the encryption key
Installing HYPR services on a Worker node
Ensure that the services have started successfully on the master node.
To start services on a worker node, repeat the same steps as on the Master.
Customizing your install
Configuring Nginx SSL certificates
Nginx will be fully installed by the HYPR installation script. SSL certificates are needed for SSL termination at the hosts running the HYPR services. The installer ships with a self-signed certificate and key in the
<InstallerDir>/nginx/certs directory.
You will need to provide the following:
- A certificate (.crt) and key file (.key) file for each nginx install
- A wildcard certificate will be needed for cluster installs
Steps to add your SSL cert:
- Replace the contents of the hyprServer.crt file in /nginx/certs with your certificate
- Replace the contents of the hyprServer.key file in /nginx/certs with your key
Restart Nginx dependencies via
./startHyprDependencies.sh --nginx
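A hedged sketch of the replacement (paths per the defaults above, assuming the /opt/hypr install dir; your certificate and key file paths are placeholders):

# Back up the shipped self-signed pair, then drop in your own cert and key
cd /opt/hypr/nginx/certs
cp hyprServer.crt hyprServer.crt.bak && cp hyprServer.key hyprServer.key.bak
cp /path/to/your/server.crt hyprServer.crt
cp /path/to/your/server.key hyprServer.key

# Restart the Nginx dependency
cd /opt/hypr
./startHyprDependencies.sh --nginx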
Using your own Database
Normally, the installer will install a single-node DB. If you wish to bring your own DB, follow the instructions below before running the install.
The external DB should support more than 1250 connections. The DB needs to be set up with the relevant schema(s) and user(s) for the HYPR services to connect and use the DB. The installer is capable of generating the DB setup scripts.
Step 1: Generate the DB init scripts using the installer
On the master node, make any changes you need to envOverrides.sh.
Run the following commands. The DB scripts will be output on the terminal; save these in a text file.
# Step 1: Confirm that you are on the MASTER node
# Step 2: Edit envOverrides.sh to make modifications as needed
#         Set the MYSQL_HOST to point to the external DB
cd <Installer Dir>; ./generateMySQLInitScript.sh

ℹ️ Usage
=================================================================
Utility to generate MySQL DB init scripts, for supported DB versions
Run from the install dir. Enter the encryption password used for the install, when prompted
Two files with SQL scripts will be generated:
  - initScripts8015.sql (for MySQL ver 8)
  - initScripts57.sql (for MySQL ver 5.7)
Apply these to the target DB before starting HYPR services

ℹ️ Options:
Specify the install mode you are in. One of the following:
  -c, --cluster   Install in Cluster config
  -s, --single    Install in Single node config
Encryption key.
  -e, --enc       Enc key for decrypting install metadata
==========================================================================
Step 2:
Pass the SQL scripts above to your DBA to prep the DB (schema and service accounts). The script generates schema and users, if they do not already exist.
Note that these scripts have been tested on MySQL 8 only. User creation syntax differs on MySQL 5.7.
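A hedged sketch of applying the generated script to an external MySQL 8 instance (the mysql client invocation, host, and admin account are assumptions; your DBA may use different tooling):

# Run from a host that can reach the external DB
mysql -h db.mycompany.com -u admin -p < initScripts8015.sql

# Verify that the expected schemas exist (fido and vault, per the components section below)
mysql -h db.mycompany.com -u admin -p -e "SHOW DATABASES LIKE 'fido'; SHOW DATABASES LIKE 'vault';"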
Step 3: Specify the external DB host
Once the scripts have run on the external DB
- Set the MYSQL_HOST property in envOverrides.sh
- Ensure that the external DB is accessible from the HYPR service instances
Proceed with running the installer.
Customizing logging
HYPR service is preconfigured with sensible logging defaults. If those defaults are not satisfactory, they can be overridden by using custom Log4J configuration. The service directory (CC) has a sample
log4j2.xml file. To increase or decrease logging verbosity
- update (or add) the relevant Logger entry in the corresponding file
- reference the logging configuration in the corresponding environment variable:
CC_ADDITIONAL_STARTUP_PARAMS="--logging.config=${HYPR_INSTALL_DIR}/CC/log4j2.xml"
Running from outside the install directory
The install scripts assume that the install is being run from the
/opt/hypr dir.
In some instances this might not be feasible - for example, when the installer is run via automation tools like Chef, Puppet, etc.
Steps:
- Set up your install dir and ownership as outlined in the previous section
- Change the HYPR_INSTALL_DIR env var in envOverrides.sh to point to your custom dir, as in the example below
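For example (the custom path is a placeholder):

# In envOverrides.sh on each node
export HYPR_INSTALL_DIR=/data/hypr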
Setting the enc key programmatically
You can automate service management by wrapping bash scripts with
- systemd
- infrastructure as code tools like Ansible, Chef etc
In these scenarios it is desirable to avoid typing in the encryption key.
You can use the
--key-file startup param. The value of this param would be a file containing the key. The enc key is not required once the services start up. Hence, a tool like Ansible can provide the key file post startup and then remove it from the system.
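A hedged sketch of that pattern (the --key-file flag is as described here, while the earlier usage output calls a similar option --enc-file - check the --help output of your version; the key file path and the HYPR_ENC_KEY variable are placeholders supplied by your automation tool):

# Automation tool writes the key file just before startup
install -m 600 /dev/null /opt/hypr/.enc.key
printf '%s\n' "${HYPR_ENC_KEY}" > /opt/hypr/.enc.key

# Start services non-interactively
cd /opt/hypr
./startHyprServices.sh --single --all --key-file /opt/hypr/.enc.key

# Remove the key file once the services are up
rm -f /opt/hypr/.enc.key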
Post Installation
Verifying the install
The installer starts various components and verifies the startup. Once the installer completes successfully, you can verify manually.
On any of HYPR target servers, you can verify that the services are running by running the following commands:
# Checking status for HYPR dependencies
ps -ef | grep nginx
ps -ef | grep redis-server
ps -ef | grep redis-sentinel
ps -ef | grep vault

# Checking status for HYPR services
pgrep java -a
Connecting to the Control Center (CC) web interface
Once the services are running you should be able to log into the CC.
An instance of CC runs on all the nodes marked as '[hypr]' in the config/servers.xml file.
CC communicates to various HYPR services during startup. Hence, a successful start of the CC is generally a good indicator that the services started normally.
Checking that an instance of CC is running – bypassing Nginx.
http://<hostname>:8009/
Checking that an instance of CC is running – with Nginx forwarding and SSL
https://<hostname>:8443/
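As a quick command-line check (curl usage is an assumption; -k skips verification of the shipped self-signed certificate):

# Direct to the CC instance, bypassing Nginx
curl -I http://<hostname>:8009/

# Through Nginx with SSL termination
curl -kI https://<hostname>:8443/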
Once the blue landing page loads for the CC, you can login with your service account.
- Default service user: HYPR
- Default service key: This is encrypted in the install metadata file generated during the install.
# Decrypt the install metadata file with the following command
cd <install dir>; ./decryptMetadata.sh
# You will be prompted for the <encryption key>
# Look up CC_SERVICE_ACC_PASSWORD in the output
Stopping HYPR
Stop HYPR dependencies via
cd /opt/hypr; ./stopHyprDependencies.sh
Stop HYPR services via
cd /opt/hypr; ./stopHyprServices.sh
Restarting HYPR
- Stop HYPR as described here
- Start HYPR
- Start a single node
cd /opt/hypr
./startHyprDependencies.sh --single --all --enc
./startHyprServices.sh --single --all --enc
- Starting a cluster
On each HYPR node, starting with the Master, run:
cd /opt/hypr
./startHyprDependencies.sh --cluster --all --enc
./startHyprServices.sh --cluster --all --enc
Uninstalling HYPR
- Begin by stopping HYPR. For a cluster, repeat the process for each node. Then remove the install directory:
rm -rf /opt/hypr
Installing systemd services
Once you have the HYPR install up and running, you may install the systemd services for the HYPR components.
# Stop hypr services and dependencies
cd /opt/hypr
# You need to have appropriate permissions to install systemd services
./systemdInstall.sh

ℹ️ Usage
==================================================================
Running this script installs systemd services for all hypr services, including dependencies
Systemd services are installed in /etc/systemd/service
Systemd will run and monitor services. Failed services will be restarted
Specify the mode to install in. One of the following:
  -c, --cluster   Install in Cluster config
  -s, --single    Install in Single node config
Encryption key
  -e, --enc       Enc key for encrypting install metadata
===========================================================================
Once the services are installed, they can be managed via:
systemctl [ start | stop ] hypr
To check the status of the hypr services, you can use
./systemdStatus.sh
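For example, to have the services come up at boot (assuming the installed unit is named hypr, as in the systemctl example above; the exact unit names in your install may differ):

sudo systemctl enable hypr
sudo systemctl status hypr
./systemdStatus.sh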
Details of installed components
MySQL
The installer creates the following Databases
- fido – main operational DB used by HYPR services
- vault – configuration information for HYPR services
Corresponding database users are also created. The DB schema is created and managed by the services themselves.
Data dir
- /mysql/mysql-8.0.15/mysql-data
Logs
- /mysql/mysql-8.0.15/mysql-data/localhost.err
Nginx
An Nginx instance is installed on each node running HYPR services.
Nginx is used to terminate SSL traffic and forward it to the local service ports.
The Nginx package (nginx-1.16.1.tar.gz) ships with the installer.
SSL certs are applied
- /nginx/certs/hyprServer.crt
- /nginx/keys/hyprServer.key
Config is stored in
- /nginx/nginx-1.16.1/nginx.1161.conf.json
Log files
- /nginx/nginx-1.16.1/logs/access.log
- /nginx/nginx-1.16.1/logs/error.log
Redis
One Redis instance is installed per application server node; three nodes together provide HA.
The Redis package (hypr-redis-4.0.13.tar.gz) ships with the installer.
Config is stored in
- /redis/hypr-redis-4.0.13/redis.master.4013.conf
- /redis/hypr-redis-4.0.13/redis.slave.4013.conf
- /redis/hypr-redis-4.0.13/redis.sentinel.4013.conf
Log files
- /redis/hypr-redis-4.0.13/logs/redis.log
- /redis/hypr-redis-4.0.13/logs/sentinel.log
HYPR Services
Install dir: /opt/hypr
HYPR services are installed as Java war files. These are completely self-contained and directly executable by the JRE (Java Runtime Environment).
The Java command line and startup details can be found in the relevant folder in the install directory:
- /CC/startCC.sh
Troubleshooting
I see a 502 response from the Nginx server
The 502 http status indicates that the Nginx web server is unable to communicate (reverse proxy) to the backing HYPR application server. This is typically caused by:
- HYPR server is not running. Confirm status via:
pgrep java -a
You should see one Java process for CC
- Check that SELinux is not blocking calls. See below.
Connection is blocked by SELinux
SELinux implements fine-grained access control for Linux processes. For example, it decides whether the Nginx process is allowed to communicate with the HYPR server process.
Check the SELinux logs file at:
/var/log/audit/audit.log
Nginx logs will show an error along these lines
*2019/05/29 14:42:54 [crit] 21719#21719: *42 connect() to [::1]:8099 failed (13: Permission denied) while connecting to upstream, client: 10.100.10.100, server: localhost, request: "GET / HTTP/1.1", upstream: "http://[::1]:8099/", host: "gcn1.test.net"*
If you see messages blocking access to port 8099 or 8090 (HYPR servers), that is likely the problem. You will need to get in touch with your Linux admins to allow access.
You can fix this by running the following:
setsebool -P httpd_can_network_connect 1
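A hedged way to confirm that SELinux is the cause and to inspect the boolean (ausearch and getsebool availability depends on your distribution):

# Look for recent denials involving nginx
grep denied /var/log/audit/audit.log | grep nginx | tail
# or, if the audit tools are installed
ausearch -m avc -c nginx --start recent

# Check the current value of the boolean before/after the fix
getsebool httpd_can_network_connect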
java.sql.SQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
If the OS is not allowing enough open files, fix it with the following steps.
See:
vi /etc/security/limits.conf
Add the following line:
* soft nofile 10000
Verify by running:
ulimit -Sn
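Note that if the HYPR components run under systemd (see Installing systemd services above), limits.conf may not apply to them; a hedged example of raising the limit via a drop-in override (the hypr unit name is an assumption):

sudo mkdir -p /etc/systemd/system/hypr.service.d
printf '[Service]\nLimitNOFILE=10000\n' | sudo tee /etc/systemd/system/hypr.service.d/limits.conf
sudo systemctl daemon-reload
sudo systemctl restart hypr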
See:
The event table did not get updated with the traceId column.
ERROR 2020-08-12 00:42:38,894 main [,][] SpringApplication.reportFailure(837) : Application run failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'eventEntityManager' defined in class path resource [com/hypr/server/commons/cloud/event/EventDBConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean]: Factory method 'eventEntityManagerFactory' threw exception; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: EventService] Unable to build Hibernate SessionFactory;
Workaround
Add the missing column manually.
alter table fido.events add traceId varchar(255) null;
alter table fido.events_bkp add traceId varchar(255) null;
Properties reference
The server install can be further customized by setting properties in Vault or passing them on the command line.
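A hedged example of the command-line route, reusing the CC_ADDITIONAL_STARTUP_PARAMS variable shown in the logging section (the property name is a placeholder, not a documented HYPR property):

# In envOverrides.sh
CC_ADDITIONAL_STARTUP_PARAMS="--logging.config=${HYPR_INSTALL_DIR}/CC/log4j2.xml --some.property=someValue"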
Read about SDN here:
Aftalesystemet is available at:
You can find the user manual for Aftalesystemet at Medcom:
The health data network is supported as described here (in Danish) :
If you forgot or did not get your username and password for the Aftalesystemet, you can contact us at [email protected] and ask for a new one.
The DNS servers for .medcom are 195.80.240.129 and 195.80.240.130. The authoritative.
Provisioning (openings/closures) occurs every half hour if there have been changes in the Agreement system. This means that at most 30 minutes after an agreement is approved, it will be open from the client (network) to the service, provided that the Service Provider has remembered to open any firewalls in its own network located outside SDN.
Content and primitive part: The content of a polynomial with integer coefficients is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Syntax: content_and_primitive_part(Polynomial). Description: content_and_primitive_part(Polynomial) returns a list with two elements: the content and the primitive part.
Amplify API Management Save PDF Selected topic Selected topic and subtopics All content Additional utility filters Additional utility filters, including policy shortcut and policy shortcut chain, and the scripting language filter. 28 minute read Insert BST filter might insert a certificate into a message in a BST without signing or encrypting the message. For example, you can use the Insert BST filter when the API Gateway is acting as a client to a Security Token Service (STS) that issues security tokens (for example, to create OnBehalfOf tokens). For more details, see the topic on STS client authentication. to display in a policy. WS-Security Actor: Select or enter the WS-Security element in which to place the BST. Defaults to Current actor / role only.: Select this option to Base64-encode the data. This option applies only when the data in the message attribute is not already Base64 encoded. In some cases, the input might already be Base64 encoded, so you should deselect this setting in these cases. Check group membership filter The Check Group Membership filter checks whether the specified API Gateway user is a member of the specified API Gateway user group. The user and the group are both stored in the API Gateway user store. For more details, see Manage users. Configure the following required fields: Name: Enter an appropriate name for this filter to display in a policy. User: Enter the user name configured in the API Gateway user store. You can specify this value as a string or as a selector that expands to the value of the specified message attribute at runtime. Defaults to ${authentication.subject.id}. Group: Enter the user group name configured in the API Gateway user store (for example, engineering or sales ). You can specify this value as a string or as selector that expands to the value of the specified message attribute at runtime (for example, ${groupName}). Note The message attribute specified in the selector must exist on the message whiteboard prior to calling the filter. Check group membership possible results The possible paths through this filter are as follows: Result Description True The specified user is a member of the specified group. False The specified user is not a member of the specified group. CircuitAbort An exception occurred while executing the filter. Execute external process filter This filter enables you to execute an external process from a policy. It can execute any external process (for example, start an SSH session to connect to another machine, run a script, or send an SMS message). The output of the external process is captured in two variables on the message whiteboard: exec.output, which contains everything the process printed to the standard output stream, STDOUT. exec.error, which contains everything that the process printed to the standard error stream, STDERR. These variables might be empty if the program did not print any output to those streams. For programs that output data to files, a Load File Contents filter might be needed to read the output. Also, in this case, you must ensure to implement a logic to clean up old files. Complete the following fields: Name: Enter an appropriate name for the filter to display in a policy. The Command tab includes the following fields: Command to execute: Specify the full path to the command to execute (for example, c:/cygwin/bin/mkdir.exe). Arguments: Click Add to add arguments to your command. Specify an argument in the Value field (for example, dir1), and click OK. 
Repeat these steps to add multiple arguments (for example, dir2 and dir3). Working directory: Specify the directory to run the command from. You can specify this using a selector that is expanded to the specified value at runtime. Defaults to ${environment.VINSTDIR}, where VINSTDIR is the location of a running API Gateway instance. Expected exit code: Specify the expected exit code for the process when it has finished. The filter will follow the true path when this exit code is received from the process, and it will follow the false path otherwise. By following the Unix convention that processes should return zero on success and non-zero on failure, this filter defaults to 0. Kill if running longer than (ms): Specify the number of milliseconds after which the running process is killed. Defaults to 60000. The Advanced tab includes the following fields: Environment variables to set: Click Add to add environment variables. In the dialog, specify an Environment variable name (for example, JAVA_HOME) and a Value (for example, c:/jdk1.6.0_18), and click OK. Repeat to add multiple variables. Block till process finished: Select whether to block until the process is finished in the check box. This is enabled by default. Invoke policy per message body filter. Locate XML nodes filter. HTTP parser filter The HTTP Parser filter parses the HTTP message headers and body. As such, it acts as a barrier in the policy to guarantee that the entire content has been received before any other filters are invoked. It can be used, for example, to wait for an entire message from the back-end service before the gateway begins to reply to the caller. It requires the content.body attribute. The HTTP Parser filter forces the server to do store-and-forward routing instead of the default cut-through routing, where the request is only parsed on-demand. For example, you can use this filter as a simple test to ensure that the message is XML. Pause processing filter The Pause filter is mainly used for testing purposes. This filter causes a policy to sleep for a specified amount of time. Configure the following settings: Name: Enter an appropriate name for the filter to display in a policy. Pause for: When the filter is executed in a policy, it sleeps for the time specified in this field. Defaults to 10000 milliseconds. Policy shortcut chain filter The Policy Shortcut Chain filter enables you to run a series of configured policies in sequence without needing to wire up a policy containing several Policy Shortcut filters. This enables you to adopt a design pattern of creating modular reusable policies to perform specific tasks, such as authentication, content-filtering, or logging. You can then link these policies together into a single, coherent sequence using this filter. Each policy in the Policy Shortcut Chain is evaluated in succession. The evaluation proceeds as each policy in the chain passes, until finally the filter exits with a pass status. If a policy in the chain fails, the entire Policy Shortcut Chain filter also fails at that point. Complete the following general setting: Name: Enter a meaningful name for the filter to display in a policy. For example, the name might reflect the business logic of the policies that are chained together in this filter. Add a policy shortcut Click the Add button to display the Policy Shortcut Editor dialog, which enables you to add a policy shortcut to the chain.. This option is selected by default. 
Choose a specific policy to execute: Select this option to choose a specific policy to execute. This option is selected by default. Policy: Click the browse button next to the Policy field, and select a policy to reuse from the tree (for example, Health Check). You can search for a specific policy by entering its name in the text box, and the policy tree is filtered automatically. The policy in which this Policy Shortcut Chain filter is configured calls the selected policy when it is executed. Choose a policy to execute by label: Select this option to choose a policy to execute based on a specific policy label. For example, this enables you to use the same policy on all requests or responses, and also enables you to update the assigned policy without needing to rewire any existing policies. Policy Label: Click the browse button next to the Policy Label field, and select a policy label to reuse from the tree (for example, API Gateway request policy (Health Check)). The policy in which this Policy Shortcut Chain filter is configured calls the selected policy label when it is executed. Click OK when finished. You can click Add and repeat as necessary to add more policy shortcuts to the chain. You can alter the sequence in which the policies are executed by selecting a policy in the table and clicking the Up and Down buttons on the right. The policies are executed in the order in which they are listed in the table. Edit a policy shortcut Select an existing policy shortcut, and click the Edit button to display the Policy Shortcut Editor dialog.. Policy or Policy Label: Click the browse button next to the Policy or Policy Label field (depending on whether you chose a specific policy or a policy label when creating the policy shortcut). Select a policy or policy label to reuse from the tree (for example, Health Check or API Gateway request policy (Health Check)). The policy in which this Policy Shortcut Chain filter is configured calls the selected policy or policy label when it is executed. Policy shortcut filter. Complete the following fields: Name: Enter an appropriate name for the filter to display in a policy. Policy Shortcut: Select the policy that to reuse from the tree. You can search for a specific policy by entering its name in the text box, and the policy tree is filtered automatically. The policy in which this Policy Shortcut filter is configured calls the selected policy when it is executed. Tip Alternatively, to speed up policy shortcut configuration, you can drag a policy from the tree on the left of the Policy Studio and drop it on to the policy canvas on the right. This automatically configures a policy shortcut to the selected policy. Set response status filter The Set Response Status filter is used to explicitly set the response status of a call. This status is then recorded as a message metric for use in reporting. This filter is primarily used in cases where the fault handler for a policy is a Policy Shortcut filter. If the Policy Shortcut passes, the overall fail status still exists. You can use the Set Response Status filter to explicitly set the response status back to pass, if necessary. Configure the following: Name: Enter a meaningful name for the filter to display in a policy. Response Status: Select Pass or Fail to set the response status. Switch on attribute value filter executed. You can also specify a default policy, which is executed when none of the switch cases specified in the filter is found. 
Complete the following configuration settings: Name: Enter a meaningful name for the filter to display in a policy.. Default: This field specifies the default behavior of the filter when none of the specified switch cases are found in the configured message attribute value. Select one of the following options: Return result of calling the following policy: Click the browse button, and select a default policy to execute from the dialog (for example, XML Threat Policy). The filter returns the result of the specified policy. This option is selected by default. Return true: The filter returns true. Return false: The filter returns false. Add a switch case To add a switch case, click Add, and configure the following fields in the dialog: Comparison Type: Select the comparison type to perform with the configured message attribute. The available options include the following: Contains Doesn't Contain Ends With Equals Does not Equal. Quote of the day filter The Quote of the day filter is a useful test utility for returning a simple SOAP response to a client. The API Gateway wraps the quote in a SOAP response, which can then be returned to the client. To configure this filter,. You can also enter the quotes in this format into the Quotes text area to achieve the same goal. The following example shows a SOAP response returned by the API Gateway to a client who requested the Quote of the day service: <s:Envelope xmlns: <s:Header/> <s:Body xmlns:axway/>="axway.com"> <axway:getQuoteResponse> Every cloud has a silver lining <axway:getQuoteResponse> </s:Body> </s:Envelope> Scripting language filter For more details on using scripts to extend API Gateway, see the API Gateway Developer Guide. Write a custom script To write a custom script, you must implement the invoke() method. This method takes a com.vordel.circuit.Message object as a parameter and returns a boolean result. The API Gateway provides a Script Library that contains a number of prewritten invoke() methods to manipulate specific message attributes. For example, there are invoke() methods to check the value of the SOAPAction header, remove a specific message attribute, manipulate the message using the DOM, and assign a particular role to a user. You can access the script examples provided in the Script library by clicking the Show script library button on the filter’s main configuration code, always declare variables locally using var. Otherwise, the variables are global, and global variables can be updated by multiple threads. For example, always use the following approach: var myString = new java.lang.String("hello word"); for (var i = 100; i < 100; i++) { java.lang.System.out.println(myString + java.lang.Integer.toString(i)); } Do not use the following approach: myString = new java.lang.String("hello word"); for (i = 100; i < 100; i++) { java.lang.System.out.println(myString + java.lang.Integer.toString(i)); } Using the second example under load, you cannot guarantee which value is output because both classpath, including all JRE classes. If you wish to invoke a Java object, you must place its corresponding class file on the API Gateway classpath. The recommended way to add classes to the API Gateway classpath is to place them (or the JAR files that contain them) in the INSTALL_DIR/ext/lib folder. For more details, see the readme.txt in this folder.. Configure a script filter You can write or edit the JavaScript, Groovy, or Jython code in the text area on the Script tab. A JavaScript function skeleton is displayed by default. 
Use this skeleton code as the basis for your JavaScript code. You can also load an existing JavaScript or Groovy script from the Script library by clicking the Show script library button. On the Script library dialog, click any of the Configured scripts in the table to display the script in the text area on the right. You can edit a script directly in this text area. Make sure to click the Update button to store the updated script to the Script library. JavaScript:. Time filter The Time filter enables you to block or allow messages on a specified time of day, or day of week, or both. You can input the time of day directly in the Time filter window, or configure message attributes to supply this information using the Java SimpleDateFormat, or specify a cron expression. You can use the Time filter in any policy (for example, to block messages at specified times when a web service is not available, or has not been subscribed for by a consumer). In this way, this filter enables you to meter the availability of a web service and to enforce Service Level Agreements (SLAs). Configure the following general options: Name: Enter an appropriate name for this filter. Block Messages: Select this option to use this filter to block messages. This is the default option. Allow Messages: Select this option to use this filter to allow messages. Basic time settings Select Basic to block or allow messages at specified times of the day. This is the default option. You can configure following settings: User defined time: Select this option to input the times to block or allow messages directly in this screen. This is the default option. Configure the following settings: From: The time to start blocking or allowing messages from in hours, minutes, and seconds. Defaults to 9:00:00. To: The time to end blocking or allowing messages in hours, minutes, and seconds. Defaults to 17:00:00. Time from attribute: Select this option to specify times to block or allow messages using configured message attributes. You can specify these attributes using selectors, which are replaced at runtime with the values of the specified message attributes set in previous filters or messages. You must configure the following settings: From: Message attribute that contains the time to start blocking or allowing messages from (for example, $(message.starttime)). Defaults to a time of 9:00:00. To: Message attribute that contains the time to end blocking or allowing messages (for example, $(message.endtime)). Defaults to a time of 17:00:00. Pattern: Message attribute that contains the time format based on the Java SimpleDateFormat class (for example,$(message.pattern)). This enables you to format and parse dates in a locale-sensitive manner. Day, month, years, and milliseconds are ignored. Defaults to a format of HH:mm:ss. Days: To block or allow messages on specific days of the week, select the check boxes for those days. For example, to block messages on Saturday and Sunday only. Advanced time settings Select Advanced to block or allow messages at specified times based on a cron expression. Configure the following setting: Cron Expression: Enter a cron expression or a message attribute that contains a cron expression in this field. Alternatively, click the browse button next to this field to select a preconfigured cron expression or to create and test a new cron expression. For example, the following cron expression blocks all messages received on April 27 and 28 2012, at any time except those received between 10:00:01 and 10:59:59. 
* * 0-9,11-23 27-28 APR ? 2012 The default value is * * 9-17 * * ? *, which specifies a time of 9:00:00 to 17:00:00 every day. Management services RBAC filter Role-Based Access Control (RBAC) is used to protect access to the API Gateway management services. For example, management services are invoked when a user accesses the server using Policy Studio or API Gateway Manager (). For more information on RBAC, see Configure Role-Based Access Control (RBAC). The Management Services RBAC filter can be used to perform the following tasks: Read the user roles from the configured message attribute (for example, authentication.subject.role). Determine which management service URI is currently being invoked. Return true if one of the roles has access to the management service currently being invoked, as defined in the acl.json file. Otherwise, return false. Caution This filter is for management services use only. The Management Services HTTP services group should only be modified under strict supervision from Axway Support. Configure the following settings: Name: Enter an appropriate name for this filter to display in a policy. Role Attribute: Select or enter the message attribute that contains the user roles. Last modified June 25, 2021: Update section on Execute External Process (#1961) (6f8c2dae) Related Links | https://docs.axway.com/bundle/axway-open-docs/page/docs/apim_policydev/apigw_polref/utility_additional/index.html | 2022-01-17T00:20:44 | CC-MAIN-2022-05 | 1642320300253.51 | [] | docs.axway.com |