Category:New in Joomla! 3.6
From Joomla! Documentation
This page is the home for the categories and sub-categories about What's new in Joomla! 3.6.
To appear on this page each topic page should have the following code inserted at the end:
<noinclude>[[Category:New in Joomla! 3.6]]</noinclude>
Pages in category "New in Joomla! 3.6/en"
This category contains only the following page.
AWS Secret Manager
Secure secrets in AWS Secret Manager and use them in Kafka Connect.
Add the plugin to the worker classloader isolation via the plugin.path option:
plugin.path=/usr/share/connectors,/opt/secret-providers
Two authentication methods are supported:
- credentials. When using this configuration, the access key and secret key are used.
- default. This method uses the default credential provider chain from AWS. The chain first checks environment variables for configuration; if that is incomplete, it falls back to Java system properties, then the profile file, and finally managed identity.
Configuring the plugin
Example Worker Properties
config.providers=aws
config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
config.providers.aws.param.aws.auth.method=credentials
config.providers.aws.param.aws.access.key=your-client-key
config.providers.aws.param.aws.secret.key=your-secret-key
config.providers.aws.param.aws.region=your-region
config.providers.aws.param.file.dir=/connector-files/aws
Usage
To use this provider in a connector, reference the SecretManager containing the secret and the key name for the value of the connector property.
The indirect reference is in the form ${provider:path:key} where:
- provider is the name of the provider in the worker property file set above
- path is the name of the secret
- key is the name of the key within the secret to retrieve. AWS can store multiple keys under a path.
For example, if we store two secrets as keys:
- my_username_key with the value lenses and
- my_password_key with the value my-secret-password
in a secret called my-aws-secret we would set:
name=my-sink
class=my-class
topics=mytopic
username=${aws:my-aws-secret:my_username_key}
password=${aws:my-aws-secret:my_password_key}
This would resolve at runtime to:
name=my-sink
class=my-class
topics=mytopic
username=lenses
password=my-secret-password
Data encoding
AWS Secrets Manager BinaryString secrets (API only) are not supported. The secrets must be stored under the secret name in key/value pair format. The provider checks the SecretString API and expects a JSON string to be returned.
For example, for an RDS PostgreSQL secret, the following is returned by AWS Secrets Manager:
{
  "username": "xxx",
  "password": "xxx",
  "engine": "postgres",
  "host": "xxx",
  "port": 5432,
  "dbname": "xxx",
  "dbInstanceIdentifier": "xxxx"
}
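As a quick illustration of that storage format, the sketch below creates such a key/value secret with the AWS SDK for Python (boto3). It reuses the secret and key names from the earlier example; the region value and the assumption that credentials are available via the default AWS credential chain are illustrative and not part of the Lenses documentation.

import json
import boto3

# Assumes AWS credentials are available via the default credential chain.
client = boto3.client("secretsmanager", region_name="your-region")

# Store the secret as a JSON string of key/value pairs, which is what the
# provider expects to receive from the SecretString API.
client.create_secret(
    Name="my-aws-secret",
    SecretString=json.dumps({
        "my_username_key": "lenses",
        "my_password_key": "my-secret-password",
    }),
)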
Manage Splunk Cloud users and roles
Splunk Cloud administrators can create users and assign roles to them. Roles are named collections of capabilities that determine the access and permissions of any user assigned that role. Splunk Cloud comes with predefined user accounts and roles. You can also create custom user accounts and roles.
User accounts that have multiple roles inherit properties from the role with the broadest permissions, as follows.
- Search filters: Users that are assigned multiple roles inherit the capabilities from all assigned roles. For example, if you define two roles with different search filters, and a user account is assigned both roles, then the search filters and restrictions of both roles apply to the user. If a user that has no search restrictions is assigned a role that has search restrictions, the user inherits the search restrictions.
- Allowed indexes: Users who have multiple roles with multiple indexes assigned get the highest level of index access assigned for any of the roles. For example, if a user is assigned both the "user" role, which limits index access to a single index, and the power role, which allows access to all indexes, the user has access to all indexes. If you want the same user account to inherit capabilities from a different "advanced user" role, but nothing more, create a new role specifically for that user.
- Capabilities: Users who have multiple roles with multiple capabilities inherit the combined capabilities of all roles. For example if an administrator creates a user account and assigns the "administrator" role with 15 capabilities, and also assigns the "advanced user" role, with a different set of 15 capabilities, the user account has the combined 30 capabilities of both roles.
For more information about the user authentication methods that Splunk Cloud supports, see the Users and authentication section in the Splunk Cloud Service Description.
Manage Splunk Cloud users
You administer users from the Users page in Splunk Web.
Do not delete or edit the Splunk Cloud system user roles: admin, app-installer, index-manager, internal_ops_admin, and internal_monitoring. Splunk uses these system user roles to perform essential monitoring and maintenance activities. See the section System User Roles in this topic for more information.
Create a Splunk Cloud user account
To create an account for a Splunk Cloud user, perform the following steps:
- Go to Settings > Users.
- (Optional) Set a time zone for the user so they can view events and other information in their local time zone.
- (Optional) Set a default app if you want to override the default app that launches after the user logs in. If unset, the user account inherits the default app that belongs to the role.
- Assign at least one role to the user, or select Create a role for this user to create a new role and assign it to the user. A user with multiple roles inherits the combined permissions of those roles.
- Click Save.
Clone a Splunk Cloud user account
Splunk Cloud administrators can clone a user account. The clone operation creates a new user account with the same settings as the cloned user account, except for the username. The username must be unique for each user account.
- Go to Settings > Users.
Manage Splunk Cloud roles
Each user account is assigned one or more roles. Roles give users permissions to perform tasks in Splunk Cloud based on the capabilities assigned to the role. To manage roles, you must be a Splunk Cloud administrator. Do not edit the predefined roles that are provided by Splunk Cloud. Instead, create custom roles that inherit from the built-in roles, and then modify the custom roles as required.
Do not delete or edit the Splunk Cloud system user roles: admin, app-installer, index-manager, internal_ops_admin, and internal_monitoring. Splunk uses these system user roles to perform essential monitoring and maintenance activities. See the section System User Roles in this topic for more information.
Do not edit any predefined roles to remove capabilities from them. The sc_admin role does not have enough permission to restore some of the capabilities you remove. Instead, create custom roles that inherit from the predefined roles, and use and edit those custom roles as you need.
Use roles to:
- Restrict the scope of searches.
- Inherit capabilities and available indexes from other roles.
- Specify user capabilities.
- Set the default index or indexes to search when no index is specified.
- Specify which indexes to search.
For more information about capabilities in user roles, see About defining roles with capabilities and List of capabilities in the Securing Splunk Enterprise manual.
Create roles in Splunk Web
Role search job limits can be set up so that they always..
System User Roles
Splunk uses system user roles to perform essential monitoring and maintenance activities.
Splunk uses the Admin role and system user roles to perform essential monitoring and maintenance activities. You might observe the Admin and system user roles authenticating into your Splunk Cloud environment as part of Splunk performing monitoring and maintenance activities. Splunk performs these activities in accordance with a comprehensive security program designed to protect the confidentiality, integrity, and availability of your data.
In addition to these user roles, Splunk also uses ephemeral system user roles to perform essential monitoring and maintenance activities. Ephemeral system user roles begin with the prefix int_, and you can use the following search command to audit those users.
index=_audit user=int* "login attempt"
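For example, to summarize how often each ephemeral system user has logged in, you can extend that audit search with a stats command (a sketch; adjust the time range to your needs):

index=_audit user=int* "login attempt" | stats count by user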
Do not delete or edit the Splunk Cloud system user roles: admin, app-installer, index-manager, internal_ops_admin, and internal_monitoring.
General abilities of system user roles
The following table provides information about the general abilities of the internal_monitoring and internal_ops_admin system user roles.
This documentation applies to the following versions of Splunk Cloud Platform™: 8.2.2106, 8.2.2107
With Ethereum gas fees rising out of control in Q1 2021 and showing no signs of slowing down, we believe there will be an inevitable influx of developers launching applications on the much more practical Binance Smart Chain. As a result, we created a DApp that provides a much needed service to projects on BSC: building trust and security.
Wault Locker is a DApp to lock liquidity for fixed periods of time, offering similar value as certain prosperous offerings on the Ethereum blockchain. However, Wault Locker was the first of its kind on Binance Smart Chain! In addition, we locked our own liquidity inside for 6 months in order to bring our users peace of mind.
Each liquidity lock provides 0.20% of the total tokens locked as a service fee. Those tokens are then liquidated and used to buy back WAULTx tokens on the spot market, creating steady buying pressure and helping assure the WAULTx token maintains a valuable price.
The purchased tokens go to the marketing fund, and, in the future, the community will decide how to use them through decentralized governance.
GridViewHitInfoBase.Band Property
Gets a band located under the test object.
Namespace: DevExpress.Xpf.Grid
Assembly: DevExpress.Xpf.Grid.v21.1.dll
Remarks
Use the Band property to obtain the band that is under the test object. If the test object belongs to a visual element that does not belong to any band, the Band property returns null (Nothing in Visual Basic).
To identify the type of a visual element that is under the test object, use the TableViewHitInfo.HitTest property (CardViewHitInfo.HitTest in a card view).
To learn more, see Hit Information.
The Unique XPath feature gives you the ability to automatically generate an XPath query that will uniquely identify a specific element, based on the properties of both the desired element itself, and other elements around it if necessary. The query will use the fewest possible different properties that are necessary to uniquely identify the element.
To create the query, right-click on the element in the object spy tree. There will be two options available regarding Xpath.
Example: Identifying "Details goes here" under "Expense : 2"
Copy Unique Xpath
When you click Copy Unique XPath, the identification query is copied to the clipboard. In the above example, the query would be:
xpath=//*[@text='Detail goes here' and ./preceding-sibling::*[@text='Expense : 2']]
Because the desired element does not have any unique properties of its own, the query relies on the text property of one of the element's siblings in the tree in order to create the unique identification. This query can now be pasted in the element identification field in either a dynamic command or an object from the repository.
The desired element will now be uniquely identified:
Copy Unique Xpath (skip text):
When you click Copy Unique XPath (skip text), the identification query is copied to the clipboard, but it will not include the text property of any of the objects. For the same example, the query would be:
xpath=(//*[@id='expenseListView']/*/*[@id='detailTextView'])[2]
This is the recommended option when you need to handle dynamic objects whose text may change.
The desired element is uniquely identified.
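Outside of SeeTest, the same generated query can also be used with a standard Selenium or Appium client once the xpath= prefix is removed. The sketch below is a minimal Python illustration; it assumes a WebDriver session named driver has already been created, which is not part of the SeeTest documentation.

# Assuming `driver` is an already-initialised Appium/Selenium WebDriver session.
from selenium.webdriver.common.by import By

# The query produced by "Copy Unique Xpath (skip text)", without the xpath= prefix.
query = "(//*[@id='expenseListView']/*/*[@id='detailTextView'])[2]"
element = driver.find_element(By.XPATH, query)
element.click()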
Date: Thu, 1 Jan 2004 07:46:12 +0100 From: Peter Schuller <[email protected]> To: Gautam Gopalakrishnan <[email protected]> Cc: [email protected] Subject: Re: 5.2 RC2: Semi-deterministic gcc segfault during buildworld Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
> I had this exact problem. It was due to optimisation flag -O3 in
> my CFLAGS in make.conf (the handbook says to not use too much
> optimisation). I had no problems after I removed it.

Thanks a lot for responding! Unfortunately this does not seem to be the cause in this case. /etc/make.conf does not contain any modifications to gcc optimization parameters, and indeed gcc is invoked with only '-O' when compiling. It's more or less a fresh install; I haven't touched make.conf.

-- /:
Date: Mon, 2 Jun 1997 22:17:57 -0500 (CDT) From: "Jay D. Nelson" <[email protected]> To: Glenn Johnson <[email protected]> Cc: [email protected], [email protected] Subject: Re: GIMP Message-ID: <[email protected]> In-Reply-To: <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
It made clean and works fine on 2.2.1.

-- Jay

On Sun, 1 Jun 1997, Glenn Johnson wrote:

->Has anyone else had trouble with the 'gimp-devel 0.99.9 port'? I am running
->3.0 current and am getting the following:
->
->gmake[3]: Leaving directory `/usr/ports/graphics/gimp-devel/work/gimp-0.99.9/pl
->ug-ins/dgimp'
->gmake[2]: *** [all-recursive] Error 1
->gmake[2]: Leaving directory `/usr/ports/graphics/gimp-devel/work/gimp-0.99.9/pl
->ug-ins'
->gmake[1]: *** [all-recursive] Error 1
->gmake[1]: Leaving directory `/usr/ports/graphics/gimp-devel/work/gimp-0.99.9'
->gmake: *** [all-recursive-am] Error 2
->*** Error code 2
->
->Stop.
->*** Error code 1
->
->Stop.
->*** Error code 1
->
->Stop.
->
->Thanks for any help.
->--
->Glenn Johnson
->[email protected]
Date: Wed, 06 Nov 96 13:30:00 PST From: Robert Clark <[email protected]> To: "'freebsd-questions'" <[email protected]> Subject: Help: Netboot.rom, tftp, bootp, WD8013W. Message-ID: <328103A7@smtp>
Next in thread | Raw E-Mail | Index | Archive | Help
Help, I'm looking for any info on setting up a 2.1.0R system to serve bootp, tftp, NFS, and the correct directories for Netboot.rom use. (The system is up and running well. NFS, pcnfsd, apache, etc. 2.1.0.R) I'm also looking for info and or help in building a Netboot.rom for WD8013W network cards. (I have access to a ROM eraser / burner, and one 27C512 EPROM for each NIC.) If you would like to email me directly, please use the address: [email protected]. Thanks, [RC]
Date: Tue, 15 May 2007 23:34:15 -0300 From: Agus <[email protected]> To: freebsd-questions <[email protected]> Subject: Find out startup programs execution order.. Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
Hi... i am trying to solve a problem with an error message during startup.. su: /bin/csh : Permission Denied so i am trying to find the way the programs start during boot.... thats it.... Thanxsss
Components: Version History
From Joomla! Documentation
Overview
The Version History screen manages previous versions of the item being edited. Version history is available for categories, articles, banners and banner clients, contacts, news feeds, web links, and user notes. Each time an item is saved with changes, a new version is created automatically. The number of versions that are kept is set in the options of the component. One or more versions can also be kept permanently; versions marked this way are not deleted automatically, even when the maximum number of versions set in the options is exceeded.
Inbound Email Parse Webhook
If your URL responds with an error (such as a 5XX status), SendGrid queues the message and retries the POST. This prevents data loss for customers who have misconfigured their website or POST URL.
Respond with a 2xx status to the POST request to stop the email from retrying. Messages that cannot be delivered after 3 days will be dropped, and SendGrid will not send a notification when a message is dropped. The charsets field of the parsed message reports how each field was encoded, for example:
[charsets] => {"to":"UTF-8","cc":"UTF-8","subject":"UTF-8","from":"UTF-8","text":"iso-8859-1"}
This shows that all headers should be treated as UTF-8, and the text body is latin1.
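A minimal sketch of a receiving endpoint is shown below, using Python and Flask; the framework choice and the route path are assumptions and are not part of the SendGrid documentation. The key points are reading the posted form fields, honouring the per-field charsets, and returning a 2xx status so SendGrid stops retrying.

import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/parse", methods=["POST"])
def inbound_parse():
    # The Parse webhook posts multipart/form-data fields such as from, to,
    # subject, text, html and charsets.
    charsets = json.loads(request.form.get("charsets", "{}"))
    subject = request.form.get("subject", "")
    text = request.form.get("text", "")

    # charsets tells you how each field was encoded, e.g. the text body may be
    # iso-8859-1 while the headers are UTF-8.
    print(subject, charsets.get("text", "UTF-8"), len(text))

    # Return a 2xx status so SendGrid considers the POST delivered and stops retrying.
    return "", 200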
Workload Management examples
The following scenarios provide some guidance on how to use workload management in Splunk Cloud Platform. These are hypothetical examples only. The exact steps will depend on your specific objectives and requirements.
- Place all searches that are not from the security team or sc_admin into a low priority pool.
- Abort all long-running searches (>10m) that are not from the security team or sc_admin.
To do this, follow the steps below:
- From Splunk Web, go to Settings > Workload Management.
- Create the workload rules for these scenarios by clicking Add Rule.
This documentation applies to the following versions of Splunk Cloud Platform™: 8.1.2012, 8.1.2101, 8.1.2103, 8.2.2104, 8.2.2105 (latest FedRAMP release), 8.2.2106, 8.2.2107
The impact of offline AMPs on the Update driver depends on:
- The number of offline AMPs in a cluster, either logically or physically
- The operational phase of the Update tasks when the offline AMP condition occurs
- Whether the target tables are fallback or nonfallback
The table below describes the impact of offline AMPs on Update driver tasks on fallback and nonfallback tables.
The current release of PISM is v1.2.
PISM is released as source code. To get started building PISM from sources, we suggest using Git to obtain the code:
git clone git://github.com/pism/pism.git pism
This checks out the master branch. Using git means that we can easily distribute corrections (patches) to fix bugs.
Alternatively you can get a .tar.gz or .zip archive containing a tagged source code release.
See the Installation Manual for more on building from source.
There is also a rapidly-evolving development version.
The API Service of a full node enables a read-only query API that is useful for many tools such as dashboards, wallets, and scripting in general.
The API Service is configured in ~/.akash/config/app.toml and can be enabled in the [api] section:
[api]
enable = true
By default, the service listens on port 1317, but this can also be changed in the [api] section of app.toml.
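Once the API service is enabled, you can issue read-only queries over plain HTTP. The sketch below uses Python's requests library against a local node; the endpoint path follows the Cosmos SDK gRPC-gateway convention and may differ between SDK versions, so treat it as an assumption and check your node's API reference.

import requests

BASE = "http://localhost:1317"  # assumed local node with the API service enabled

# Query the latest block (path may vary with the Cosmos SDK version).
resp = requests.get(f"{BASE}/cosmos/base/tendermint/v1beta1/blocks/latest", timeout=10)
resp.raise_for_status()
print(resp.json()["block"]["header"]["height"])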
Date: Wed, 20 May 2015 13:36:40 +0200 From: Nikos Vassiliadis <[email protected]> To: "[email protected] Questions" <[email protected]> Subject: CPU frequency doesn't drop below 1200MHz (like it used to) Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
Hi,

I just noticed that my CPU's frequency doesn't support dropping below 1200MHz. It used to be able to go down to 150MHz, if I am not mistaken. I'd like it to go down to 600MHz via powerd, like it used to go. This is a month's old 10-STABLE.

> [nik@moby ~]$ sysctl dev.cpu.0.freq_levels
> dev.cpu.0.freq_levels: 2400/35000 2300/32872 2200/31127 2100/29417 2000/27740 1900/26096 1800/24490 1700/22588 1600/21045 1500/19534 1400/18055 1300/16611 1200/15194

Thanks in advance for any ideas,
Nikos
The Formal Goodness of Agile Software Architecture – Part 2
In Part 1 of this short series I discussed an article that proposes the Intensional/Locality Thesis for formally distinguishing between what is Architecture and what is Design. Part 1 also covered how the Thesis has shaped my thinking about the role of the Architect and Software Architecture within an Agile project team.
As I was writing Part 1, specifically the bulleted lists of architectural specifications, it occurred to me that I had used a number of mechanisms on past projects to communicate Architecture. These mechanisms have included:
- Whiteboard conversations (my personal favorite)
- Word documents using text (my least favorite)
- Visio diagrams (OK, but lacks some rigor)
- UML models (another personal favorite)
It was towards the end of writing Part 1 that I decided that in fact there would be two posts – Part 1 describing how the Intensional/Locality Thesis reshaped my thinking and this post describing how to use UML models to communicate architectural specifications (in compliance with the Thesis) in an Agile way.
As I was thinking about the prep work for this post, I figured I would have to fire up Sparx and create a UML model to use as a case study. However, as I got prepared to do some modeling it occurred to me that I might already have something better.
A number of months back I created a sample UML model to illustrate one way that UML could be used for Software Architecture on an Agile project. The main goal of the sample was to illustrate how UML can be a powerful communication and documentation tool that enhances the agility of a team – how Architecture can be a point of leverage.
Intrigued by the reuse potential, I’ve decided to use this UML model to see how it stands up to the ideas discussed in Part 1. What follows in this post is the unaltered UML model I created, interpreted in the context of the Intension/Locality Thesis.
NOTE – There is a complete HTML representation of this UML model available from my SkyDrive. The link to the .ZIP is located at the bottom of this post.
The Conceptual View
The first item of interest in the UML model is the Conceptual View. The intent of this model is to communicate the overall architecture of the software at a relatively high level of abstraction. While not 100% correct from a UML standards perspective, this model leverages the UML Component diagram as the means of communication. The Conceptual View is depicted below:
When using the concepts in Part 1 to interpret the model depicted above, we can quickly identify that the Conceptual View declares a number of architectural specifications. We can also identify that these specifications adhere to the two main categories described in Part 1:
- “Traditional” architectural specifications (e.g., Pipes and Filters)
- Design specifications (aka Patterns) that have been elevated to architectural specifications
The first thing that is worthy to note is that the Conceptual View prominently lays out the first architectural specification – the use of a Layered Architecture.
Additionally, the Conceptual View incorporates a large number of constraints on the structure of the software:
- The Presentation Layer depends upon the Services Layer
- The Services Layer depends upon the Application Layer
- The Application Layer depends on a Data Access Layer (DAL)
- The Application Layer depends upon the following:
- The Exception Handling Application Block of the Enterprise Library
- The Logging Application Block of the Enterprise Library
- The DAL depends on the following:
- The Caching Application Block of the Enterprise Library
- NHibernate framework
That’s a lot of architectural goodness derived from a single diagram, but is typical of the large amount of leverage I’ve seen in crafting architectures that are replete with Design Patterns.
As you might imagine, a full-blown collection of architectural specifications would likely include guidance on how Repositories should be implemented using NHibernate, how the Exception Handling Application Block will be configured, how the Caching Application Block should be used, etc.
For the sake of brevity the UML model isn’t 100% complete, but it does illustrate a very powerful mechanism that UML has for modeling architectural specifications – UML patterns.
Repository Architectural Specification
As the Conceptual View illustrates, there is an architectural specification for the use of the “Repository Pattern”. As covered in Part 1 of the series, I tend to think that one of the duties of the Agile Architect is transforming appropriate Design Patterns (which are inherently design specifications under the Intensional/Locality Thesis) into architectural specifications. This transformation is attained mainly through moving the scope of a Design Pattern from local to non-local. As illustrated in the Conceptual View, this transformation is embodied in the following subsection of the model:
The problem with this architectural specification is that it isn’t very clear to both the Developers and Code Reviewers/Code Pairs what this means in terms of the constraints on the structure of the software - even if the team is familiar with Repositories and has read Evans. That’s not so say there’s no value in the architectural specification as it stands (there is definite value in communicating the architecture at a high level), it’s just that if the specification could be enhanced it would become an even greater point of leverage for the team.
This enhancement can be accomplished very easily using the UML concept of a Pattern (some additional info is available here). Rather than bore you with the all the UML standards goodness, I’ll just pull the depiction from the model of the “Repository Pattern”:
As discussed in the Eden and Kazman article on the Intension/Locality Thesis, both architectural and design specifications concentrate on the definition of constraints on the structure of the software – they just differ on the scope of the constraints (the locality). The UML pattern specification above clearly illustrates this concept in the definition of the roles that constitute the pattern (e.g., “Repository Factory”) and constraints on those roles (e.g., “Only domain entities that are Aggregate Roots can have Repositories”).
What is particularly noteworthy in model above is that the pattern roles define a further constraint in terms of the use of a standard code interface (“IRepository<T>”). We’ll see the implication of this below.
As I discussed in Part 1, notice how this pattern does not specify to a Developer on the team which exact Aggregates, Aggregate Roots, and Repositories need to be built (e.g., Contracts, Customers, Orders, etc.) – it only specifies how these items should be built.
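To make the distinction concrete in code, here is a minimal sketch of the idea. The original model targets .NET and an IRepository<T> interface, so treat this Python rendering purely as an illustration: the architecture contributes the generic repository contract and its constraints, while a Developer later supplies concrete aggregates such as Product.

from typing import Dict, Iterable, Optional, Protocol, TypeVar

T = TypeVar("T")

class Repository(Protocol[T]):
    """Architectural contract: the rough analogue of the article's IRepository<T>."""
    def get(self, key: str) -> Optional[T]: ...
    def add(self, entity: T) -> None: ...
    def find_all(self) -> Iterable[T]: ...

class Product:
    """Design-level work: one aggregate root a Developer chooses to build."""
    def __init__(self, sku: str, name: str) -> None:
        self.sku = sku
        self.name = name

class InMemoryProductRepository:
    """Satisfies Repository[Product] structurally, in compliance with the architecture."""
    def __init__(self) -> None:
        self._items: Dict[str, Product] = {}

    def get(self, key: str) -> Optional[Product]:
        return self._items.get(key)

    def add(self, entity: Product) -> None:
        self._items[entity.sku] = entity

    def find_all(self) -> Iterable[Product]:
        return list(self._items.values())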
While this enhancement of the Repository architectural specification provides powerful leverage to the team, UML has one last enhancement to increase the power of Agile Software Architecture. The following model illustrates a hypothetical example of software that is in compliance with the Repository architectural specification:
As I described in Part 1, I would argue this model exemplifies the difference between Architecture and Design in an Agile project. I would also argue that the model above illustrates the differences between the Architect and Developer roles on an Agile project. Specifically, the model above illustrates that the Agile Architect is responsible for the definition of the architectural constraints of the software (Repository in this case). Additionally, the model illustrates that another role on the team identifies that a design is required for the concept of “Product” that complies with the architecture of the software – the Developer role on Agile projects.
NOTE – As I’ve wrote previously, the best Architects also craft software. As such, it is important to understand that Architects may bounce back and forth between the Architect and Developer roles on an Agile project – defining architectural specifications one day and delivery software in compliance with the architecture the next.
A second, more complex, example will hopefully cement these ideas.
WCF Service Architectural Specification
The Conceptual View of the software architecture identifies the usage of the “WCF Service Pattern” as an architectural specification. Specifically, the following subsection of the Conceptual View defines this architectural specification:
As we saw previously with the Repository architectural specification, the WCF Service architectural specification has a UML pattern specification:
While the WCF Service architectural specification is quite a bit more complicated than the Repository architectural specification, it is worthy to note that it again clearly segments Architecture from Design and segments the Architect from the Developer on an Agile project. Another thing that is worthy to note in the model above is the multiplicity constraint within the “WCF Operation Contract” role in the pattern. Specifically, this constraint specifies that every instance of the WCF Service architectural specification must have at least one instance of the “WCF Operation Contract” role, but there is no upper bound on the number of “WCF Operation Contracts”.
As with the Repository architectural specification, the WCF Service architectural specification also included an example instantiation of a design that is compliant with architectural spec. The model below illustrates this example design:
There you have it! Some very powerful examples of how to leverage UML for Agile Architecture that is compliant with the Intension/Locality Thesis.
Reusable Code Assets
The above architectural specifications rely on some common reusable code assets as part of their constraints on the structure of the code. The following model illustrates the these assets from the UML model.
Don’t Fear UML & Software Architecture
Believe it or not, UML and Software Architecture are not antithetical to the use of Agile.
I’ve personally seen these tools and techniques succeed under both XP-like and Scrum-like (I use “-like” because I never seen a “pure” implementation – nor do I ever expect to ;-) methodologies. Typically architecture in these instances were a Sprint (or two 2-week XP iterations) at the start of the project, and then the architecture is iteratively addressed Agile-style throughout the project. Those of you who are in the know will recognize this as akin to some of the ideas behind RUP Elaboration.
I heartily advise giving this a try – your Developers will love you for it and it just might increase the fun you have as an Architect to boot!
Any feedback from Architects and Developers on this series would be greatly appreciated.
SkyDrive Files
You must configure individual nodes before you can add them to a cluster. After you install and cable a node in a rack unit and power it on, you can configure the node network settings using the per-node UI or the node terminal user interface (TUI). Ensure that you have the necessary network configuration information for the node before proceeding.
You cannot add a node with DHCP-assigned IP addresses to a cluster. You can use the DHCP IP address to initially configure the node in the per-node UI, TUI, or API. During this initial configuration, you can assign static network settings so that the node can be added to a cluster.
SSL vs. TLS
TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are protocols that provide data encryption and authentication between applications and servers sending data across an insecure network, such as, is there a practical difference between the two?
People used to believe websites using it). Even before the POODLE was set loose, the US Government had already mandated that SSL v3 not be used for sensitive government communications or for HIPAA-compliant communications. As a result of POODLE, SSL v3 is being disabled on websites all over the world and for many other services as well.
SSL v3.0 is effectively “dead” as a useful security protocol. Places that still allow its use for web hosting are placing their “secure websites” at risk; organizations that allow the use of SSL v3 to persist for other protocols (for example,. The newer TLS versions, if properly configured, prevent attacks and provide many stronger ciphers and encryption methods. SendGrid supports TLS v1.1 and higher.
Should You Be Using SSL or TLS?
The IETF deprecated both SSL 2.0 and 3.0 (in 2011 and 2015, respectively). Over the years the deprecated SSL protocols continue to reveal vulnerabilities (for example, POODLE, DROWN). Most modern browsers show a degraded user experience (for example, a line through the padlock or https in the URL bar, or security warnings) when they encounter a web server using the old protocols. For these reasons, you should disable SSL 2.0 and 3.0 in your server configuration, leaving only TLS protocols enabled. Keep in mind that it is your server configuration, not the certificates themselves, that determines which protocols are used.
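How you disable the legacy protocols depends on your server software, but most TLS stacks expose a minimum-version setting. As a small illustration (not specific to SendGrid), Python's standard library lets you build a server-side context that refuses anything older than TLS 1.2:

import ssl

# PROTOCOL_TLS_SERVER already excludes SSL 2.0/3.0; pinning the floor makes it explicit.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Load your own certificate and key before wrapping sockets (paths are placeholders).
# context.load_cert_chain(certfile="server.crt", keyfile="server.key")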
It’s likely you will continue to see certificates referred to as SSL Certificates because at this point that’s the term more people are familiar with, but we’re beginning to see increased usage of the term TLS across the industry. SSL/TLS is a common compromise until more people become familiar with TLS.
Are SSL and TLS Any Different Cryptographically?
In truth, the answer to this question is yes, but you can say the same about the historical versions of SSL 2 and 3 or the TLS versions 1 with 1.1, 1.2 or 1.3. SSL and TLS are both about the same protocol but because of the version differences, SSL 2 was not interoperable with version 3, and SSL version 3 not with TLS version 1. You could argue that Transport Layer Security (TLS) was just a new name for SSL v4 - essentially, we are talking about the same protocol.
Each newly released version of the protocol came, and will come, with improvements and new or deprecated features. SSL version one was never released; version two was, but had some significant flaws. SSL version 3 was a rewrite of version two (to fix these flaws), and TLS version 1 an improvement of SSL version 3. Since the release of TLS 1.0 the changes have been less significant, but never less important.
It’s worth noting here that SSL and TLS simply refer to the handshake that takes place between a client and a server. The handshake doesn’t actually do any encryption itself, it just agrees on a shared secret and type of encryption that is going to be used.
Additional Resources
Configuring ports with SendGrid
Go to Dashboard >> Appearance >> Customize >> Front page Sections >> Client
Hide/Show Testimonial – Check this setting box to hide or show the client section.
Client Items Content
Client Item (Add items) –
- Image – Upload a client image for the client section.
- Title – Enter text for the client title.
- Link – Enter a link URL and check this setting box to open the link in a new tab.
Thank you so much for thinking of contributing to the Human Connection project! It's awesome you're here, we really appreciate it. :-)
Instructions for how to install all the necessary software and some code guidelines can be found in our documentation.
To get you started we recommend that you join forces with a regular contributor. Please join our discord instance to chat with developers or just get in touch directly on an issue on either Github or Zenhub:
We also have regular pair programming sessions that you are very welcome to join! We feel this is often the best way to get to know both the project and the team. Most developers are also available for spontaneous sessions if the times listed below don't work for you – just ping us on discord.
We operate in two week sprints that are planned, estimated and prioritised on Zenhub. All issues are also linked to and synced with Github. Look for the
good first issue label if you're not sure where to start!
We try to discuss all questions directly related to a feature or bug in the respective issue, in order to preserve it for the future and for other developers. We use discord for real-time communication.
This is how we solve bugs and implement features, step by step:
1. We find an issue we want to work on, usually during the sprint planning but as an open source contributor this can happen at any time.
2. We communicate with the team to see if the issue is still available. (When you comment on an issue but don't get an answer there within 1-2 days try to mention @Human-Connection/hc-dev-team to make sure we check in.)
3. We make sure we understand the issue in detail – what problem is it solving and how should it be implemented?
4. We assign ourselves to the issue and move it to In Progress on Zenhub.
5. We start working on it in a new branch and open a pull request prefixed with [WIP] (work in progress) to which we regularly push our changes.
6. When questions come up we clarify them with the team (directly in the issue on Github).
7. When we are happy with our work and our PR is passing all tests we remove the [WIP] from the PR description and ask for reviews (if you're not sure who to ask there is @Human-Connection/hc-dev-team which pings all core developers).
8. We then incorporate the suggestions from the reviews into our work and once it has been approved it can be merged into master!
Every pull request needs to:
fix an issue (if there is something you want to work on but there is no issue for it, create one first and discuss it with the team)
include tests for the code that is added or changed
pass all tests (linter, backend, frontend, end-to-end)
be approved by at least 1 developer who is not the owner of the PR (when more than 10 files were changed it needs 2 approvals)
There are many volunteers all around the world helping us build this network and without their contributions we wouldn't be where we are today. Big thank you to all of you!
You can see the core team behind Human Connection on our website. On Github you will mostly run into our developers:
Robert (@roschaefer)
Matt (@mattwr18)
Wolle (@Tirokk)
Alex (@ogerly)
Alina (@alina-beck)
Martin (@datenbrei), our head of IT
and sometimes Dennis (@DennisHack), the founder of Human Connection
Times below refer to German Time – that's CET (GMT+1) in winter and CEST (GMT+2) in summer – because most Human Connection core team members are living in Germany.
Daily standup
every Monday–Friday 11:30
in the discord
Conference Room
all contributors welcome!
everybody shares what they are working on and asks for help if they are blocked
Regular pair programming sessions
every Monday, Wednesday and Thursday 15:00
the link will be posted in the discord chat and on the Agile Ventures website
all contributors welcome!
we team up and work on an issue together (often using Visual Studio live sharing sessions)
Open-Source Community Meeting
bi-weekly on Mondays 13:00 (when there is no sprint retrospective)
the link will be posted in the discord chat and on the Agile Ventures website
all contributors welcome!
Meet the team
every Monday 21:00 (at the moment only in German)
details here
all contributors and users of the network welcome!
users of the network chat with the Human Connection team and discuss current questions and issues
Sprint planning
bi-weekly on Tuesday 13:00
all contributors welcome (recommended for those who want to work on an issue in this sprint)
we select and prioritise the issues we will work on in the following two weeks
Sprint retrospective
bi-weekly on Monday 13:00
all contributors welcome (most interesting for those who participated in the sprint)
we review the past sprint and talk about what went well and what we could improve
We practise collective code ownership rather than strong code ownership, which means that:
developers can make contributions to other people's PRs (after checking in with them)
we avoid blocking because someone else isn't working, so we sometimes take over PRs from other developers
everyone should always push their code to branches so others can see it
We believe in open source contributions as a learning experience – everyone is welcome to join our team of volunteers and to contribute to the project, no matter their background or level of experience.
We use pair programming sessions as a tool for knowledge sharing. We can learn a lot from each other and only by sharing what we know and overcoming challenges together can we grow as a team and truly own this project collectively.
As a volunteer you have no commitment except your own self-development and your awesomeness by contributing to this free and open-source software project. Cheers to you!
There are so many good reasons to contribute to Human Connection
You learn state-of-the-art technologies
You build your portfolio
You contribute to a good cause
Now there is one more good reason: You can receive a small fincancial compensation for your contribution! :tada:
Before you can benefit from the Open-Source bounty program you must get one pull request approved and merged for free. You can choose something really quick and easy. What's important is starting a working relationship with the team, learning the workflow, and understanding this contribution guide. You can filter issues by 'good first issue' to get an idea where to start. Please join our community chat, too.
You can filter Github issues with the label bounty. These issues should have a second label €<amount> which indicates their respective financial compensation in Euros.
You can bill us after your pull request has been approved and merged into master. Payment methods are up to you: bank transfer or PayPal is fine for us. Just send us your invoice as a .pdf file attached to an e-mail once you are done.
Our Open-Source bounty program is a work-in-progress. Based on our future experience we will make changes and improvements. So keep an eye on this contribution guide.
Note
The concepts described in this article apply equally when an ExpressRoute circuit is created under Virtual WAN or outside of it.
Note
During a maintenance activity or in case of unplanned events impacting one of the connections, Microsoft will prefer to use AS path prepending to drain traffic over to the healthy connection. You will need to ensure the traffic is able to route over the healthy path when path prepend is configured from Microsoft, and that the required route advertisements are configured appropriately to avoid any service disruption.
Option 1:
NAT gets applied after splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. To meet the stateful requirements of NAT, independent NAT pools are used for the primary and the secondary devices. The return traffic will arrive on the same edge device through which the flow egressed.
If the ExpressRoute connection fails, the ability to reach the corresponding NAT pool is then broken. That's why all broken network flows have to be re-established either by TCP or by the application layer following the corresponding window timeout. During the failure, Azure can't reach the on-premises servers using the corresponding NAT until connectivity has been restored for either the primary or secondary connections of the ExpressRoute circuit.
Option 2:
A common NAT pool is used before splitting the traffic between the primary and secondary connections of the ExpressRoute circuit. It's important to make the distinction that the common NAT pool before splitting the traffic doesn't mean it will introduce a single-point of failure as such compromising high-availability.
The NAT pool is reachable even after the primary or secondary connection fails. That's why the network layer itself can reroute the packets and help recover faster following a failure.
- Terminating ExpressRoute BGP connections on stateful devices can cause issues with failover during planned or unplanned maintenance by Microsoft or your ExpressRoute provider. You should test your setup to ensure your traffic will fail over properly, and when possible, terminate BGP sessions on stateless devices.
docker-cmd
A hook which uses the docker command to deploy containers.
The hook currently supports specifying containers in the docker-compose v1 format. The intention is for this hook to also support the kubernetes pod format.
A dedicated os-refresh-config script will remove running containers if a deployment is removed or changed, then the docker-cmd hook will run any containers in new or updated deployments.
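For reference, a docker-compose v1 style definition has no top-level version or services keys, just named containers; a rough sketch with placeholder values is shown below. How such a map is wrapped into a Heat software deployment for the docker-cmd hook is not shown here.

web:
  image: nginx:latest
  ports:
    - "8080:80"
  volumes:
    - /var/www:/usr/share/nginx/html:ro
  restart: always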
Joint2D.breakTorque
The torque required to break this joint.
Hopsworks supports TensorFlow Serving, a flexible, high-performance serving system for machine learning models, designed for production environments.
The first step to serving your model is to export it as a servable model. This is typically done using the SavedModelBuilder after having trained your model. For more information please see:
Step 1.
The first step is to train and export a servable TensorFlow model to your Hopsworks project.
To demonstrate this we provide an example notebook which is also included in the TensorFlow tour.
In order to serve a TensorFlow model on Hopsworks, the .pb file and the variables folder should be placed in the Models dataset in your Hopsworks project. Inside the dataset, the folder structure should mirror what is expected by TensorFlow Serving.
Models └── mnist ├── 1 │ ├── saved_model.pb │ └── variables │ ├── variables.data-00000-of-00001 │ └── variables.index └── 2 ├── saved_model.pb └── variables ├── variables.data-00000-of-00001 └── variables.index
TensorFlow serving expects the model directory (in this case mnist) to contain one or more sub-directories. The name of each sub-directory is a number representing the version of the model, the higher the version, the more recent the model. Inside each version directory TensorFlow serving expects a file named saved_model.pb, which contains the model graph, and a directory called variables which contains the weights of the model.
Step 2.
To start serving your model, create a serving definition in the Hopsworks Model Serving service or using the Python API.
For using the Model Serving service, select the Model Serving service on the left panel (1) and then select on Create new serving (2).
Next click on the model button to select from your project the model you want to serve.
This will open a popup window that will allow you to browse your project and select the directory containing the model you want to serve. You should select the model directory, meaning the directory containing the sub-directories with the different versions of your model. In the example below we have exported two versions of the mnist model. In this step we select the mnist directory containing the two versions. The select button will be enabled (it will turn green) when you browse into a valid model directory.
After clicking select the popup window close and the information in the create serving menu will be filled in automatically. By default Hopsworks picks the latest available version to server. You can switch to a specific version using the dropdown menu. You can also change the name of the model, remember that model names should be unique in your project.
By clicking on Advanced you can access the advanced configuration for your serving instance. In particular you can configure the Kafka topic on which the inference requests will be logged into (see the inference for more information). By default a new Kafka topic is created for each new serving (CREATE). You can avoid logging your inference requests by selecting NONE from the dropdown menu. You can also re-use an existing Kafka topic as long as its schema meets the requirement of the inference logger.
At this stage you can also configure the TensorFlow Serving server to process the requests in batches.
Finally click on Create Serving to create the serving instance.
To use the Python API, import the serving module from the hops library (API-Docs-Python) and use the helper functions.
from hops import serving
from hops import model

model_path = "Resources/mnist/"
model.export(model_path, "mnist", model_version=2, overwrite=True)

model_path = "Models/mnist/2/"
if serving.exists("mnist"):
    serving.delete_serving("mnist")
serving.create_or_update_serving(model_path, "mnist", serving_type="TENSORFLOW", model_version=2)
serving.start_serving("mnist")
Step 3.
After having created the serving instance, a new entry is added to the list.
Click on the Run button to start the serving instance. After a few seconds the instance will be up and running, ready to start processing incoming inference requests.
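Once the instance is up you can send it inference requests. The snippet below is only a sketch: the helper name and the payload shape may differ between Hopsworks and hops library versions, so verify them against the serving API docs for your release.

from hops import serving
import numpy as np

# A single flattened 28x28 MNIST image as placeholder input data.
data = {"signature_name": "predict", "instances": [np.random.rand(784).tolist()]}

# Assumed helper from the hops library; check the exact name in the API docs.
response = serving.make_inference_request("mnist", data)
print(response)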
You can check the logs of the TensorFlow Serving instance by clicking on the logs button. This will bring you to the Kibana UI, from which you will be able to see if the serving instance managed to load the model correctly.
Step 4.
After a while your model will become stale and you will have to re-train it and export it again. To update your serving instance to serve the newer version of the model, click on the edit button. You don't need to stop your serving instance; you can update the model version while the serving server is running.
From the dropdown menu you can select the newer version (1) and click Update serving (2). After a couple of seconds the model server will be serving the newer version of your model.
Using Red Hat Advanced Cluster Security for Kubernetes you can view policy violations, drill down to the actual cause of the violation, and take corrective actions.
Red Hat Advanced Cluster Security for Kubernetes built-in policies identify a variety of security findings, including vulnerabilities (CVEs), violations of DevOps best practices, high-risk build and deployment practices, and suspicious runtime behaviors. Whether you use the default out-of-box security policies or use your own custom policies, Red Hat Advanced Cluster Security for Kubernetes reports a violation when an enabled policy fails.
You can analyze all violations in the Violations view and take corrective action.
To see discovered violations, select Violations from the left-hand navigation menu on the RHACS portal.
The Violations view shows a list of violations with the following attributes for each row:
Deployment: The name of the deployment.
Cluster: The name of the cluster.
Namespace: The namespace for the deployment.
Policy: The name of the violated policy.
Enforced: Indicates if the policy was enforced when the violation occurred.
Severity: Indicates the severity as Low, Medium, High, or Critical.
Categories: The policy categories.
Lifecycle: The lifecycle stages to which the policy applies: Build, Deploy, or Runtime.
Time: The date and time when the violation occurred.
Similar to other views:
You can select a column heading to sort the violations in ascending or descending order.
Use the filter bar to filter violations. See the Searching and filtering section for more information.
Select a violation in the Violations view to see more details about the violation.
When you select a violation in the Violations view, the Violation Details panel opens on the right.
The Violation Details panel shows detailed information grouped by multiple tabs.
The Violation tab of the Violation Details panel explains how the policy was violated. If the policy targets deploy-phase attributes, you can view the specific values that violated the policies, such as violation names. If the policy targets runtime activity, you can view detailed information about the process that violated the policy, including its arguments and the ancestor processes that created it.
You can use tags and comments to specify what is happening with violations to keep your team up to date. Comments allow you to add text notes to violations and tags allow you to categorize your violations.
Comments allow you to add text notes to violations, so that everyone in the team can check what is happening with a violation.
To add and remove comments, you need a role with write permission for the resource you are modifying. For example, to add comments on violations, your role must have write permission for the Alert resource.
To delete comments from other users, you need a role with write permission for the AllComments resource.
Click New in the Violations Comments section header.
Enter your comment in the comment editor. You can also add links in the comment editor. When someone clicks on the link in a comment, the linked resource opens in a new tab in their browser.
Click Save.
All comments are visible under the Violations Comments section, and you can edit and delete comments by selecting Edit or Delete icon for a specific comment.
You can use custom tags to categorize your violations. Then you can filter the Violations view to show violations for selected tags (
Tag attribute).
To add and remove tags, you need a role with
write permission for the resource you are modifying. For example, to add tags on violations, your role must have
write permission for the
Alert resource.
To delete tags from other users, you need a role with
write permission for the
AllComments resource.
Select the drop-down menu in the Violation Tags section. Existing tags appear as a list (up to 10).
Click on an existing tag or enter a new tag and then press Enter. As you enter your query, Red Hat Advanced Cluster Security for Kubernetes automatically displays relevant suggestions for the existing tags that match.
You can add more than one tag for a violation. All tags are visible under the Violation Tags section and you can remove tags by clicking on the Remove icon for a specific tag.
The Enforcement tab of the Details panel displays an explanation of the type of enforcement action that was taken in response to the selected policy violation.
The Deployment tab of the Details panel displays details of the deployment to which the violation applies.
The overview section lists the following information:
Deployment ID: The alphanumeric identifier for the deployment.
Updated: The time and date when the deployment was updated.
Cluster: The name of the cluster where the container is deployed.
Namespace: The unique identifier for the deployed cluster.
Deployment Type: The type of the deployment.
Replicas: The number of the replicated deployments.
Labels: The labels that apply to the selected deployment.
Annotations: The annotations that apply to the selected deployment.
Service Account: The name of the service account for the selected deployment.
The container configuration section lists the following information:
Image Name: The name of the image for the selected deployment.
Resources:
CPU Request (cores): The number of cores requested by the container.
Memory Request (MB): The memory size requested by the container.
Volumes:
Name: The name of the location where the service will be mounted.
Source: The data source path.
Destination: The path where the data is stored.
Type: The type of the volume.
Secrets: Secrets associated with the selected deployment.
Lists whether the container is running as a privileged container.
Privileged:
true if it is privileged.
false if it is not privileged.
The Policy tab of the Details panel displays details of the policy that caused the violation.
The policy details section lists the following information:
Id: The numerical identifier for the policy.
Name: The name of the policy.
Description: A detailed explanation of what the policy alert is about.
Rationale: Information about the reasoning behind the establishment of the policy and why it matters.
Remediation: Suggestions on how to fix the violation.
Enabled: Indicates if the policy is enabled.
Categories: The policy category of the policy.
Lifecycle Stage: Lifecycle stages that the policy belongs to,
Build,
Deploy, or
Runtime.
Severity - The risk level for the violation.
Lists the policy criteria for the policy. | https://docs.openshift.com/acs/operating/respond-to-violations.html | 2021-09-16T16:14:39 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.openshift.com |
Logging requests using server access logging
Server access logs don't record information about wrong-region redirect errors for Regions that launched after March 20, 2019. Wrong-region redirect errors occur when a request for an object or bucket is made outside the Region in which the bucket exists.
How do I enable log delivery?
To enable log delivery, perform the following basic steps. For details, see Enabling Amazon S3 server access logging.
Provide the name of the target bucket. This bucket is where you want Amazon S3 to save the access logs as objects. Both the source and target buckets must be in the same Amazon Web Services Region and owned by the same account.
You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. But for simpler log management, we recommend that you save access logs in a different bucket.
When your source bucket and target bucket are the same bucket, additional logs are created for the logs that are written to the bucket. This might not be ideal because it could result in a small increase in your storage billing. In addition, the extra logs about logs might make it harder to find the log that you are looking for. If you choose to save access logs in the source bucket, we recommend that you specify a prefix for all log object keys so that the object names begin with a common string and the log objects are easier to identify.
Key prefixes are also useful to distinguish between source buckets when multiple buckets log to the same target bucket.
(Optional) Assign a prefix to all Amazon S3 log object keys. The prefix makes it simpler for you to locate the log objects. For example, if you specify the prefix value
logs/, each log object that Amazon S3 creates begins with the
logs/prefix in its key.
logs/2013-11-01-21-32-16-E568B2907131C0C0
The key prefix can also help when you delete the logs. For example, you can set a lifecycle configuration rule for Amazon S3 to delete objects with a specific key prefix. For more information, see Deleting Amazon S3 log files.
(Optional) Set permissions so that others can access the generated logs. By default, only the bucket owner always has full access to the log objects. For more information, see Identity and access management in Amazon S3.
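If you prefer to script these steps rather than use the console, the boto3 sketch below enables server access logging on a source bucket and adds a lifecycle rule that expires old log objects. The bucket names and retention period are placeholders, and the target bucket must already allow the logging service to write to it, as described above.

import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-source-bucket"   # placeholder bucket names
TARGET_BUCKET = "my-log-bucket"

# Enable server access logging; log object keys will start with the logs/ prefix.
s3.put_bucket_logging(
    Bucket=SOURCE_BUCKET,
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": TARGET_BUCKET,
            "TargetPrefix": "logs/",
        }
    },
)

# Optionally expire old log objects with a lifecycle rule keyed on the same prefix.
s3.put_bucket_lifecycle_configuration(
    Bucket=TARGET_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-access-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            }
        ]
    },
)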
Log object key format
Amazon S3 uses the following object key format for the log objects it uploads in the target bucket:
TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString/
In the key,
YYYY,
mm,
DD,
HH,
MM, and
SS are the digits of the year, month, day, hour, minute,
and seconds (respectively) when the log file was delivered. These dates and times
are in
Coordinated Universal Time (UTC).
A log file delivered at a specific time can contain records written at any point before that time. There is no way to know whether all log records for a certain time interval have been delivered or not.
The
UniqueString component of the key is there to prevent overwriting of
files. It has no meaning, and log processing software should ignore it.
The trailing slash / is required to denote the end of the prefix.
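Because the key format is fixed, the delivery timestamp can be recovered from a log object key, for example to group logs by day before processing. The sketch below relies only on the format described above; the logs/ prefix is the example prefix used earlier and should be adjusted to your own.

import re
from datetime import datetime, timezone

# TargetPrefix + YYYY-mm-DD-HH-MM-SS + "-" + UniqueString
KEY_RE = re.compile(r"^logs/(\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2})-[0-9A-F]+/?$")

def delivery_time(key):
    """Return the UTC delivery timestamp encoded in a log object key."""
    match = KEY_RE.match(key)
    if not match:
        raise ValueError("not an access log key: %r" % key)
    return datetime.strptime(match.group(1), "%Y-%m-%d-%H-%M-%S").replace(tzinfo=timezone.utc)

print(delivery_time("logs/2013-11-01-21-32-16-E568B2907131C0C0"))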
How are logs delivered?
Amazon S3 periodically collects access log records, consolidates the records in log files, and then uploads log files to your target bucket as log objects. If you enable logging on multiple source buckets that identify the same target bucket, the target bucket will have access logs for all those source buckets. For more information, see Access control list (ACL) overview. It is rare to lose log records, but server logging is not meant to be a complete accounting of all requests.
It follows from the best-effort nature of the server logging feature that the usage reports available at the Amazon portal (Billing and Cost Management reports on the Amazon Web Services Management Console) might include one or more access requests that do not appear in a delivered server log.
Bucket logging status changes take effect over time
Changes to the logging status of a bucket take time to actually affect the delivery of log files. For example, if you enable logging for a bucket, some requests made in the following hour might be logged, while others might not. If you change the target bucket for logging from bucket A to bucket B, some logs for the next hour might continue to be delivered to bucket A, while others might be delivered to the new target bucket B. In all cases, the new settings eventually take effect without any further action on your part.
For more information about logging and log files, see the following sections:
Topics | https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/ServerLogs.html | 2021-09-16T16:42:44 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.amazonaws.cn |
Respond to deletion of a node revision.
This hook is invoked from node_revision_delete() after the revision has been removed from the node_revision table, and before field_attach_delete_revision() is called.
Parameters
Node $node: The node revision (node object) that is being deleted.
Related topics
File
- core/
modules/ node/ node.api.php, line 444
- Hooks provided by the Node module.
Code
function hook_node_revision_delete(Node $node) {
  // Remove any custom data keyed by the revision ID (vid) that is being deleted.
  db_delete('mytable')
    ->condition('vid', $node->vid)
    ->execute();
}
Remove specific blobs and virtual directories by putting their relative paths (NOT URL-encoded) in a file:

azcopy rm "https://[account].blob.core.windows.net/[container]/[path/to/parent/dir]" --recursive=true --list-of-files=/usr/bar/list.txt

file content:
dir1/dir2
blob1
blob2
--exclude-path string Exclude these paths when removing. This option does not support wildcard characters (*). Checks relative path prefix. For example:
myFolder;myFolder/subDirName/file.pdf
--exclude-pattern string Exclude files where the name matches the pattern list. For example:
*.jpg;
exactName
--force-if-read-only When deleting an Azure Files file or folder, force the deletion to work even if the existing object has its read-only attribute set.
--from-to string Optionally specifies the source destination combination. For Example: BlobTrash, FileTrash, BlobFSTrash
--help help for remove.
--include-path string Include only these paths when removing. This option does not support wildcard characters (*). Checks relative path prefix. For example:
myFolder;myFolder/subDirName/file.pdf
--include-pattern string Include only files where the name matches the pattern list. For example:
*.jpg;
exactName
--recursive Look into subdirectories recursively when syncing between directories. | https://docs.microsoft.com/en-au/azure/storage/common/storage-ref-azcopy-remove?toc=/azure/storage/blobs/toc.json | 2021-09-16T17:27:48 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.microsoft.com |
Quickstart: Building your first static site with Azure Static Web Apps
Azure Static Web Apps publishes a website by building an app from a cloned repository in the editor.
Create a static web app
Inside Visual Studio Code, select the Azure logo in the Activity Bar to open the Azure extensions window.
Note
You are required to sign in to Azure and GitHub in Visual Studio Code to continue. If you are not already authenticated, the extension will prompt you to sign in to both services during the creation process.
Under the Static Web Apps label, select the plus sign.
Note
The Azure Static Web Apps Visual Studio Code extension streamlines the creation process by using a series of default values. If you want to have fine-grained control of the creation process, open the command palette and select Azure Static Web Apps: Create Static Web App... (Advanced).
The command palette opens at the top of the editor and prompts you to select a subscription name.
Select your subscription and press Enter.
Next, name your application.
Type my-first-static-web-app and press Enter.
Select the presets that match your application type.
Enter ./src as the location for the application files and press Enter.
Enter ./src as the build output location and press Enter.
Once the app is created, a confirmation notification is shown in Visual Studio Code.
As the deployment is in progress, the Visual Studio Code extension reports the build status to you.
Once the deployment is complete, you can navigate directly to your website.
To view the website in the browser, right-click on the project in the Static Web Apps extension, and select Browse Site.
'Create repository from template'], dtype=object)
array(['media/getting-started/extension-delete.png', 'Delete app'],
dtype=object) ] | docs.microsoft.com |
23.2. Configuring the Processing Framework

Rendering styles can be configured individually for each algorithm and each of its outputs. Just right-click on the name of the algorithm in the toolbox and select Edit rendering styles for outputs. A dialog like the one shown in the following figure will appear.

Select the style file (.qml) that you want for each output and press OK.
Get up and running with the basics of the Unreal Editor.
Starting Out
Unreal Engine 4 For Unity Developers
Translate your Unity knowledge into UE4 so you can get up to speed quickly.
In Unreal Editor, the scenes in which you create your game experience are generally referred to as Levels. You can think of a level as a 3D environment into which you place a series of objects and geometry to define the world your players will experience. Any object that is placed in your world, be it a light, a mesh, or a character, is considered to be an Actor. Technically speaking, an Actor is a programming class used within the Unreal Engine to define an object that has 3D position, rotation, and scale data. Think of an Actor as any object that can be placed in your levels.
Editor Viewports
The
Creating levels begins by placing items in a map inside Unreal Editor. These items may be world geometry, decorations in the form of Brushes, Static Meshes, lights, player starts, weapons, or vehicles. Which items are added when is usually defined by the particular workflow used by the level design team.
The Blueprint Visual Scripting system in Unreal Engine is a complete gameplay scripting system based on the concept of using a node-based interface to create gameplay elements from within Unreal Editor. As with many common scripting languages, it is used to define object-oriented (OO) classes or objects in the engine. As you use UE4, you'll often find that objects defined using Blueprint are colloquially referred to as just "Blueprints."
This system is extremely flexible and powerful as it provides the ability for designers to use virtually the full range of concepts and tools generally only available to programmers. In addition, Blueprint-specific markup available in Unreal Engine's C++ implementation enables programmers to create baseline systems that can be extended by designers.
'image_0.png'], dtype=object) ] | docs.unrealengine.com |
Event Organiser Pro allows you to quickly and easily e-mail selected bookees from the admin screen.
On the bookings admin page, the following placeholders can be used when composing the e-mail:
Bookee details
%display_name%– Display name of the bookee as set in their profile
%first_name%– The bookee’s first name (if provided)
%last_name%– The bookee’s last name (if provided)
%username%– The bookee’s username
%bookee_email%– The bookee’s email
Booking details
%booking_reference%– The booking reference number
%tickets%– A table of tickets included in the booking
%booking_date%– The date the booking was made
%booking_amount%– Total amount of the booking
%ticket_quantity%– Total number of tickets in the booking
%transaction_id%– The transaction ID as specified by the payment gateway (if applicable).
%booking_admin_url%– The url to the edit booking page. Note this page is for users who manage bookings; bookees will not be able to access it.
Event details
%event_date%– Date of the event they booked
%event_title%– Name of the event they booked
%event_url%– Event url
Venue details
%event_venue%– Venue name
%event_venue_address%– Venue address
%event_venue_city%– Venue city
%event_venue_state%– Venue state/province
%event_venue_postcode%– Venue postcode
%event_venue_country%– Venue country
%event_venue_url%– Venue url
Event organiser details
- `
Custom field data
%form_submission%– The data the bookee submitted via the booking form's custom fields
Misc
%site_name%– Name of the site, as set in Settings > General
%site_url%– Url of the site, as set in Settings > General. | http://docs.wp-event-organiser.com/bookings/emailing-attendees/ | 2021-02-24T22:44:18 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.wp-event-organiser.com |
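As an illustration only, a confirmation e-mail body might combine several of the placeholders above as follows (the wording is an example, not something provided by the plugin):

Hi %first_name%,

Thank you for your booking (reference %booking_reference%) for %event_title% on %event_date% at %event_venue%, %event_venue_city%.

%tickets%

Total paid: %booking_amount%

See you there,
%site_name% (%site_url%)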
Changelog for the firmware of ID-engine XE
(firmware ID 1094)
Here, you can find the history of all released firmware versions.
- Production version: 1.02.00
This is the version with which new readers are currently shipped.
- Versions marked as STABLE are official versions available for download.
- All other versions are only given to individual customers on request, e.g. to fix a very specific bug. Any changes in these versions will be included in one of the next higher stable versions.
1.02.00 (2019-10-22) | STABLE
Features
- Device port of ID-engine XE reader can be configured/disabled via Protocols / Network / NicPrinterPortSpeedDuplexMode.
- Added Sys.GetFwCrc command.
- VHL.Setup and VHL.Write support for inter-industry cards added.
- Sys.GetFeatures returns supported host protocol encryption (AES, PKI) now.
- Updated LEGIC SM-4200 firmware to OS V4.3.1.0.
- Added new configuration value VhlCfg / File / LegicApplicationSegmentList, which is the basis for a Legic VHL file definition. It replaces the former value VhlCfg / File / LegicSegmentListLegacy, which is marked as deprecated now as it isn't supported by our SDK.
- Added support for reading iClass card numbers with ID-engine XE Legic readers equipped with an SM-4200M (ISO 15693-based iClass cards only - no iClass SE/Seos).
- Added new Legic low-level command dispatcher, which currently only contains the transparent Command for direct communication with the SM-4x00. This dispatcher replaces the Lga dispatcher, which is marked as deprecated now as it isn't supported by our SDK.
- Added 3 new ISO 15693 low-level commands: TransparentCmd, WriteMultipleBlocks, and ReadMultipleBlocks. These commands replace the commands 0x2105, 0x2106, and 0x2120, which were marked as deprecated as they aren't supported by our SDK.
- Added VHL.ResolveFilename, which allows you to address VHL files via name instead of index. We recommend following this approach for new projects.
- Readers now scan for BALTECH ConfigCards even when not adding ISO14443/A to Project / VhlSettings / ScanCardFamilies explicitly. In the latter case, ISO14443/A cards aren't processed by the Autoread Rules at all (it is only checked whether they are BALTECH ConfigCards and, if so, the configuration is transferred to the reader). For the rare case that non-ISO14443/A ConfigCards are needed, a new configuration setting was introduced: "Project.VhlSettings.ConfCardFamily"
- Ultralight low-level commands added
Breaking Changes
- Deprecated ISO15693 commands (command code
0x20XX) were removed. To execute low-level ISO15693 commands, use the Iso15 command group instead (as recommended for all current products).
Bug fixes
- ISO 15693 block length was restricted to 16 bytes. Now it supports up to 64 bytes.
- HTG 1/S write command contained an error.
- On presentation of multiple ISO 14443A cards, sporadic reading problems could occur. This affected LEGIC readers only.
- Sys.GetFeatures didn't return the feature IDs of EM4205 and EM4450.
- VHL.IsSelected didn't work with ioProx, Pyramid, and ISO15693 cards that had a block length of 32 bytes.
- 13.56 MHz card types couldn't be read anymore after executing security-based host commands.
- LED signalization during BALTECH ConfigCard presentation wasn't correct.
- HID iClass card presentation could cause the reader to reboot.
- Sys.GetFeatures could indicate MIFARE Classic/Plus support for LEGIC readers erroneously.
- Problems occurred when reading ISO14443-4-compatible cards with an ID-engine XE LEGIC reader.
- VHL Select didn't scan for 125 kHz cards after card analysis was done in ID-engine Explorer.
- GProx is supported for any bit length now.
- AWID supports any bit length now.
- MIFARE DESFire cards with DES crypto may be accessed now.
- Reader configuration could be destroyed under certain circumstances.
- Reader could enter a state without reading any ISO 14443 or ISO 15693 compatible cards any longer.
- The delay after a TCP connection request failure configured with Protocols / BrpTcp / TcpConnectTrialMinDelay and Protocols / BrpTcp / TcpConnectTrialMaxDelay was not applied correctly.
1.01.00 (2018-10-02)
- Initial release | https://docs.baltech.de/release-info/changelog-firmware-1094.html | 2021-02-24T23:18:03 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.baltech.de |
Maximum Builds per Page
This configuration setting allows you to limit the number of builds that are shown on a CI Builds panel, defaults to 100.
User Id vs Display Name
Some features of the integration require user information; for example, triggering a build sends the name of the user who triggered it to the Jenkins site. This setting allows you to choose whether the user's display name or ID is used.
Data Retention Period
The retention period, in months, for which jobs and builds that are marked as deleted will be kept in the build cache; defaults to 12.
Jobs are only cleaned up from the build cache if the job itself is marked as deleted, and the latest build in the cache is older than the retention period. Builds are cleaned up from the build cache if the build is marked as deleted, and the timestamp of the build is older than the retention period.
Date Format
This configuration setting allows you to configure the format used to make date and time strings human readable. Documentation on date / time formats can be found online.
Adjusting a Ruler Drawing Guide
The Ruler guide is simply a ruler. It has a single axis that you can position and rotate. You can draw along this axis as if you were drawing against a ruler.
NOTE: To add a Ruler guide, see the documentation on adding drawing guides.

In the Guides view, select the Ruler guide that you wish to make adjustments to.
The guide appears in the Camera or Drawing view.
- Do one of the following:
To reposition the guide without rotating it, click and drag on its offset handle.
TIP: You can press and hold the Shift key to only move the guide horizontally or vertically relative to the camera angle.
To rotate the guide, click and drag on one of its rotation handles. It will rotate around the opposite rotation handle.
TIPS:
- You can press and hold the Shift key to make the angle of the ruler snap to the nearest multiple of 15°.
- You can press and hold the Alt key to make the guide rotate around its centre.
dtype=object) ] | docs.toonboom.com |
Backtesting
The beauty of working with a trading bot lies in the fact that you can backtest your strategies and configurations. That way, you don't need to simulate your trading to see if it's going to work out or not.
Professional algorithmic traders continuously backtest their strategies. They have a strategy running live on their investments that they've tested on the current market, and they keep creating and testing new strategies. The markets change, and so do their strategies; the backtester is therefore essential.
The backtesting tool tests your strategy in combination with your configuration. It scans when your Hopper would've bought and what the result would've been with your current setup. It's a perfect way to analyze whether your Stop-Loss, Trailing Stop-Loss, and other settings are set correctly.
We recommend first testing your Strategy Builder strategies so that you have an excellent strategy to work with. You can then start playing around with different configurations.
Your First Backtest
Since this is your first time using the Backtesting tool, let's test the template that we've installed when we signed up at Cryptohopper.
Select the currency you want to backtest and click "Load Existing Config". All settings are automatically adjusted, as you can see! Select the period you want to backtest the config on. We recommend not setting the period too long; otherwise, the analysis will take a very long time. Also, remember what the professionals do? They use smaller time frames so that their strategies are more effective.
As you can see this needs some adjustments. Alter your configuration until your max profit is acceptable, and your sells with a loss aren't higher than your successful sells. When the test results look good, click "Deploy this Configuration". This will change your config to the settings you've set in the backtesting tool.
Your Backtest history shows the backtests you've done, so you can easily choose and deploy the most successful test. Please keep in mind that the backtester checks your indicator values every 5 minutes. The whole checking cycle of your strategy in a real funds hopper can therefore give different results. When using small candle sizes, the strategy can skip some candles. Below, we have listed the checking times of our subscriptions, including interval (break) time:
- Pioneer: Only manual trading, therefore indicators don't apply,
- Explorer: 15 to 20 minutes,
- Adventurer: 6 to 12 minutes,
- Hero: 2 to 6 minutes.
- Paper trading hopper: 10 minutes
The "Config Finder" helps you automate this part, but only works with the "Multiple TA Factors" strategy and not with strategies downloaded from the marketplace or the strategy builder. Edit the Multiple TA Factors strategy first in your config -> Strategy -> Select Multiple TA Factors. Do this before you start using the "Config Finder", otherwise you will keep getting errors that your Hopper couldn't find a target.
Are you satisfied with what your backtester found? Deploy it immediately and even share it with your friends! The configurations your backtester found are saved in the "Best Configurations" tab.
Keep testing your configurations and test your results with paper trading. It's the safest way of starting your adventure in the world of automated trading. Because we all know: Past performance does not guarantee future results. Trade safely! | https://docs.cryptohopper.com/docs/en/Backtesting/backtesting/ | 2021-02-24T22:52:19 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/backtesting/overviewbacktesting.jpg',
'backtesting test testing back TA technical analysis signalers portfolio management manager free blockfolio delta automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/backtesting/resultbacktesting.jpg',
'backtesting test testing back TA technical analysis signalers portfolio management manager free blockfolio delta automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/backtesting/historybacktesting.jpg',
'backtesting test testing back TA technical analysis signalers portfolio management manager free blockfolio delta automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object)
array(['https://s3.amazonaws.com/cdn.cryptohopper.com/images/documentation/backtesting/configfinderbacktesting.jpg',
'backtesting test testing back TA technical analysis signalers portfolio management manager free blockfolio delta automated automatic crypto cryptocurrency bitcoin ethereum trading bot platform cryptohopper'],
dtype=object) ] | docs.cryptohopper.com |
Using the ITK Adapter Kit
The ITK Adapter Kit is a set of components that provide a fast track to ITK-compliance for both legacy and new applications. Use this to communicate between ITK-accredited applications and legacy applications.
The InterSystems ITK Adapter Kit includes the following items, which you can combine as meets your needs:
An ITK business service (EnsLib.ITK.AdapterKit.Service.SOAPService), which receives SOAP messages from ITK-accredited endpoints.
An ITK business operation (EnsLib.ITK.AdapterKit.Operation.SOAPOperation), which sends SOAP messages to ITK-accredited endpoints.
DTL transformations, routers, and business processes to convert ITK messages to native application formats and vice versa.
This includes a set of classes in the package EnsLib.ITK.AdapterKit.Process.
You can add these to an existing production or you can create a new production to contain them. | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=EITK_ADAPTERKIT | 2021-02-25T00:19:44 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
The Rate Limiting Advanced plugin for IAM is a re-engineered version of the incredibly popular IAM Rate Limiting plugin, with greatly enhanced configuration options and performance.

Enabling the plugin on a Service

With a database

Configure this plugin on a Service with:

$ curl -X POST{service}/plugins \ --data "name=rate-limiting-advanced"
Without a database
Configure this plugin on a Service by adding this section to your declarative configuration file:
plugins: - name: rate-limiting-advanced service: {service} config:
In both cases,
{service} is the
id or
name of the Service that this plugin configuration will target.
Enabling the plugin on a Route
With a database
Configure this plugin on a Route with:
$ curl -X POST{route}/plugins \ --data "name=rate-limiting-advanced"
Without a database
Configure this plugin on a Route by adding this section to your declarative configuration file:
plugins: - name: rate-limiting-advanced route: {route} config:
In both cases,
{route} is the
id or
name of the Route that this plugin configuration will target.
Enabling the plugin on a Consumer
With a database
You can use the endpoint to enable this plugin
on specific Consumers:
$ curl -X POST{consumer}/plugins \ --data "name=rate-limiting-advanced" \
Without a database
Configure this plugin on a Consumer by adding this section to your declarative configuration file:
plugins: - name: rate-limiting-advanced consumer: {consumer} config:

Read the Plugin Reference and the Plugin Precedence sections for more information.
Parameters
Here's a list of all the parameters which can be used in this plugin's configuration:
Note: Redis configuration values are ignored if the
cluster strategy is used.
Note: PostgreSQL 9.5+ is required when using the
cluster strategy with
postgres as the backing IAM cluster data store.
Note: The
dictionary_name directive was added to prevent the usage of the
kong shared dictionary, which could lead to
no memory errors
Notes
An arbitrary number of limits/window sizes can be applied per plugin instance. This allows users to create multiple rate limiting windows (e.g., rate limit per minute and per hour, and/or per any arbitrary window size); because of a limitation with IAM's plugin configuration interface, each nth limit will apply to each nth window size. For example:
$ curl -i -X POST{service}/plugins \ --data name=rate-limiting-advanced \ --data config.limit=10,100 \ --data config.window_size=60,3600 \ --data config.sync_rate=10
This will apply rate limiting policies, one of which will trip when 10 hits have been counted in 60 seconds, or when 100 hits have been counted in 3600 seconds. For more information, please see Enterprise Rate Limiting Library. | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/apimgr/plugins/rate-limiting-advanced.html | 2021-02-25T00:17:12 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
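To make the combined behavior concrete, the short Python sketch below simulates the same two windows outside of IAM: a request is rejected as soon as either the 10-per-60-seconds or the 100-per-3600-seconds window is exhausted. It only illustrates the semantics of multiple windows; it is not how the plugin is implemented internally (the plugin tracks windows in its backing store and can synchronize counters across nodes via the sync_rate and strategy settings).

import time
from collections import deque

# (limit, window size in seconds) pairs, matching the example configuration above.
LIMITS = [(10, 60), (100, 3600)]
hits = [deque() for _ in LIMITS]

def allow(now=None):
    """Return True if a request may proceed, False if any window is exhausted."""
    now = time.time() if now is None else now
    for (limit, window), window_hits in zip(LIMITS, hits):
        # Drop hits that have fallen out of this window.
        while window_hits and now - window_hits[0] >= window:
            window_hits.popleft()
        if len(window_hits) >= limit:
            return False
    for window_hits in hits:
        window_hits.append(now)
    return True

# The 11th request within the same minute is rejected by the 10/60s window.
print([allow(t) for t in range(11)])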
Configuring Distributed Systems
An ECP application consists of one or more ECP data server systems — data providers — distributing to one or more ECP application server systems — data consumers. The primary means of configuring an ECP application is using the ECP Settings page of the Management Portal (System Administration > Configuration > Connectivity > ECP Settings).
Once you have decided how to distribute your data, configuring an ECP application is very straightforward:
Enable each system that provides data as an ECP data server. See the Configuring an ECP Data Server section for instructions.
If you are using Security, see the Managing ECP Privileges section for a discussion on how resources, roles, and privileges are managed in an ECP configuration.
Specify each system that requests data as an ECP application server for each data server with which it wishes to communicate. See the Configuring an ECP Application Server section for instructions.
In addition, configure each ECP application server system so that it can see remote data in the defined ECP data servers. See the Configuring ECP Remote Data Access section for instructions.
ECP shares the buffer pool with the local instance of Caché; therefore, InterSystems recommends allocating additional buffers to accommodate ECP. See the Memory Use on Large ECP Systems section of the “Developing Distributed Applications” chapter of this guide for details.
A system operating as an ECP data server can simultaneously act as an ECP application server, and vice versa. You may configure your ECP application and data servers in any order; you do not need to enable an ECP data server before defining an application server.
Configuring an ECP Data Server
To configure a system as an ECP data server, you must first enable the ECP service from the Services page of the Management Portal (System Administration > Security > Services). Click %Service_ECP, select the Service enabled check box, and click Save. This is the only configuration setting required to use this system as an ECP data server.
Alternatively, from the ECP Settings page, click Edit next to The ECP service is Disabled to navigate to the same Edit Service page. When you click Save, you return to the ECP Settings page.
To see a list of ECP application servers that have been configured to connect to this data server, click the Application Servers button on the ECP Settings page.
For a detailed explanation of Caché services, see the “Services” chapter of the Caché Security Administration Guide.
Update the Maximum number of application servers setting to specify the maximum number of application servers that can possibly access this data server simultaneously. Caché allocates a limited number of application server nodes. Increase the default value of 1 up to a maximum of 254 to avoid a system restart, which is required when the number of connections becomes greater than the number of allocated nodes.
If you increase the maximum number of application server, you must restart Caché.
The ECP data server is now ready to accept connections from valid ECP application servers.
You may wish to restrict access to the data server. See the following sections for ways to do this:
Restricting ECP Application Server Access
You can restrict which systems can act as ECP application servers for an ECP data server system by performing the following steps:
From the Services page, click %Service_ECP.
In the Allowed Incoming Connections box, click Add and enter a single address (for example, 192.9.202.55 or mycomputer.myorg.com) or a range of addresses (for example, 18.61.202–210.* or 18.68.*.*).
If you enter IP addresses in the Allowed Incoming Connections list, the ECP data server only accepts incoming ECP connections from application servers whose IP is in the list. If the list is empty, any application server can connect to this system if the ECP service is enabled.
After you add an IP address, it appears in the list of Allowed Incoming Connections with options to Delete the address from the list and Edit the Roles of the connection.
This process of managing roles on ECP data and application servers is part of Caché security. For details on how Caché roles and privileges work in general see the “Roles” chapter of the Caché Security Administration Guide. The following section details how these features work with ECP.
Specifying ECP Privileges and Roles
For each specified IP address or range of addresses, click Edit to display the Select Roles area that allows you to specify the roles associated with the connection from the IP address. By default, the connection holds the %All role. If you specify one or more other roles, these roles are the only roles that the connection holds. Hence, a connection from an IP address with the %Operator role has only the privileges associated with that role, while a connection from a different IP address with no associated roles (and therefore %All) has all privileges.
To specify the roles associated with an IP address:
Select roles from those listed under Available and click the right arrow to add them to the Selected list.
To remove roles from the Selected list, click them and then click the left arrow.
To add all roles to the Selected list, click the double right arrow; to remove all roles from the Selected list, click the double left arrow.
Click Save to associate the roles with the IP address.
The Managing ECP Privileges section discusses how Caché manages privileges within an ECP configuration.
Managing ECP Privileges
The following discussion assumes that resources and roles refer to the same assets on each machine. To be granted access to a resource on the ECP data server, the role held by the process on the application server and the role set for the ECP connection on the data server must both include permissions for the same resource.
By default, Caché grants the ECP data server the %All privilege when the data server runs on behalf of an ECP application server. This allows it to return any data in any database that the application server requests. Caché restricts access to this data on the application server based on the privileges of the user requesting the data on the application server.
For example, for a user on the application server who only has privileges for the %DB_USER resource, data in the USER database on the data server is available (which by default is assigned the %DB_USER resource), but attempting to access the SAMPLES database on the data server results in a <PROTECT> error. If a different user on the application server has privileges for the %DB_SAMPLES resource, then the SAMPLES database on the data server is available.
You can also restrict the set of roles on the data server based on the IP Address of the application server. For example, on the data server you can specify that when interacting with an application server named NODE_A the only available role is %DB_USER. In this case, users on the application server granted the %DB_USER role can access the USER database on the data server. However, users on the application server with %DB_SAMPLES access receive a <PROTECT> error if they try to access the SAMPLES database on the data server (since the data server is only set up with %DB_USER access).
The following are exceptions to this behavior:
Caché always grants the ECP data server the %DB_CACHESYS role since it requires Read access to the CACHESYS database to run. This means that a user on an ECP application server with %DB_CACHESYS can access the CACHESYS database on the ECP data server.
To prevent a user on the application server from having access to the CACHESYS database on the data server, there are two options:
Do not grant the user privileges for the %DB_CACHESYS resource.
On the data server, change the name of the resource for the CACHESYS database to something other than %DB_CACHESYS, making sure that the user on the application server has no privileges for that resource.
If the ECP data server has any public resources, they are available to any user on the ECP application server, regardless of either the roles held on the application server or the roles configured for the ECP connection.
Changes both to the configuration of the ECP connection and to the public permissions on resources require a restart of Caché before taking effect.
Configuring an ECP Application Server
To configure a system as an ECP application server, you define an ECP data server from which to retrieve data. Add this remote ECP data server by performing the following steps:
From the ECP Settings page, click Data Servers to display a list of currently configured ECP data servers.
Click Add Server to add a data server.
Enter the following information for the data server:
Server Name — Enter a logical name for the convenience of the application system administrator.
Host DNS Name or IP Address — Specify the host name either as a raw IP address (in dotted-decimal format or, if IPv6 is enabled, in colon-separated format) or as the Domain Name System (DNS) name of the remote host. If you use the DNS name, it resolves to an actual IP address each time the application server initiates a connection to that ECP data server host. For more information, see the IPv6 Support section in the “Configuring Caché” chapter of the Caché System Administration Guide.Important:
When adding a mirror as an ECP data server, do not enter the virtual IP address (VIP) of the mirror, but rather the DNS name or IP address of the current primary failover member. Because the application server regularly collects updated information about the mirror from the specified host, it automatically detects a failover and switches to the new primary failover member. See the “Mirroring” chapter of the Caché High Availability Guide for information about mirror failover and VIPs.
IP Port — The port number defaults to 1972; change it as necessary to the superserver port of the Caché instance on the data server.
Select the Mirror Connection check box if this data server is the primary failover member of a mirror.
Click Save.
Once you add a remote ECP data server, it appears in the list of defined data servers this application server can connect to at the bottom of this same portal page. Add additional ECP data servers to the list using the Add Remote Data Server link. Remove or edit server definitions using the Delete and Edit links, respectively. You may also click Change Status of the connection. See the “Monitoring Distributed Applications” chapter for details.
You may add as many data servers as allowed by the Maximum number of data servers setting. Update this value to specify the maximum number of server connections the application server may need later so that Caché reserves enough system resources so as not to require a restart each time you add a data server. Increase the default value of 2 up to a maximum of 254.
If you increase the maximum number of data servers, you must restart Caché.
Your system is ready to act as an ECP application server. No further user intervention is required; when the ECP application server needs access to the ECP data server, it automatically establishes a connection to the server.
Configuring ECP Remote Data Access
After defining a list of one or more ECP data servers for an ECP application server, configure the ECP application server system so that it has access to data stored in the ECP data server system. Do this by defining a remote database on the ECP application server system.
A remote database is a database that is physically located on an ECP data server system, as opposed to a local database which is physically located on the local application server system.
To define a remote database on the ECP application server, perform the following steps:
Navigate to the Remote Databases page of the Management Portal (System Administration > Configuration > System Configuration > Remote Databases).
Click Create New Remote Database to invoke the Database Wizard, which displays a list of the logical names (the name you used when you added it to the list of ECP data servers) of the remote data servers on the application server.
Click the name of the appropriate ECP data server and click Next.
The portal displays a list of database directories on the remote ECP data server. Select one of these to serve as the remote database.
Enter a database name (its name on the ECP application server; it does not need to match its name on the ECP data server) and click Finish. You have defined a remote database.
Next, define a new namespace (or modify an existing namespace) to view the data in the remote database as you would in a local database.
By using the Namespace Wizard in the Management Portal, you can define a namespace and a remote database at the same time, thereby combining these two procedures for adding a remote database.
To define a new namespace that views the data in a remote database perform the following steps:
Navigate to the Namespaces page of the Management Portal (System Administration > Configuration > System Configuration > Namespaces).
Click Create New Namespace.
Fill in the form with the following fields:
Enter a name for the new namespace.
Click Remote Database.
If you created a remote database as described previously, select it; otherwise click Create New Database and follow the previous Database Wizard instructions.
If you use CSP, select Create a default CSP application for this namespace.
Choose a database for the new namespace. Select the remote database from the list (remote and local databases are listed together) and click Next.
Click Save. You have a new namespace that is mapped to a remote database.
Any data retrieved or stored in this namespace is loaded from and stored in the physical database on the ECP data server and updated in the local application server system cache if it is already cached.
ECP Security Notes
First, all the instances in an ECP configuration need to be within the secured Caché perimeter (that is, within an externally secured environment). This is because:
ECP is a basic security service (not a resource-based service), so there is no way to regulate which users have access to it. For more information on basic and resource-based services, see the “Available Services” section of the “Services” chapter of the Caché Security Administration Guide.
Caché does not support SSL/TLS to secure ECP connections. For more information on the use of SSL/TLS, see the “Using SSL/TLS with Caché” chapter of the Caché Security Administration Guide.
Also, when using encrypted databases on ECP data servers, it is recommended to encrypt the CACHETEMP database on all connected application servers. The same or different keys can be used. For more information on database encryption, see the “Managed Key Encryption” chapter of the Caché Security Administration Guide. | https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GDDM_CONFIG | 2021-02-25T00:05:15 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.intersystems.com |
You can then put all associated UUIDs under policies to quarantine those VMs from the rest of the network.
Panorama includes predefined payload formats for threat and traffic logs in the HTTP Server Profile. These payload formats correspond to predefined security tags in NSX manager. Use the following commands in OpenSSL to complete this step.

cat cert_NSX_Root_CA.crt cert_NSX_Signed1.pem > cert_NSX_cert_chain.pem
openssl pkcs12 -export -in cert_NSX_cert_chain.pem -out cert_NSX_cert.p12
- Log in to NSX-V Manager and select Manage Appliance Settings > SSL Certificates. Click Upload PKCS#12 Keystore, then Choose File, locate the p12 file you created in the previous step, and click Import.
- Associate a security group with a security tag in vCenter.
- Log in to vCenter.
- Select Networking & Security > Service Composer > Security Groups.
- Select a security group that is counterpart to the quarantine dynamic address group you created previously and clickEdit Security Group.
- Select Define dynamic membership and click the + icon.
- Click Add.
- Set the criteria details to Security Tag Contains and then enter the NSX-V security tag that corresponds to the NSX-V payload format you selected previously. Each of the predefined NSX-V payload formats corresponds to an NSX-V security tag. To view the NSX-V security tags in NSX-V, select Networking & Security > NSX-V Managers > NSX-V Manager IP > Manage > Security Tags. In this example, the tag matches the payload format selected earlier. If needed, perform a Config-Sync under Panorama > VMware NSX-V > Service Manager and reboot the PA-VM to resolve this issue.
- Log in to vCenter.
- Select VMs and Templates and choose the quarantined guest.
- Select Summary > Security Tags > Manage.
- Uncheck the security tag used by the quarantine security group and click OK.
- Refresh the page and the quarantine security group will no longer be listed under Summary > Security Group Membership.
Run SearchUnify Jira On-Premises Crawler
Jira On-Premises Crawler is a Java app. It indexes your Jira data and stores the results in a JSON file. This article explains the steps to generate that JSON file.
Instructions
- Launch a terminal.
- Run
java-jar "Jira-Crawler.jar"
NOTE. Ensure that you are in the same directory as Jira-Crawler.jar before running the command.
- Enter Jira details when prompted.
- Run the crawler to create a JSON (metadata.json) file of your projects.
java -jar Jira-Crawler.jar --runCrawler
Returns Font: The generated Font object.
Creates a Font object which lets you render a font installed on the user machine.
CreateDynamicFontFromOSFont creates a font object which references fonts from the OS. This lets you render text using any font installed on the user's machine. See GetOSInstalledFontNames for getting names of installed fonts at runtime, which can be used with this function. | https://docs.unity3d.com/kr/2018.1/ScriptReference/Font.CreateDynamicFontFromOSFont.html | 2021-02-25T00:23:37 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.unity3d.com |
This program is an example of writing some of the features of the
xlsxwriter
module.
----
--
-- A simple example of some of the features of the xlsxwriter.lua module.
--
-- Copyright 2014-2015, John McNamara, [email protected]
--
local Workbook = require "xlsxwriter.workbook"

local workbook  = Workbook:new("demo.xlsx")
local worksheet = workbook:add_worksheet()

-- Widen the first column to make the text clearer.
worksheet:set_column("A:A", 20)

-- Add a bold format to use to highlight cells.
local bold = workbook:add_format({bold = true})

-- Write some simple text.
worksheet:write("A1", "Hello")

-- Text with formatting.
worksheet:write("A2", "World", bold)

-- Write some numbers, with row/column notation.
worksheet:write(2, 0, 123)
worksheet:write(3, 0, 123.456)

workbook:close()
Notes:
Different data types (strings, formatted strings, numbers) are handled by the write() method.
22.5. The Processing modeler¶

The left part of the modeler window is a section with five panels that can be used to add new elements to the model:
Model Properties: you can specify the name of the model and the group that will contain it
Inputs: all the inputs that will shape your model
Algorithms: the Processing algorithms available
Variables: you can also define variables that will only be available in the Processing Modeler
Undo History: this panel will register everything that happens in the modeler, making it easy to cancel things you did wrong.
Creating a model involves two basic steps:
Definition of necessary inputs. These inputs will be added to the parameters window, so the user can set their values when executing the model. The model itself is an algorithm, so the parameters window is generated automatically as for all algorithms available in the Processing framework.
Definition of the workflow. Using the input data of the model, the workflow is defined by adding algorithms and selecting how they use the defined inputs or the outputs generated by other algorithms in the model.
22.5.1. Definition of inputs¶
The first step is to define the inputs for the model. The following elements are found in the Inputs panel on the left side of the modeler window:
Authentication Configuration
Boolean
Color
Connection Name
Coordinate Operation
CRS
Database Schema
Database Table
Datetime
Distance
Enum
Expression
Extent
Field Aggregates
Fields Mapper
File/Folder
Geometry
Map Layer
Map Theme
Matrix
Mesh Layer
Multiple Input
Number
Point
Print Layout
Print Layout Item
Range
Raster Band
Raster Layer
Scale
String
TIN Creation Layers
Vector Features
Vector Field
Vector Layer
Vector Tile Writer Layers
Note
Hovering with the mouse over the inputs will show a tooltip with additional information.
When double-clicking on an element, a dialog is shown that lets you define its characteristics. Depending on the parameter, the dialog will contain at least one element (the description, which is what the user will see when executing the model). For example, when adding a numerical value, you have to set a default value and a range of valid values in addition to the description of the parameter.
The
Comments tab allows you to tag the input with more information,
to better describe
the parameter. Comments are visible only in the modeler canvas and not in the
final algorithm dialog.
For each added input, a new element is added to the modeler canvas.
You can also add inputs by dragging the input type from the list and dropping it at the position where you want it in the modeler canvas. If you want to change a parameter of an existing input, just double click on it, and the same dialog will pop up.
22.5.2. Definition of the workflow¶
In the following example we will add two inputs and two algorithms. The aim of
the model is to copy the elevation values from a DEM raster layer to a line layer
using the
Drape algorithm, and then calculate the total ascent of the line
layer using the
Climb Along Line algorithm.
In the Inputs tab, choose the two inputs as
Vector Layer for the line and
Raster Layer for the DEM.
We are now ready to add the algorithms to the workflow.
Algorithms can be found in the Algorithms panel, grouped much in the same way as they are in the Processing toolbox.
To add an algorithm to a model, double-click on its name or drag and
drop it, just like for inputs. As for the inputs you can change the description
of the algorithm and add a comment.
When adding an algorithm, an execution dialog will appear, with a content similar
to the one found in the execution panel that is shown when executing the
algorithm from the toolbox.
The following picture shows both the
Drape (set Z value from raster) and the
Climb along line algorithm dialogs.
As you can see there are some differences.
You have four choices to define the algorithm inputs:
Value: allows you to set the parameter from a loaded layer in the QGIS project or to browse a layer from a folder
Pre-calculated Value: with this option you can open the Expression Builder and define your own expression to fill the parameter. Model inputs together with some other layer statistics are available as variables and are listed at the top of the Search dialog of the Expression Builder
Model Input: choose this option if the parameter comes from an input of the model you have defined. Once clicked, this option will list all the suitable inputs for the parameter
Algorithm Output: is useful when the input parameter of an algorithm is an output of another algorithm
Algorithm outputs have the additional
Model Output
option that makes the output of the algorithm available in the model.
If a layer generated by the algorithm is only to be used as input to another algorithm, don’t edit that text box.
In the following picture you can see the two input parameters defined as
Model Input and the temporary output layer:
In all cases, you will find an additional parameter named Dependencies that is not available when calling the algorithm from the toolbox. This parameter allows you to define the order in which algorithms are executed, by explicitly defining one algorithm as a parent of the current one. This forces the parent algorithm to be executed before the current one. Elements can be dragged to a different position within the canvas. You can also resize elements. This is particularly useful if the description of the input or algorithm is long.
Links between elements are updated automatically and you can see a plus button at the top and at the bottom of each algorithm. Clicking the button will list all the inputs and outputs of the algorithm so you can have a quick overview.
You can zoom in and out by using the mouse wheel.
You can run your algorithm any time by clicking on the
button.
In order to use the algorithm from the toolbox, it has to be saved
and the modeler dialog closed, to allow the toolbox to refresh its
contents.
22.5.3. Interacting with the canvas and elements¶
You can use the
,
,
and
buttons
to zoom the modeler canvas. The behavior of the buttons is basically the same
of the main QGIS toolbar.
The
Undo History panel together with the
and
buttons are
extremely useful to quickly rollback to a previous situation. The
Undo History
panel lists everything you have done when creating the workflow.
You can move or resize many elements at the same time by first selecting them, dragging the mouse.
If you want to snap the elements while moving them in the canvas you can choose.
Themenu contains some very useful options to interact with your model elements:
Select All: select all elements of the model
Snap Selected Components to Grid: snap and align the elements into a grid
Undo: undo the last action
Redo: redo the last action
Cut: cut the selected elements
Copy: copy the selected elements
Paste: paste the elements
Delete Selected Components: delete all the selected elements from the model
Add Group Box: add a draggable box to the canvas. This feature is very useful in big models to group elements in the modeler canvas and to keep the workflow clean. For example we might group together all the inputs of the example:
You can change the name and the color of the boxes. Group boxes are very useful when used together with. This allows you to zoom to a specific part of the model.
You might want to change the order of the inputs and how they are listed in the
main model dialog. At the bottom of the
Input panel you will find the
Reorder Model Inputs... button and by clicking on it a new dialog pops up
allowing you to change the order of the inputs:
22.5.4. Saving and loading models¶
Use the
Save model button to save the current model and the
Open Model button to open a previously saved model.
Models are saved with the
.model3 extension.
If the model has.
22.5.4.1. Exporting a model as an image, PDF or SVG¶
A model can also be exported as an image, SVG or PDF (for illustration
purposes) by clicking
Export as image,
Export as PDF or
Export as SVG.
22.5.5..
The Add comment… allows you to add a comment to the algorithm to better describe the behavior..
22.5.6. Editing model help files and meta-information¶
You can document your models from the modeler itself..
22.5.7. Exporting a model as a Python script¶
As we will see in a later chapter, Processing algorithms can be called from the QGIS Python console, and new Processing algorithms can be created using Python. A quick way to create such a Python script is to create a model and then export it as a Python file.
To do so, click on the
Export as Script Algorithm…
in the modeler canvas or right click on the name of the model in the Processing
Toolbox and choose
Export Model as Python Algorithm….
22.5.8.. | https://docs.qgis.org/3.16/en/docs/user_manual/processing/modeler.html | 2021-02-24T23:18:09 | CC-MAIN-2021-10 | 1614178349708.2 | [array(['../../../_images/checkbox.png', 'checkbox'], dtype=object)
array(['../../../_images/checkbox_unchecked.png', 'unchecked'],
dtype=object)
array(['../../../_images/mIconModelOutput.png', 'processingOutput'],
dtype=object) ] | docs.qgis.org |
Beacon to emit system load averages
salt.beacons.load.
beacon(config)¶
Emit the load averages of this host.
Specify thresholds for each load average and only emit a beacon if any of them are exceeded.
onchangeonly: when onchangeonly is True the beacon will fire events only when the load average pass one threshold. Otherwise, it will fire an event at each beacon interval. The default is False.
event when the minion is reload. Applicable only when onchangeonly is True. The default is True.
beacons: load: - averages: 1m: - 0.0 - 2.0 5m: - 0.0 - 1.5 15m: - 0.1 - 1.0 - emitatstartup: True - onchangeonly: False
salt.beacons.load.
validate(config)¶
Validate the beacon configuration | https://docs.saltproject.io/en/latest/ref/beacons/all/salt.beacons.load.html | 2021-02-24T22:43:50 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.saltproject.io |
Returns a table based on an existing table with a new first column added with an incrementing number and a specified column heading
SppAddNumberColumn([Table Array],[Number Column Header Text],[IncludeLeadingZeros])
Where:
Table Array is a Table array (such as the data in a standard table, or the result of a QueryDataValues function).
Number Column Header Text is the text to use as the new heading.
IncludeLeadingZeros is set to TRUE to include leading zeros, FALSE to exclude. | https://docs.driveworkspro.com/Topic/SppAddNumberColumn | 2020-01-17T20:47:30 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.driveworkspro.com |
Troubleshooting
This troubleshooting section provides a comprehensive list of common user Issues that may occur while developing and deploying GigaSpaces-based user applciations, along with troubleshooting methodology, possible causes, and what information to collect if you have to consult with GigaSpaces support.
This section also provides guidelines for building highly robust and efficient applications, with instructions on how to avoid common mistakes.
However, if you encounter problems that require troublshooting, the following may be helpful:
- Verifying your local or remote installation by testing the system environment.
- Viewing the clustered Space status using different logging levels.
- Configuring the Failure Detectionparameters more accurately to reduce the failure detection time and avoid the need for failover.
A list of recommended troubleshooting tools is also provided, which can be used for testing a running product and identifying environmental issues. | https://docs.gigaspaces.com/latest/admin/troubleshooting.html | 2020-01-17T20:07:10 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.gigaspaces.com |
All content with label async+aws+cloud+development+ehcache+grid+hot_rod, amazon, s3, test, api, xsd, maven, documentation, roadmap, youtube, userguide, write_behind, 缓存, ec2, hibernate, jwt,,
more »
( - async, - aws, - cloud, - development, - ehcache, - grid, - hot_rod, - import, - infinispan, - jboss_cache, - jbossas, - listener, - read_committed, - release, - rest, - user_guide, - write_through )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/async+aws+cloud+development+ehcache+grid+hot_rod+import+infinispan+jboss_cache+jbossas+listener+read_committed+release+rest+user_guide+write_through | 2020-01-17T18:32:57 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.jboss.org |
Azure Red Hat OpenShift is designed for building and deploying applications. Depending on how much you want to involve Azure Red Hat OpenShift in your development process, you can choose to:
focus your development within an Azure Red Hat OpenShift project, using it to build an application from scratch then continuously develop and manage its lifecycle, or
bring an application (e.g., binary, container image, source code) you have already developed in a separate environment and deploy it onto Azure Red Hat OpenShift.
You can begin your application’s development from scratch using Azure Red Hat OpenShift directly. Consider the following steps when planning this type of development process:
Initial Planning
What does your application do?
What programming language will it be developed in?
Access to Azure Red Hat OpenShift
Develop
Using your editor or IDE of choice, create a basic skeleton of an application. It should be developed enough to tell Azure Red Hat OpenShift what kind of application it is.
Push the code to your Git repository.
Generate
Create a basic application using the
oc new-app
command. Azure Red Hat OpenShift Azure Red Hat OpenShift to rebuild and redeploy
your application. Alternatively, you can hot deploy using
rsync to synchronize
your code changes into a running pod.
Another possible application development strategy is to develop locally, then use Azure Red Hat OpenShift to deploy your fully developed application. Use the following process if you plan to have application code already, then want to build and deploy onto an Azure Red Hat OpenShift installation when completed:
Initial Planning
What does your application do?
What programming language will it be developed in?
Develop
Develop your application code using your editor or IDE of choice.
Build and test your application code locally.
Push your code to a Git repository.
Access to Azure Red Hat OpenShift
Generate
Create a basic application using the
oc new-app
command. Azure Red Hat OpenShift generates build and deployment configurations.
Verify
Ensure that the application that you have built and deployed in the above Generate step is successfully running on Azure Red Hat OpenShift.
Manage
Continue to develop your application code until you are happy with the results.
Rebuild your application in Azure Red Hat OpenShift to accept any newly pushed code.
Is any extra configuration needed? Explore the Developer Guide for more options. | https://docs.openshift.com/aro/dev_guide/application_lifecycle/development_process.html | 2020-01-17T19:32:51 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.openshift.com |
Sandbox¶
We offer to test the already configured of three nodes Waves Enterprise blockchain platform, which includes the authorization service and Docker contracts.
Attention
This version is not intended for commercial use and is provided for demonstration purposes only. The demo version can be run on Linux and MacOS operating systems.
You need the following software to use the sandbox version of the Waves Enterprise platform:
Perform the following commands to run the sandbox:
Create a working directory and navigate there using the terminal.
Download the
docker-compose.ymlconfiguration file from our GitHub page and copy it into the working directory.
Log in as an administrator using the
sudocommand, and you will be asked to enter your password after it.
Run the following command and wait for the results:
docker run --name generator -v $(pwd)/nodes/:/opt/generator/nodes/ wavesenterprise/generator:demo
Run the sandbox with the following command:
docker-compose up -d
Sending transactions from the web client¶
Follow these steps after the blockchain platform full start:
Open a browser and enter the.
Register in the web client using any valid email address and log in to the web client.
Open the
Choose address -> Add address manuallypage.
Fill in the fields below. You can take the values from the
accounts.confconfiguration file of the first node in the
nodes/node-1directory.
Node network address - specify the.
Address - specify the node address. See the
Addressfield marked in the picture below.
Key pair password - specify the key pair password of the node. See the
Key-pair passwordfield marked in the picture below.
You can also simply create a new custom blockchain address using the
Choose address -> Add address manuallypage and following the prompts of the web interface.
It is now possible to send transactions from the web client from the node address. | https://docs.wavesenterprise.com/en/how-to-setup/sandbox.html | 2020-01-17T19:27:15 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['../_images/accountsConf.png', '../_images/accountsConf.png'],
dtype=object) ] | docs.wavesenterprise.com |
WooCommerce API Manager Requirements ↑ Back to top
Version 2.2.7 Requirements ↑ Back to top
- PHP 7.0 or greater. It is good practice to update the PHP version before the end-of-life where that version is no longer supported even with security fixes. See current supported PHP versions.
- WooCommerce 3.4 or greater. It is best to have updated WooCommerce within the past year to avoid breaking changes. See changelog for latest changes.
- If WooCommerce Subscriptions is installed and activated it must be version 2.3 or greater.
- When upgrading from API Manager pre 2.0 to 2.0 or greater, the data is migrated to custom database tables and the old data is deleted. There is no ability to rollback, or downgrade, to the older version. Test first, backup, and be prepared before upgrading.
- WordPress requirements (same as the version of WooCommerce installed).
- OpenSSL should be kept up-to-date.
- An HTTPS connection to your store is highly recommended.
- The AUTH_KEY and NONCE_SALT constants that should exist in wp-config.php are used by the API Manager for advanced encryption. If these constants are absent, generate new values (for example with the WordPress secret-key generator) and add those keys to wp-config.php (see the example after this list).
- In wp-config.php define( 'WP_DEBUG', false ); If debugging displays errors on a live production site it will break API requests.
- Do not cache query strings, since query strings are used in API requests, and must always be unique, not cached copies of previous requests or the API requests will break.
- NEVER cache the home page URL, since that is where the APIs listen for requests. If the home page URL is cached, the APIs will break if the query string requests are also cached. POST and GET requests should never be cached.
- Be careful with firewalls, since they can break API requests.
- DO NOT DELETE PRODUCTS if the API checkbox has been selected. Once a product has been selected to be used as an API product, that product ID will become the product_id (an integer) used by the API Manager to identify that product going forward. Deleting the product will break all client API requests for that product.
- If a product is duplicated, make sure the API information is unique to the newly created product.
- As of version 2.1, the API Manager authenticates with Amazon S3 using AWS Signature Version 4.
Questions & Support ↑ Back to top
WooCommerce.com has their own chat and support/communication form used to contact them directly. We are a third party developer, so you will need to use this support form for any and all communication with us. If you tried to contact us through WooCommerce.com we will redirect you back to the support form.
Already purchased and need some assistance? Fill out this support form via the Help Desk.
Have a question before you buy? Please fill out this pre-sales form.
How to Ask for Support ↑ Back to top
- Fill out this support form via the Help Desk. Without all the details the support form provides about your installation, support is hampered. Support will not be provided if the support form in not used.
- Provide a clear explanation of the issue, including how support can reproduce the issue, provide screenshots, or provide login credentials for support to see for themselves and take a closer look.
- Do not talk about how nothing is working, how much it is costing you, or blame the software. Frustration is always an issue when something isn’t working as expected, but it is contagious, and is counterproductive. Stay focused, and follow number 1 and 2. Problems get solved quicker when everyone works together. We are happy to help.
PHP Library ↑ Back to top
See the API Documentation section below.
The WooCommerce API Manager PHP Library for Plugins and Themes is available to current WooCommerce API Manager subscribers for a discounted price of $70 (regular price $200). Contact the developer using the official support form to request a coupon to purchase the PHP Library for a 65% discount.
For non WooCommerce API Manager subscribers, the WooCommerce API Manager PHP Library for Plugins and Themes can be purchased on the product page here for $200.
Using the PHP Library ↑ Back to top ↑ Back to top
A preconfigured Postman .json file collection template is included to make it easy to test the API functions. Server URL, and keys/values, will need to be modified specific to your product and server.
Software Update File Hosting ↑ Back to top
The file used for software updates can be hosted on the local server, Amazon S3, or from any remote URL. The file download URL is wrapped in a secure URL that expires after the expire time you set.
WooCommerce API Manager Versioning ↑ Back to top
The API Manager uses Semantic Versioning, as outlined below:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
A Major version would be 1.0.0 to 2.0.0. A Minor version would be 1.1.0 to 1.2.0. A Patch would be 1.1.0 to 1.1.1. This versioning approach was adopted by WooCommerce in early 2017 as explained in this post.
Updating to a New Version ↑ Back to top
If you’ve already read the section above on WooCommerce API Manager Versioning then you’ll know what a Major version is. When a version goes from something like 1.5.4 to 2.0 that is a Major version, and it means there are breaking changes, so testing is highly recommended, if not required, before applying that update to a live site. WooCommerce has recommendations on how to go about preparing for a Major, or even a Minor, update to make sure your site doesn’t break, or have unexpected behavior. The number one rule to remember is to ALWAYS BACKUP your website before updating to a Major or Minor version.
Always test your current PHP Library in your client software to make sure it works with the new version of the API Manager, especially if a major version is released.
Upgrading From Version 1.5.4 to 2.x ↑ Back to top
The client software for plugins and themes was called the PHP Library and is now referred to as the WooCommerce API Manager PHP Library for Plugins and Themes. As of API Manager version 2.0 the Product ID is required to be sent with each API call/API request depending on the API function and request type. See the API documentation. As of API Manager version 2.1, there was a change to the API to fix an issue created by some firewalls that block commonly used API call/API request query keys. Variable Products and Variable Subscriptions must send the Product ID (an integer) or software updates will break. Simple Products and Simple Subscriptions can still send the old Software Title, however the Product ID (an integer) is faster, and ensures future compatibility. For these reasons all client software must be updated for WooCommerce API Manager version 2.x. Customers who purchased Variable Products and Variable Subscriptions will need to update software manually, while Simple Products and Simple Subscriptions customers can use the one-click update.
API Server Hosting ↑ Back to top
Web hosts control all traffic going in and out of a WooCommerce store. This can be problematic when the web host unknowingly blocks traffic, or has caching that is too aggressive, both of which can cause the API Manager APIs (Application Programming Interfaces) to break. It is not easy to track down and fix these issues, so it is highly recommended to host your own server, or virtual server through Amazon, Google, Microsoft, or Digital Ocean. Todd Lahman LLC uses and prefers Digital Ocean due to their low cost, ease of use, advanced features, reliability, worldwide network, and MySQL database hosting.
Extensions ↑ Back to top
API Manager Product Tabs ↑ Back to top
WooCommerce API Manager Product Tabs plugin.
Settings ↑ Back to top
Accounts & Privacy ↑ Back to top
Found under WooCommerce > Settings > Accounts & Privacy > Guest checkout
API Manager products must be purchased by customers with an existing account, or an account that is created at the time of purchase, so the customer account can be assigned a User ID, which is a critical property in secure authentication through the API to your store. If a customer purchases a product with a guest account the API Manager will fail for that purchased API Resource. Make sure under the Guest Check section that the “Allow customers to place orders without an account” checkbox is unchecked as shown in the screenshot above. To make it easier for new customers it is recommended to check the checkbox for “Allow customers to create an account during checkout” under the Account creation section.
API Manager tab ↑ Back to top
Found under WooCommerce > Settings > API Manager tab.
Amazon S3 Region
An Amazon S3 region must be chosen, and is listed in the Amazon S3 dashboard in the bucket details row. Pick one, and put all your files in that bucket. A bucket can be organized into folders if needed.
Amazon S3
This setting allows file downloads from Amazon S3. The Access Key ID and Secret Access Key can be entered in the settings form fields, or (more securely) defined in wp-config.php using the constants below:
define('WC_AM_AWS3_ACCESS_KEY_ID', 'your_access_key');
define('WC_AM_AWS3_SECRET_ACCESS_KEY', 'your_secret_key');
Download Links
URL Expire Time: Download URLs on the My Account dashboard, and for software updates, can be set to expire between 5 and 60 minutes after they are created. Each download URL is created on request, and expires to prevent download abuse on sites other than yours. Due to this security, download URLs are not sent in emails after software product purchase.
Save to Dropbox App Key: This creates a Save to Dropbox link in the My Account > My API Downloads section where customers can save their download directly to their Dropbox account.
API Doc Tabs
These tabs are displayed for the WordPress plugin information screen. If the product has a download it is considered software, and if it is a WordPress plugin, these tabs can be optionally displayed, although the changelog tab is required.
API Keys
Product Order API Keys: The default API Keys template always displays the Master API Key, and can hide the Product Order API Keys if you want clients to use only the single Master API Key to activate API resources.
API Response
Send API Resource Data: More detailed information about the product can be sent if this option is on, although it is not required.
Debug
There are several different options available to debug APIs in the API Manager. When this option is selected all information is recorded in log files that can be found under WooCommerce > Status > Logs. The output in the log is beautifully formatted for readability. For additional troubleshooting of areas of code not covered by the default options, the test_log() method in the /includes/wc-am-log.php file can be used. The test log syntax would be similar to the following line of code, where $resources is the variable storing the information to be displayed in the log:
WC_AM_Log()->test_log( PHP_EOL . esc_html__( 'Details from get_resources() method.', 'woocommerce-api-manager' ) . PHP_EOL . wc_print_r( $resources, true ) );
Postman is recommended for remote API testing.
Amazon S3 ↑ Back to top
The API Manager allows an Amazon S3 (Simple Storage Service) URL to be copied and pasted into a product’s Downloadable files > File URL form field, so files can be downloaded through Amazon S3. The secure URLs created by the API Manager for Amazon S3 will expire between 5 – 60 minutes after creation, just like a local download URL, depending on your setting. The WooCommerce > Settings > API Manager screen has form fields for the Amazon S3 keys you will be creating, and the secret key is strongly encrypted, however it is much more secure to put the keys in wp-config.php using the defined constants detailed in the Settings section above.
To get started with Amazon Web Services (AWS) login or create an account, then go to the Identity and Access Management (IAM) dashboard. Click on the Continue to Security Credentials button, then click on Users. The objective will be to create a user who has restricted read-only access to Amazon S3 buckets, and no other Amazon services. This helps avoid using root keys that have access to all services connected with the AWS account. The screenshots below will walk through the steps to setup a new restricted user.
Download the .csv file, and store it somewhere secure, because this file contains the Access Key ID, and Secret Access Key you will need to add to wp-config.php using the defined constants (strongly recommended), or to save in the API Manager settings. Step 4, the success screen, will be the only time you can download the .csv file, or to view the Secret Access Key from the Amazon IAM dashboard for this user.
There are different ways to setup restricted users, and Amazon has a lot of documentation in this regard, but the overall objective is better security by limiting access to specific resources.
Now that an IAM user has been created with limited Amazon S3 access it is time to go to Amazon S3 and create a bucket to hold the files that will be downloaded. Once at the Amazon S3 dashboard click the Create bucket button, and follow the screenshot steps below.
- Note: The IAM user name does not have to match the Bucket name, as the Bucket and IAM user are unrelated; the names only match in the example screenshots because it made it easier to see which Amazon S3 Bucket was used for the specific IAM user.
On step 3 of the create bucket wizard the screenshot above was displayed, then 6 hours later the screenshot below was displayed, but either way accept the default settings as there are no changes needed.
On step 4 of the create bucket wizard the screenshot above was displayed, then 6 hours later the screenshot below was displayed, but either way accept the default settings as there are no changes needed.
After the bucket is created, click on the bucket and follow the screenshots below to upload a file.
In the previous example the file is a zip file, so the Content-Type was set to application/zip. Once created click on the file to make additional changes needed. The root account is given full access to the file.
Currently the root account has complete permission to access the bucket and files, but no one else does, not even customers, so we need to give the IAM user bucket level permission.
Open the Identity and Access Management (IAM) dashboard, click on Users, then click on the new IAM user you created, then copy the User ARN, and save it for the next step.
Open the Amazon S3 dashboard, select the new bucket > Permissions > Bucket Policy. Towards the bottom of this page click the Policy generator link.
Select the S3 Bucket Policy from the pull-down menu, which should automatically select the Amazon S3 for the AWS Service in the form. Paste the User ARN into the Principal field. In the Actions pull-down menu, select only GetObject, then paste
arn:aws:s3:::your-bucket-name/* into the Amazon Resource Name (ARN) field. Now click Add Statement, then Generate Policy, and copy the resulting code snippet.
Now click Save at the top right of the screen. The code snippet below is a working copy of the code needed.
{
  "Version": "2012-10-17",
  "Id": "Policy1546757468384",
  "Statement": [
    {
      "Sid": "Stmt1546757465773",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::592794580436:user/am-test-bucket-1"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::am-test-bucket-1/*"
    }
  ]
}
The Principal is the User ARN, and the Resource is the bucket name prefixed with
arn:aws:s3::: and postfixed with
/*. The forward slash is the delimiter, and the asterisk means anything below the bucket level, including all files.
The IAM user name am-test-bucket-1, which is the same name we gave the bucket, now has read access to any files placed in this bucket. Click on Overview and select the file that was uploaded.
Copy the Amazon S3 (Simple Storage Service) URL and paste it into a product’s Downloadable files > File URL form field, so files can be download through Amazon S3.
More than one file can be added to the same Amazon S3 bucket to keep them organized in the same place. Remember to keep the same file name or the Link will have to be changed in the product.
Whenever a product file is download from Amazon S3 through the API Manager now, the download request will be authenticated through the IAM user permissions.
Here’s a summary of how this will all work:
The API Manager will create a secure URL that acts as a wrapper around the Amazon S3 link. The URL is created when the My Account > API Downloads page loads, and it expires in 5 – 60 minutes depending on your setting. The URL is also created when client software sends an API query requesting a software update. The API Manager finds the URL in the product, and sends a reply URL wrapper. This URL has details required by Amazon S3 to authenticate the download request using the IAM user account information created to limit access to read-only Amazon S3, the file being requested, and an expiration time. The Access Key ID and Secret Access Key created, and entered into the API Manager settings or wp-config.php file, are encrypted and sent in the URL wrapper to download the file, so Amazon S3 know the limits of access for the request, and authenticates the request based on this information.
Going forward, the simplest way to manage new file releases is to keep the name the same, and replace the old file with the new file in the same Amazon S3 bucket, so the Link will remain unchanged.
If new Amazon S3 buckets are needed, follow the directions here, then copy and past the bucket policy code snippet, and modify the User ARN and bucket name as needed.
Save to Dropbox ↑ Back to top
After setting the Dropbox App Key in the API Manager settings, a Save to Dropbox button will appear on the API Downloads page in the My Account dashboard. This allows customers to easily save the API Resource directly into their Dropbox account.
There is a link to create the Dropbox app key following the Save to Dropbox App Key setting on the API Manager settings screen. The first step, once this link is followed, is to choose Drop-ins app.
An App key will be available on the next screen. For this to work however, the store domain must be added to Drop-ins domains, and the settings under the Details tab must be set as well.
Account Endpoints ↑ Back to top
Found under WooCommerce > Settings > Advanced > Account Endpoints.
The api-keys and api-downloads are the slug portion of the URL leading to those pages under the My Account dashboard. These values can be changed, or left empty.
To change the API Keys and API Downloads titles, as they appear in the My Account dashboard, use the
woocommerce_account_menu_items filter.
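For example, a minimal sketch that renames the menu labels; it assumes the menu item array keys match the api-keys and api-downloads endpoint slugs above:

add_filter( 'woocommerce_account_menu_items', function ( $items ) {
	// Assumed keys: 'api-keys' and 'api-downloads' (the endpoint slugs).
	if ( isset( $items['api-keys'] ) ) {
		$items['api-keys'] = __( 'My API Keys', 'your-textdomain' );
	}
	if ( isset( $items['api-downloads'] ) ) {
		$items['api-downloads'] = __( 'My API Downloads', 'your-textdomain' );
	}
	return $items;
} );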
Endpoints Not Displayed in My Account Dashboard ↑ Back to top
If the endpoints above are not displayed in the My Account dashboard go to Settings > Permalinks and save twice to flush the rewrite rules, so these new endpoints are included.
SmartCache ↑ Back to top
Product Setup ↑ Back to top
What is an API product? An API product is referred to as a an API resource, or more simply resource. The API Manager allows access to an API resource through an authentication process, which means clients must be logged in to access their resources, and to purchase API products. All API products are virtual, and can be sold as a service, software, or whatever the product represents in the virtual space. The product can be a WooCommerce Subscription, which is fully supported.
A product can be of type Simple, Simple Subscription, Variable, Variable Subscription, and Grouped. When setting up a Grouped product, the parent product in the group will not have the option to make it an API product, but the products in the group will if they are Simple or Variable products, including WooCommerce Subscription products. External/Affiliate products are placeholder products that link to an external product, and therefore cannot be API products.
To make any of the above products an API product select the API checkbox, and save the changes. Once the API checkbox is checked, it cannot be unchecked, and the product cannot be deleted from the store. There are also Virtual and Downloadable checkboxes, neither of which are required, and only serve to hide or display those options on the product screen to setup the product as needed.
Existing products that have been purchased before the API Manager was installed will have Product Order API Keys created when the API checkbox is selected and the product is updated.
If the product is Downloadable there are several options.
- The first is to upload a file on the local server. If this file is a WordPress plugin/theme then it needs to be a .zip file. At the moment, there should only be one download file on the product, with any future uploads replacing the previous, because the API Manager will find the most recent upload for downloads and software updates. These URLs are secure, will be generated on demand, and will expire depending on your setting, but will exist no longer than 60 minutes.
- The second option is to use an Amazon S3 URL, which requires further setup for the API Manager to create the Amazon S3 secure URLs. These URLs are secure, will be generated on demand, and will expire depending on your setting, but will exist no longer than 60 minutes.
- The last option is to provide an remote URL to file in another location.
Download limits and download expiration will no longer be honored. As of version 2.0 there are only subscription time limits used to limit API access.
Flexible Product Types ↑ Back to top
API Access Expires ↑ Back to top
The API Access Expires option sets a time limit for an API Resource, which is the product purchased. If the value is left empty, the time limit is indefinite. A number (positive integer) sets a number of days to limit access to the API Resource, so 365 would be a limit of one year. After API access expires the product must be manually purchased again. There is no auto renewal after access expires, and currently no automated emails are sent to notify the customer access will be expiring.
The API Access Expires form field only displays on the Simple product API tab form, and on the variation API form for a Variable product, but not on the Variable product parent API tab form.
The API Access Expires form field will not be displayed on the Simple Subscription and Variable Subscription products created by the WooCommerce Subscription plugin, because those products will already have a subscription time limit set.
Simple/Simple Subscription ↑ Back to top
The following form fields are found on the Product > Product edit screen > API tab.
(Deprecated – Do Not Use) Software Title: This field was used prior to API Manager version 2.0. The value cannot be updated from API tab form. This value will still work if used by client software prior to version 2.0. Going forward only use the Product ID. This field should be empty if the product was created after version 2.0, or for first-time installations of version 2.0.
Product ID: This unique ID cannot be changed, or the client will no longer have access to the product data. The product ID matches the actual product ID.
Activation Limit: (Default is one activation if left blank) The APIs will only allow this number of activations to access this API resource. Activation is accomplished through the authentication of an API Key. Unlimited activations must have a value set, such as 1000. Do not leave this field blank. All resources are limited. Allowing a truly unlimited number of activations would consume an unknown amount of resources, and could cripple the server performance. Setting a reasonable limit will prevent downtime.
Note: If the Activation Limit is increased after the product has been purchased, all API resources for this product will have the Activation Limit raised to the new value. The API resources for this product will not have the Activation Limit lowered if it is decreased.
The remaining fields are specifically used for WordPress plugins/themes, however this data can be used with any client application.
Version: The currently available software version in the store.
Page URL: For WordPress plugins this is the plugin homepage. For WordPress themes, this is the View Version x.x.x Details page. For other software this can be whatever you want it to be.
Author: The software author.
WP Version Required: The version of WordPress required for the plugin/theme to run without errors. Not required for non WordPress software.
WP Version Tested Up To: Highest version of WordPress the software was tested on. Not required for non WordPress software.
Last Updated: When the software was last updated.
Upgrade Notice: A notice displayed when an update is available.
Unlimited Activations: Sets a default number of activations to 100,000. If Unlimited Activations is selected, then Activation Limit is hidden. This number can be increased using the filter hook
wc_api_manager_unlimited_activation_limit (see the sketch at the end of this section). All resources are limited. Increasing this to a larger number than your Operating System can handle can cause unexpected resource limitation issues, so either keep the default, or choose something reasonable. Once this option is enabled, customers who already purchased this product will have their API Key activation limit increased to match the limit value that is set, but the update is only triggered when the customer data for the product is pulled from the database, such as when the customer views the My Account dashboard > API Keys tab. This minimizes the resources needed to implement this feature for a product that exists on a lot of orders.
Activation Limit: Sets the API Key activation limit for the product. If the number needs to be set above 100,000, set the product for Unlimited Activations instead. If Unlimited Activations is selected, then Activation Limit is hidden.
The remaining fields are docs linked to plugin “view details” tabs. All docs are optional except for the Changelog. Once a page/doc is selected, an Edit link appears. The docs that are displayed can be enabled/disabled on the settings screen.
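As referenced above, the wc_api_manager_unlimited_activation_limit filter hook can raise the Unlimited Activations default. A minimal sketch, assuming the filter passes, and expects back, a single integer:

// Raise the Unlimited Activations default from 100,000 to 250,000.
// Keep the value within what the server can realistically handle.
add_filter( 'wc_api_manager_unlimited_activation_limit', function ( $limit ) {
	return 250000;
} );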
Variable/Variable Subscription ↑ Back to top
There are several differences with the API tab on a Variable type product. Refer to the Simple product setup for a description of all other form fields.
- The activation field is absent, as it is displayed on each variation product.
- The Product ID displayed on the API tab is for the parent product only, and is not used in the client software.
- All other data entered in the parent product API tab is copied into the corresponding fields for each variation to make data entry easier across multiple variations.
When multiple pricing options are needed for the same product, a variable product is currently the only viable option.
Variable Product Variations
Once the number of variation products has been created, and the Variable product has been updated with the API tab on the parent product filled out, each variation product will have the same form fields populated, except for the Activation Limit. Creating variations allows different numbers of API Key activations to be sold per product, such as 1, 5, and 25 API Key activations using 3 variations each with 1, 5, and 25 set for the Activation Limit value respectively.
It is possible to set unique values in the variation form fields by checking the “Set API options for this variable product only.” checkbox. If all the product details, other than the number of API Key activations, should be the same, don’t check the “Set API options for this variable product only.” checkbox. Below is what the form looks like when setting unique values for a variation that has initially been populated with the parent API tab form field values.
Attributes
To create a pull-down menu on the frontend product page a custom product attribute is needed. Below is an example of how to set this up with a Variable product that has three variations of the same product with different API Key activation limits.
Under the Variations tab a Default Form Value will need to be set as shown below.
The frontend product page will then display the different activation limit and pricing for each variation of the variable product as shown below.
Free Products ↑ Back to top
It is possible to create free products, just set the price to zero. It is also possible to give an existing customer a product for free by creating an order assigned to the customer, and adding the product for zero cost. To access the free product, free API Resource, customers will still be required to create, or have, an account as a login is required to maintain API endpoint security.
API Key Types ↑ Back to top
After an API product (defined as an API Resource) is purchased, an API Key can be used to access that API Resource once the client software activates access to it with the API Key. The Product Order API Key and Associated API Key are granted limited privileges by the Master API Key.
The Master API Key can be used for any and all API Resources. As API Resources are added or removed, the Master API Key keeps track of those resources, and only allows access to the API Resources available. For example, if a customer buys the same API Resource twice, it accumulates the activations, and if that API Resource is a subscription, then it will reduce the activations when one of the subscriptions is no longer in Active or Pending Cancellation status or add more activations when a new subscription for that API Resource is added.
The Product Order API Key and Associated API Key get their privileges from the Master API Key, and only manage a single API Resource from a single order. If a new order for the API Resource is generated, a new Product Order API Key is generated to manage that API Resource, which means the customer will need to deactivate their software and reactivate using the new Product Order API Key. This is especially true if the API Resource is a subscription. It is always best practice for subscription customers to use the Master API Key.
API Key/License Key Model ↑ Back to top
There are several types of API Key, or Licensing, models that exist, for example node-locked, OEM, and floating.
- A node-locked license is an API key for use on one computer, and is tied only to that computer.
- An OEM license unlocks software preinstalled, for example, windows using an API Key.
- A floating license is an API Key that can be used to unlock a computer or software if there is a free license available.
The API Manager uses a floating and OEM license model for API Keys. The API Manager assigns a limited number of API Key activations per product purchase. When an activation is used, there is one less activation available for other software or computers to use as in a floating license model, and if an activation is deactivated, then that activation can be used by other software or computers. This model can also be used for OEM.
The Product Order API Key can limit those activations for a specific product on a specific order.
The Associated API Key can use imported, or local server generated API Keys, assigned to a product, and then assigned when an order is placed for that product. The Associated API Key could be as an example, Windows OEM license keys customers can use to activate their product.
The API Manager expects the API Key to be sent when requesting activation, along with the other values defined for each API function, as documented below.
User Profile ↑ Back to top
Master API Key ↑ Back to top
The store administrator can control the user Master API Key under Users > “user profile.” Under the API Manager API Key section, the Master API Key can be replaced, and disabled, but never deleted permanently. If the Master API Key is disabled, the Product Order API Keys, and Associated API Keys, available to the user will also stop working. A disabled Master API Key means no access to API Resources for the customer.
If the Master API Key is replaced, all activations for the user who owns the Master API Key will be deleted. Activations created using the Product Order API Key, and Associated API Key, are also deleted when the Master API Key is replaced. Product Order API Key and Associated API Key are granted limited privileges by the Master API Key.
Disable User Account ↑ Back to top
When the API Manager API Key > Disable Master API Key box is checked on the User Profile, the user will no longer have access to any API functions even if valid activations exist. The user account will be completely disabled, and the My Account dashboard will reflect this disabled status.
User Switching Plugin ↑ Back to top
The User Switching plugin is a very useful tool that allows the store owner/manager to virtually login as a user. Once switched to the user login the user’s My Account dashboard can be viewed, along with the API Keys and API Downloads tab. This is useful when troubleshooting, investigating issues, and confirming template overrides are working as expected from the user’s view. Don’t forget to switch back using the switch link. Do a Google search for tutorials on how to get the most out of the User Switching plugin.
Order Status ↑ Back to top
By default API Resources only exist for orders that have a “completed” status. API Resources for the order status “processing” will only exist if WooCommerce > Settings > Products > Access Restriction > “Grant access to downloadable products after payment” is selected. Orders that had a “processing” status before “Grant access to downloadable products after payment” was selected will not have API Resources available until the order status has changed to “completed,” or the order is new and has a “processing” status, or has been changed back to a “processing” status.
See the WooCommerce Subscriptions section regarding Subscription statuses.
Order Screen ↑ Back to top
After a customer purchases an API Product/API resource, the API access information will appear on the Order screen, and in the My Account dashboard. On the Order Screen there will be three meta boxes: Master API Key, API Resources, and API Resource Activations.
Master API Key ↑ Back to top
The Master API Key is displayed on each Order screen for reference.
API Resources ↑ Back to top
The API Resources meta box displays information about the API Resource:
- Product Order API Key – Always unique, and used only as an option.
- Activation Limit – Can be changed to give the customer more activations.
- Resource Title – Makes product identification easier.
- Product ID – The unique ID used by the API Manager to identify a specific API Resource when clients send API requests.
- Access Expires – All API Resource access is governed by a subscription-type time limit that can optionally be indefinite.
API Resource Activations ↑ Back to top
- API Key Used – Displays the API Key type.
- Product ID – The product ID number.
- Time – The time the API Key was activated.
- Object – Where/what activated the API Key.
- Delete – The API Key activation can be deleted.
WooCommerce Subscriptions ↑ Back to top
Order Screen ↑ Back to top
Orders containing WooCommerce Subscription line items are displayed based on how WC Subscriptions works. All WooCommerce Subscriptions have the API Resources and API Resource Activations displayed on the WooCommerce Subscriptions Parent order screen.
API Resources and API Resource Activations will be displayed on the Switched Subscription order screen rather than the parent order screen after the client chooses to switch their subscription.
Subscription Status ↑ Back to top
A subscription is only considered active if it has a Active or Pending Cancellation status. As mentioned in the Order Status section, any order not in completed, or processing if other conditions are met, status will be considered inactive. WooCommerce Subscriptions has its own subscription statuses, but it shares one with WooCommerce, and that is the on-hold status, which is considered an inactive subscription. Read the following sections to see how API Keys are handled in these situations.
Switched Subscription ↑ Back to top
A WooCommerce Subscription product that is a Variable Subscription type has a Parent product that has one or more variation products, each with their own unique Product ID created by WooCommerce, and used by the API Manager to identify each variation, and locate information stored for that product, such as the download URL for software updates. Client software, such as the API Manager PHP Library, must authenticate using a Master API Key, or a Product Order API Key, and the Product ID unique to every product in the WooCommerce store. If a Subscription has been switched to a different Variable Subscription product variation, the API Resources and API Resource Activations will be displayed on the Switched Subscription order screen rather than the parent order screen, and the Product Order API Key will change, but the Master API Key will remain unchanged. The API Resource Activations for the switched Variable Subscription product variation will be deleted because the Product ID will have changed. If the API Resource Activations were not deleted, the client would receive an error the next time the client software checked for a software update, and the client would see many activations already exist for a product they are no longer subscribed to, and the client would need to login to the My Account dashboard, go to API Keys, and figure out which activations to delete for a product they no longer subscribe to.
If the client is using a Product Order API Key then the client needs to change the Product ID and the Product Order API Key, or download new software that has the Product ID, to reactivate the API Key. If the client is using a Master API Key then the client only needs to change the Product ID, or download new software that has the Product ID, to reactivate the API Key. It is much easier for the client to always use the Master API Key with WooCommerce Subscription products.
The API Manager currently has an API Access Expires field for Simple Product types that are not a WooCommerce Subscription product. In the future it is planned to allow this to become a feature rich alternative to WooCommerce Subscription products, that will allow clients to increase/decrease activations without changing subscriptions, and to continue to use the same activations without disruption.
API Key Expiration ↑ Back to top
API Keys for WooCommerce Subscription products will only work if the subscription has an Active or Pending Cancellation status. If a subscription has a status other than Active or Pending Cancellation status, and the client queries the API, views the API Keys or API Downloads, or if the store owner views an order containing an API Product, the API Manager automatically checks if the API Products still have an Active or Pending Cancellation status, and if not then the API Key activations associated with that API Product are deleted.
Subscriptions not in Active or Pending Cancellation status will have their API Key activations deleted, so the customer will need to reactivate their API Key(s), which also applies to a subscription that returns to Active status.
My Account Dashboard ↑ Back to top
API Keys ↑ Back to top
The API Keys page displays the unique Master API Key that can be used to activate any, and all, API Resources, and the API Resources table. The API Resources table shown below displays the Product Order API Key. Each product is listed individually alongside the corresponding Product Order API Key. If more than one order for the same product was made, each product order item would appear on different rows in the table, and their API Key activations available would be totalled individually.
API Keys table
API Resources in the API Keys table:
- Product by Product # – Displays the product title and product ID number.
- Product Order API Key – The unique API Key used to access and activate a single API Resource from the order.
- Expires – The time limit for the API Resource. This example shows API Resources with indefinite limits.
- Activations – The number of activations used out of the total activations available.
The Delete button will remove an activation.
The API Resources table shown below hides the Product Order API Key. Hiding or displaying the Product Order API Key can be set on the API Manager settings screen. In the table below, each of the same product is grouped together, even if they were purchased on different orders, and their API Key activations available are totalled together.
API Downloads ↑ Back to top
API Downloads table
- Product by Product # – Displays the product title and product ID number.
- Version – The version of this API Resource.
- Version Date – Date this download was released. Value set on the product edit screen.
- Documentation – A link to the Changelog and Documentation pages, if either value was set on the product edit screen.
- Download – A local, an Amazon S3, or remote URL to download the API Resource. The URL is secure, and expires between 5 – 60 minutes after the page is loaded, depending on the settings value. If a Dropbox key value was set in settings, the customer can save the API Resource directly to their Dropbox account.
Pre 2.0 API Keys ↑ Back to top
What happened to the API Keys that existed before version 2.0? The old API Keys were migrated to the Associated API Key database table. The Associated API Key database table can be used to associate a custom API Key to a product, and assigned to that product when it is purchased. The old API Keys, pre 2.0, are considered custom API Keys since they vary from the current API Key format. The Associated API Keys are not displayed in the My Account dashboard, or on the backend Order screen, at this time. We are looking at a meaningful way to display the Associated API Keys that will work best for all use cases.
API Documentation ↑ Back to top
As of version 2.1, the request key has been changed to wc_am_action. See changelog for details.
Postman is recommended for remote API testing.
The original intent of the API Manager was to provide API Key management, and software updates, for WordPress plugins and themes, however, over time this evolved to allow use cases for software, services, and everything in-between. The response data from the APIs can be appended and modified using filters to expand the use case possibilities. This documentation summarizes the default use.
Some required query string keys such as
plugin_name and
slug are used for WordPress plugin and theme software updates. If the software is not a WordPress plugin and theme, then any values can be paired with
plugin_name and
slug, since the desired response from the API in this case is a download URL (package), and new version available as part of the data needed to determine if a software update is available.
Some API responses may duplicate some data for backward compatibility pre version 2.0, which allows legacy client software to continue to work with version >= 2.0.
If you are still using an alphanumeric Software Title for the product_id, it is best practice as of version 2.0 to use the positive integer product_id, especially for variable products to avoid errors. If you are using the WooCommerce API Manager PHP Library for Plugins and Themes to connect to the API Manager on your store, you should update your WordPress plugins and themes to the latest version, since it is optimized for the latest version of the API Manager.
Client software sends a query string in an HTTP(s) request to the API Manager using either POST or GET. The query string contains a series of keys and values.
Description of keys sent in HTTP(s) API requests: ↑ Back to top
- wc_am_action (request pre version 2.1) – What action is being requested such as activate, when an API Key is being activated.
- instance – A unique alphanumeric string that is not repeated.
- product_id – A positive integer that corresponds to a real product in the WooCommerce store.
- api_key – A unique alphanumeric string that is not repeated, and is in authenticated requests. There are three types: Master API Key, Product Order API Key, and an Associated API Key.
- plugin_name
- WordPress Plugin: The lowercase name of the plugin directory, a forward slash, then the name of the root file. For example:
search-engine-ping/search-engine-pingor optionally
search-engine-ping/search-engine-ping.phpwith the .php ending.
- WordPress Theme: The lowercase name of the theme directory, a forward slash, then the lowercase name of the plugin directory. For example:
search-engine-ping/search-engine-ping.
- Non WordPress Software: Use something with a similar format.
- version – An iterative version of a software release, service, or some other product. Used for WordPress plugin update requests.
- object – Used to identify where the API Key is being activated from. The object could be a server, smart phone, or anything capable of sending an HTTP(s) request.
- slug – The lowercase name of the plugin/theme directory. For example:
search-engine-ping.
Note: WordPress plugins and themes require the ‘plugin_name’ and ‘slug’ keys and values depending on the API request, however non WordPress software can send fake data formatted as if it were WordPress software.
What is an instance ID? ↑ Back to top
An instance ID is generated on the object, such as inside client software or on a device, when it sends the first activate request to activate an API Key to gain access to API Resources. The instance ID is similar to a password, in that it must be unique, and should never be repeated elsewhere. It is okay to save the instance ID on the device, however it is important to understand how it will be used. For example, if a WordPress plugin or theme were being sold, each plugin/theme would create a unique instance ID for each API Key activation. For all other API queries, that same instance ID would be sent. When an API Key is deactivated, which is when a deactivate request is sent to the API, the instance ID could be deleted, and an new instance ID created when the API Key is activated again, or saved to be used when the API Key is activated again.
An instance ID is used for an activation, and for all API requests related to that activation, such as a status request. New activations require a new unique instance ID. An instance ID should only exist from the start of an activation, throughout all API requests using that activation, until that activation is deactivated, and then you could save the instance ID if that object will be activated again later, or delete the instance ID so that a new unique instance ID is created for a new unique activation of the same object. This allows the API Manager to distinctly identify which activation has access to API resources that the API Key activated allows access to.
How you create an instance ID is up to you. In the WooCommerce API Manager PHP Library for Plugins and Themes we use a core WordPress password generator.
Trusted Sources ↑ Back to top
The constant
WC_AM_TRUSTED_SOURCES is used to restrict access to the APIs by specific IP addresses. These IP addresses can be either IPv4 or IPv6. The format to define the constant is an array as shown below.
define( 'WC_AM_TRUSTED_SOURCES', array( 'ip address 1', 'ip address 2', 'ip address 3' );
Add the constant
WC_AM_TRUSTED_SOURCES definition to wp-config.php.
When
WC_AM_TRUSTED_SOURCES is defined only those IP addresses in the array list will be allowed to access the APIs, while all other IP addresses will be denied access. The use case for this implementation might be a remote server(s) hosting a membership service that requires an API Key to access services. The product would be hosted on a WooCommerce server with WooCommerce API Manager generating the API Key, and authenticating access via the APIs.
HTTP(S) Requests ↑ Back to top
The API Manager listens for API requests at the root URL of the web site. For example the root URL might be. The forward slash at the end of the URL should be taken into account when building the query string so that there is not a double forward slash (//) between the root URL and the query string. The endpoint used to connect to the APIs is wc-api as the key, and the value is wc-am-api, so the query string would start as the following:
At the end of this URL + query string any added keys and values would be added using ampersand (&), so it would look something like this:
The key and value would be replaced with something like
&product_id=19. The ampersand (&) cancontenates the next key=value to the query string.
The next key and value is
wc_am_action={value}. Below is a list of values for the wc_am_action key.
Each action, such as activate, deactivate, etc., is an API Endpoint that performs specific actions detailed in sections below.
- wc_am_action=activate
- wc_am_action=deactivate
- wc_am_action=status
- wc_am_action=information
- wc_am_action=update
- wc_am_action=plugininformation (Deprecated: for legacy use only)
- wc_am_action=pluginupdatecheck (Deprecated: for legacy use only)
To build on the URL + query string we could add a request to activate an API Key such that:
More would need to be added to the query string to provide all the required information to activate the API Key.
Next is a list of the request values with additional required keys each API function requires, along with other details.
wc_am_action=activate ↑ Back to top
Purpose: To activate an API Key that will then allow access to one or more API resources depending on the API Key type.
Response format: JSON
Required keys: api_key, product_id, instance.
Optional keys: object, version. (These values will be recorded in the database for this activation.)
Example query string:
Note: if the object value is defined as a URL, remote the http:// or https://, since some server security will mangle the entire query string, and break it as a result.
Example JSON success response:
{ "activated": true, "message": "0 out of 4 activations remaining", "success": true, "data": { "unlimited_activations": false, "total_activations_purchased": 4, "total_activations": 4, "activations_remaining": 0 }, "api_call_execution_time": "0.057487 seconds" }
Example JSON error response:
{ "code": "100", "error": "Cannot activate API Key. The API Key has already been activated with the same unique instance ID sent with this request.", "success": false, "data": { "error_code": "100", "error": "Cannot activate API Key. The API Key has already been activated with the same unique instance ID sent with this request." }, "api_call_execution_time": "0.027128 seconds" }
wc_am_action=deactivate ↑ Back to top
Purpose: To deactivate an API Key so the API Key.
Response format: JSON
Required keys: api_key, product_id, instance.
Example query string:
Example JSON success response:
{ "deactivated": true, "activations_remaining": "1 out of 4 activations remaining", "success": true, "data": { "unlimited_activations": false, "total_activations_purchased": 4, "total_activations": 3, "activations_remaining": 1 }, "api_call_execution_time": "0.062482 seconds" }
Example JSON error response:
{ "code": "100", "error": "The API Key could not be deactivated.", "success": false, "data": { "error_code": "100", "error": "The API Key could not be deactivated." }, "api_call_execution_time": "0.023881 seconds" }
wc_am_action=status ↑ Back to top
Purpose: Returns the status of an API Key activation. Default data returned for product_id includes total activations purchased, total activations, activations remaining, and if the API Key is activated.
Response format: JSON
Required keys: api_key, product_id, instance.
Optional key: version. (The version will be updated.)
Note: If the return value for
status_check is
active, or for
activated is
true, then the time limit has not expired and the API Key is still active. If this is for a subscription, then the subscription is still active. The API Manager verifies the API Key activation should still exists, and deletes it if it should not, due to an expired time limit or inactive subscription, before returning a response.
Example query string:
Example JSON success response:
{ "status_check": "active", "success": true, "data": { "unlimited_activations": false, "total_activations_purchased": 4, "total_activations": 4, "activations_remaining": 0, "activated": true }, "api_call_execution_time": "0.021769 seconds" }
{ "status_check": "inactive", "success": true, "data": { "total_activations_purchased": 4, "total_activations": 3, "activations_remaining": 1, "activated": false }, "api_call_execution_time": "0.01851 seconds" }
Example JSON error response:
{ "code": "100", "error": "No API resources exist.", "success": false, "data": { "error_code": "100", "error": "No API resources exist." }, "api_call_execution_time": "0.011719 seconds" }
wc_am_action=information ↑ Back to top
Purpose: Data returned depends on if the request was authenticated or not.
Response format: JSON
Required keys if authenticating: api_key, product_id, plugin_name, instance, version.
Example query string:
Note: For
plugin_name requirements see Description of keys sent in HTTP(s) API requests.
Example JSON success response:
{ "success": true, "data": { "package": { "product_id": "62912" }, "info": { "name": "Search Engine Ping", "active_installs": 4, .034218 seconds" }
Example JSON error response:
{ "code": "100", "error": "The product ID 62912222 could not be found in this store.", "success": false, "data": { "error_code": "100", "error": "The product ID 62912222 could not be found in this store." }, "api_call_execution_time": "0.001816 seconds" }
Required keys if not authenticating: product_id, plugin_name.
Example query string:
Note: For
plugin_name requirements see Description of keys sent in HTTP(s) API requests.
Example JSON success response:
{ "success": true, "data": { "info": { "name": "Search Engine Ping", .018425 seconds" }
Example JSON error response:
{ "code": "100", "error": "The product ID 629122 could not be found in this store.", "success": false, "data": { "error_code": "100", "error": "The product ID 629122 could not be found in this store." }, "api_call_execution_time": "0.001346 seconds" }
wc_am_action=update ↑ Back to top
Purpose: Returns whether a software update is available. If the request is authenticated with an instance ID, then the URL to the file download is returned.
Response format: JSON
Required keys if authenticating: api_key, product_id, plugin_name, instance, version.
Optional key: slug. (slug is optional, but preferred.)
Example query string:
Note: For
plugin_name requirements see Description of keys sent in HTTP(s) API requests.
Example JSON success response:
Note:
package is the file download URL.
{ "success": true, "data": { "package": { "product_id": "62912", "id": "search-engine-ping-62912", "slug": "search-engine-ping", "plugin": "search-engine-ping/search-engine-ping.php", "new_version": "1.4", "url": "", "tested": "4.2", "upgrade_notice": "", "package": "..." } }, "api_call_execution_time": "0.049395 seconds" }
Example JSON error response:
{ "code": "100", "error": "The product ID 629122 could not be found in this store.", "success": false, "data": { "error_code": "100", "error": "The product ID 629122 could not be found in this store." }, "api_call_execution_time": "0.001345 seconds" }
Required keys if not authenticating: product_id, plugin_name.
Example query string:
Note: For
plugin_name requirements see Description of keys sent in HTTP(s) API requests.
Example JSON success response:
{ "success": true, "data": { "package": { "id": "search-engine-ping-62912", "slug": "search-engine-ping", "plugin": "search-engine-ping/search-engine-ping.php", "new_version": "1.4", "url": "", "tested": "4.2", "upgrade_notice": "", "package": "" } }, "api_call_execution_time": "0.010199 seconds" }
Example JSON error response:
{ "code": "100", "error": "The product ID 629122 could not be found in this store.", "success": false, "data": { "error_code": "100", "error": "The product ID 629122 could not be found in this store." }, "api_call_execution_time": "0.001348 seconds" }
wc_am_action=plugininformation ↑ Back to top
Response format: serialized
Same as wc_am_action=information, except the response format is serialized, which is required to work with WordPress plugin and theme update requests that use the pre-2.0 API Manager PHP Library.
wc_am_action=pluginupdatecheck ↑ Back to top
Response format: serialized
Same as wc_am_action=update, except the response format is serialized, which is required to work with WordPress plugin and theme update requests that use the pre-2.0 API Manager PHP Library.
Troubleshooting ↑ Back to top
As mentioned in the Settings section, several different options are available to log API requests and responses by turning those options on in the API Manager settings page. Postman is recommended for API testing.
API Load/Speed Test ↑ Back to top
Test Description ↑ Back to top ↑ Back to top
-. It costs $15/month.
The server used in the test is the live server at toddlahman.com, which has a large database of customers, so the test could reflect a real-world result.
Test Results ↑ Back to top.
Overriding templates ↑ Back to top
Templates in the WooCommerce API Manager can be overridden using the same approach as is used to override a WooCommerce template. Directions are as follows:.
WooCommerce API Manager Hook Reference (Not Completed) ↑ Back to top
WooCommerce API Manager Hook Reference
Data Structures and Storage (Not Completed) ↑ Back to top
Data Structures and Storage
Preventing Unauthorized Software Use ↑ Back to top
The API Manager can be used to prevent software, or a service, from being used until after the API Key has been activated. The documentation for the WooCommerce API Manager PHP Library for Plugins and Themes has an example on how to prevent use of software until after the API Key has been activated. This method can also be used to disable software if the API Key has expired. How this is implemented is completely up to the software author. A close examination of the API functions will provide more information on which will be needed to take appropriate action.
The approach taken most often is to allow the customer to continue using the software after the API Key has expired, just like a desktop version of the software would, but to deny software updates, which the API Manager will do if the API Key time limit for that API Resource has expired. If the software is still in use the customer can still see if software updates are available even after the API Key time limit has expired, but the customer cannot get the update until they renew the API Key time limit through a new purchase, or renewing a WooCommerce Subscription.
Troubleshooting ↑ Back to top
Read the Self-Service Guide ↑ Back to top
Review the WooCommerce Self-Service Guide.
Data Update Not Completing ↑ Back to top
The first step is to make sure all plugins, themes, and theme template overrides are up-to-date. If you have your own server, make sure all software is up-to-date. Anything that is out-of-date, and needs to be updated, can throw an error that can prevent things from working in the background even though you are not seeing the error, and out-of-date software is security risk. Also, make sure you know what the latest WordPress, WooCommerce, and WooCommerce API Manager requirements are for versions of PHP and MySQL, as outdated versions can and will cause issues. Keeping software up-to-date is the surest way to avoid a ton of time trying to figure out what is going wrong.
The next step is to disable all plugins except WooCommerce, WooCommerce API Manager, and WooCommerce Subscriptions, if you have that plugin, then see if the data update completes. It could be that the server does not have sufficient RAM to process the update.
Go to WooCommerce > Status and look for recommendations for updates.
Below are some server settings to check, if you have your own server, which are listed below as Apache errors, but are not Apache specific.
- max_execution_time = 30 – is changed in the php.ini file on Linux servers.
- max_allowed_packet = 128M – is a MySQL setting and is changed on Linux servers in /etc/my.cnf.
If a data update does not seem to be completing, it could be because the server does not have enough RAM or CPUs to handle the normal workload, and an update as well. The updater in the API Manager monitors the memory usage to make sure it never exceeds 90%, and it will pause if it does then starts again where it left off. Slow servers will take a while to update, and that is compounded by a large database.
If you see 500 Internal Server Error, or Error Connection Timed Out, then the machine either exhausted its memory at that moment, or a plugin or theme is throwing a fatal error. Updating software will prevent fatal errors, and increasing RAM, or modifying WordPress settings to increase memory, will fix memory issues, but only if there is enough RAM to start with.
Go to WooCommerce > Status > Logs > and check for any fatal-errors … logs.
Go to WooCommerce > Status > Logs > wc_am_db_updates … will display the current status of the API Manager data update process.
If all else fails check the PHP, and web server (Apache, Nginx, etc.), error logs for clues as to what errors are preventing the update process from completing.
Pre 2.0 API Keys don’t work ↑ Back to top
API Keys that existed before version 2.0 still exist, but they have moved to the Associated API Key database table. The pre 2.0 API Keys will still work for activation, deactivation, and status queries, but may not work for updates in all cases if the pre 2.0 version of the PHP Library is being used for WordPress plugins and themes. If updates are not working for pre 2.0 API Keys for WordPress plugins and themes, update to the post 2.0 version of the PHP Library.
Pre 2.0 API Keys will not be displayed on the order screen, or on the My Account dashboard. The Master API Key will be displayed. If set under settings, the Product Order API Key will also be displayed, however the default is to only show the Master API Key.
If you are NOT using the PHP Library for WordPress plugins and themes, then refer to the API documentation to make sure your queries contain all the required keys and values.
“No API resources exist” Error Message ↑ Back to top
If you see the message “No API resources exist” when attempting to activate an API Key, this means no API Resource record could be found for this purchase. To know if an API Resource exists, go to the Order screen where there should be a record in the API Resources meta box. API Resources only exist for an order if the order has a “completed” or “processing” status. The record is searched for using the API Key and Product ID, so check to make sure both of those values are correct on the Order screen API Resources meta box, or the API Keys tab in the customer’s My Account dashboard.
Make sure to check the product itself, because the API checkbox should be checked on the Product edit screen, or an API Resource will not be created when the product is purchased. Checking the API check box on the Product edit screen after purchases have been made will trigger a background process to create API Resources from completed or processing orders previously made for that product.
If the product is an API product and a WooCommerce Subscription, the API Resource will not be displayed in the Order screen API Resources meta box, be available to the customer via the API, or be displayed in the My Account dashboard, if the subscription has expired.
If the product is an API product, and the API Access Expires time limit has expired, the API Resource will not be displayed in the Order screen API Resources meta box, be available to the customer via the API, or be displayed in the My Account dashboard..
No file defined or Software Download/Update Failed ↑ Back to top
When a Downloadable file is added to a product, that first file URL is used for software updates and My Account API Downloads. Prior to API Manager 2.0 the
woocommerce_downloadable_product_permissions table was used to verify download permission, and other criteria, but it has created unexpected issues over time, as a result the API Manager no longer relies on the
woocommerce_downloadable_product_permissions table for local server download data as of version 2.0.7, but rather only looks for the first Downloadable files URL on the product. One of the issues in using the
woocommerce_downloadable_product_permissions table began in WooCommerce version 3.0. Please read for more information on how adding/changing Downloadable files URLs on products after WooCommerce 3.0 would cause download/update URLs to stop working.
If the error message “No file defined” appears for a local server download, check that the product has a Downloadable files URL, and it is the first file listed. If the error persists, try removing the Downloadable files, update the product, and add them back, then click update again. Misconfigured web servers, firewalls, or file blocking rules in a plugin or the web server can also cause file download failures. To completely avoid the local server download issues serve downloads from Amazon S3..
Apache error AH01067 ↑ Back to top
Error log message: AH01067: Failed to read FastCGI header, referrer: …
Edit the php.ini file to at least max_execution_time 30 or greater, but not too much or PHP scripts will take too long to complete:
max_execution_time = 30
If you are running a Apache proxy, the timeout must be greater than ( > ) the php max_execution_time.
Apache error AH01075 ↑ Back to top
Error log message: The timeout specified has expired: AH01075: Error dispatching request to …
In the Apache config file set the ProxyTimeout to 1800, or whatever works for you, but the timeout must be greater than ( > ) the php max_execution_time in the php.ini file.
ProxyTimeout 1800
Apache error AH01071 ↑ Back to top
Error log message: AH01071: Got error ‘PHP message: PHP Warning: Error while sending QUERY packet …
This error happens when the database cannot handle more connections, because there are too many requests for it to handle. If the server is on a shared host, you need to consider upgrading to an account that has a database that can handle more traffic. If you have your own server then edit the /etc/my.cnf file as follows keeping in mind that you can use a lower allocation than 128 MB.:
max_allowed_packet = 128M | https://docs.woocommerce.com/document/woocommerce-api-manager/ | 2020-01-17T19:06:25 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.woocommerce.com |
Quarantine hits -
quarantine_hits
The default quarantine action for malware hits.
Default: 0
Quarantine clean -
quarantine_clean
Try to clean string based malware injections. NOTE: quarantine action must be set to "move to quarantine and alert".
Default: 0
Suspend user -
quarantine_suspend_user
The default suspend action for users wih hits. When enabled a users shell access will be disabled via the command: /usr/sbin/usermod -s /bin/false user
Default: 0
Suspend user min userid -
quarantine_suspend_user_minuid
The minimum userid value that can be suspended.
Default: 10000
Quarantine on error -
quarantine_on_error
When using an external scan engine, such as ClamAV, should files be quarantined if an error from the scanner engine is received? This is defaulted to 1, always quarantine, as ClamAV generates an error exit code for trivial errors such as file not found. As such, a large percentage of scans will have ClamAV exiting with error code 2.
Default: 1 | https://docs.danami.com/sentinel/settings/antimalware/quarantine-settings | 2020-01-17T18:37:53 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.danami.com |
Using Dotscience with arbitrary programs
If you have arbitrary code you need to run, you can use the ds command-line tool to integrate it with Dotscience.
This tutorial demonstrates how Dotscience can be used in Script mode. You can run any script (Python, R, bash, etc) on your workspace by installing the
ds command-line tool on your Mac or Linux workstation, and using
ds to interact with your Dotscience project and runners. For more information on the different modes in which Dotscience can be used, see the reference section on Dotscience modes.
Install the Dotscience (ds) client
sudo mkdir -p /usr/local/bin sudo curl -sSL -o /usr/local/bin/ds(uname -s)/ds
Make the binary executable
sudo chmod +x /usr/local/bin/ds
Log into Dotscience and go to Account > Keys and copy your API key.
echo <api key> | ds login <username>
Configure the (ds) client
By default the
ds is configured to speak to. If you are a user of our SaaS service you don’t need to configure ds.
Note: If you have an enterprise installation of Dotscience you would need to set the location of the Dotscience hub with
ds set server-url http(s)://your-dotscience-hub-url
A hello world run
Create a directory to run our tutorial
mkdir ds-run-test cd ds-run-test
Create a project in Dotscience with
ds
$ ds project create hello-ds 2f6fc697-fd46-4ba9-a41f-05bcd569e36e
Run a sample program - the first argument is the local directory which get copied into the workspace, the second argument is the workspace name, the third argument is a docker image name and then the command you want to run
echo 'import dotscience as ds; ds.script(); ds.start(); ds.publish("hello, world")' > test.py ds run -p hello-ds --upload-path . python test.py Executing run ID 60691107-51b7-4d51-9e23-a0a972378161... [[DOTSCIENCE-RUN:e4729f40-437a-45e1-bd70-e023df59961d]]{ "description": "hello, world", "end": "20191111T151011.932338", "input": [], "labels": {}, "output": [], "parameters": {}, "start": "20191111T151011.932327", "summary": {}, "version": "1", "workload-file": "test.py" }[[/DOTSCIENCE-RUN:e4729f40-437a-45e1-bd70-e023df59961d]] Task succeeded. Submitted by userxxx. Type: command Image: quay.io/dotmesh/dotscience-python3:0.6.7 ('quay.io/dotmesh/[email protected]:bab2a7f55599a6222690c318a633b71cb65018fb7c5b8d78c3630ef8bcfb5dd6') Command: "python" "test.py" Ran from 2019-11-11T15:10:10.319691039Z until 2019-11-11T15:10:17.52391425Z ...
Use the Dotscience Python Library to annotate your script with input, output, params & stats, and runs will start appearing in the Runs tab.
The
ds run we did above registered the following Dotscience run
The compute for this is provided by the runner and this can be either your own self-service compute or one that is available on Dotscience. You can run any scripts with
ds run and Dotscience has an internal scheduler that sends the script to be run on the first available runner. | https://docs.dotscience.com/tutorials/script-based-development/ | 2020-01-17T18:54:43 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['/hugo/script-automation/provenance.png', None], dtype=object)] | docs.dotscience.com |
This procedure trains a classification model and stores the model file to disk.
A new procedure of type
classifier.train named
<id> can be created as follows:
mldb.put("/v1/procedures/"+<id>, { "type": "classifier.train", "params": { "mode": <ClassifierMode>, "multilabelStrategy": <MultilabelStrategy>, "trainingData": <InputQuery>, "algorithm": <string>, "configuration": <JSON>, "configurationFile": <string>, "equalizationFactor": <float>, "modelFileUrl": <Url>, "functionName": <string>, "runOnCreation": <bool> } })
with the following key-value definitions for
params:
This procedures supports many training algorithm. The configuration is explained on the classifier configuration page.
The status of a Classifier procedure training will return a JSON representation of the model parameters of the trained classifier, to allow introspection.
The
mode field controls which mode the classifier will operate in:
booleanmode will use a boolean label, and will predict the probability of the label being true as a single floating point number.
regressionmode will use a numeric label, and will predict the value of the label itself.
categoricalmode will use a categorical (multi-class) label, and will predict the probability of each of the categories independently. This style therefore produces multiple outputs.
multilabelmode will do multi-label classification by using a set of categorical (multi-class) labels, and will predict the probability of each of the categories independently. This style therefore produces multiple outputs. The
multilabelStrategyfield controls how multilabel classification is handled.
In all operation modes but
multilabel, the label is a single scalar value. The
multilabel handles
categorial classification problems where each example has a set of labels instead of a single one.
To this end the
label input must be a row. In this row each column with a non-null value will be a
label value in the example's set. The column name is used to identify the label, while the value itself is disregarded.
This makes multi-label classification easy to use with bag of words, for example.. | https://docs.mldb.ai/doc/builtin/procedures/Classifier.md.html | 2020-01-17T18:27:27 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.mldb.ai |
TaxJar for WooCommerce provides sales tax calculations, reporting and filing for WooCommerce merchants and developers.
Visit TaxJar.com to learn more about TaxJar for WooCommerce.
Requirements ↑ Back to top
- WordPress 4.0+
- WooCommerce 3.0+
- A TaxJar account with API token
Installation ↑ Back to top
- Download the TaxJar for WooCommerce Extension.
- Go to: WordPress Admin > Plugins > Add New and Upload Plugin with the file you downloaded with Choose File.
- Install Now and Activate the extension.
More information at: Install and Activate Plugins/Extensions.
- Go to:
- Sign up for a TaxJar free trial by entering your email address and password.
- Enter preferences for three steps – Your information, Select data sources, Start import.
- Most will want Store Integration > WooCommerce or CSV Import.
- It’s also possible to skip the Data and Import steps and complete later.
- Go to Account > SmartCalcs API, and select Generate API Live Token.
- Copy your SmartCalcs API Token. Or leave window open for copy/paste in below setup.
Setup and Configuration ↑ Back to top
On your WordPress/WooCommerce site:
- Go to: WooCommerce > TaxJar.
- Copy and paste your API token from your new or existing TaxJar account into the box under Step 1: Activate your TaxJar WooCommerce Plugin.
- Save changes. This opens Step 2: Configure your sales tax settings on the same screen.
- Go to: WooCommerce > Settings > General and verify your Store Address BEFORE enabling TaxJar. TaxJar automatically detects your Ship From Address by looking at your Store Address.
- If you make changes, don’t forget to Save changes.
- If everything is correct, no need to save.
- Go back to: WooCommerce > Settings > Integration.
- Tick the box for Enable TaxJar Calculations.
- Sales Tax Reporting is covered in the section below, and the box for Enable order downloads need not be ticked at this time.
- Tick the box for Enable logging under Debug Log. Optional but recommended as it can be helpful for troubleshooting purposes.
- Save changes.
Nexus Addresses
Once TaxJar is enabled for your WooCommerce store, a list of nexus states/regions appears under the Sales Tax Calculations checkbox.
- If none appear, you need to select:
- Sync Nexus Addresses if you recently made changes.
or
- Manage Nexus Locations and edit/delete/add them in your TaxJar account, and then Sync.
Confirm that all of your Nexus addresses are saved in TaxJar.
Currently TaxJar supports:one region from your Store Address in other countries
If you’re unsure where you need to collect sales tax, read our post on Sales Tax Nexus Defined. We also provide Sales Tax Guides for each U.S. state.
Product Taxability
To exempt certain product categories such as clothing from sales tax, create a custom tax class and assign it to your products:
- Go to: WooCommerce > Settings > Tax.
- Next to “Additional tax classes” there’s a box where you can type in a new tax class.
- To set up a clothing tax class, add “Clothing – 20010” on a new line:
20010is the clothing product tax code passed to our sales tax API for exemptions. If your products belong in another category, you can find a list of categories and tax codes we support here:
- Once you add a new tax class, make sure your products are assigned to the new tax class. When editing a product, change the tax class to “Clothing – 20010” under the General tab and save it:
Now we’ll pass a product tax code with this product when making calculations through our sales tax API. For variable products, make sure each variation tax class is set to “Same as parent”:
Customer Taxability
To exempt customers from sales tax, edit a given user in the WordPress admin panel and update the options listed under TaxJar Sales Tax Exemptions:
- Go to: Users > All Users.
- Edit a customer / user.
- To exempt the customer, change “Exemption Type” to “Wholesale / Resale”, “Government”, or “Other”:
- To exempt the customer in one or more states, use the multi-select field to select the states. If no states are selected, the customer will be exempt in all states:
- Click the “Update User” button at the bottom of the screen to save the customer and sync them to TaxJar.
Sales Tax Reporting ↑ Back to top
To import orders into TaxJar for sales tax reporting and filing, perform the following steps:
- Go to: WooCommerce > TaxJar.
- Tick the box for Sales Tax Reporting:
- Save changes.
Our plugin will automatically sync your orders to TaxJar through the API. You can see how this works behind-the-scenes by going to the TaxJar tab under WooCommerce > Settings and clicking the “Sync Queue” link:
If you need to backfill your historical WooCommerce orders into TaxJar, use the Transaction Backfill tool:
How SmartCalcs Works ↑ Back to top
Your store seems to be calculating sales tax correctly, but how many SmartCalcs API calls are made per order?
The latest version of our plugin only makes live API calls when the order resides in a state where you have Nexus. This saves you a lot of API calls and money.
API calls are cached using the WordPress Transients API. If you have a customer repeatedly loading the checkout page without changing their shipping info, your store will not make additional API calls. On average, the SmartCalcs integration makes 2-3 API calls per order in a Nexus state.
API calls are only made under three conditions:
- Cart shipping and tax estimate for a Nexus state
- Calculating taxes from an order inside WooCommerce > Orders
Our SmartCalcs integration hooks onto the
woocommerce_after_calculate_totals action, only if sales tax calculations are enabled.
After a customer completes an order using SmartCalcs, our plugin stores the rate region ID and sales tax amount (both order and shipping tax) in your database.
How Reporting Works ↑ Back to top
Once enabled, TaxJar will automatically import your order and refund transactions from WooCommerce through our API. Orders must be in a
completed or
refunded status for TaxJar to import them. This ensures that only completed orders and refunds are imported into the system.
Previous versions of our plugin (before version 3.0) used the WooCommerce REST API to import transactions on a nightly basis. Our plugin generated API keys after enabling order downloads to TaxJar. If you’ve already upgraded to version 3.0 or later, we automatically migrated your connection to our new transaction sync. You’ll no longer use the TaxJar app to backfill older transactions. Instead, use the Transaction Backfill tool under WooCommerce > Settings > TaxJar to push historical orders into TaxJar.
International Stores ↑ Back to top
TaxJar supports checkout calculations powered by SmartCalcs in more than 30 countries, including VAT in the EU and Canada. To perform international calculations, set the country as your Store Address under WooCommerce > Settings > General.
FAQ ↑ Back to top
The Sales Tax Reporting feature isn’t importing orders, what’s the issue? ↑ Back to top
To troubleshoot, go to Connect Your WooCommerce Account article. Ensure that:
- Your server meets WooCommerce system requirements.
- Your PHP memory limit is 64 MB or higher (128 MB+ recommended). If you’re unsure, contact your hosting provider.
Questions & Feedback ↑ Back to top
Have a question or need assistance? Get in touch with TaxJar experts.
To help us help you, please include the TaxJar plugin version in your support email and any relevant log entries. You can find the plugin version under Plugins > Installed Plugins. | https://docs.woocommerce.com/document/taxjar/ | 2020-01-17T19:58:09 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['https://docs.woocommerce.com/wp-content/uploads/2016/09/nexus-addresses.png?w=950',
'TaxJar Nexus Addresses'], dtype=object)
array(['https://docs.woocommerce.com/wp-content/uploads/2016/10/product-variations-same-as-parent.png?w=550',
'Product Variation Tax Classes'], dtype=object)
array(['https://docs.woocommerce.com/wp-content/uploads/2016/10/taxjar-woocommerce-sync-queue.png',
'TaxJar Transaction Sync Queue'], dtype=object)
array(['https://docs.woocommerce.com/wp-content/uploads/2016/10/taxjar-woocommerce-transaction-backfill.png',
'TaxJar Transaction Sync Backfill'], dtype=object) ] | docs.woocommerce.com |
Yes. We provide support by email and via the in-app chat window for all our customers.
See our support article for how to contact us.
We're based in the UK so our core hours are between 8AM and 6PM GMT Monday - Friday, but if you have any questions outside of those hours don't be afraid to get in touch - you can often get a reply from us outside of our official support hours. | http://docs.gearset.com/en/articles/606064-is-support-included-in-the-price | 2020-01-17T18:27:17 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.gearset.com |
.
MultiDirectory recommends a few plugins to extend it's functionality. These plugins can be installed automatically from the notification in WordPress Administration Panel (Dashboard). Although, MultiDirect.
This plugin enable the directory creation.
If you need a contact form, you can use this plugin.
We have included MultiDirectoryYou must install all the recommended plugins before import the demo MultiDirectory blank child theme
You can read more about Child theme on WordPress Codex →
If you are migrating from another theme, then upon activating MultiDirectory you may find previously added images display strangely with weird aspect ratios and sizes. Don't worry. It is normal. The image thumbnails are required to be recreated using MultiDirectory's presets.
There is a very easy fix to this issue. Just install Force Regenerate Thumbnails plugin.
The Site Icon is used as a browser and app icon for your site. Icons must be square, and at least 512px wide and tall.
By default, MultiDirectory styles the site title with pre-defined styles and displays it as Text Logo. If you'd like to upload your custom logo for your site, please follow the steps below:
By default, MultiDirectory uses Heebo font as default theme font family. But MultiDirectory provides an option to change it in case you dont like it. It is easy to change it, please follow the steps below.
MultiDirectory allows you to change almost every element colors, but if you just want to change the accent color, please follow these steps:
Container layouts is a variation of the theme container. MultiDirectory provides 3 option to choose:
MultiDirect MultiDirect
multidirectory', '' );
multidirectory.potin Poedit. Translate as needed.
de_DE.po. | http://docs.theme-junkie.com/multidirectory/ | 2020-01-17T18:51:51 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.theme-junkie.com |
A control that is used to publish (preview, print and export) documents in ASP.NET applications and supports HTML5/JS technology.
Namespace: DevExpress.XtraReports.Web
Assembly:
DevExpress.XtraReports.v19.2.Web.WebForms.dll
public class ASPxWebDocumentViewer :
ASPxWebClientUIControl,
IControlDesigner
Public Class ASPxWebDocumentViewer
Inherits ASPxWebClientUIControl
Implements IControlDesigner
Main Features
The HTML5 Document Viewer includes the following main features:
Quick Start
To add a Web Document Viewer to your application, do the following:
In the application's Web.config file, add the "resources" section as shown below.
<devExpress>
<!-- ... -->
<resources>
<add type="ThirdParty" />
<add type="DevExtreme" />
</resources>
</devExpress>
Alternatively, to avoid automatic loading of any libraries by a control (e.g., when such libraries are already referenced on the web page), declare an empty "resources" section and manually attach DevExtreme resources and the required third-party libraries to the web page.
<resources>
</resources>
Deleting the DevExpress "resources" section from the Web.config file will enable the default behavior (with automatic loading only of DevExtreme, without adding third-party libraries).
To learn more about this configuration, see Embedding Third-Party Libraries.
To use the Document Viewer on mobile devices, enable the ASPxWebDocumentViewer.MobileMode property.
For the Mobile Viewer to properly render document pages in a mobile browser, include the viewport <meta> tag to your HTML file inside the <head> block as shown below.
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=0" />
To learn more about this mode, see Mobile Mode.
Additional Information
To learn more about using the Web Document Viewer, refer to the following topics: | https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.Web.ASPxWebDocumentViewer | 2020-01-17T19:46:02 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
All content with label as5+batching+build+gridfs+infinispan+jbossas+jta+listener+locking+mvcc+read_committed+snapshot+xsd.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, release, query, deadlock, archetype, lock_striping, nexus, guide, schema, cache,
amazon, s3, grid, test, jcache, api, ehcache, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, jboss_cache, index, events, hash_function, batch, configuration, buddy_replication, loader, xa, write_through, cloud, tutorial, notification, xml, jbosscache3x, distribution, meeting, cachestore, data_grid, resteasy, hibernate_search, cluster, websocket, transaction, async, interactive, xaresource, searchable, demo, installation, scala, ispn, client, non-blocking, migration, jpa, filesystem, tx, gui_demo, eventing, client_server, infinispan_user_guide, standalone, hotrod, webdav, repeatable_read, docs, consistent_hash, store, faq, 2lcache, jsr-107, jgroups, rest, hot_rod
more »
( - as5, - batching, - build, - gridfs, - infinispan, - jbossas, - jta, - listener, - locking, - mvcc, - read_committed, - snapshot, - xsd )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+batching+build+gridfs+infinispan+jbossas+jta+listener+locking+mvcc+read_committed+snapshot+xsd | 2020-01-17T18:20:33 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.jboss.org |
All content with label async+br+client+distribution+gridfs+guide+hash_function+infinispan+locking+query+replication+searchable+snapshot.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, rehash, transactionmanager, dist, release, partitioning, deadlock, intro, archetype, lock_striping, jbossas, nexus, schema,
listener, state_transfer, cache, s3, amazon, grid, memcached, test, jcache, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, ec2, 缓存, hibernate, aws, interface, clustering, setup, eviction, out_of_memory, concurrency, jboss_cache, index, events, configuration, batch, buddy_replication, loader, colocation, cloud, remoting, mvcc, tutorial, notification, presentation, murmurhash2, jbosscache3x, read_committed, xml, meeting, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, websocket, transaction, interactive, xaresource, build, hinting, demo, scala, installation, ispn, command-line, non-blocking, migration, rebalance, filesystem, jpa, tx, user_guide, gui_demo, eventing, shell, client_server, infinispan_user_guide, murmurhash, standalone, webdav, hotrod, repeatable_read, docs, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, rest, hot_rod
more »
( - async, - br, - client, - distribution, - gridfs, - guide, - hash_function, - infinispan, - locking, - query, - replication, - searchable, - snapshot )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/async+br+client+distribution+gridfs+guide+hash_function+infinispan+locking+query+replication+searchable+snapshot | 2020-01-17T19:17:06 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.jboss.org |
Directory
Searcher. Find All Method
Definition
Executes the search and returns a collection of the entries that are found.
public: System::DirectoryServices::SearchResultCollection ^ FindAll();
public System.DirectoryServices.SearchResultCollection FindAll ();
member this.FindAll : unit -> System.DirectoryServices.SearchResultCollection
Public Function FindAll () As SearchResultCollection
Returns
A SearchResultCollection object that contains the results of the search.
Exceptions
The specified DirectoryEntry is not a container.
Searching is not supported by the provider that is being used.
Remarks
Due to implementation restrictions, the SearchResultCollection class cannot release all of its unmanaged resources when it is garbage collected. To prevent a memory leak, you must call the Dispose method when the SearchResultCollection object is no longer needed. | https://docs.microsoft.com/en-us/dotnet/api/system.directoryservices.directorysearcher.findall?redirectedfrom=MSDN&view=netframework-4.8 | 2020-01-17T19:38:53 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.microsoft.com |
Private locations allow you to monitor internal-facing applications or any private URLs that aren’t accessible from the public internet. They can also be used to create a new custom Synthetics location.
The private location worker is shipped as a Docker container, so it can run on a Linux based OS or Windows OS if the Docker engine is available on your host and can run in Linux containers mode.
By default, every second, your private location worker pulls your test configurations from Datadog’s servers using HTTPS, executes the test depending on the frequency defined in the configuration of the test, and returns the test results to Datadog’s servers.
Once you create a private location, the process of configuring a Synthetics API test from that private location is completely identical to that for Datadog managed locations.
In the Datadog app, hover over UX Monitoring and select Settings -> Private Locations. Add a new private location:
Fill out the Location Details and click Save and Generate to generate the configuration file associated with your private location on your worker.
Copy and paste the first tooltip to create your private location configuration file.
Note: The configuration file contains secrets for private location authentication, test configuration decryption, and test result encryption. Datadog does not store the secrets, so store them locally before leaving the Private Locations screen. You need to be able to reference these secrets again if you decide to add more workers, or to install workers on another host.
Launch your worker as a standalone container using the Docker run command provided and the previously created configuration file:
docker run --init --rm -v $PWD/worker-config-<LOCATION_ID>.json:/etc/datadog/synthetics-check-runner.json datadog/synthetics-private-location-worker
If your private location reports correctly to Datadog, you will see the corresponding health status displayed if the private location polled your endpoint less than five seconds before loading the settings or create test pages:
In your private locations list, in the Settings section:
In the form when creating a test, below the private locations section:
You will also see private location logs populating similar to this example:
2019-12-17 13:05:03 [info]: Fetching 10 messages from queue - 10 slots available 2019-12-17 13:05:03 [info]: Fetching 10 messages from queue - 10 slots available 2019-12-17 13:05:04 [info]: Fetching 10 messages from queue - 10 slots available
You are now able to use your new private location as any other Datadog managed locations for your Synthetics API tests. This is specifically useful to monitor any internal endpoints you might have., which is a test web application.
For a more advanced setup, use the command and see
Learn more about Private Locations below:
docker run --rm datadog/synthetics-private-location-worker --help and check
After you set up your private location: | https://docs.datadoghq.com/getting_started/synthetics/private_location/ | 2020-01-17T19:47:33 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.datadoghq.com |
Gets a command to add a caption (numbered label) to a table.
readonly insertTablesCaption: InsertTablesCaptionCommand
Call the execute method to invoke the command. The method checks the command state (obtained via the getState method) to determine whether the action can be performed.
This command adds "Table {SEQ Table }" text at the current position in the document.
Usage example: | https://docs.devexpress.com/AspNet/js-RichEditCommands.insertTablesCaption | 2020-01-17T18:51:24 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
Fedora CoreOS Frequently Asked Questions
If you have other questions than are mentioned here or want to discuss further, join us in our IRC channel, irc://irc.freenode.org/#fedora-coreos, or on our will embrace a variety of containerized use cases, Red Hat CoreOS will provide a focused immutable host for OpenShift, released and life-cycled in tandem with the platform.
Does Fedora CoreOS replace Container Linux? What happens to CL?
Fedora CoreOS will eventually become the successor to Container Linux. The Container Linux project has a large installed base - it is a top priority to not disrupt that. The project will continue to be supported at least throughout 2019, allowing users ample time to migrate and provide feedback. Existing Container Linux users can be confident that support will continue while the next version is being created in parallel, in a non-disruptive way.
Does Fedora CoreOS replace Fedora Atomic Host? What happens to Fedora Atomic Host and CentOS Atomic Host?
Fedora CoreOS will also become the successor to Fedora Atomic Host. The current plan is for Fedora Atomic Host to have at least a 29 version and 6 months of lifecycle.
#fedora-coreos on IRC Freenode
forum at
website at
Twitter at @fedora (all Fedora and other relevant news)
Technical FAQ
Where can I download Fedora CoreOS?
Fedora CoreOS is under active development. There is currently a preview release available at getfedora.org.
Does Fedora CoreOS embrace the Container Linux Update Philosophy?
The CoreOS Update Philosophy stays as important to us as always. Yes, Fedora CoreOS comes with automatic updates and regular releases. Multiple update channels are provided catering to different users' needs. It will introduce?
How do I migrate from Container Linux to Fedora CoreOS?
How do I migrate from Fedora Atomic Host to Fedora CoreOS?
As with Container Linux, the best practice will be re-provisioning, due to the cloud-init/Ignition transition at least. Since Fedora CoreOS will be using rpm-ostree technology, it may be possible to rebase from Fedora Atomic Host to Fedora CoreOS, but it will not be recommended. It will be preferable to gain experience deploying systems using Ignition so that they can be re-provisioned easily if needed. This will all be part of a "migrating from Fedora Atomic Host" guide which will be published soon.
Which container runtimes are available on Fedora CoreOS?
Which platforms does Fedora CoreOS support? in production. For more about this, please refer to upcoming documentation. | https://docs.fedoraproject.org/en-US/fedora-coreos/faq/ | 2020-01-17T19:46:24 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.fedoraproject.org |
Inmanta Documentation¶
Welcome to the Inmanta documentation!
Inmanta is an automation and orchestration tool to efficiently deploy and manage your software services, including all (inter)dependencies to other services and the underpinning infrastructure. It eliminates the complexity of managing large-scale, heterogeneous infrastructures and highly distributed systems.
The key characteristics of Inmanta are:
Integrated: Inmanta integrates configuration management and orchestration into a single tool, taking infrastructure as code to a whole new level.
Powerful configuration model: Infrastructure and application services are described using a high-level configuration model that allows the definition of (an unlimited amount of) your own entities and abstraction levels. It works from a single source, which can be tested, versioned, evolved and reused.
Dependency management: Inmanta’s configuration model describes all the relations between and dependencies to other services, packages, underpinning platforms and infrastructure services. This enables efficient deployment as well as provides an holistic view on your applications, environments and infrastructure.
End-to-end compliance: The architecture of your software service drives the configuration, guaranteeing consistency across the entire stack and throughout distributed systems at any time. This compliance with the architecture can be achieved thanks to the integrated management approach and the configuration model using dependencies.
Currently, the Inmanta project is mainly developed and maintained by Inmanta nv.
- Quickstart
- Setting up the tutorial
- Automatically deploying Drupal
- Create your own modules
- Next steps
- Installation
- Architecture
- Language Reference
- Module guides
- Model developer documentation
- Create a configuration model
- Environment variables
- Module Developers Guide
- Test plugins
- Platform developer documentation
- Creating a new server extension
- Database Schema Management
- Define API endpoints
- Documentation writing
- Model Export Format
- Type Export Format
- Platform Developers Guide
- Administrator documentation
- Setting up authentication
- Configuration
- Logging
- Performance Metering
- Migrate from MongoDB to PostgreSQL
- Frequently asked questions
- Glossary
- Inmanta Reference
- Command Reference
- Configuration Reference
- Environment Settings Reference
- Compiler Configuration Reference
- Inmanta modules
- Module ansible
- Module apache
- Module apt
- Module aws
- Module bind
- Module collectd
- Module cron
- Module dns
- Module docker
- Module drupal
- Module exec
- Module graphite
- Module ip
- Module logging
- Module mongodb
- Module monitoring
- Module mysql
- Module net
- Module openstack
- Module openvswitch
- Module param
- Module php
- Module platform
- Module postgresql
- Module redhat
- Module rest
- Module ssh
- Module std
- Module ubuntu
- Module user
- Module varnish
- Module web
- Module yum | https://docs.inmanta.com/community/2019.5dev/ | 2020-01-17T18:21:12 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.inmanta.com |
routing profile
A Mapbox routing profile is a set of rules that a Mapbox routing engine uses to find the optimal route between two points. Routing profiles are generally optimized for the mode of transportation being used to get between locations.
The Mapbox Navigation service APIs have access to the following routing profiles:
Note that the Isochrone API and the Optimization API do not support the
mapbox/driving-traffic profile.
Related resources:
Was this page helpful? | https://docs.mapbox.com/help/glossary/routing-profile/ | 2020-01-17T19:08:01 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.mapbox.com |
Authenticate to the API
The Meridian API uses token-based authentication, in addition to cookie-based session ID authentication.
Get a Token to Authenticate
Complete these steps to get a token to authenticate to the Meridian API.
You'll generate application tokens on the Locations list page. If you're looking at a specific location, click the Aruba Meridian logo in the top left corner to return to the Locations list page.
- Once you've logged in to the Editor, click the Application Tokens tab.
- Click Add +.
- In the NAME field, give the application token a meaningful name.
- Click Generate Token.
- In the Permission Type dropdown, choose either Organization or Location. If you select Organization, the token will provide access to all of the locations in that organization.
- In the new Organization/Location dropdown, browse or search for a specific organization or location.
- In the Level dropdown choose either Owner or Read-Only. Owner tokens provide read and write access.
- Click Save.
Use the Token
In order to use the token to authenticate to the Meridian API, include an Authorization header with the token value in every request.
For example:
Token 1ab2cd345ef12gh34h45h67f12gh34h45h6712gh
Tokens don't expire, but they can be deleted. The easiest way to do this is on the Meridian Editor Permissions page on the Application Tokens tab. | https://docs.meridianapps.com/article/772-authenticate-to-the-api | 2020-01-17T18:15:13 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.meridianapps.com |
@Target(value={TYPE,METHOD}) @Retention(value=RUNTIME) @Documented public @interface DependsOn
A depends-on declaration can specify both an initialization-time dependency and, in the case of singleton beans only, a corresponding destruction-time dependency. Dependent beans that define a depends-on relationship with a given bean are destroyed first, prior to the given bean itself being destroyed. Thus, a depends-on declaration can also control shutdown order.. | https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/DependsOn.html | 2020-01-17T19:40:06 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.spring.io |
There are a number of details you must be aware of when baking light on models which use the LOD (level of detail) feature.
Baked indirect light on LOD models only works correctly when Realtime Global Illumination is switched off. If Realtime Global Illumination is switched on, the lower LOD models from the LOD group will not be lit correctly.
When you are using Unity’s LOD system in a scene with baked lighting, the highest level-of-detail model out of the LOD group is lit as if it was a regular static model, using lightmaps for the direct and indirect lighting.
For all the lower level-of-detail models in the group, only the direct lighting is baked, and the LOD system relies on light probes to sample indirect lighting.
This means if you want your lower level-of-detail models to look correct with baked light, you must position light probes around them to capture the indirect lighting during the bake.
If you do not use light probes, your lower LOD models will have direct light only, and will look incorrect:
To set up LOD models correctly for baked lighting, mark the LOD objects as Static for lightmapping:
Place light probes around the LOD objects using the light probes component.
After baking the light, your lower level-of-detail models show correctly show the indirect and bounced light, matching the highest level-of-detail model:
You should also be aware that only the highest level-of-detail model will affect the lighting on the surrounding geometry (for example, shadows or bounced light on surrounding buildings). In most cases this should not be a problem since your lower level-of-detail models should closely resemble the highest level-of-detail model.
2017–06–08 Page published with limited editorial review
Updated in 5.6
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2017.2/Documentation/Manual/LODForBakedGI.html | 2020-01-17T20:31:14 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.unity3d.com |
If user needs to use a Remote Human Task to provide the Work List functionality instead of the Embedded Human Task or if user wants to subscribe for Work List notifications, uncomment the following configuration in registry.xml file. Please note that it is essential to provide credentials and URL to connect to the remote instance (ex:- remote BPS server) if the use case is to use a Remote Human Task. The user should provide credentials and URL to connect to the local instance if you simply want to use Work List notifications.
<workList serverURL="local://services/" remote="false"> <username>admin</username> <password>admin</password> </workList>
Overview
Content Tools
Activity | https://docs.wso2.com/display/Governance540/Configuration+for+Work+List | 2020-01-17T18:34:26 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.wso2.com |
sh2peaks¶
Usage¶
sh2peaks [ options ] SH output
- SH: the input image of SH coefficients.
- output: the output image. Each volume corresponds to the x, y & z component of each peak direction vector in turn.
Description¶
Peaks of the spherical harmonic function in each voxel are located by commencing a Newton search along each of a set of pre-specified directions
The spherical harmonic coefficients are stored according the conventions described the main documentation, which can be found at the following link:
Options¶
- -num peaks the number of peaks to extract (default: 3).
- -direction phi theta (multiple uses permitted) the direction of a peak to estimate. The algorithm will attempt to find the same number of peaks as have been specified using this option.
- -peaks image the program will try to find the peaks that most closely match those in the image provided.
- -threshold value only peak amplitudes greater than the threshold will be considered.
- -seeds file specify a set of directions from which to start the multiple restarts of the optimisation (by default, the built-in 60 direction set is used)
- -mask image only perform computation within the specified binary brain mask image.
- -fast use lookup table to compute associated Legendre polynomials (faster, but approximate).
Jeurissen, B.; Leemans, A.; Tournier, J.-D.; Jones, D.K.; Sijbers, J. Investigating the prevalence of complex fiber configurations in white matter tissue with diffusion magnetic resonance imaging. Human Brain Mapping, 2013, 34(11), 2747-2766. | https://mrtrix.readthedocs.io/en/dev/reference/commands/sh2peaks.html | 2020-01-17T20:10:57 | CC-MAIN-2020-05 | 1579250590107.3 | [] | mrtrix.readthedocs.io |
Fluent Bit in normal operation mode allows to be configurable through text files or using specific arguments in the command line, while this is the ideal deployment case, there are scenarios where a more restricted configuration is required: static configuration mode.
Static configuration mode aims to include a built-in configuration in the final binary of Fluent Bit, disabling the usage of external files or flags at runtime.
The following steps assumes you are familiar with configuring Fluent Bit using text files and you have experience building it from scratch as described in the Build and Install section.
In your file system prepare a specific directory that will be used as an entry point for the build system to lookup and parse the configuration files. It is mandatory that this directory contain as a minimum one configuration file called fluent-bit.conf containing the required SERVICE, INPUT and OUTPUT sections. As an example create a new fluent-bit.conf file with the following content:
[SERVICE]Flush 1Daemon offLog_Level info[INPUT]Name cpu[OUTPUT]Name stdoutMatch *
the configuration provided above will calculate CPU metrics from the running system and print them to the standard output interface.
Inside Fluent Bit source code, get into the build/ directory and run CMake appending the FLB_STATIC_CONF option pointing the configuration directory recently created, e.g:
$ cd fluent-bit/build/$ cmake -DFLB_STATIC_CONF=/path/to/my/confdir/
then build it:
$ make
At this point the fluent-bit binary generated is ready to run without necessity of further configuration:
$ bin/fluent-bitFluent-Bit v0.15.0Copyright (C) Treasure Data[2018/10/19 15:32:31] [ info] [engine] started (pid=15186)[0] cpu.local: [1539984752.000347547, {"cpu_p"=>0.750000, "user_p"=>0.500000, "system_p"=>0.250000, "cpu0.p_cpu"=>1.000000, "cpu0.p_user"=>1.000000, "cpu0.p_system"=>0.000000, "cpu1.p_cpu"=>0.000000, "cpu1.p_user"=>0.000000, "cpu1.p_system"=>0.000000, "cpu2.p_cpu"=>0.000000, "cpu2.p_user"=>0.000000, "cpu2.p_system"=>0.000000, "cpu3.p_cpu"=>1.000000, "cpu3.p_user"=>1.000000, "cpu3.p_system"=>0.000000}] | https://docs.fluentbit.io/manual/installation/build_static_configuration | 2020-01-17T19:01:25 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.fluentbit.io |
responsemean¶
Usage¶
responsemean inputs output [ options ]
- inputs: The input response functions
- output: The output mean response function
Description¶
Example usage: responsemean input_response1.txt input_response2.txt input_response3.txt … output_average_response.txt
All response function files provided must contain the same number of unique b-values (lines), as well as the same number of coefficients per line.
As long as the number of unique b-values is identical across all input files, the coefficients will be averaged. This is performed on the assumption that the actual acquired b-values are identical. This is however impossible for the responsemean command to determine based on the data provided; it is therefore up to the user to ensure that this requirement is satisfied.
Options¶
- -legacy Use the legacy behaviour of former command ‘average_response’: average response function coefficients directly, without compensating for global magnitude differences between input files. | https://mrtrix.readthedocs.io/en/dev/reference/commands/responsemean.html | 2020-01-17T20:11:31 | CC-MAIN-2020-05 | 1579250590107.3 | [] | mrtrix.readthedocs.io |
,.
Optional section title
Add one or more sections with content | https://docs.telerik.com/reporting/report-items-barcode-qrcode-visual-structure | 2018-02-18T01:30:55 | CC-MAIN-2018-09 | 1518891811243.29 | [array(['/reporting/media/barcode-qrcode-version1.png',
'barcode-qrcode-version 1'], dtype=object)
array(['/reporting/media/barcode-qrcode-version40.png',
'barcode-qrcode-version 40'], dtype=object)
array(['/reporting/media/barcode-qrcode-structure.png',
'barcode-qrcode-structure'], dtype=object) ] | docs.telerik.com |
An enumeration defines a list of options. An attribute can be of type enumeration and that means that its value is one of the options of the enumeration. For example, the status of an order can be Open, Closed, In progress. An enumeration would be ideal to represent these options.
An enumeration has one or more enumeration values. Each value represents one option. An attribute of type enumeration can also be empty to represent an uninitialized state.
Common Properties
Name
The name of the enumeration.
Enumeration Values
Each enumeration value has a caption, a name and an image.
See Enumeration Values. | https://docs.mendix.com/refguide5/enumerations | 2018-02-18T01:17:34 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.mendix.com |
This article explains the manual changes required when upgrading to Telerik Reporting Q1 2014 (8.0.14.225).
WPF Report Viewer Dependencies
The viewer is build with Telerik UI Controls for WPF 2013.3.1202.40. If you are using a newer version consider adding binding redirects. For more information see: How to: Add report viewer to a WPF application
Silverlight Report Viewer Dependencies
The viewer is build with Telerik UI Controls for Silverlight 2013.3.1202.50.
WPF/Silverlight Report Viewer Implicit Styling
Starting from Telerik Reporting Q1 2014 both the Silverlight and WPF report viewers will support only the implicit styling, i.e. style without the x:Key attribute. For more information regarding the implicit styling please check the respective Setting a Theme (Using Implicit Styles) help article for WPF Report Viewer or Silverlight Report Viewer.
Because of that after upgrading to Q1 2014 both WPF/Silverlight report viewers may become blank. That is because the themes are no longer embedded in the assembly and are instead distributed as separate files. This means that the report viewer has no theme applied and it becomes blank. In order to apply a theme you will have to migrate from Style Manager to Implicit Styling. To do so follow these steps:
Add references to the following Telerik UI for WPF assemblies, which are usually located in C:\Program Files (x86)\Telerik\Reporting Q1 2014\Examples\CSharp\WpfDemo\bin (respectively for Silverlight are located in C:\Program Files (x86)\Telerik\Reporting Q1 2014\Examples\CSharp\SilverlightDemo\bin):
Telerik.Windows.Controls.dll
Telerik.Windows.Controls.Input.dll
Telerik.Windows.Controls.Navigation.dll
Telerik.ReportViewer.Wpf.dll (for Silverlight it is Telerik.ReportViewer.Silverlight.dll)
Add the respective xaml files for the desired theme. The themes are usually located in C:\Program Files (x86)\Telerik\Reporting Q1 2014\WPF\Themes (respectively for Silverlight are located in C:\Program Files (x86)\Telerik\Reporting Q1 2014\Silverlight\Themes). You will need these xaml files for each theme:
System.Windows.xaml
Telerik.Windows.Controls.xaml
Telerik.Windows.Controls.Input.xaml
Telerik.Windows.Controls.Navigation.xaml
Telerik.ReportViewer.Wpf.xaml (for Silverlight it is Telerik.ReportViewer.Silverlight.xaml)
Remove the telerikControls:StyleManager.Theme=”Vista” attribute from the report viewer - it is no longer required since the style manager is no longer used. Instead the themes are applied implicitly to all report viewers in the application, without setting any attribute.
Build, run and test the project.
Standalone Report Designer
TRDX files created by the Standalone Report Designer contain XML version | https://docs.telerik.com/reporting/upgrade-path-2014-q1 | 2018-02-18T01:27:23 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.telerik.com |
This checkbox is at the bottom line of Search Library. Select Limit to
Available to limit results to those titles that have items with a circulation
status of "available" (by default, either Available or Reshelving).
© 2008-2017 GPLS and others. The Evergreen Project is
a member of the Software
Freedom Conservancy. | http://docs.evergreen-ils.org/reorg/dev/cataloging/_limit_to_available.html | 2018-02-18T01:26:04 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.evergreen-ils.org |
Changing the Location of the Cache for Block-Level Restores for Exchange Databases
When you restore messages from a block-level backup, a cache of the restored extents is located in the Job Results directory on the MediaAgent. You can change the location of the cache, for example, to a location that has more space, using the s3dfsRootDir additional setting.
Procedure
- On the MediaAgent computer, add the s3dfsRootDir additional setting as shown in the following table.
For instructions on adding the additional setting from the CommCell Console, see Add or Modify an Additional Setting. | http://docs.snapprotect.com/netapp/v11/article?p=products/exchange_database/t_exdb_cache_location_changing.htm | 2018-02-18T00:58:23 | CC-MAIN-2018-09 | 1518891811243.29 | [] | docs.snapprotect.com |
Basic vs advanced editor
You can edit the view with two different editors, basic and advanced. We recommend starting with basic and then switching to advanced as needed. Certain functionality within Kumu is only available by using the advanced editor.
Basic editor
Click the settings icon () on the right side of the map to open the basic editor:
You'll see tabs for filter, cluster, decorate, and settings.
Advanced editor
Click "switch to advanced editor" at the bottom of the sidebar to open the advanced editor:
Everything you change in the basic editor is automatically transferred into raw code in the advanced editor. | https://docs.kumu.io/overview/basic-vs-advanced-editor.html | 2018-02-18T01:33:19 | CC-MAIN-2018-09 | 1518891811243.29 | [array(['../images/introduction-settings.png', None], dtype=object)
array(['../images/advanced-editor-hf.png', None], dtype=object)] | docs.kumu.io |
For a lot of businesses it makes sense to build the schedule based on estimated sales for the week. This way you can make sure you are never spending more on labor than you should be. In ZoomShift you can track your labor-to-sales ratio really easily.
If you click Tools -> Labor to Sales Calculator you will see a pop-up like the one below. You can use this pop-up to enter your estimated sales numbers and then see your labor to sales ratio calculated.
| http://docs.zoomshift.com/employee-scheduling/labor-to-sales-calculator | 2018-02-18T01:03:32 | CC-MAIN-2018-09 | 1518891811243.29 | [array(['https://images.contentful.com/7m65w4g847me/47UKiBHyY0keAscuyEgQoS/093a4fd6004fca206ace25746b27ee33/labor-to-sales-1.png',
None], dtype=object) ] | docs.zoomshift.com |
Overview¶
The Open Procurement ESCO procedure is plugin to Open Procurement API software.
REST-ful interface to plugin is in line with core software design principles.
Conventions¶
This plugin conventions follow the Open Procurement API conventions.
Main responsibilities¶
ESCO procedure is applied for all energy service procurements regardless their price. The main assessment criterion for this type of procurement procedure is Net Present Value (NPV). ESCO procedure features reverse approach compared to the other openprocurement procedures: tender is won by supplier who offered the highest Net Present Value.
The procurementMethodType is esco.
ESCO contracts use separate extension:
Project status¶
The project is in active development and has pilot installations.
The source repository for this project is on GitHub:
You can leave feedback by raising a new issue on the issue tracker (GitHub registration necessary). For general discussion use Open Procurement General maillist.
API stability¶
API is highly unstable, and while API endpoints are expected to remain relatively stable the data exchange formats are expected to be changed a lot. The changes in the API are communicated via Open Procurement API maillist. | http://esco.api-docs.openprocurement.org/en/latest/overview.html | 2018-02-18T00:48:39 | CC-MAIN-2018-09 | 1518891811243.29 | [] | esco.api-docs.openprocurement.org |
...
To get an HTML report, set the
sonar.issuesReport.html.enable property to
true. To define its location, set the
sonar.issuesReport.html.location property to an absolute or relative path to the destination folder for the HTML report. The default value is .sonar/issues-report.html/ for the SonarQube Runner and Ant, and target/sonar/issues-report.html/ for Maven. By default 2 html reports are generated:
- The full report (default name is issues-report.html)
- The light report (default name is issues-report-light.html) that will only contains new issues.
The light report is useful when working on legacy projects with a lot of many issues, since the full report may be hard to display in your web browser. You can skip full report generation using property
sonar.issuesReport.lightModeOnly.
You can also configure the filename of the generated html reports using property
sonar.issuesReport.html.name.
To display a short report in the console, set the
sonar.issuesReport.console.enable property to true:
Finally, run a preview analysis that generates an HTML report:
... | http://docs.codehaus.org/pages/diffpages.action?pageId=230398911&originalId=241270883 | 2014-10-20T13:11:18 | CC-MAIN-2014-42 | 1413507442900.2 | [array(['/download/attachments/230398911/issues-report-console.png?version=1&modificationDate=1362498681211&api=v2&effects=drop-shadow',
None], dtype=object) ] | docs.codehaus.org |
have your .02 considered as to where the Groovy Eclipse plugin goes next! Goto the following Wiki Page and have your say!:
-.
Archived snapshots of the plugin
Archived snapshots of the plugin are available as zip files. You can find them here:: this plugin will only install on Eclipse 3.4.2 or Eclipse 3.5..): | http://docs.codehaus.org/pages/viewpage.action?pageId=133464359 | 2014-10-20T13:28:01 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.codehaus.org |
Video Tutorial:
Whether this is your first time creating a campaign, or you’re just looking for a refresher, we hope you’ll find this guide helpful.
If at any point you have a question, please reach out to us via the chat box in the bottom right hand corner. We’re in there Mon-Fri, 6am-6pm Eastern Time, and we’re happy to help! 🙂.
Shopify Promotion: Create an evergreen Shopify campaign (beta available for ActiveCampaign and ConvertKit). Integrate Deadline Funnel with your email platform. (If applicable)
Some Blueprints will include this step to configure the API integration with your email platform
Step 4. Select your deadline length.
Step 4. Add your Pages.
Your Pages are the landing/sales/checkout pages that you want connected to your deadline. This includes your special offer page, and a page for after the deadline expires (ie. ‘Sorry you missed it’, a regular price page, etc.)
You can add additional pages if you need them by clicking ‘Add New Page’. | https://docs.deadlinefunnel.com/en/articles/4160544-how-to-create-a-deadline-funnel-campaign | 2021-09-17T00:55:37 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['https://downloads.intercomcdn.com/i/o/217871334/d408bc19e52b3bfff92e3686/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/251917217/8833031f8050f17221399791/2020-10-02_08-52-38.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/217871371/0c1b36d04359e12846e392e7/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/217871379/776a69e0bce0a6e6a191d81b/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/217871390/f727817f539f3d424db8c25d/image.png',
None], dtype=object) ] | docs.deadlinefunnel.com |
We integrate with just about every email platform available (some directly and some through Zapier) and you can look up your email platform integration here.
But you'll have a much easier time setting up your evergreen funnel when you understand how Deadline Funnel operates.
Please watch this 9-minute video overview of the API integration. It covers the concept and some keys points to understand.
Common Questions:
What happens if someone who's "not in my funnel" visits one of my pages with Deadline Funnel?
First, ideally, your leads and prospects can't easily find your funnel pages. And they really would need to go through your email sequence to get the URLs.
But we also know that someone could share a sales page link with a friend or in a group on social media. In that case, the default behavior is that the new visitor will be given a tracking record by Deadline Funnel. In other words, they'll see the page and the countdown will start.
If you don't want an untracked visitor from seeing your funnel pages, then we have a solution.
It's called Pre-Launch and we have an article that explains how Pre-Launch works for preventing untracked visitors from seeing your pages. In fact, you can use this feature to send these visitors to an optin form so they will go through your sequence the way you intend.
(Note, despite the name, this feature is great for exactly the situation described above.)
How do I calculate the number of days for my email sequence?
We have an article that will walk you through how to calculate the number of days or hours for your Deadline Funnel campaign.
When should I have the webhook sent out to Deadline Funnel?
In almost all cases, we recommend having the webhook as the very first step in your email sequence.
Do I need to use the Deadline Funnel email links in my emails?
Yes. The reason why is explained best in the video at the top of this article.
Do I use the Deadline Funnel email links in my ads, webinars, or webpages?
No. They're specially designed for use in your email software. There are very few situations where you use our email links outside of an email.
With ads, your sales pages, and webinars, or anywhere that's not email, you should just use your regular URL instead of our email links.
If I'm using an automated webinar, do I need to do anything differently?
Most likely, yes. Here's our article on the extra details for automated webinars.
Can I have the countdown start when my subscriber OPENS my email?
Short answer: no.
You have no way of knowing when someone will open your email and therefore it's tricky or impossible to start the Deadline Funnel tracking in a way that makes your deadline synchronized with an email open.
In addition, many email software platforms don't even try to have a trigger based on an email open because it's so unreliable.
For these reasons and more, we recommend you follow the concepts shown in the video at the top of this page. It's reliable and profitable for thousands of our clients.
Can I have the countdown start when my subscriber clicks the first time from the email link to a page in my funnel?
Yes.
But if you want the following emails after that link click to be synchronized with the Deadline Funnel deadline you would also need to move that subscriber to a new email sequence when they click.
This is an advanced tactic but the way it works is that you'll be synchronizing Deadline Funnel with an email sequence that starts when a link from another sequence or broadcast is clicked.
The trigger that starts them in the critical email sequence is a link click.
Click link => subscriber gets moved to a new email sequence => webhook is sent to Deadline Funnel
How do I set up the timing so I can send multiple emails on the last day?
We have an article on how to do this with ActiveCampaign. Even if you're using a different email platform, the overall concept is what you can learn from this video and article. Then use the functionality in your platform to perform the same.
If I need to change the number of days or hours of my evergreen campaign in Deadline Funnel, will it affect people already in the funnel?
Yes.
And for most clients we speak to, this is good news.
Anyone currently tracked by Deadline Funnel will have their deadline impacted by the changes you make in the Settings of the Deadline Funnel admin re the length of the campaign.
Need more help?
If you have any other questions, please try searching our docs by typing in your search phrase on this very page... and if you can't find what you need you can reach us through the chat in the bottom right corner of the page.
We're here to help! | https://docs.deadlinefunnel.com/en/articles/5122627-how-evergreen-works-with-your-email-software | 2021-09-17T00:43:16 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.deadlinefunnel.com |
Support the documentation
This documentation was written by volunteers. Please help support our effort:
Help us improve the documentation
Versioning of Neos UI
Since Neos 5.0 (scheduled April 2019) this repository will become obsolete and neos-ui will be versioned and releases together with the rest of Neos core packages.
Until then, the following version conventions are in place:
- 2.x versions are Neos 3.3 compatible (released from the 2.x branch)
- 3.x branch is Neos 4.x compatible (released from master)
- We follow semver, but do not make bugfix releases for previous minor branches | https://docs.neos.io/cms/contributing-to-neos/neos-ui | 2021-09-17T00:50:37 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.neos.io |
Spinnaker Nomenclature and Naming Conventions
Spinnaker™ terminology
You are viewing version 2.25 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version.
Watch Ethan, an Armory engineer, explain Spinnaker in three minutes.). More functionality is being added all the time. For example, work is being done to improve integrations with all cloud the providers, including Cloud Foundry and Microsoft Azure.
Spinnaker not only enables businesses to move to the cloud but makes it easier for them to adopt the cloud’s advantages.
Today’s world revolves around software and services working reliably and continuously – the internet is accessible 24 hours a day, and users expect 100% uptime. The cost of business services experiencing downtime, planned or unplanned, is only growing. Businesses need to be able to deploy software in a safe way with velocity.
Shipping changes more frequently allows developers to gather real user feedback sooner, enabling them to iterate and build based on actual input from customers. Additionally, Spinnaker abstracts away much of the cloud configuration details, giving developers more time to focus on meaningful tasks instead of infrastructure details.
In the past, releases were large monoliths, and ensuring uptime (or deployment safety) meant a long wait between each release, including maybe even extended code freezes. A company that wanted to maintain a stable environment could become averse to pushing out new features, leading to a slowdown in innovation. This tradeoff is getting more and more difficult to justify. To thrive, businesses need a way to deploy software with velocity.
Spinnaker solves these problems by enabling safer and faster deployments with the following benefits:
Immutable infrastructure that builds trust by making sure infrastructure matches an understood and explicit pattern that does not change once it is deployed. If changes are required, a brand new instance gets deployed. Having unique instances for each build enables the use of different deployment strategies, which are another benefit of Spinnaker.
Deployment strategies to fit your needs and infrastructure. The strategies include the following:
Automated canary analysis through Kayenta, a canary analysis tool that is integrated with Spinnaker. Without manual intervention, Kayenta can determine if a canary deployment should be pushed to production.
Multi-cloud deployments to avoid lock-in and allow you to optimize for things like cost, latency, and geographic distribution.
A typical workflow with Spinnaker starts with baking a Linux-based machine image. This image along with your launch configurations define an immutable infrastructure that you can use to deploy to your cloud provider with Spinnaker. After the deployment, run your tests, which can be integrated with Spinnaker and automatically triggered. Based on your deployment strategy and any criteria you set, go live with the build.
Armory’s platform includes an enterprise-grade distribution of Spinnaker that forms the core of Armory’s platform. It is preconfigured and runs in your Kubernetes cluster. The platform is an extension of open source Spinnaker and includes all those benefits as well as the following:
Spinnaker™ terminology
The services that work together in Spinnaker™
Halyard is a versatile command line interface (CLI) to configure and deploy Spinnaker™.
Learn how Fiat manages permissions in Spinnaker™.
Learn how to control ingress in Spinnaker with load balancers.
Create an application in Spinnaker.
Create your first pipeline, which bakes an Amazon Machine Image (AMI).
This glossary is a list of words and phrases and their definitions as they apply to Spinnaker. | https://v2-25.docs.armory.io/docs/overview/ | 2021-09-17T00:28:04 | CC-MAIN-2021-39 | 1631780053918.46 | [] | v2-25.docs.armory.io |
Scratch Projects¶
In this chapter, you will learn to use Scratch on RasPad 3, which includes 10 examples.
If you are a user who has just used Scratch, we recommend that you try the teaching examples in order so that you can quickly get started with Scratch.
If the Raspberry Pi system you downloaded comes with recommended software, you can find Scratch 3 in Programming.
If you download a system with only a desktop, you can click Preferences -> Recommended Software -> Programming to install the Scratch 3 on RasPad 3.
Note
Before trying the teaching examples, you should have downloaded the relevant materials and code files.
Open a Terminal and enter the following command to download them from github.
git clone | https://docs.raspad.com/en/latest/scratch_programming.html | 2021-09-17T01:20:46 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['_images/scratch_3_install.png', '_images/scratch_3_install.png'],
dtype=object) ] | docs.raspad.com |
Filtering by field
You can filter the search results to display only those documents that contain a particular value in a field. You can also create negative filters that exclude documents that contain the specified field value.
You add field filters from the Fields list, the Documents table, or by manually adding a filter. Positive Filter (
). This includes only those documents that contain that value in the field.
- To add a negative filter, click Negative Filter (
). This excludes documents that contain that value in the field.
To add a filter from the Documents table:
Expand a document in the Documents table by clicking Expand (
) to the left of the document’s table entry.
- To add a positive filter, click Positive Filter (
) to the right of the field name. This includes only those documents that contain that value in the field.
- To add a negative filter, click Negative Filter (
) to the right of the field name. This excludes documents that contain that value in the field.
- To filter on whether or not documents contain the field, click Exists (
) to the right of the field name. This includes only those documents that contain the field.
To manually add a filter:
Click Add.
Note
To make the filter editor more user-friendly, you can enable the
filterEditor:suggestValues advanced setting. Enabling this will cause the editor to suggest values from your indices if you are filtering against an aggregatable field. However, this is not recommended for extremely large data sets, as it can result in long queries.
Managing filters
To modify a filter, move the moue pointer over it and click one of the action buttons.
" } } ] } } | https://docs.siren.io/10.0.4/platform/en/discover/filtering-by-field.html | 2021-09-17T01:49:58 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['../image/15c9e1f58afd9e.png', 'Filter action buttons.'],
dtype=object) ] | docs.siren.io |
The SNMP Trap Adapter that is configured as a trap receiver uses the rules that are specified in the trap_mgr.conf file to map a ConfigChange trap into the data fields of a ConfigChange notification. The Adapter Platform, in turn, creates the ConfigChange notification (object) from the data fields and exports the notification to the Global Manager.
The parsing rules for the ConfigChange trap are defined in the “Cisco Configuration Management Traps” and “Cisco Configuration change Traps” sections of the trap_mgr.conf file. The trap_mgr.conf file is located in the BASEDIR/smarts/conf/icoi directory of the Service Assurance Manager installation area. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/ip-manager-user-guide-101/GUID-45F72FF7-A54A-48E6-9458-B99ABEB580F7.html | 2021-09-17T01:36:19 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.vmware.com |
After you’ve created a policy, you can run an assessment that scans the targeted assets against the latest advisories.
From the vulnerability workspace, you can run assessments from one more policies at once by clicking the checkboxes next to each policy and clicking Run assessment.
To view policy details and then run assessment on a single policy:
Prerequisites
Before you can run a vulnerability assessment, you must have an existing vulnerability policy. For more information, see How do I create a vulnerability policy.
Procedure
- In the Vulnerability workspace, select a policy to open the policy's dashboard.
- In the policy dashboard, click Run assessment and then click Run assessment in the confirmation dialog box.
Results
SaltStack SecOps Vulnerability scans your system against the latest advisories. During assessment, no changes are made to any of your systems. After the assessment is complete, you can remediate any advisories. You can view the status of current or past assessments by clicking a policy in the Vulnerabilty workspace and the clicking on the Activity tab. The results page lists all queued, in progress, and completed scans. | https://docs.vmware.com/en/VMware-vRealize-Automation-SaltStack-SecOps/services/using-and-managing-saltstack-secops/GUID-E1E40149-F498-4AD7-B267-E86732C8981A.html | 2021-09-17T01:41:37 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.vmware.com |
Glossary¶
- bloom.conf
- A file that lives on a special orphan branch bloom in a release repository which contains bloom meta-information (like upstream repository location and type) and is used when making releases with bloom.
- wet
- A catkin-ized package.
- dry
- A non-catkin, rosbuild based software package or stack.
- FHS
- The Linux Filesystem Hierarchy Standard
- release repository
- A git repository that bloom operates on, which is loosely based on git-buildpackage. This repository contains snapshots of released upstream source trees, any patches needed to release the upstream project, and git tags which point to source trees setup for building platform specific packages (like debian source debs).
- git-buildpackage
- Suite to help with Debian packages in Git repositories.
- package
- A single software unit. In catkin a package is any folder containing a valid package.xml file. An upstream repository can have many packages, but a package must be completely contained in one repository.
- stack
- A term used by the ROS Fuerte version of catkin and the legacy rosbuild system. In the context of these systems, a stack is a software release unit with consists of zero to many ROS packages, which are the software build units, i.e. you release stacks, you build ROS packages.
- project
- CMake’s notion of a buildable subdirectory: it contains a CMakeLists.txt that calls CMake’s project() macro. | https://bloom.readthedocs.io/en/0.5.10/glossary.html | 2021-09-17T01:14:36 | CC-MAIN-2021-39 | 1631780053918.46 | [] | bloom.readthedocs.io |
.
Bank invoice to NetSuite Cash App invoice (add)
The flow syncs the invoice information from the Bank to a custom record (Celigo Cash App Invoice) in NetSuite. This flow is triggered when “Bank File to NetSuite” flow is run and has transactions and invoices on the transactions.This flow is available in the Flows > General section.
Bank credit memo to NetSuite Cash App credit memo (add)
The flow syncs the credit memo information from the Bank to a custom record (Celigo Cash App Credit Memo) in NetSuite.This flow is triggered when “Use Credit Memos” checkbox is checked and the “Bank File to NetSuite” flow is run having transactions and credit memos. This flow is available in the Flows > General section.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360054616992-Understand-the-Cash-Application-Manager-NetSuite-dependent-flows | 2021-09-17T00:47:53 | CC-MAIN-2021-39 | 1631780053918.46 | [array(['/hc/article_attachments/360084384251/flows.png', 'flows.png'],
dtype=object) ] | docs.celigo.com |
Date: Wed, 5 Oct 2011 13:56:30 +0100 From: krad <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: updating 8.1 release Message-ID: <CALfReyd5LJqW9J1ZCOaapzXYS=jpbYwBGzv6u3Rubm6d1-bs2A@mail.gmail.com> In-Reply-To: <[email protected]> References: <CAN_bkffTRZ1mLbnXCL12v_agrraA6ffEqs61NM_43-oa9B0XOg@mail.gmail.com> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
On 3 October 2011 10:28, Michael Powell <[email protected] mailing list > > To unsubscribe, send any mail to " > freebsd-questions-unsubscribe.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=261878+0+archive/2011/freebsd-questions/20111009.freebsd-questions | 2021-09-17T01:52:43 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.freebsd.org |
This Sandbox API is only applicable for integration testing only, not intended for live production. Because this Sandbox API is intended for simulation purposes, the payload response for the similarity result is defined by predefined input.
You can use the sandbox API to simulate API request and to check:
You’re posting all required data in the correct Dukcapil Validation API
You’re handling Dukcapil VAlidation API response correctly
Your face verification purpose in the IdentifAI AI performance with your defined data.
sandbox check data is not processed by the Dukcapil gateway service — this means that sandbox responses are faster than live responses and the result of similarity comes from Nodeflux Server.
Sandbox check results are pre-determined for the predefined input, but you can get real similarity results by enrolling images to the sandbox environment.
sandbox applicants are isolated from the live environment.
you won't be charged for checks in the sandbox, but we apply a rate limit for the real use case on enrollment (10 images for an account) and face matching 10 hit/24 hours.
To use the sandbox, you need to generate a sandbox access key and secret key in your IdentifAI Dashboard.
the rate limit is only applicable for real data from Predetermined Face by Enrollment. Each account gets 10 enrollment quota images to store biometrics data and 10 hits per-24 hours for face matching validation.
To help you check the integration in the sandbox API, you can trigger by using these predetermined sample photos and the pre-determined NIK (Indonesia Citizen Number).
To get similarity result using our predertemined input, please use these use case:
Sample photo1.jpg paired with NIK 3275052806930015
Sample photo2.jpg paired with NIK 3174054110970002
To get error response please use these use case
NIK
1111111111111111 you will get error response
invalid Response from Dukcapil
NIK
0000000000000000 you will get response
error gateway not responding
To check our face recognition performance, you can use Predetermined Face by Enrollment. Using the Enrollment API you will get the real similarity result.
For the sandbox API, it has a dedicated access key and secret key, you can not modify the account. Visit your dashboard to get the access key on the tab Access Key for Sandbox API Dukcapil Validation.
Please cek the guideline for generate access key.
{"job": {"id": <job_id>,"result": {"status": "success","analytic_type": "DUKCAPIL_VALIDATION","result": [{"dukcapil_validation": {"similarity": 0.8}}]}},"message": "Dukcapil Validation Success","ok": true}
The request body should follow this format:
{"additional_params":{"nik": "{16 digits of NIK}","transaction_id": "{random digit}","transaction_source": "{device}"},"images": ["{INSERT_JPEG_IMAGE_AS_BASE64_STRING_FOR_SELFIE_PHOTO}"]}
The base64 encoded jpeg string should follow the data URI scheme format. See below:
data:[<media type>][;base64],<data>
To help you test integration in the sandbox API, you can trigger pre-determined positive responses by using sample images and NIK below:
Sample photo1.jpg paired with NIK
3275052806930015
Sample photo2.jpg paired with NIK
3174054110970002
For success response, you will get similarity result of the photos:
{"job": {"id": <job_id>,"result": {"status": "success","analytic_type": "DUKCAPIL_VALIDATION","result": [{"dukcapil_validation": {"similarity": 0.8}}]}},"message": "Dukcapil Validation Success","ok": true}
To test the negative response for error code
4xx Invalid Response from Gateway, input this NIK:
1111111111111111, then you will get this response:
{"job": {"id": <job_id>,"result": {"status": "incompleted","analytic_type": "DUKCAPIL_VALIDATION","result": []}},"message": "Invalid Response from Gateway","ok": false}
To test the negative response for error code
5xx Dukcapil Gateway Not Responding, input this NIK:
0000000000000000, then you will get this response:
{"job": {"id": <job_id>,"result": {"status": "failed","analytic_type": "DUKCAPIL_VALIDATION","result": []}},"message": "Gateway not Responding","ok": false}
Using this API you can enroll your own data to our sandbox. By using the image that enrolled before, you can check the real face matching process between to photos that you defined by storing the NIK as identifier for verification.
The rate limit is applicable for the Predetermined Face by Enrollment because it is use our AI computational. Each account gets 10 enrollment quota images to store biometrics data and 10 hits per-24 hours for face matching validation, but for checking the integration you can still use our defined predetermined use case.
{"message": "Face enrolled successfully","nik": "<nik>","ok": true}
Example Body Request
{"nik": "3276020807980010","image": "<image base64>"} | https://docs.identifai.id/sandbox-api-testing/dukcapil-validation-sandbox-api | 2021-09-17T01:10:15 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.identifai.id |
The vSphere Authentication documentation provides information to help you perform common tasks such as certificate management and vCenter Single Sign-On configuration.
At VMware, we value inclusion. To foster this principle within our customer, partner, and internal community, we create content using inclusive language.
vSphere Authentication explains how you can manage certificates for vCenter Server and related services, and set up authentication with vCenter Single Sign-On.
What Happened to the Platform Services Controller
Beginning in vSphere 7.0, deploying a new vCenter Server or upgrading to vCenter Server 7.0 requires the use of the vCenter Server appliance, a preconfigured virtual machine optimized for running vCenter Server. The new vCenter Server contains all Platform Services Controller services, preserving the functionality and workflows, including authentication, certificate management, tags, vCenter Server authentication and manage certificates. The information is written for experienced Linux system administrators who are familiar with virtual machine technology and data center operations. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.authentication.doc/GUID-31D0128A-8772-4355-839D-40F8453640AB.html | 2021-09-17T02:06:43 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.vmware.com |
Create a New Financial Results Document
To create a new financial results document:
1. On the Codejig ERP Main menu, click the Accountant module, and then select Financial results.
A listing page of financial results entries opens
2. On the listing page of the document, click + Add new.
You are taken to a form page for entering details of the document. The page consists only of the General area.
3. Under the General area, you choose to close a fiscal year, specify bookkeeping accounts for financial results write-off, profits or losses carried forward and percent of the corporate tax, along with the general document-related information, such as its number and date.
4.Click Save.
For the document to affect the system, it has to be posted.
Financial Results: General Area | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427398768 | 2021-09-17T01:46:33 | CC-MAIN-2021-39 | 1631780053918.46 | [] | docs.codejig.com |