content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Apperance
The Appearance Tab controls what the container elements will look like. This includes the title bar, the content pane and the root container containing everything.
All elements have similar properties. Exception is the root container, which doesnโt include a General section because thereโs no font for the root container (itโs either the font of the title bar or the font of the content pane) and margin and padding were already specified on Layout screen. Instead, the Root Container contains the section for adding custom CSS Styles inside the โOther CSS Stylesโ tabโs text area.
Font
This section contains all the standard fields youโd find in any rich text application: font face, bold, italic, underline, all caps, letter spacing, align and color. Additionally, multi-line content also gets a line height property useful for controlling the space between lines. As additional info, if you want to enumerate more than one font you should use comma to separate the fonts.
Margin
The margin clears an area around an element (outside the border). The margin does not have a background color, and is completely transparent.
Youโll use this to create space between the current element and surrounding elements. Otherwise, nearby elements will appear glued one to another.
Padding
The padding clears an area around the content (inside the border) of an element. The padding is affected by the background of the element.
Youโll use padding to create space between content and the border.
Background
This section allows specifying a color for drawing the background of the container, title bar or content pane. It also can be configured with a background image which can be either a full scaled image (in which case youโd set background repeat to none) or can be a pattern supposed to get repeated horizontally or vertically.
Border
This option specified the border around the element (container, title bar or content pane). The border has a width, a color and a style. The style can be solid, dashed, dotted or double. | https://docs.dnnsharp.com/easy-container/settings/apperance.html | 2019-11-12T02:55:32 | CC-MAIN-2019-47 | 1573496664567.4 | [array(['\\easy-container\\assets\\easy-container-settings-overview-appearance.png',
'ec'], dtype=object) ] | docs.dnnsharp.com |
A page to tell the world about your project and engage supporters.
Accept donations and sponsorship.
Reimburse expenses and pay invoices.
Automatic financial reporting and accountability.
Levels or rewards for backers and sponsors.
Tickets sales go straight to your Collective budget.
Give the gift of giving.
Sponsor Collectives on behalf of a company.
Keep your supporters in the loop about your achievements and news.
Create an umbrella entity to support a community of Collectives.
Tell the world who you are and show off the Collectives you're managing or supporting. | https://docs.opencollective.com/help/product/product | 2019-11-12T03:22:22 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.opencollective.com |
ยซ Jul 08, 2016
...
To view the status of a support ticket:
Armor exists to protect. Each employee feels our passion, knows the vision and lives the company values. Diversity is key. Every role is important to Armorโs success. We volunteer our best every day and go to any length to ensure our customers are protected. | https://docs.armor.com/pages/diffpagesbyversion.action?pageId=8391286&selectedPageVersions=31&selectedPageVersions=32 | 2019-11-12T03:43:08 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.armor.com |
lstrcpynA function
Copies a specified number of characters from a source string into a buffer.
Syntax
LPSTR lstrcpynA( LPSTR lpString1, LPCSTR lpString2, int iMaxLength );
Parameters
lpString1
Type: LPTSTR
The destination buffer, which receives the copied characters. The buffer must be large enough to contain the number of TCHAR values specified by iMaxLength, including room for a terminating null character.
lpString2
Type: LPCTSTR
The source string from which the function is to copy characters.
iMaxLength
Type: int
The number of TCHAR values to be copied from the string pointed to by lpString2 into the buffer pointed to by lpString1, including a terminating null character.
Return Value
Type: LPTSTR
If the function succeeds, the return value is a pointer to the buffer. The function can succeed even if the source string is greater than iMaxLength characters.
If the function fails, the return value is NULL and lpString1 may not be null-terminated.
Remarks
The buffer pointed to by lpString1 must be large enough to include a terminating null character, and the string length value specified by iMaxLength includes room for a terminating null character.
The lstrcpyn function has an undefined behavior if source and destination buffers overlap.
Security WarningUsing this function incorrectly can compromise the security of your application. This function uses structured exception handling (SEH) to catch access violations and other errors. When this function catches SEH errors, it returns NULL without null-terminating the string and without notifying the caller of the error. The caller is not safe to assume that insufficient space is the error condition.
If the buffer pointed to by lpString1 is not large
enough to contain the copied string, a buffer overrun can occur. When copying an entire
string, note that sizeof returns the number of bytes.
For example, if lpString1 points to a buffer
szString1 which is declared as
TCHAR szString[100], then sizeof(szString1) gives the size of
the buffer in bytes rather than WCHAR, which could lead to a buffer
overflow for the Unicode version of the function..
Using
sizeof(szString1)/sizeof(szString1[0])
gives the proper size of the.
Review Security Considerations: Windows User Interface before continuing.
Requirements
See Also
Conceptual
Reference | https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-lstrcpyna?redirectedfrom=MSDN | 2019-11-12T04:10:33 | CC-MAIN-2019-47 | 1573496664567.4 | [] | docs.microsoft.com |
Back to Adaptive Vision Studio website
You are here: Start ยป Filter Reference ยป Xml ยป AccessXmlNode
Gets information from the XmlNode object.
Remarks
Read more about how to handle XML in Working with XML Trees article.
Errors
This filter can throw an exception to report error. Read how to deal with errors in Error Handling.
List of possible exceptions:
Complexity Level
See Also
- Xml_CreateNode โ Creates a new XmlNode. | https://docs.adaptive-vision.com/current/studio/filters/Xml/AccessXmlNode.html | 2019-04-18T18:50:06 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.adaptive-vision.com |
Using the AWS SDKs, CLI, and Explorers
You can use the AWS SDKs when developing applications with Amazon S3. The AWS SDKs simplify your programming tasks by wrapping the underlying REST API. The AWS Mobile SDKs and the AWS Amplify JavaScript library are also available for building connected mobile and web applications using AWS.
This section provides an overview of using AWS SDKs for developing Amazon S3 applications. This section also describes how you can test the AWS SDK code examples provided in this guide.
Topics
- Specifying the Signature Version in Request Authentication
- Setting Up the AWS CLI
- Using the AWS SDK for Java
- Using the AWS SDK for .NET
- Using the AWS SDK for PHP and Running PHP Examples
- Using the AWS SDK for Ruby - Version 3
- Using the AWS SDK for Python (Boto)
- Using the AWS Mobile SDKs for iOS and Android
- Using the AWS Amplify JavaScript Library (AWS CLI) to manage Amazon S3 buckets and objects.
AWS Toolkit for Eclipse
The AWS Toolkit for Eclipse includes both the AWS SDK for Java and AWS Explorer for Eclipse. The AWS Explorer for Eclipse is an open source plugin for Eclipse for Java IDE that makes it easier for developers to develop, debug, and deploy Java applications using AWS. The easy-to-use GUI enables you to access and administer your AWS infrastructure including Amazon S3. You can perform common operations such as managing your buckets and objects and setting setup instructions, go to Setting Up the AWS Toolkit for Visual Studio. For examples of using Amazon S3 using the explorer, see Using Amazon S3 from AWS Explorer.
AWS SDKs
You can download only the SDKs. For information about downloading the SDK libraries, see Sample Code Libraries.
AWS CLI
The AWS CLI is a unified tool to manage your AWS services, including Amazon S3. For information about downloading the AWS CLI, see AWS Command Line Interface.
Specifying the Signature Version in Request Authentication
Amazon S3 supports only AWS Signature Version 4 in most AWS Regions. In some of the older AWS Regions, Amazon S3 supports both Signature Version 4 and Signature Version 2. However, Signature Version 2 is being deprecated, and the final support for Signature Version 2 will end on June 24, 2019. For more information about the end of support for Signature Version 2, see AWS Signature Version 2 Deprecation for Amazon S3.
For a list of all the Amazon S3 Regions and the signature versions they support, see Regions and Endpoints in the AWS General Reference.
For all AWS Regions, AWS SDKs use Signature Version 4 by default to authenticate requests. When using AWS SDKs that were released before May 2016, you might be required to request Signature Version 4, as shown in the following table.
AWS Signature Version 2 Deprecation for Amazon S3
Signature Version 2 is being deprecated in Amazon S3, and the final support for Signature Version 2 ends on June 24, 2019. After June 24, 2019, Amazon S3 will only accept API requests that are signed using Signature Version 4.
This section provides answers to common questions regarding the end of support for Signature Version 2.
What is Signature Version 2/4, and What Does It Mean to Sign Requests?
The Signature Version 2 or Signature Version 4 signing process is used to authenticate your Amazon S3 API requests. Signing requests enables Amazon S3 to identify who is sending the request and protects your requests from bad actors.
For more information about signing AWS requests, see Signing AWS API Requests in the AWS General Reference.
What Update Are You Making?
We currently support Amazon S3 API requests that are signed using Signature Version 2 and Signature Version 4 processes. After June 24, 2019, Amazon S3 will only accept requests that are signed using Signature Version 4.
For more information about signing AWS requests, see Changes in Signature Version 4 in the AWS General Reference.
Why Are You Making the Update?
Signature Version 4 provides improved security by using a signing key instead of your secret access key. Signature Version 4 is currently supported in all AWS Regions, whereas Signature Version 2 is only supported in Regions that were launched before January 2014. This update allows us to provide a more consistent experience across all Regions.
How Do I Ensure That I'm Using Signature Version 4, and What Updates Do I Need?
The signature version that is used to sign your requests is usually set by the tool or the SDK on the client side. By default, the latest versions of our AWS SDKs use Signature Version 4. For third-party software, contact the appropriate support team for your software to confirm what version you need. If you are sending direct REST calls to Amazon S3, you must modify your application to use the Signature Version 4 signing process.
For information about which version of the AWS SDKs to use when moving to Signature Version 4, see Moving from Signature Version 2 to Signature Version 4.
For information about using Signature Version 4 with the Amazon S3 REST API, see Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.
What Happens if I Don't Make Updates?
Requests signed with Signature Version 2 that are made after June 24, 2019 will fail to authenticate with Amazon S3. Requesters will see errors stating that the request must be signed with Signature Version 4.
Should I Make Changes Even if Iโm Using a Presigned URL That Requires Me to Sign for More than 7 Days?
If you are using a presigned URL that requires you to sign for more than 7 days, no action is currently needed. You can continue to use AWS Signature Version 2 to sign and authenticate the presigned URL. We will follow up and provide more details on how to migrate to Signature Version 4 for a presigned URL scenario.
More Info
For more information about using Signature Version 4, see Signing AWS API Requests.
View the list of changes between Signature Version 2 and Signature Version 4 in Changes in Signature Version 4.
View the post AWS Signature Version 4 to replace AWS Signature Version 2 for signing Amazon S3 API requests in the AWS forums.
If you have any questions or concerns, contact AWS Support.
Moving from Signature Version 2 to Signature Version 4
If you currently use Signature Version 2 for Amazon S3 API request authentication, you should move to using Signature Version 4. Support is ending for Signature Version 2, as described in AWS Signature Version 2 Deprecation for Amazon S3.
For information about using Signature Version 4 with the Amazon S3 REST API, see Authenticating Requests (AWS Signature Version 4) in the Amazon Simple Storage Service API Reference.
The following table lists the SDKs with the necessary minimum version to use Signature
Version 4 (SigV4).
If you are using presigned URLs with the AWS Java, JavaScript
(Node.js),
or Python (Boto/CLI) SDKs, you must set the correct AWS Region and set Signature Version
4
in the client configuration. For information about setting
SigV4 in the client
configuration, see Specifying the Signature Version in Request
Authentication.
AWS Tools for Windows PowerShell or AWS Tools for PowerShell Core
If you are using module versions earlier than 3.3.99, you must upgrade to 3.3.99.
To get the version information, use the
Get-Module cmdlet:
Get-Module โName AWSPowershell Get-Module โName AWSPowershell.NetCore
To update the 3.3.99 version, use the
Update-Module cmdlet:
Update-Module โName AWSPowershell Update-Module โName AWSPowershell.NetCore
You can use presigned URLs that are valid for more than 7 days that you will send Signature Version 2 traffic on. | https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html | 2019-04-18T18:59:19 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.aws.amazon.com |
Overall, timeseries per ELB.
Within Datadog, when you are selecting โminโ, โmaxโ, or โavgโ, you are controlling how multiple timeseries are combined. For example, requesting system.cpu.idle without any filter would return one series for each host that reports that metric and those series need to be combined to be graphed. On the other hand, if you requested system.cpu.idle from a single host, no aggregation would be necessary and switching between average and max would yield the same result.
If you would like to collect the Min/Max/Sum/Avg from AWS (Component Specific - Ec2, ELB, Kinesis, etc.) reach out to [email protected]. Enabling this feature would provide additional metrics under the following namespace format:
aws.elb.healthy_host_count.sum
aws.elb.healthy_host_count.min
aws.elb.healthy_host_count.max
Note, enabling this feature increases the number of API requests and information pulled from CloudWatch and may potentially impact your AWS billing.
More information on this behavior and AWS billing can be found here:
Do you believe youโre seeing a discrepancy between your data in CloudWatch and Datadog?
How do I monitor my AWS billing details? | https://docs.datadoghq.com/integrations/faq/additional-aws-metrics-min-max-sum/ | 2019-04-18T18:39:54 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.datadoghq.com |
Introduction to hardware inventory in System Center Configuration Manager
Applies to: System Center Configuration Manager (Current Branch)
Use hardware inventory in System Center information Configuration Manager collects. These include the following:
Create queries that return devices that are based on a specific hardware configuration.
Create query-based collections that are based on a specific hardware configuration. Query-based collection memberships automatically update on a schedule. You can use collections for several tasks, which include software deployment. .
Run reports that display specific details about hardware configurations in your organization.
Use Resource Explorer to view detailed information about the hardware inventory that is collected from client devices.
When hardware inventory runs on a client device, the first inventory data that the client returns is always a full inventory. Subsequent inventory information contains only delta inventory information. The site server processes delta inventory information in the order in which it is received. If delta information for a client is missing, the site server rejects additional delta information and instructs the client to run a full inventory cycle.
Configuration Manager provides limited support for dual-boot computers. Configuration Manager can discover dual-boot computers but only returns inventory information from the operating system that was active at the time the inventory cycle ran.
Note
For information about how to use hardware inventory with clients that run Linux and UNIX, see Hardware inventory for Linux and UNIX in System Center Configuration Manager.
Extending Configuration Manager hardware inventory
In addition to the built-in hardware inventory in Configuration Manager, you can also use one of the following methods to extend hardware inventory to collect additional information:
You can enable, disable, add and remove inventory classes for hardware inventory from the Configuration Manager console.| IDMIF files to collect information about assets that are not associated with a Configuration Manager client, for example, projectors, photocopiers and network printers.
For more information about using these methods to extend Configuration Manager hardware inventory, see How to configure hardware inventory in System Center Configuration Manager.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/sccm/core/clients/manage/inventory/introduction-to-hardware-inventory | 2019-04-18T18:22:10 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.microsoft.com |
Polyglot Virtual Node Server Frameworkยถ
Polyglot Virtual Node Server Framework is an application that makes it easy and quick to both develop and maintain virtual node servers for the ISY-994i home automation controller by Universal Devices Inc. Using virtual node servers, the ISY-994i is able to communicate with and control third-party devices to which the ISY-994i cannot natively connect.
Polyglot is written primarily with Python 2.7 and makes it easy to develop new Virtual Node Servers with Python 2.7. It should, however, by noted, that Virtual Node Servers may by developed using any language. Polyglot is intended to be run on a Raspberry Pi 2 Model B, but could potentially run on any ARM based machine running Linux with Python 2.7. FreeBSD, OSX, and x64 linux binaries are provided as well.
This document will document the usage of and development for Polyglot. For additional help, please reference the UDI Forum.
- Usage
- Node Server Development
- Python Node Server Example
- Polyglot Node Server API
- Polyglot Methods and Classes
- Optional Components
- Changelog | https://ud-polyglot.readthedocs.io/en/development/ | 2019-04-18T18:48:46 | CC-MAIN-2019-18 | 1555578526228.27 | [] | ud-polyglot.readthedocs.io |
The
set statement is used to configure a setting for use by a subsequent http or buffer statements.
set
setting value
A protocol such as http offers a number of configuration options. Any given option is either persistent or transient:
The following settings can be configured using
set:
set http_progress yes|no
Persistent. If set to yes then dots will be sent to standard output to indicate that data is downloading when an HTTP session is in progress. When downloading large files if a lengthy delay with no output is undesirable then the dots indicate that the session is still active.
set http_username
username
Persistent. Specifies the username to be used to authenticate the session if the
http_authtype setting is set to anything other than
none. If the username contains any spaces then it should be enclosed in double quotes.
set http_password
Persistent. Specifies the password to be used to authenticate the session if the
http_authtype setting is set to anything other than
none. If the password contains any spaces then it should be enclosed in double quotes.
set http_authtype
type
Persistent. Specifies the type of authentication required when initiating a new connection. The type parameter can be any of the following:
set http_authtarget
target
Persistent. Specifies whether any authentication configured using the
http_authtype setting should be performed against a proxy or the hostname specified in the http URL.
Valid values for target are:
server (default) - authenticate against a hostname directly
proxy - authenticate against the proxy configured at the Operating System level
set http_header
"name: value"
Persistent. Used to specify a single HTTP header to be included in subsequent HTTP requests. If multiple headers are required, then multiple
set http_header statements should be used.
An HTTP header is a string of the form name: value.
There must be a space between the colon at the end of the name and the value following it, so the header should be enclosed in quotes
Example:
set http_header "Accept: application/json"
Headers configured using
set http_header will be used for all subsequent HTTP connections. If a different set of headers is required during the course of a USE script then the clear statement can be used to remove all the configured headers, after which
set http_header can be used to set up the new values.
By default, no headers at all will be included with requests made by the http statement. For some cases this is acceptable, but often one or more headers need to be set in order for a request to be successful.
Typically these will be an
Accept: header for GET requests and an
Accept: and a
Content-Type: header for POST requests. However there is no hard and fast standard so the documentation for any API or other external endpoint that is being queried should be consulted in order to determine the correct headers to use in any specific scenario.
Headers are not verified as sane until the next HTTP connection is made
set http_body data
string - use the specified string as the body of the request
set http_body file
filename - send the specified file as the body of the request
set http_body
{named_buffer} - send the contents of the named buffer as the body of the request
Transient. By default no data other than the headers (if defined) is sent to the server when an HTTP request is made. The
http_body setting is used to specify data that should be sent to the server in the body of the request.
When using
http_body a
Content-Length: header will automatically be generated for the request. After the request this
Content-Length: header is discarded (also automatically). This process does not affect any other defined HTTP headers.
After the request has been made the
http_body setting is re-initialised such that the next request will contain no body unless another
set http_body statement is used.
set http_savefile
filename
Transient. If set, any response returned by the server after the next HTTP request will be saved to the specified filename. This can be used in conjunction with the buffer statement, in which case the response will both be cached in the named buffer and saved to disk.
If no response is received from the next request after using
set http_savefile then the setting will be ignored and no file will be created.
Regardless of whether the server sent a response or not after the HTTP request has completed, the
http_savefile setting is re-initialised such that the next request will not cause the response to be saved unless another
set http_savefile statement is used.
No directories will be created automatically when saving a file, so if there is a pathname component in the specified filename, that path must exist.
set http_savemode
mode
Persistent.
If mode is
overwrite (the default) then if the filename specified by the
set http_savefile statement already exists it will be overwritten if the server returns any response data. If no response data is sent by the server, then the file will remain untouched.
If mode is
append then if the filename specified by the
set http_savefile statement already exists any data returned by the server will be appended to the end of the file.
set http_timeout
seconds
Persistent. After a connection has been made to a server it may take a while for a response to be received, especially on some older or slower APIs. By default, a timeout of 5 minutes (300 seconds) is endured before an error is generated.
This timeout may be increased (or decreased) by specifying a new timeout limit in seconds, for example:
set http_timeout 60 # Set timeout to 1 minute
The minimum allowable timeout is 1 second.
set odbc_connect
connection_string
Persistent. Sets the ODBC connection string for use by the buffer statement's odbc_direct protocol. The connection string may reference an ODBC DSN or contain full connection details, in which case a DSN doesn't need to be created.
A DSN connection string must contain a DSN attribute and optional UID and PWD attributes. A non-DSN connection string must contain a DRIVER attribute, followed by driver-specific attributes.
Please refer to the documentation for the database to which you wish to connect to ensure that the connection string is well formed.
An example connection string for Microsoft SQL Server is:
set odbc_connect "DRIVER=SQL Server;SERVER=Hostname;Database=DatabaseName;TrustServerCertificate=No;Trusted_Connection=No;UID=username;PWD=password" | https://docs.exivity.com/diving-deeper/extract/language/set | 2019-04-18T19:24:33 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.exivity.com |
Custom Security
This section assumes that you are familiar with basic security concepts, and with XAP-specific security configurations. Before implementing custom security from scratch, consider the following alternatives:
- Extending the default file-based security implementation that is already provided with XAP, which supports replacing the encoding, referencing a security file on an HTTP server, and more.
- Using or extending the Spring Security Bridge.
GigaSpaces security was designed with customization in mind. There are numerous security standards and practices, so users can implement the built-in security features out of the box, or customize them to suit the needs of the industry and environment.
You can customize the security protocols for the following:
- Authentication - How servers authenticate the clients that access them.
- User/Role Management - Creation and management of users and roles. can
We recommend that the custom security JAR contain only security-related classes. | https://docs.gigaspaces.com/latest/security/custom-security.html | 2019-04-18T18:41:47 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.gigaspaces.com |
Projectile currently has several official & unofficial support channels.
For questions, suggestions and support refer to one of them. Please, don't use the support channels to report issues, as this makes them harder to track.
Gitter
Most internal discussions about the development of Projectile happen on its gitter channel. You can often find Projectile's maintainers there and get some interesting news from the project's kitchen.
Mailing List
The official mailing list is hosted at Google Groups. It's a low-traffic list, so don't be too hesitant to subscribe.
Freenode
If you're into IRC you can visit the
#projectile channel on Freenode.
It's not actively
monitored by the Projectile maintainers themselves, but still you can get support
from other Projectile users there.
Stackoverflow
We're also encouraging users to ask Projectile-related questions on StackOverflow.
When doing so you should use the
Projectile tag (ideally combined
with the tag
emacs).
Bountysource
If you're willing to pay for some feature to be implemented you can use Bountysource to place a bounty for the work you want to be done. | https://docs.projectile.mx/en/latest/support/ | 2019-04-18T19:35:21 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.projectile.mx |
Scheme User Interface
The Scheme user interface is documented in this page. We do not document the Scheme language or the functions provided by libctl. See also the libctl User Reference section of the libctl manual.
- Scheme User Interface
- Input Variables
- Predefined Variables
- Output Variables
- Classes
- Functions
- Field Manipulation
- Inversion Symmetry
- Parallel MPB
Input Variables
These are global variables that you can set to control various parameters of the MPB computation. They are also listed, along with their current values, by the
(help) command. class ]
Specifies the geometric objects making up the structure being simulated. When objects overlap, later objects in the list take precedence. Defaults to no objects (empty list).
default-material [
material-type class ]
Holds the default material that is used for points not in any object of the geometry list. Defaults to air (epsilon of 1). See also
epsilon-input-file, below.
ensure-periodicity [
boolean] the special value
no-size,
no-size dimension(s) perpendicular to the others. Defaults to the orthogonal x-y-z vectors of unit length (i.e. a square/cubic lattice).
resolution [
number
(optimize-grid-size!) after setting the
resolution and
geometry-lattice to adjust the grid size for maximal performance. This rounds the grid size in each direction to the nearest integer with small factors, to improve FFT speed.
grid-size [
vector3]
Specifies the size of the discrete computational grid along each of the lattice directions. Deprecated: the preferred method is to use the
resolution variable, above, in which case the
grid-size defaults to
false. To get the grid size you should instead use the
(get-grid-size) function.
dimensions [
integer]
Explicitly specifies the dimensionality of the simulation; if the value is less than 3, the sizes of the extra dimensions in
grid-size are ignored (assumed to be one). Defaults to 3. Deprecated: the preferred method is to set
geometry-lattice to have size no-size
no-size [
number] [
number] control file is FILE.ctl. You can change this to
false).
simple-preconditioner? [
boolean]
Whether or not to use a simplified preconditioner. Defaults to
false which is fastest most of the time. Turning this on increases the number of iterations, but decreases the time for each iteration.
deterministic? [
boolean].
air,
vacuum [
material-type class ]
Two aliases for a predefined material type with a dielectric constant of 1.
nothing [
material-type class ]
A material that, effectively, punches a hole through other objects to the background (
default-material or
epsilon-input-file).
infinity [
number]
A big number (1e20) to use for "infinite" dimensions of objects.
Output Variables
Global variables whose values are set upon completion of the eigensolver.
freqs [ list of
number]
A list of the frequencies of each band computed for the last k point. Guaranteed to be sorted in increasing order. The frequency of band
b can be retrieved via
(list-ref) lists ].
MPB defines several types of classes, the most numerous of which are the various geometric object classes. You can also get a list of the available classes, along with their property types and default values, at runtime with the
(help) command. ctl file. the special size
no-size, then the dimensionality of the problem is reduced by one. Strictly speaking, the dielectric function is taken to be uniform along that dimension. In this case, the
no-size dimension should generally be orthogonal to the other dimensions.
material-type
This class is used to specify the materials that geometric objects are made of. Currently, there are three subclasses,
dielectric,
dielectric-anisotropic, and
material-function.
dielectric
A uniform, isotropic, linear dielectric material, with one property:
epsilon [
number]
The dielectric constant (must be positive). No default value. You can also use
(index n) as a synonym for
(epsilon (* n n)).
dielectric-anisotropic
A uniform, possibly anisotropic, linear dielectric material. For this material type, you specify the dielectric tensor , which is real-symmetric or possibly complex-hermitian, relative to the cartesian xyz axes:
\begin{pmatrix} a & u & v \\ u^* & b & w \\ v^* & w^* & c \end{pmatrix}
This allows your dielectric to have different dielectric constants for fields polarized in different directions. The epsilon tensor must be positive-definite (have all positive eigenvalues); if it is not, MPB exits with an error. This does not imply that all of the entries of the epsilon matrix need be positive. The components of the tensor are specified via three properties:
epsilon-diag [
vector3]
The diagonal elements (a b c) of the dielectric tensor. No default value.
epsilon-offdiag [
cvector3]
The off-diagonal elements (u v w) of the dielectric tensor. Defaults to zero. This is a
cvector3, which simply means that the components may be complex numbers (e.g.
3+0.1.
epsilon-offdiag-imag [
vector3]
Deprecated: The imaginary parts of the off-diagonal elements (u v w) of the dielectric tensor; defaults to zero. Setting the imaginary parts directly by specifying complex numbers in
epsilon-offdiag is preferred.
For example, a material with a dielectric constant of 3.0 for P-polarization and 5.0 for S-polarization would be specified via
(make (dielectric-anisotropic (epsilon-diag 3 3 5))). Please be aware that not all 2d anisotropic dielectric structures will have P- and S-polarized modes, however.
material-function
This material type allows you to specify the material as an arbitrary function of position. For an example of this, see the
bragg-sine.ctl file in the
examples/ directory. It has one property:
material-func [
function]
A function of one argument, the position
vector3 in lattice coordinates, that returns the material at that point. Note that the function you supply can return any material; wild and crazy users could even return another
material-function object which would then have its function invoked in turn.
Instead of
material-func, you can use
epsilon-func: for
epsilon-func, you give it a function of position that returns the dielectric constant at that point.
Normally, the dielectric constant is required to be positive or positive-definite, for a tensor. However, MPB does have a somewhat experimental feature allowing negative dielectrics (e.g. in a plasma). To use it, call the function
(allow-negative-epsilon) before
(run). In this case, it will output the (real) frequency squared in place of the (possibly imaginary) frequencies. Convergence will be somewhat slower because the eigenoperator is not positive definite.
geometric-object
This class, and its descendants, are used to specify the solid geometric objects that form the dielectric structure being simulated. The properties are:
material [
material-type class ]
The material that the object is made of which.
Recall that all 3-vectors, including the center of an object, its axes, and so on, are specified in the basis of the normalized lattice vectors normalized to
basis-size. Note also that 3-vector properties can be specified by either
(property (vector3 x y z)) or, equivalently,
(property x y z).). Properties:
size [
vector3]
The lengths of the block edges along each of its three axes. Not really a 3-vector (at least, not in the lattice basis), that the lattice directions (the basis) are just the ordinary unit axes, and
m is some material we have defined:
; A cylinder of infinite radius and height 0.25 pointing along the x axis, ; centered at the origin: (make cylinder (center 0 0 0) (material m) m) m) (size 1 1 1)) (make sphere (center 1 2 3) (material air) (radius 0.2))))
Functions
Here, we describe the functions that are defined by MPB. There are many types of functions defined ranging from utility functions for duplicating geometric objects to run functions that start the computation.
See also the reference section of the libctl manual, which describes a number of useful functions defined by libctl. lattice basis vectors,.
(point-in-object? point obj)
Returns whether or not the given 3-vector
point is inside the geometric object
obj.
->cartesian x),
(cartesian->lattice x)
Convert
x between the lattice basis (the basis of the lattice vectors normalized to
basis-size) and the ordinary cartesian basis, where
x is either a
vector3 or a
matrix3x3, returning the transformed vector/matrix. In the case of a matrix argument, the matrix is treated as an operator on vectors in the given basis, and is transformed into the same operator on vectors in the new basis.
(reciprocal->cartesian x),
(cartesian->reciprocal x)
Like the above, except that they convert to/from reciprocal space (the basis of the reciprocal lattice vectors). Also, the cartesian vectors output/input are in units of 2ฯ.
(reciprocal->lattice x),
(lattice->reciprocal x)
Convert between the reciprocal and lattice bases, where the conversion again leaves out the factor of 2ฯ (i.e. the lattice-basis vectors are assumed to be in units of 2ฯ).
Also, a couple of rotation functions are defined, for convenience, so that you don't have to explicitly convert to cartesian coordinates in order to use libctl's
rotate-vector3 function. See the Libctl User Reference:
(rotate-lattice-vector3 axis theta v),
(rotate-reciprocal-vector3 axis theta v)
Like
rotate-vector3 , except that
axis and
v are specified in the lattice/reciprocal bases. function Scheme expression:
(map first-brillouin-zone k-points).
Run Functions
These are functions to help you run and control the simulation. The ones you will most commonly use are the
run function and its variants. The syntax of these functions, and one lower-level function, is:
(run
band-func
...)
This runs the simulation described by the input parameters (see above), with no constraints on the polarization of the solution. That is, it reads the input parameters, initializes the simulation, and solves for the requested eigenstates of each k-point. The dielectric function is outputted to
epsilon.h5 before any eigenstates are computed..
(run-zeven
band-func
...),
).
(run-te
band-func
...),
.
(run-yeven
band-func
...),
.
(run-parity
p reset-fields band-func
...)
Like the
run function, except that it takes two extra parameters, a parity
p and a boolean (
true/
false) value
reset-fields.
p specifies a parity constraint, and should be one of the predefined variables:
NO-PARITY: equivalent to
run
EVEN-Z(or
TE): equivalent to
run-zevenor
run-te
ODD-Z(or
TM): equivalent to
run-zoddor
run-tm
EVEN-Y(like
EVEN-Zbut for y=0 plane)
ODD-Y(like
ODD-Zbut:
(define (my-band-func which-band) ...do stuff here with band index which-band... ).
(randomize-fields)
Initialize the fields to random values.
(vector3-scale magnitude (unit-vector3 kdir)). band index, output the corresponding field or compute some function thereof in the primitive cell of the lattice. They are designed to be passed as band functions to the
run routines, although they can also be called directly. See also the section on field normalizations.
(output-hfield which-band)
(output-hfield-x which-band)
(output-hfield-y which-band)
(output-hfield-z which-band)
Output the magnetic () field for
which-band; either all or one of the Cartesian components, respectively.
(output-dfield which-band)
(output-dfield-x which-band)
(output-dfield-y which-band)
(output-dfield-z which-band)
Output the electric displacement () field for
which-band; either all or one of the Cartesian components, respectively.
(output-efield which-band)
(output-efield-x which-band)
(output-efield-y which-band)
(output-efield-z which-band)
Output the electric () field for
which-band; either all or one of the Cartesian components, respectively.
(output-bpwr which-band)
Output the time-averaged magnetic-field energy density (bpwr = ) for
which-band. (Formerly called
output-hpwr, which is still supported as a synonym for backwards compatibility.)
(output-dpwr which-band)
Output the time-averaged electric-field energy density (dpwr = ) for
which-band.
(fix-hfield-phase which-band)
(fix-dfield-phase which-band)
(fix-efield-phase.
(run-tm fix-dfield-phase.
(compute-zparities)
Returns a list of the parities about the z=0 plane, one number for each band computed at the last k-point.
.
(compute-group-velocities)
Returns a list of group-velocity vectors (in the Cartesian basis, units of c) for the bands at the last-computed k-point.
.
(compute-1-group-velocity which-band),
(compute-1.
The simplest class of operations involve only the currently-loaded field, which we describe in the second subsection below. To perform more sophisticated operations, involving more than one field, one must copy or transform the current field into a new field variable, and then call one of the functions that operate on multiple field variables described below. functions follow. They should only be called after the eigensolver has run or after
init-params, in the case of
get-epsilon. One normally calls them after
run, or in one of the band functions passed to
run.
(get-hfield which-band)
Loads the magnetic () field for the band
which-band.
(get-dfield which-band)
Loads the electric displacement () field for the band
which-band.
(get-efield which-band)
Loads the electric () field for the band
which-band. This function actually calls
get-dfield followed by
get-efield-from-dfield, below.
(get-charge-density which-band)
Loads the bound charge density for the band
which-band.
(get-epsilon)
Loads the dielectric function.
Once loaded, the field can be transformed into another field or a scalar field:
(get-efield-from-dfield)
Multiplies by the inverse dielectric tensor to compute the electric field from the displacement field. Only works if a field has been loaded.
.
(compute-energy-in-dielectric min-eps max-eps)
Returns the fraction of the energy that resides in dielectrics with epsilon in the range
min-eps to
max-eps.
(compute-energy-in-objects objects...)
Returns the fraction of the energy inside zero or more geometric objects.
.
(compute-field-integral f)
Like
compute-energy-integral, but
f is a function
(f F eps r) that returns a number, possibly complex, where
F is the complex field vector at the given point.
(get-epsilon-point r)
Given a position vector
r (in lattice coordinates), return the interpolated dielectric constant at that point. (Since MPB uses a an effective dielectric tensor internally, this actually returns the mean dielectric constant.)
.
(get-energy-point r)
Given a position vector
r in lattice coordinates, return the interpolated energy density at that point.
(get-field-point r)
Given a position vector
r in lattice coordinates, return the interpolated (complex) field vector at that point.
.
(output-field [ nx [ ny [ nz ] ] ])
(output-field-x [ nx [ ny [ nz ] ] ])
(output-field-y [ nx [ ny [ nz ] ] ])
(output-field-z [ nx [ ny [ nz ] ] ])
Output the currently-loaded field. The optional (as indicated by the brackets) parameters
nx,
ny, and
nz indicate the number of periods to be outputted along each of the three lattice directions. Omitted parameters are assumed to be 1. For vector fields,
output-field outputs all of the Cartesian components, while the other variants output only one component.
(output-epsilon [ nx [ ny [ nz ] ] ]) field variables. Field variables come in three flavors, real-scalar (rscalar) fields, complex-scalar (cscalar) fields, and complex-vector (cvector) fields. There is a pre-defined field variable
cur-field representing the currently-loaded field (see above), and you can "clone" it to create more field variables with one of:
(field-make f)
Return a new field variable of the same type and size as the field variable
f. Does not copy the field contents. See
field-copy and
field-set!, below.
(rscalar-field-make f)
(cscalar-field-make f)
(cvector-field-make f)
Like
field-make, but return a real-scalar, complex-scalar, or complex-vector field variable, respectively, of the same size as
f but ignoring
f's type.
(cvector-field-nonbloch! f)
By default, complex vector fields are assumed to be Bloch-periodic and are multiplied by eikx in output routines. This function tells MPB that the complex vector field
f should never be multiplied by Bloch phases.
(field-set! fdest fsrc)
Set
fdest to store the same field values as
fsrc, which must be of the same size and type.
(field-copy f)
Return a new field variable that is exact copy of
f; this is equivalent to calling
field-make followed by
field-set!.
(field-load f)
Loads the field
f as the current field, at which point you can use all of the functions in the previous section to operate on it or output it.
Once you have stored the fields in variables, you probably want to compute something with them. This can be done in three ways: combining fields into new fields with
field-map! :
(field-map! fdest func [f1 f2 ...])
Compute the new field
fdest to be
(func f1-val f2-val ...) at each point in the grid, where
f1-val etcetera is the corresponding value of
f1 etcetera. All the fields must be of the same size, and the argument and return types of
func must match those of the
f1... and
fdest fields, respectively.
fdest may be the same field as one of the
f1... arguments. Note: all fields are without Bloch phase factors exp(ikx).
(integrate-fields func [f1 f2 ...])
Compute the integral of the function
(func r [f1 f2 ...]) over the computational cell, where
r is the position in the usual lattice basis and
f1 etc. are fields which must all be of the same size. The integral is computed simply as the sum over the grid points times the volume of a grid pixel/voxel. Note: all fields are without Bloch phase factors exp(ikx). See also the note below.
(cvector-field-get-point f r)
(cvector-field-get-point-bloch f r)
.
You may be wondering how to get rid of the field variables once you are done with them: you don't, since they are garbage collected automatically.
We also provide functions, in analogue to e.g
get-efield and
output-efield above, to "get" various useful functions as the current field and to output them to a file:
which-band)
(output-poynting-x which-band)
(output-poynting-y which-band)
(output-poynting-z which-band)
Output the Poynting vector field for
which-band; either all or one of the Cartesian components, respectively.
which-band)
Output the time-averaged electromagnetic-field energy density (above) for
which-band.
(output-charge-density which-band)
Output the bound charge density (above) for
which-band.
As an example, below is the Scheme source code for the
get-poynting function, illustrating the use of the various field functions:
(define (get-poynting which-band) (get-efield which-band) ; put E in cur-field (let ((e (field-copy cur-field))) ; ... and copy to local var. (get-hfield which-band) ; put H in cur-field (field-map! cur-field ; write ExH to cur-field (lambda (e h) (vector3-cross (vector3-conj e) h)) e cur-field) (cvector-field-nonbloch! cur-field))) ; see below create a field with
field-map!, the resulting field should not have any Bloch phase. For example, for the Poynting vector E*xH, the exp(ikx) cancels because of the complex conjugation. After creating this sort of field, we must use the special function
cvector-field-nonbloch! to tell MPB that the field is purely periodic:
(cvector-field-nonbloch! f)
Specify that the field
f is not of the Bloch form, but rather that it:
(get-eigenvectors first-band num-bands)
Return an eigenvector object that is a copy of
num-bands current eigenvectors starting at
first-band. e.g. to get a copy of all of the eigenvectors, use
(get-eigenvectors 1 num-bands).
(set-eigenvectors ev first-band)
Set the current eigenvectors, starting at
first-band, to those in the
ev eigenvector object (as returned by
get-eigenvectors). Does not work if the grid sizes don't match.
(load-eigenvectors filename)
.
(output-eigenvectors evects filename)
(input-eigenvectors filename num-bands)
output-eigenvectors is like
save-eigenvectors, except that it saves an
evects object returned by
get-eigenvectors.
Conversely
input-eigenvectors, reads back in the eigenvectors into an
evects object from a file that has
num-bands bands.
Currently, there's only one other interesting thing you can do with the raw eigenvectors, and that is to compute the dot-product matrix between a set of saved eigenvectors and the current eigenvectors. This can be used, e.g., to detect band crossings or to set phases consistently at different k points. The dot product is returned as a "sqmatrix" object, whose elements can be read with the
sqmatrix-size and
sqmatrix-ref routines.
(dot-eigenvectors ev first-band)
Returns a sqmatrix object containing the dot product of the saved eigenvectors
ev with the current eigenvectors, starting at
first-band. That is, the (
i,j)th output matrix element contains the dot product of the (
i+1)th vector of
ev conjugated with the (
first-band+j)th eigenvector. Note that the eigenvectors, when computed, are orthonormal, so the dot product of the eigenvectors with themselves is the identity matrix.
(sqmatrix-size sm)
Return the size n of an nxn sqmatrix
sm.
(sqmatrix-ref sm i j)
Return the (
i,
j)th element of the nxn sqmatrix
sm, where {
i,
j} range from 0..n-1.
Inversion Symmetry
If you
configure MPB with the
--with-inv-symmetry flag, then the program is configured to assume inversion symmetry in the dielectric function. This allows it to run at least twice as fast and use half as much memory as the more general case. This version of MPB is by default installed as
mpbi, so that it can coexist with the usual
mpb program.
Inversion symmetry means that if you transform (x,y,z) to (-x,-y,-z) in the coordinate system, the dielectric structure is not affected. Or, more technically, that (see our online textbook, ch. 3):
where the conjugation is significant for complex-hermitian dielectric tensors. This symmetry is very common; all of the examples in this manual have inversion symmetry, for example.
Note that inversion symmetry is defined with respect to a specific origin, so that you may "break" the symmetry if you define a given structure in the wrong wayโthis will prevent
mpbi from working properly. For example, the diamond structure that we considered earlier would not have possessed inversion symmetry had we positioned one of the "atoms" to lie at the origin.
You might wonder what happens if you pass a structure lacking inversion symmetry to
mpbi. As it turns out,
mpbi only looks at half of the structure, and infers the other half by the inversion symmetry, so the resulting structure always has inversion symmetry, even if its original description did not. So, you should be careful, and look at the
epsilon.h5 output to make sure it is what you expected.
Parallel MPB
We provide two methods by which you can parallelize MPB. The first, using MPI, is the most sophisticated and potentially provides the greatest and most general benefits. The second, which involves a simple script to split e.g. the
k-points list among several processes, is less general but may be useful in many cases.
MPB with MPI Parallelization
If you
configure MPB with the
--with-mpi flag, then the program is compiled to take advantage of distributed-memory parallel machines with MPI, and is installed as
mpb-mpi. See also the Installation manual. This means that computations will potentially run more quickly and take up less memory per processor than for the serial code. Normally, you should also install the serial version of MPB, if only to get the
mpb-data program, which is not installed with
mpb-mpi.
Using the parallel MPB is almost identical to using the serial version(s), with a couple of minor exceptions. The same ctl files should work for both. Running a program that uses MPI requires slightly different invocations on different systems, but will typically be something like:
unix% mpirun -np 4 mpb-mpi foo.ctl
to run on e.g. 4 processors. A second difference is that 1D systems are currently not supported in the MPI code, but the serial code should be fast enough for those anyway. A third difference is that the output HDF5 files (epsilon, fields, etcetera) from
mpb-mpi have their first two dimensions (x and y) transposed; i.e. they are output as YxXxZ arrays. This doesn't prevent you from visualizing them, but the coordinate system is left-handed; to un-transpose the data, you can process it with
mpb-data and the
-T option in addition to any other options.
In order to get optimal benefit (time and memory savings) from
mpb-mpi, the first two dimensions (nx and ny) of your grid should both be divisible by the number of processes. If you violate this constraint, MPB will still work, but the load balance between processors will be uneven. At worst, e.g. if either nx or ny is smaller than the number of processes, then some of the processors will be idle for part or all of the computation. When using inversion symmetry (
mpbi-mpi) for 2D grids only, the optimal case is somewhat more complicated: nx and (ny/2 + 1), not ny, should both be divisible by the number of processes.
mpb-mpi divides each band at each k-point between the available processors. This means that, even if you have only a single k-point (e.g. in a defect calculation) and/or a single band, it can benefit from parallelization. Moreover, memory usage per processor is inversely proportional to the number of processors used. For sufficiently large problems, the speedup is also nearly linear.
Alternative Parallelization: mpb-split
There is an alternative method of parallelization when you have multiple k points: do each k-point on a different processor. This does not provide any memory benefits, and does not allow one k-point to benefit by starting with the fields of the previous k-point, but is easy and may be the only effective way to parallelize calculations for small problems. This method also does not require MPI: it can utilize the unmodified serial
mpb program. To make it even easier, we supply a simple script called
mpb-split (or
mpbi-split) to break the
k-points list into chunks for you. Running:
unix% mpb-split num-split foo.ctl
will break the
k-points list in
foo.ctl into
num-split more-or-less equal chunks, launch
num-split processes of
mpb in parallel to process each chunk, and output the results of each in order. Each process is an ordinary
mpb execution, except that it numbers its
k-points depending upon which chunk it is in, so that output files will not overwrite one another and you can still
grep for frequencies as usual.
Of course, this will only benefit you on a system where different processes will run on different processors, such as an SMP or a cluster with automatic process migration (e.g. MOSIX).
mpb-split is actually a trivial shell script, though, so you can easily modify it if you need to use a special command to launch processes on other processors/machines (e.g. via GNU Parallel).
The general syntax for
mpb-split is:
unix% mpb-split num-split mpb-arguments...
where all of the arguments following
num-split are passed along to
mpb. What
mpb-split technically does is to set the MPB variable
k-split-num to
num-split and
k-split-index to the index (starting with 0) of the chunk for each process. | https://mpb.readthedocs.io/en/latest/Scheme_User_Interface/ | 2019-04-18T18:22:31 | CC-MAIN-2019-18 | 1555578526228.27 | [] | mpb.readthedocs.io |
XAP Manager
The XAP Manager (or simply The Manager) is a component which will use zookeeper instead of LUS, providing a more robust process (consistent when network partitions occur), and eliminating split brains.
- When using MemoryXtend, last primary will automatically be stored in Zookeeper (instead of you needing to setup a shared NFS and configure the PU to use it)
- The GSM will use Zookeeper for leader election (instead of an active-active topology used today). This provides a more robust process (consistent when network partitions occur). Also, having a single leader GSM means that the general behaviour is more deterministic and logs are easier to read.
- RESTful API for managing the environment remotely from any platform.
Getting Started
The easiest way to get started is to run a standalone manager on your machine - simply run the following command:
./gs-agent.sh --manager-local
gs-agent.bat --manager-local
In the manager log file (
$XAP_HOME/logs), you can see:
- The maanger has started LUS, Zookeeper, GSM and REST API have started and various other details about them.
- Zookeeper files reside in
$XAP_HOME/work/manager/zookeeper
- REST API is started on localhost:8090
The local manager is intended for local usage on the developerโs machine, hence it binds to
localhost, and is not accessible from other machines. If you wish to start a manager and access it from other hosts, follow the procedure described in High Availability below with a single host.
High Availability
In a production environment, youโll probably want a cluster of managers on multiple hosts, to ensure high availability. Youโll need 3 machines (odd number is required to ensure quorum during network partitions). For examplem, suppose youโve selected machines alpha, bravo and charlie to host the managers:
- Edit the
$XAP_HOME/bin/setenv-overrides.sh/batscript and set
XAP_MANAGER_SERVERSto the list of hosts. For example:
export XAP_MANAGER_SERVERS=alpha,bravo,charlie
- Copy the modified
setenv-overrides.sh/batto each machine which runs a
gs-agent.
- Run
gs-agent --manageron the manager machines (alpha, bravo, charlie, in this case).
Note that starting more than one manager on the same host is not supported.
Configuration
Ports
The following ports can be modified using system properties, e.g. via the
setenv-overrides script located in
$XAP_HOME/bin:
Zookeeper requires that each manager can reach any other manager. If you are changing the Zookeeper ports, make sure you use the same port on all machines. If that is not possible for some reason, you may specify the ports via the
XAP_MANAGER_SERVERS environment variable. For example:
XAP.
Zookeeper
ZooKeeperโs behavior is governed by the ZooKeeper configuration file (
zoo.cfg). When using XAP manager, an embedded Zookeeper instance is started using a default configuration located at
$XAP_HOME/config/zookeeper/zoo.cfg.
If you need to override the default settings, either edit the default file, or use the
XAP_ZOOKEEPER_SERVER_CONFIG_FILE environment variable or the
com.gs.zookeeper.config-file system property to point to your custom configuration file.
Default port of Zookeeper is 2181.
Additional information on Zookeeper configuration can be found at ZooKeeper configuration .
Backwards Compatibility
The Manager is offered side-by-side with the existing stack (GSM, LUS, etc.). We think this is a better way of working with XAP, scheme when selecting resources where to deploy a processing unit instance. The โLeastRoundRobinSelectorโ chooses the container which has the least amount of instances on it. To avoid choosing the same container twice when amounts equal, it keeps the containers in a round-robin order. This scheme is different from the previous โWeightedSelectorโ which assigned weights to the containers based on heuristics and state gathered while selecting the less weighted container. The reason for this is that in large deployments, the network overhead and the overall deployment time is costly and may result in an uneven resource consumption.
Notice that you may be experiencing a different instance distribution than before, in some cases non-even. To force the use of the previous selector scheme, use the following system property:
Set '-Dorg.jini.rio.monitor.serviceResourceSelector=org.jini.rio.monitor.WeightedSelector' when loading the manager (in XAP_MANAGER_OPTIONS environment variable).
Q. Why do I need 3 managers? In previous versions 2 LUS + 2 GSM was enough for high availability
With an even number of managers, consistency cannot be assured in case of a network partitioning, hence the 3 managers requirement.
Q. I want higher availability - can I use 5 managers instead of 3?
Theoretically this is possible (e.g. Zookeeper supports this), but currently this is not supported in XAP - starting 5 managers would also start 5 Lookup Services, which will lead to excessive chatinnes and performance drop. This issue is in our backlog, though - if itโs important for you please contact support or your sales rep to vote it up.
Q. Can I use a load balancer in front of the REST API?
Sure. However, make sure to use sticky sessions, as some of the operations (e.g. upload/deploy) take time to resonate to the other managers. | https://docs.gigaspaces.com/xap/12.1/admin/xap-manager.html | 2019-04-18T18:34:52 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.gigaspaces.com |
API
- Azure Cosmos DB Table Java API
- Azure Cosmos DB Table Node.js API
- Azure Cosmos DB Table SDK for Python
Feedback
Send feedback about: | https://docs.microsoft.com/en-ca/azure/cosmos-db/table-introduction | 2019-04-18T18:48:31 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.microsoft.com |
Contents Security Operations Previous Topic Next Topic Add a related observable Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics Share Add a related observable In addition to importing observables as STIX data, you can add related observables manually. Before you beginRole required: sn_ti.admin Procedure Navigate to Threat Intelligence > IoC Repository > Observables. Click the observable to which you want to add a related observable. Click the Child Observables related list. Click Edit. As needed, use the filters to locate the observable you want to relate to the current one. Using the slushbucket, add the observable to the Child Observables list. Click Save. Related TasksDefine an observableAdd a related IoC to an observableAdd associated tasks to an observableLoad more IoC dataIdentify observable sourcesPerform lookups on observablesPerform threat enrichment on observables On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/london-security-management/page/product/threat-intelligence/task/t_AddRelatedObservable.html | 2019-04-18T18:57:04 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.servicenow.com |
In This Article
Writer
Zend\Feed\Writer is the sibling component to
Zend\Feed\Reader responsible
for generating feeds. is the inverse of
Zend\Feed\Reader. Where
Zend\Reader\Writer instance
The architecture of
Zend\Feed (Tombstone) containers, these are rendered only for Atom 2.0 and ignored for RSS.
Due to the system being divided between data containers and renderers,
extensions have more mandatory requirements than their equivalents in the
Zend\Feed\Reader subcomponent. A typical extension offering namespaced feed
and entry level elements must itself reflect the exact same architecture: i.e.
it must offer both feed and entry level data containers, and matching renderers.
There is, fortunately, no complex integration work required since all extension
classes are simply registered and automatically used by the core classes. We
cover extensions in more detail at the end of this chapter.
Getting Started
To use
Zend\Feed\Writer\Writer, you will provide it with data, and then
trigger the renderer. What follows is an example demonstrating generation of a
minimal Atom 1.0 feed. Each feed or entry uses a separate data container.
use Zend\Feed\Writer\Feed; /** * Create the parent feed */ $feed = new Feed; $feed->setTitle("Paddy's Blog"); $feed->setLink(''); $feed->setFeedLink('', 'atom'); $feed->addAuthor([ 'name' => 'Paddy', 'email' => '[email protected]', 'uri' => '', ]); $feed->setDateModified(time()); $feed->addHub(''); /** * Add one or more entries. Note that entries must * be manually added once created. */ $entry = $feed->createEntry(); $entry->setTitle('All Your Base Are Belong To Us'); $entry->setLink(''); $entry->addAuthor([ 'name' => 'Paddy', 'email' => '[email protected]', 'uri' => '', ]); $entry->setDateModified(time()); $entry->setDateCreated(time()); $entry->setDescription('Exposing the difficulty" href=""/> <id></id> <author> <name>Paddy</name> <email>[email protected]</email> <uri></uri> </author> <link rel="hub" href=""/> <entry> <title type="html"><![CDATA[All Your Base Are Belong To Us]]></title> <summary type="html"> <![CDATA[Exposing the difficultly of porting games to English.]]> </summary> <published>2009-12-14T20:28:18+00:00</published> <updated>2009-12-14T20:28:18+00:00</updated> <link rel="alternate" type="text/html" href=""/> <id></id> <author> <name>Paddy</name> <email>[email protected]</email>
a method for validating the data being set. By design, the API closely matches
that for
Zend\Feed\Reader to avoid undue confusion and uncertainty.
Zend\Feed, allowing method
overloading to the extension classes.
Here's a summary of the Core API for Feeds. You should note it comprises not
only the basic RSS and Atom standards, but also accounts for a number of
included extensions bundled with
Zend\Feed\Writer. The naming of these
extension sourced methods remain fairly generic; all extension methods operate
at the same level as the Core API, though we do allow you to retrieve any
specific extension object separately if required.
The Feed API for data is contained in
Zend\Feed\Writer\Feed. In addition to the API
detailed below, the class also implements the
Countable and
Iterator interfaces.
Feed API Methods
Retrieval methods
In addition to the setters listed above,
Feedinstances also provide matching getters to retrieve data from the
Feeddata container. For example,
setImage()is matched with a
getImage()method.
Setting Entry Data Points
Below is a summary of the Core API for entries and items. You should note that
it covers not only the basic RSS and Atom standards, but also a number of
included extensions bundled with
Zend\Feed\Writer. The naming of these
extension sourced methods remain fairly generic; all extension methods operate
at the same level as the Core API, though we do allow you to retrieve any
specific extension object separately if required.
The Entry API for data is contained in
Zend\Feed\Writer\Entry.
Entry API Methods
Retrieval methods
In addition to the setters listed above,
Entryinstances also provide matching getters to retrieve data from the
Entrydata container. For example,
setContent()is matched with a
getContent()method.
Extensions
- TODO
Found a mistake or want to contribute to the documentation? Edit this page on GitHub! | https://docs.zendframework.com/zend-feed/writer/ | 2019-04-18T18:32:33 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.zendframework.com |
Using Measurementsยถ
Measurements are the combination of two quantities: the mean value and the error (or uncertainty). The easiest ways to generate a measurement object is from a quantity using the plus_minus operator.
>>> import numpy as np >>> from pint import UnitRegistry >>> ureg = UnitRegistry() >>> book_length = (20. * ureg.centimeter).plus_minus(2.) >>> print(book_length) (20.0 +/- 2.0) centimeter
You can inspect the mean value, the absolute error and the relative error:
>>> print(book_length.value) 20.0 centimeter >>> print(book_length.error) 2.0 centimeter >>> print(book_length.rel) 0.1
You can also create a Measurement object giving the relative error:
>>> book_length = (20. * ureg.centimeter).plus_minus(.1, relative=True) >>> print(book_length) (20.0 +/- 2.0) centimeter
Measurements support the same formatting codes as Quantity. For example, to pretty print a measurement with 2 decimal positions:
>>> print('{:.02fP}'.format(book_length)) (20.00 ยฑ 2.00) centimeter
Mathematical operations with Measurements, return new measurements following the Propagation of uncertainty rules.
>>> print(2 * book_length) (40.0 +/- 4.0) centimeter >>> width = (10 * ureg.centimeter).plus_minus(1) >>> print('{:.02f}'.format(book_length + width)) (30.00 +/- 2.24) centimeter
Note
only linear combinations are currently supported. | https://pint.readthedocs.io/en/0.9/measurement.html | 2019-04-18T18:51:14 | CC-MAIN-2019-18 | 1555578526228.27 | [] | pint.readthedocs.io |
Administration
- Start or stop services
- Connect to LAPP from a different machine
- Run console commands
- Create and restore application backups
- Upload files using SFTP
- the Apache event MPM
- Understand default .htaccess file configuration
- Use PageSpeed with Apache
- | https://docs.bitnami.com/oci/infrastructure/lapp/administration/ | 2019-04-18T19:39:56 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.bitnami.com |
The
aggregate statement is used to reduce the number of rows in a DSET while preserving required information within them
aggregate
[dset.id]
[notime|daily] [offset
offset]
[nudge] [default_function
function] colname function [... colname function]
The
aggregate statement is a powerful tool for reducing the number of rows in a DSET. Aggregation is based on the concept of matching rows. Any two rows that match may be merged into a single row which selectively retains information from both of the original rows. Any further rows that match may also be merged into the same result row.
A match is determined by comparing all the columns which have a function of
match associated with them (further information regarding this can be found below). If all the column values match, then the rows are merged.
Merging involves examining all the columns in the data that were not used in the matching process. For each of those columns, it applies a function to the values in the two rows and updates the result row with the computed result of that function. For a full list of functions, please refer to the table further down in this article.
To illustrate this consider the following two row dataset:
id,colour,location,quantity1234,blue,europe,4.51234,green,europe,5.5
If we don't care about the
colour value in the above records, we can combine them together. We do care about the
quantity however, so we'll add the two values together to get the final result.
The statement to do this is:
aggregate notime id match location match quantity sum
id match means that the values in the
id columns must be the same
location match means that the values in the
location columns must be the same
quantity sum means that the resulting value should be the sum of the two existing values
by default, a function of
first is applied to the columns, such that the original row retains its value
Applying these rules to the above example we get the following single result record:
id,colour,location,quantity,EXIVITY_AGGR_COUNT1234,blue,europe,10,2
A column called
EXIVITY_AGGR_COUNT automatically created by the
aggregate statement and for each row in the output it will contain the number of source rows that were merged together to create that result row
The
aggregate statement accepts a range of parameters as summarised in the table below:
If two records are deemed suitable for merging then the function determines the resulting value in each column. The available functions are as follows:
When the notime parameter is specified, the aggregation process treats any columns flagged as start and end times in the data as data columns, not timestamp columns.
In this case when comparing two rows to see if they can be merged, the aggregation function simply checks to see if all the columns with a function of
match are the same, and if they are the two rows are merged into one by applying the appropriate function to each column in turn.
The following illustrates the
aggregate statement being used to remove duplicate rows from a DSET:
# The first column in the DSET being aggregated# is called subscription_idaggregate notime default_function match subscription_id match
The analysis of the statement above is as follows:
notime - we are not interested in timestamps
default_function match - by default every column has to match before records can be aggregated
subscription_id match - this is effectively redundant as the default_function is
match but needs to be present because at least one pair of colname function parameters is required by the
aggregate statement
The resulting DSET will have no duplicate data rows, as each group of rows whose column values were the same were collapsed into a single record.
The example shown at the top of this article used the
sum function to add up the two
quantity values, resulting in the same total at the expense of being able to say which source record contributed which value to that total.
The
sum function can therefore accurately reflect the values in a number of source rows, albeit with the above limitation. By using a function of
sum,
max or
min, various columns can be processed by
aggregate in a meaningful manner, depending on the specific use case.
When aggregating, columns containing start time and end time values in UNIX epoch format can be specified. Each record in the DSET therefore has start and end time markers defining the period of time that the usage in the record represents. As well as taking the start times and end times into account, time-sensitive aggregation can perform additinal manipulations on these start and end times.
Consider the following CSV file called
aggregate_test.csv:
startUsageTime,endUsageTime,id,subscription_id,service,quantity2017-11-03:00.00.00,2017-11-03:02.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:03.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:06.00.00,ID_3456,SUB_efgh,Medium VM,22017-11-03:00.00.00,2017-11-03:04.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:05.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:06.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:07.00.00,ID_1234,SUB_abcd,Large VM,22017-11-03:00.00.00,2017-11-03:02.00.00,ID_3456,SUB_efgh,Large VM,22017-11-03:00.00.00,2017-11-03:03.00.00,ID_3456,SUB_efgh,Medium VM,22017-11-03:00.00.00,2017-11-03:04.00.00,ID_3456,SUB_efgh,Large VM,22017-11-03:00.00.00,2017-11-03:05.00.00,ID_3456,SUB_efgh,Large VM,22017-11-03:00.00.00,2017-11-03:07.00.00,ID_3456,SUB_efgh,Large VM,22017-11-03:00.00.00,2017-11-03:06.00.00,ID_3456,SUB_efgh,Medium VM,2
It is possible to aggregate these into 3 output records with adjusted timestamps using the following Transcript task:
import system/extracted/aggregate_test.csv source aggr alias testvar template = YYYY.MM.DD.hh.mm.sstimestamp START_TIME using startUsageTime template ${template}timestamp END_TIME using endUsageTime template ${template}timecolumns START_TIME END_TIMEdelete columns startUsageTime endUsageTimeaggregate aggr.test daily nudge default_function first id match subscription_id match service match quantity sumtimerender START_TIME as FRIENDLY_STARTtimerender END_TIME as FRIENDLY_END
Resulting in:
id,subscription_id,service,quantity,START_TIME,END_TIME,EXIVITY_AGGR_COUNT,FRIENDLY_START,FRIENDLY_ENDID_1234,SUB_abcd,Large VM,12,1509667200,1509692399,6,20171103 00:00:00,20171103 06:59:59ID_3456,SUB_efgh,Medium VM,6,1509667200,1509688799,3,20171103 00:00:00,20171103 05:59:59ID_3456,SUB_efgh,Large VM,8,1509667200,1509692399,4,20171103 00:00:00,20171103 06:59:59
As can be seen, for each unique combination of the values in the
id,
subscription-id and
service columns, the start and end times have been adjusted as described above and the
quantity column contains the sum of all the values in the original rows.
!!! warning When performing time-sennsitive aggregation, any records with a start or end time falling outside the current data date will be discarded.
The daily parameter to
aggregate means that the START_TIME and END_TIME columns are now recognised as containing timestamps. When aggregating with the daily option, timestamps within the current dataDate are combined to result in an output record which has the earliest start time and the latest end time seen within the day.
Optionally, following
daily an offset may be specified as follows:
aggregate aggr.test daily offset 2 id match subscription_id match quantity sum
In this case the start and end timestamps are adjusted by the number of hours specified after the word offset before aggregation is performed. This permits processing of data which has timestamps with timezone information in them, and which may start at 22:00:00 of the first day and end at 21:59:59 of the second day, as an offset can be applied to realign the records with the appropriate number of hours to compensate.
The nudge parameter shaves 1 second off end times before aggregating in order to avoid conflicts where hourly records start and end on 00:00:00 9the last second of the current hour is the same as the first second of the next hour) | https://docs.exivity.com/diving-deeper/transform/language/aggregate | 2019-04-18T18:42:44 | CC-MAIN-2019-18 | 1555578526228.27 | [] | docs.exivity.com |
Backup Overview page
inSync Private Cloud Editions:
Elite
Enterprise
Overview
This page gives you a complete status of data that inSync is backing up from user devices.
Backup Summary
The following table lists the fields in the Backup Summary area.
Backup Statistics
The following table lists the fields in the Backup Statistics area.
Total Backup Data By Profile
In a graphical representation, the Total Backup Data By Profile area displays the top five profiles in terms of total data stored.
Details
The following table describes the fields in the Details area: | https://docs.druva.com/010_002_inSync_On-premise/010_inSync_On-Premise_5.5/030_Get_Started%3A_Backup_and_Restore/inSync_Master_Management_Console_User_Interface/Backup_Overview_page | 2019-04-18T18:29:04 | CC-MAIN-2019-18 | 1555578526228.27 | [array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2',
'File:/tick.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/11828/Backup_Summary.jpg?revision=4&size=bestfit&width=350&height=147',
'Backup_Summary.jpg'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/11827/Backup_Statistics.png?revision=4&size=bestfit&width=350&height=149',
'Backup_Statistics.png'], dtype=object)
array(['https://docs.druva.com/@api/deki/files/11834/Total_Backup_Data_By_Profile.png?revision=1&size=bestfit&width=307&height=82',
'Total Backup Data By Profile.png'], dtype=object) ] | docs.druva.com |
Broadcasts module offers a full email marketing campaign management, it allows you to schedule automatic email for your audience segments.
SMTP Server Required
You need to specify an SMTP server before creating a broadcast.
To create your first Broadcast, go to My Audience and choose Broadcasts & Automation.
Click on NEW BROADCAST to create a new one.
Name it as you like, choose the desired Audience Segment previously created and click on the CREATE BROADCAST button.
After you create it, you will be able to customize your outgoing email, and add or change an audience segment who should receive this automated email.
This means you should previously create your desired audience segment.
Creating an audience segment means you can choose any specific category of your audience and send them an automatic email.
For example you can set that everyone who watched over 50% of your video will receive an email.
You can set a delay for the automatic email or even determine the exact date for your marketing campaign emails (when you disable Send Automatically).
| https://docs.vooplayer.com/vooplayer-modules/broadcasting | 2019-04-18T18:43:30 | CC-MAIN-2019-18 | 1555578526228.27 | [array(['https://downloads.intercomcdn.com/i/o/49256050/9ea028a7bad7d674a05bcb6d/Screenshot_202.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/49280411/3dd083405da42df503555d48/Screenshot_212.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/49280789/2e0b463b8920b5ce70a31ae6/Screenshot_213.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/49281304/f2389cb729ca85cc70e7194f/Screenshot_214.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/49281922/3a88bf196c14de8369d1daaa/Screenshot_215.png',
None], dtype=object) ] | docs.vooplayer.com |
Managing Spill Files Generated by Queries
A newer version of this documentation is available. Click here to view the most up-to-date release of the Greenplum 4.x documentation.
Managing Spill Files Generated by Queries
ERROR: number of workfiles per query limit exceeded
- Data skew is present in the queried data.
- The amount memory allocated for the query is too low.
You might be able to run the query successfully by changing the query, changing the data distribution, or changing the system memory configuration. You can use the gp_workfile_* views to see spill file usage information. You can control the maximum amount of memory that can used by a query with the Greenplum Database server configuration parameters max_statement_mem, statement_mem, or through resource queues.
- Information about skew and how to check for data skew
- Information about using the gp_workfile_* views
For information about server configuration parameters, see the Greenplum Database Reference Guide. For information about resource queues, see Workload Management with Resource Queues.
If you have determined that the query must create more spill files than allowed by the value of server configuration parameter gp_workfile_limit_files_per_query, you can increase the value of the parameter. | http://gpdb.docs.pivotal.io/43140/admin_guide/query/topics/spill-files.html | 2019-04-18T19:11:34 | CC-MAIN-2019-18 | 1555578526228.27 | [array(['/images/icon_gpdb.png', None], dtype=object)] | gpdb.docs.pivotal.io |
Apply and Unapply General Ledger Entries
Applying temporary general ledger entries allows companies to work with temporary and transfer accounts in the general ledger. Temporary and transfer accounts are used to store temporary ledger entries that are waiting for further processing into the general ledger.
You can use temporary accounts for:
- Money transfers from one bank account to another.
- Financial transaction transfers from one system to another in which part of the information temporarily resides on the original system.
- Transactions for which you have issued a sales invoice to a customer but have not yet received the corresponding purchase invoice from the vendor.
When the ledger entries have been processed, you can use the apply entries function to update the posted ledger entries and the posting account type.
You can unapply the applied general ledger entries and then open the closed entries to make changes.
To apply general ledger entries
Choose the
icon, enter G/L Registers, and then choose the related link.
Select a general ledger register, and then choose the General Ledger action.
On the General Ledger Entries page, choose the Apply Entries action.
All open ledger entries for the general ledger account are displayed on the Apply General Ledger Entries page.
Note
By default, the Include Entries field is set to Open. You can change the value of the Include Entries field to All or Closed. You can only apply general ledger entries that are Open.
Select the relevant general ledger entry, and then, on the Navigate tab, in the Application group, choose Set Applies-to ID.
The Applies-to ID field is updated with the user ID. The remaining amount is displayed in the Balance field on the Apply General Ledger Entries page.
Choose the Post Application.
You can post the application even if the balance amount is equal to 0. When posted, the Remaining Amount field is affected as follows:
If the Balance is equal to 0, then the Remaining Amount field on all ledger entries is set to 0.
If the Balance is not equal to 0, then the amount in the Balance field is transferred to the Remaining Amount field for the general ledger entry that was selected when you posted the application.
For all other general ledger entries, the Remaining Amount field is set to 0 and the Open, Closed by Entry No., Closed by Amount, and Closed at Date fields are updated.
Note
When posted, the general ledger entries which update the Applies-to ID field are deleted.
Choose the OK button.
To view the applied general ledger entries
Choose the
icon, enter G/L Registers, and then choose the related link.
Select a general ledger register, and then choose the General Ledger action.
Select the relevant general ledger entry, and then choose the Applied Entries action.
You can view all the applied general ledger entries.
Choose the OK button.
To unapply general ledger entries
Choose the
icon, enter G/L Registers, and then choose the related link.
Select a general ledger register, and then choose the General Ledger action.
Select the general ledger entry that you want to unapply, and then choose the Undo Application action.
The applied general ledger entries are unapplied.
Note
If an entry is applied to more than one application entry, you must unapply the latest application entry first. By default, the latest entry is displayed.
Choose the OK button.
See Also
Belgium Local Functionality
Commentaires
Envoyer des commentaires ร propos de :
Chargement du commentaire... | https://docs.microsoft.com/fr-fr/dynamics365/business-central/localfunctionality/belgium/how-to-apply-and-unapply-general-ledger-entries | 2019-06-16T05:52:28 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.microsoft.com |
This guide provides instructions on how to add Spring Security to an existing Spring Boot application.
Setting up the sample
This section outlines how to setup a workspace within Spring Tool Suite (STS) so that you can follow along with this guide. The next section outlines generic steps for how to apply Spring Security to your existing application. While you could simply apply the steps to your existing application, we encourage you to follow along with this guide in order to reduce the complexity.
Obtaining the sample project
Extract the Spring Security Distribution to a known location and remember it as SPRING_SECURITY_HOME.
Import the insecure sample application
In order to follow along, we encourage you to import the insecure sample application into your IDE. You may use any IDE you prefer, but the instructions in this guide will assume you are using Spring Tool Suite (STS).
If you do not have STS installed, download STS from
Start STS and import the sample application into STS using the following steps:
FileโImport
Existing Maven Projects
Click Next >
Click Browseโฆ
Navigate to the samples (i.e. SPRING_SECURITY_HOME/samples application, it is important to ensure that the existing application works as we did in Running the insecure>4.2.13.BUILD-SNAPSHOT</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>4.2.13-insecure project in update the application to display the username. Update the. | https://docs.spring.io/spring-security/site/docs/4.2.13.BUILD-SNAPSHOT/guides/html5/helloworld-boot.html | 2019-06-16T04:47:01 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.spring.io |
Applying for Masternode Candidacy
Once your full node is up and running, you need to apply to make it eligible as a masternode.
Masternodes will receive a significant amount of block rewards, which likely exceeds the cost for running the infrastructure. However, masternode candidates need to invest in TomoChain by depositing at least 50'000 Tomo, and stake them for a long term. Furthermore, after the initial deposit to become a candidate, if he doesn't make it to the top 150 most voted candidates, he will not be promoted as masternode and thus receive no rewards. Therefore, candidates have an incentive to do as much as they can such as signalling their capability to support TomoChain to get into top 150 most voted candidates.
Requirementsยถ.
Applying to become a masternodeยถ
You can applyk.
Resigning your masternodeยถ
In case you want to stop your node, you need to resign it from the governance first in order to retrieve your locked funds.
Access TomoMaster, go to your candidate detail page, and click the
Resign button.
Your funds will be available to withdraw 30 days after the resignation (1,296,000 blocks).
After resigning successfully, you can stop your node. If you ran it with
tmn, simply run:
tmn remove
At this point, your masternode is completely terminated. | https://docs.tomochain.com/masternode/applying/ | 2019-06-16T05:25:38 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.tomochain.com |
{"_id":"5a3bf06d2e2f55003c5ee752",-21T17:33:33.879Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":2,"body":"##Overview\n\nBatching runs the same workflow or tool multiple times with varying inputs in parallel executions. These inputs can be grouped by input files or via specified metadata criteria.\n\nLearn more about [setting up a batch task](doc:run-a-task#section--2-optional-designate-your-task-as-a-batch-task). On this page, learn more about batching.\n\n##What are batch tasks?\n\n**Batch analysis** separates files into batches or groups when running your analysis. The batching is done according to the specified metadata criteria of your input files or according to a files list you provide.\n\nWhen your batch task is run, it produces multiple child tasks: one for each group of files. The child tasks perform the same analysis independently.\n\nRunning a batch task will produce a number of child tasks equal to the number of groups your inputs have been separated into. There are a series of advantages to running a batch task:\n\n * The batch task automatically generates identical tasks to be run on your groups of files, saving you the effort of having to do this manually.\n * Child tasks are executed in parallel and are independent from each other.\n * If a child task fails, this does not affect the other child tasks, meaning that you only need to troubleshoot and rerun the failed task. \n * You can re-run any child task separately.\n\n##What is metadata?\n\n**Metadata** is information about the nature of your data and how it was obtained. This information is used by apps on Cavatica to make sure the right data gets analyzed together. Learn more about supplying [metadata for your files](doc:metadata-for-your-files). \n\nBatch tasks use values for metadata fields or file names in a given file list to group the files you are analyzing.\n\nSome common scenarios include batching by:\n\n * File name\n * Case ID\n * Sample ID\n * Library ID\n * Platform ID\n * File segment\n\n##How do I run a batch task?\n\nYou can set up a batch analysis on Cavatica both using the [visual interface](doc:run-a-task#section--2-optional-designate-your-task-as-a-batch-task), as well as [via the API](doc:api-batch-tutorial).\n\n##Example of batching by metadata\n\nLet's imagine having to run the **RNA-seq Alignment - STAR** workflow on many files.\n\nThe metadata fields by which batching is possible via the visual interface are described in the figure below.\n[block:image]\n{\n \"images\": [\n {\n \"image\": [\n \"\",\n \"batching_nested_hierarchy01.png\",\n 1564,\n 622,\n \"#cce3f3\"\n ]\n }\n ]\n}\n[/block]\n##Grouping input files into batches\n\nGroup input files by their metadata or by files or run the workflow as a single task, which will group all inputs together. The optimal grouping depends on your experimental design.\n\nFor example, suppose you want to run the public Whole Genome Analysis workflow, and you have multiple FASTQ files from many samples (two paired end reads per sample, resulting in two files per sample). In this case, you might want to analyze files from each sample in batches.\n\n###Batching by File\n\n\nTo batch by file, select **File** from the Batch by drop-down menu. Batching by File runs the workflow or tool for each individual file, initiating a new child task for each input file.\n\n###Batch by file metadata\n\nTo batch by file metadata, select **File metadata** from the Batch by drop-down menu. Specify the metadata field by which to batch. Files are grouped by their value for a metadata field. A separate child task will be created for each different grouping.","excerpt":"","slug":"about-batch-analyses","type":"basic","title":"About batch analyses"} | http://docs.cavatica.org/docs/about-batch-analyses | 2019-06-16T05:24:36 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.cavatica.org |
Troubleshooting issues on your Amplia instance
Note
The documentation for this system is currently under construction. We apologize for any inconvenience this may cause. Please contact us if there's any information you need that is not currently documented.
- HTTP Error 502.5 - ANCM Out-Of-Process Startup Failure upon accessing the website
See also
- Checking the system logs...
- Enabling ASP.NET Core stdout log... | http://docs.lacunasoftware.com/en-us/articles/amplia/on-premises/troubleshoot/index.html | 2019-06-16T05:38:14 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.lacunasoftware.com |
Select this option if you want the same action performed on all types of virus/malware, except probable virus/malware. If you choose "Clean" as the first action, select a second action that OfficeScan performs if cleaning is unsuccessful. If the first action is not "Clean", no second action is configurable.
If you choose "Clean" as the first action, OfficeScan performs the second action when it detects probable virus/malware. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60/ch_policy_templates/osce_client/scan_types_manual_cnfg/scan_set_cmn_act_virus/scan_set_cmn_act_same.aspx | 2019-06-16T05:20:30 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
Summaryยถ
Anvil is a forging tool to help build OpenStack components and their dependencies into a complete package-oriented system.
It automates the git checkouts of the OpenStack components, analyzes & builds their dependencies and the components themselves into packages.
It allows a developer to setup an environment using the automatically created
packages (and dependencies, ex.
RPMs) with the help of anvil configuring
the components to work correctly for the developerโs needs.
The distinguishing part from devstack (besides being written in Python and not
shell), is that after building those packages (currently
RPMs) the same
packages can be used later (or at the same time) to actually deploy at a
larger scale using tools such as chef, salt, or puppet (to name a few).
Featuresยถ
Configurationsยถ
A set of configuration files (in yaml format) that is used for common, component, distribution, code origins configuration...
All the yaml configuration files could be found in:
conf/templates/keystone/
conf/components/
conf/distros/
conf/origins/
- subdirectories of
conf/personas/
Packagingยถ
- Automatically downloading source from git and performing tag/branch checkouts.
- Automatically verifying and translating requirement files to known pypi/rpm packages.
- Automatically installing and building missing dependencies (pypi and rpm) for you.
- Automatically configuring the needed files, symlinks, adjustments, and any patches.
Code decouplingยถ
- Components & actions are isolated as individual classes.
- Supports installation personas that define what is to be installed, thus decoupling the โwhatโ from the โhowโ.
Note
This encouraging re-use by others...
Extensive loggingยถ
- All commands executed are logged in standard output, all configuration files read/written (and so on).
Note
Debug mode can be activated with
-v option...
Package tracking and buildingยถ
- Creation of a single
RPMset for your installation.
- This freezes what is needed for that release to a known set of packages and dependencies.
- Automatically building and/or including all needed dependencies.
- Includes your distributions existing native/pre-built packages (when and where applicable). | https://anvil.readthedocs.io/en/latest/topics/summary.html | 2019-06-16T04:56:24 | CC-MAIN-2019-26 | 1560627997731.69 | [] | anvil.readthedocs.io |
-
5: Configure DSR mode when using TOS
Differentiated services (DS), also known as TOS (Type of Service), is a field that is part of the TCP packet header. TOS is used by upper layer protocols for optimizing the path for a packet. The TOS information encodes the Citrix ADC appliance virtual IP address (VIP), and the load balanced servers extract the VIP from it.
In the following scenario, the appliance adds the VIP to the TOS field in the packet and then forwards the packet to the load balanced server. The load balanced server then responds directly to the client, bypassing the appliance, as illustrated in the following diagram.
Figure 1. The Citrix ADC appliance in DSR mode with TOS
The TOS feature is specifically customized for a controlled environment, as described below:
- The environment must not have any stateful devices, such as stateful firewall and TCP gateways, in the path between the appliance and the load balanced servers.
- Routers at all the entry points to the network must remove the TOS field from all incoming packets to make sure that the load balanced server does not confuse another TOS field with that added by the appliance.
- Each server can have only 63 VIPs.
- The intermediate router must not send out ICMP error messages regarding fragmentation. The client will not understand the message, as the source IP address will be the IP address of the load balanced server and not the Citrix ADC VIP.
- TOS is valid only for IP-based services. You cannot use domain name based services with TOS.
In the example, Service-ANY-1 is created and bound to the virtual server Vserver-LB-1. The virtual server load balances the client request to the service, and the service responds to clients directly, bypassing the appliance. The following table lists the names and values of the entities configured on the appliance in DSR mode.
DSR with TOS requires that load balancing be set up on layer 3. To configure a basic load balancing setup for Layer 3, see Setting Up Basic Load Balancing. Name the entities and set the parameters using the values described in the previous table.
After you configure the load balancing setup, you must customize the load balancing setup for DSR mode by configuring the redirection mode to allow the server to decapsulate the data packet and then respond directly to the client and bypass the appliance.
After specifying the redirection mode, you can optionally enable the appliance to transparently monitor the server. This enables the appliance to transparently monitor the load balanced servers.
To configure the redirection mode for the virtual server by using the command line interface
At the command prompt, type:
set lb vserver <vServerName> -m <Value> -tosId <Value>
Example:
set lb vserver Vserver-LB-1 -m TOS -tosId 3
To configure the redirection mode for the virtual server by using the configuration utility
- Navigate to Traffic Management > Load Balancing > Virtual Servers.
- Open a virtual server, and in Redirect Mode, select TOS ID.
To configure the transparent monitor for TOS by using the command line interface
At the command prompt, type:
add monitor <MonitorName> <Type> -destip <DestinationIP> -tos <Value> -tosId <Value>
Example:
add monitor mon1 PING -destip 10.102.33.91 -tos Yes -tosId 3
To create the transparent monitor for TOS by using the configuration utility
- Navigate to Traffic Management > Load Balancing > Monitors.
- Create a monitor, select TOS, and type the TOS ID that you specified for the virtual server.
Wildcard TOS Monitors
In a load balancing configuration in DSR mode using TOS field, monitoring its services requires a TOS monitor to be created and bound to these services. A separate TOS monitor is required for each load balancing configuration in DSR mode using TOS field, because a TOS monitor requires the VIP address and the TOS ID to create an encoded value of the VIP address. The monitor creates probe packets in which the TOS field is set to the encoded value of the VIP address. It then sends the probe packets to the servers represented by the services of a load balancing configuration.
With a large number of load balancing configurations, creating a separate custom TOS monitor for each configuration is a big, cumbersome task. Managing these TOS monitors is also a big task. Now, you can create wildcard TOS monitors. You need to create only one wildcard TOS monitor for all load balancing configurations that use the same protocol (for example, TCP or UDP).
A wildcard TOS monitor has the following mandatory settings:
- Type =
<protocol>
- TOS = Yes
The following parameters can be set to a value or can be left blank:
- Destination IP
- Destination Port
- TOS ID
A wildcard TOS monitor (with destination IP, Destination port, and TOS ID not set) bound to a DSR service automatically learns the TOS ID and the VIP address of the load balancing virtual server. The monitor creates probe packets with TOS field set to the encoded VIP address and then sends the probe packets to the server represented by the DSR service.
To create a wildcard TOS monitor by using the CLI
At the command prompt, type:
add lb monitor <monitorName> <Type> -tos YES show lb monitor <monitorName>
To bind a wildcard TOS monitor to a service by using the CLI
At the command prompt, type:
bind lb monitor <monitorName> <serviceName> show lb monitor <monitorName>
To create a wildcard TOS monitor by using the GUI
- Navigate to Traffic Management > Load Balancing > Monitors.
- Add a monitor with the following parameter settings:
- Type =
<protocol>
- TOS = YES
To bind a wildcard TOS monitor to a service by using the GUI
- Navigate to Traffic Management > Load Balancing > Services.
- Open a service and bind a wildcard TOS monitor to it.
In the following sample configuration, V1, V2, and V3 are load balancing virtual servers of type ANY and has TOS ID set to 1, 2, and 3 respectively. S1, S2,S3, S4, and S5 are services of type ANY. S1 and S2 are bound to both V1 and V2. S3, S4 and S5 and bound to both V1 and V3. WLCD-TOS-MON is a wildcard TOS monitor with type TCP and is bound to S1, S2, S3, S4 and S5.
WLCD-TOS-MON automatically learns the TOD ID and VIP address of virtual servers bound to S1, S2, S3, S4, and S5.
Because S1 is bound to V1 and V2, WLCD-TOS-MON creates two types of probe packets for S1, one with TOS field set to the encoded VIP address (203.0.113.1) of V1 and and the other with the VIP address (203.0.113.2) of V2. The Citrix ADC then sends these probe packets to the server represented by S1. Similarly, WLCD-TOS-MON creates probe packets for S2, S3, S4, and S5.
add lb monitor WLCD-TOS-MON TCP -tos YES Done add lb vserver V1 ANY 203.0.113.1 * -m TOS โtosID 1 Done add lb vserver V2 ANY 203.0.113.2 * -m TOS โtosID 2 Done add lb vserver V3 ANY 203.0.113.3 * -m TOS โtosID 3 Done add service S1 198.51.100.1 ANY * Done add service S2 198.51.100.2 ANY * Done add service S3 198.51.100.3 ANY * Done add service S4 198.51.100.4 ANY * Done add service S5 198.51.100.5 ANY * Done bind lb monitor WLCD-TOS-MON S1 Done bind lb monitor WLCD-TOS-MON S2 Done bind lb monitor WLCD-TOS-MON S3 Done bind lb monitor WLCD-TOS-MON S4 Done bind lb monitor WLCD-TOS-MON S5 Done bind lb vserver V1 S1, S2, S3, S4, S5 Done bind lb vserver V2, S1, S2 Done bind lb vserver V3 S3, S4, S5 Done
Use case 5: Configure DSR mode when using TOS
In this article
- To configure the redirection mode for the virtual server by using the command line interface
- To configure the redirection mode for the virtual server by using the configuration utility
- To configure the transparent monitor for TOS by using the command line interface
- To create the transparent monitor for TOS by using the configuration utility
- Wildcard TOS Monitors | https://docs.citrix.com/en-us/citrix-adc/13/load-balancing/load-balancing-dsrmode-tos.html | 2019-06-16T06:43:55 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/en-us/citrix-adc/media/LB-DSR_TOS-a.png', 'dsrwithtos'],
dtype=object) ] | docs.citrix.com |
Running Replications.
SG Replicate vs Master/Master and Master/Slave when the no-conflicts mode is enabled (i.e "allow_conflicts": false). When running two Sync Gateway clusters with
allow_conflicts: false, cross-cluster document conflicts will result in that document no longer being replicated. To avoid this, the application must ensure concurrent, cross-cluster updates are not made to a given document. behavior.mitting the "continuous" property itโs value will default to false, a replication must also have been started as a one-shot to match.
{ "cancel":true, "source": "db", "target": "db-copy" }
When an active task is canceled," } | https://docs.couchbase.com/sync-gateway/2.1/running-replications.html | 2019-06-16T04:43:34 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['_images/running-replications.png', 'running replications'],
dtype=object) ] | docs.couchbase.com |
Contents
Network T-Server NGSN
Supported Operating Systems
Notes:
- An asterisk (*) indicates the oldest operating systems supported for the Genesys 7.x and Genesys 8.x Maintenance Interoperable Components, including AIX Power PC, HP-UX, and Solaris SPARC.
-. modified on January 16, 2019, at 11:51.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/System/Current/SOE/NetworkTServerNGSN | 2019-06-16T05:05:01 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.genesys.com |
protobj: Prototype-Delegation Object Model
1 Introduction
2 Tour
- Bind a to the new object that is created by cloning the default root object (% is special syntax for invoking the clone method):
> (define a (%))
- Verify that a is an object and that aโs parent is the default root object:
- Add to a a slot named x with value 1:
> (! a x 1)
- Get aโs slot xโs value:
- Bind b to a clone of a:
> (define b (% a))
- Get bโs slot xโs value, which is inherited from a:
- Set aโs slot xโs value to 42, and observe that b inherits the new value:
> (! a x 42)
- Set bโs slot xโs value to 69, and observe that a retains its own x value although โs x value has been changed:
> (! b x 69)
-:
- Bind to c an object that clones a and adds slot y with value 101:
> (define c (% a (y 101)))
- Get the values of both the x and y slots of c:
- Finally, bind d to a clone of a that overrides aโs x slot:
> (define d (% a (x 1) (y 2) (z 3)))
3 Basic Interface
4 Terse Syntax
5 Known Issues
Needs more units tests.
6 History
- Version 2:0 โ
2016-02-29
Moving from PLaneT to new package system.
- Version 1:22011-11-08
Fixed object? not being exported. (Thanks to Shviller for reporting.)
- Version 1:1 โ
2009-03-03
License is now LGPL 3.
Converted to authors new Scheme administration system.
Changed slot lists and slot pairs to be explicitly mutable, for PLT 4.x.
- Version 1:0 โ
2005-06-19
Fixed bug in %protobj:apply* (thanks to Benedikt Rosenau for reporting).
Changed $ syntax to ?, so that $ could be used for โselfโ in methods.
Documentation changes.
- Version 0.1 โ
2005-01-05
Initial release.
7 Legal
Copyright. | https://docs.racket-lang.org/protobj/index.html | 2019-06-16T05:28:04 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.racket-lang.org |
Contents Now Platform Capabilities Previous Topic Next Topic Widgets Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share libraryWidgets included with Service Portal can be customized to suit your own needs or as a basic code sample for you to refer to as you are building your own widgets. Widget instances instances.Widget context menuFrom any rendered Service Portal page you can CTRL+right-click a widget to see more configuration options in a context menu.Widget developer guideDevelop custom widgets in the Service Portal using AngularJS, Bootstrap, and the ServiceNow API.Related TopicsAngularJSBootstrap documentation On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/build/service-portal/concept/c_Widgets.html | 2019-06-16T05:24:05 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.servicenow.com |
TOC & Recently Viewed
Recently Viewed Topics
Tenable Appliance 4.6.0 Release Notes - 6/14/2017
This release is our quarterly scheduled release which includes security patches and the latest versions of SecurityCenter (5.5), Nessus (6.10.7), and Nessus Network Monitor (formerly PVS, version 5.3).
Supported Upgrade Paths
Tenable Appliance 4.6 supports the following direct upgrade paths:
- 4.4, 4.5 -> 4.6.0
Note: If your upgrade path skips versions of Appliance, Tenable recommends reviewing the release notes for all skipped versions. You may need to update your configurations because of features and functionality added in skipped versions.
File Names & Checksums | https://docs.tenable.com/releasenotes/appliance/appliance40.htm | 2019-06-16T05:28:37 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.tenable.com |
An Act to amend 71.05 (23) (b) 1., 71.06 (1q) (intro.), 71.06 (2) (i) (intro.), 71.06 (2) (j) (intro.), 71.06 (2e) (a), 71.06 (2e) (b), 71.06 (2m), 71.06 (2s) (d), 71.125 (1), 71.125 (2), 71.17 (6), 71.64 (9) (b) (intro.), 71.67 (5) (a) and 71.67 (5m); and to create 71.05 (23) (be), 71.06 (1r), 71.06 (2) (k), 71.06 (2) (L), 71.06 (2e) (bg) and 71.07 (5n) (e) of the statutes; Relating to: increasing certain individual income tax rates and expanding the number of brackets, increasing the personal exemption for certain individuals, and sunsetting the manufacturing and agriculture tax credit. (FE) | https://docs-preview.legis.wisconsin.gov/2017/proposals/ab200 | 2019-06-16T05:46:21 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs-preview.legis.wisconsin.gov |
Database migration
The eazyBI database migration is available starting from the Private eazyBI version 4.0.0.
eazyBI stores all data in a separate database. You can use the eazyBI database migration to migrate either all eazyBI data from one server to another or you can use it to migrate just couple eazyBI accounts from, for example, a production server to a test server.
You can migrate data from a Private eazyBI database or also from a eazyBI Jira add-on database.
If you would like to migrate existing eazyBI database to a new server then, at first, set up a new eazyBI database and start Private eazyBI using this new database.
If you would like to use Atlassian Connect integration with Jira Server and would like to migrate existing eazyBI Jira add-on data to Private eazyBI then at first configure Atlassian Connect integration with your Jira Server. In the following step you will need to specify the source eazyBI Jira add-on database connection parameters. In the Select data for migration page, you will need to select your Jira server site where you have installed eazyBI Atlassian Connect add-on.
After the initial setup, go to System Administration / Settings and in the top navigation click System Administration and then from System administration tasks select Database migrator.
Source eazyBI database
In the first step specify the database connection parameters to the existing eazyBI database from which you would like to migrate data.
Click Continue and if there will be any problems with the database connection then the errors will be displayed. If the database connection will be successful then the next page will be shown.
Select data for migration
The list of eazyBI accounts will be displayed which are available for import. If some of these accounts are disabled then it means that either they are already imported or there are existing accounts with the same name in the current eazyBI database.
Starting from the version 4.2.3 you can select the option Preserve original IDs. This option is recommended (and selected by default) when migrating an existing eazyBI database to a new empty database as then all existing account, report and dashboard ID values will be the same as in the original database. When you publish eazyBI reports or dashboards using iframes or as gadgets then these IDs are used as a reference.
Select all or some accounts that you would like to migrate and click the Migrate button.
The migration will copy the following data from the source database to the current database:
- Accounts
- Account users
- Source application definitions (without imported data)
- Source file definitions (without uploaded files)
- Cube definitions (with no data)
- Cube report definitions
- Calculated member formulas
- Dashboard definitions
After successful import, you will be redirected to the home tab.
Re-import source data
Only source application (like Jira, REST API or SQL) definitions will be migrated but no source data are migrated. After the migration, you need to go to the Source Data tab in each account and start the data import from all source applications. After the data import, you can go to the Analyze tab and open your existing reports.
If you had any uploaded source files then please re-upload them in the Source Data tab and re-import data from all source files. | https://docs.eazybi.com/eazybiprivate/set-up-and-administer/system-administration/database-migration | 2019-06-16T04:42:04 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
Custom alert actions overview
Unique use cases can require custom alerting functionality and integration.
Use the Splunk custom alert action API to create alert action apps that admins can download and install from Splunkbase. Users can access and configure installed custom alert actions in Splunk Web. The API lets you create a user experience consistent with the standard Splunk alerting workflow.
Developer resources
Use the following resources to learn how to build a custom alert action.
- API overview
- Custom alert action component reference
- Build custom alert action components
- Create custom alert configuration files
- Create a custom alert script
- Define a custom alert action user interface
- Optional custom alert action components
- Advanced options for working with custom alert actions
Additional resources
To try out a custom alert action, you can use the built-in webhook alert action to send notifications to a web resource, like a chat room or blog. For more information, see Use a webhook alert action in the Alert! | https://docs.splunk.com/Documentation/Splunk/6.5.2/AdvancedDev/ModAlertsIntro | 2019-06-16T04:59:20 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
TOC & Recently Viewed
Recently Viewed Topics
Scan Results
Path: Scans > Scan Results
The Scan Results page displays scan results from active and agent scans (queued, running, completed, or failed).
Note: SecurityCenter does not include all agent scans in the scan results table. If an agent scan imports scan results identical to the previous agent scan, SecurityCenter omits the most recent agent scan from the scan results table.
Results appear in a list view, and you can drill down into individual scan details. For more information, see Manage Scan Results and Upload Scan Results. | https://docs.tenable.com/sccv/5_5/Content/ScanResults.htm | 2019-06-16T04:45:23 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.tenable.com |
Category:Projects
From CSLabsWiki
This category exists to classify articles associated with past and present lab projects. (See also: Category:Infrastructure and Category:Research)
Current Projects
This section is for any projects with active members.
COSI Robotics
COSI Robotics is a project to tinker with the two Turtlebots COSI owns.
Team Lead: Michael Fulton
COSI Webdev
Webdev is a collective of cosinaughts continually enhancing COSI's presence on the web and unifying our pages under a single brand.
Team Lead:. | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Category:Projects&oldid=8081 | 2019-06-16T04:32:28 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.cslabs.clarkson.edu |
Once you have upgraded to a paid plan from a trial account your billing period will begin immediately and your credit card will be charged for the first period. Each subsequent periods after the first you will be charged the rate of the selected plan.
We'll send bills to the email address provided with the billing information. If you can't find it, please look into the spam folder as well and consider marking [email protected] as a safe sender so next emails won't be classified as spam.
The bills can be downloaded from the workspace's Manage / Manage subscription menu as well. | http://docs.storiesonboard.com/articles/608650-how-are-accounts-on-paid-plans-billed | 2019-06-16T05:44:45 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.storiesonboard.com |
The action to register or unregister child servers does not generate the same result as to enable or disable child servers. Unregistering a child server permanently cuts the parent and child server connection. Disabling a child server temporarily suspends the connection and maintains the heartbeat connection between the parent and child servers.
When you want to balance the server load between servers a and b, these are the common scenarios:
Parent server A is managing more child servers than parent server B
Parent server A becomes overloaded and you want to reduce the load and transfer some child servers to parent server B
Use the Parent Control Manager Settings screen to unregister a child server from a parent server.
The Parent Control Manager Settings screen appears. | http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-2/ch_ag_child_server_mgmt/child_server_reg_unreg/unregister_child_server.aspx | 2019-06-16T05:37:40 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.trendmicro.com |
:
Atlassian Cache Servlet contains classes that facilitate a caching servlet
Unnamed - javax.servlet:servlet-api:jar:2.3
A library to give systems the ability to have plugins, make them more pluggable and hence add pluggability (it's late - that makes sense in my head).
Commons.Lang, a package of Java utility classes for the classes that are in java.lang's hierarchy, or are considered to be so standard as to justify existence in java.lang.{pom.artifactId.substring(8)}/
Unnamed - dom4j:dom4j:jar:1.4
Unnamed - jaxen:jaxen:jar:1.0-FCS
Unnamed - saxpath:saxpath:jar:1.0-FCS
Unnamed - msv:msv:jar:20020414
Unnamed - relaxngDatatype:relaxngDatatype:jar:20020414
Unnamed - isorelax:isorelax:jar:20020414
Unnamed - xerces:xercesImpl:jar:2.6.
Seraph is a Servlet security framework for use in J2EE web applications.
Unnamed - opensymphony:propertyset:jar:1.3
Unnamed - osuser:osuser:jar:1.0-dev-log4j-1.4jdk-7Dec05
Unnamed - webwork:webwork:jar:12Dec05-jiratld
POM was created from deploy:deploy-file
Unnamed - backport-util-concurrent:backport-util-concurrent:jar:3.0
POM was created from deploy:deploy-file
Commons-IO contains utility classes, stream implementations, file filters, file comparators and endian classes.
Unnamed - commons-collections:commons-collections:jar:3.1
Types that extend and augment the Java Collections Framework.
Unnamed - log4j:log4j:jar:1.2.7
DWR is easy Ajax for Java.
Commons Logging is a thin adapter allowing configurable bridging to other, well known logging systems. | https://docs.atlassian.com/atlassian-cache-servlet/0.11-SNAPSHOT/dependencies.html | 2019-06-16T06:04:37 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.atlassian.com |
Import issue links
This page describes advanced JIRA and eazyBI integration. If you need help with this please contact eazyBI support.
Available from the eazyBI version 4.1.
On this page:
Configuration
If you would like to import selected issue links into eazyBI you need to define additional calculated custom fields using advanced settings for custom fields.
At first, you need to give a name for this issue link calculated custom field, for example:
[jira.customfield_feature] name = "Feature"
Then specify if it will be an
outward_link or
inward_link from an issue and specify the name of this link, for example:
outward_link = "is a feature for"
You can specify also several outward or inward links, for example:
outward_link = ["is a feature for", "is an theme for"]
If issues with different types can be linked using the specified links then optionally you can also limit that only issues with specific issue type will be selected:
issue_type = "Feature"
or several issue types:
issue_type = ["Feature", "Theme"]
If you would like to update this calculated issue link field for sub-tasks from their parent issues then specify:
update_from_issue_key = "parent_issue_key"
If you would like to import this custom field as a dimension (that means not just as a property) then add
dimension = true
If there can be several linked issues with the specified link type and you would like to import them as a multi-value dimension, then specify:
multiple_values = true
After you have saved the advanced settings with this issue link field definition then go to the Source Data tab and see if you have this custom field in the list of available custom fields.
If you would like to test if the field is defined correctly then select it and click Add custom JavaScript code and test it with some issue key. Expand fields and see if the corresponding customfield_NAME field contains the linked issue key that you are expecting.
Examples
Features and Themes
This is an example where issues are grouped into features (by linking issues to features) and then features are grouped into themes (by linking features to themes):
[jira.customfield_feature] name = "Feature" outward_link = "is a feature for" issue_type = "Feature" update_from_issue_key = "parent_issue_key" dimension = true [jira.customfield_theme] name = "Theme" outward_link = "is a theme for" issue_type = "Theme" update_from_issue_key = "customfield_feature" dimension = true
In this example, all issues (with their subtasks) within features will also be related to the corresponding theme. At first,
customfield_feature will be updated for sub-tasks using the
parent_issue_key field (which stores the parent issue key for sub-tasks). And then
customfield_theme will be updated for parent issues and sub-tasks using the
customfield_feature.
Bugs
If you have many bugs that can be related to one story or feature and you would like to see by bug which issues it is related to then you can import them as a multi-value dimension:
[jira.customfield_bugs] name = "Bugs" outward_link = "relates to" issue_type = "Bug" dimension = true multiple_values = true
Troubleshooting
There might be cases when issue link configuration parameters seems not working as expected. Here follow hints which might help faster find out the reasons why issue link information is not imported in eazyBI correctly.
- Check if you have correctly used inward_link or outward_link direction of the link.
The general rule: check the link name in the issue screen you consider as a "parent" issue (for instance, if you want to get all defects logged against a task, then check the link name in the task issue screen). Then go to Jira administration and check whether this name is for inward on outward endpoint of the link. Keep in mind that link name could differ accordingly to its direction!
- You need to use the correct name (all spaces, upper/lower case) of the inward or outward link configured in Jira. To avoid misspellings, it is a good idea to copy the name from the Jira configuration screen. It is even a better idea to copy the link name from the edit screen since it will show some possible leading or trailing spaces which might accidentally be entered in the link name:
You also can check the specific issue to see if the issue link is used as intended by the configuration.
For the example above, following settings should be used to create a correct custom field for the hierarchy (mind the space in inward link name!):
[jira.customfield_epic_feature] name = "Feature" inward_link = "Parent Feature " issue_type = "Feature" update_from_issue_key = โโฆโ
- Whenever you change the advanced settings for a custom field or hierarchy it is recommended to either do the full reimport of the data in the account, or to clear the previously loaded custom field configuration (run the import with the custom field un-checked) and load it again (run the import with the custom field checked). | https://docs.eazybi.com/eazybijira/data-import/advanced-data-import-options/import-issue-links | 2019-06-16T04:56:47 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.eazybi.com |
Contents
Digital Administration
Refer to the Digital Administration product page for the latest information.
UTF-8 in Classification and Training Servers
Classification and Training Servers implement UTF-8 support as follows:
- IF the server finds the JVM property โDgenesys.mcr.stdserverex.file.encoding=UTFโ8
- THEN the server configures all connections with Genesys servers and clients as UTF-8.
- The server does not change any JVM settings. To output non-ASCII characters to log files correctly, you must manually set the JVM property โDfile.encoding=UTFโ8.
- IF the JVM property โDgenesys.mcr.stdserverex.file.encoding is not found, and the JVM option โDfile.encoding=UTFโ8 is found
- THEN the server and the whole JVM work in UTF-8 mode, including all connections with Genesys servers and clients. The server does not change any JVM settings
- ELSE IF the server is informed that Configuration Server uses UTF-8
- THEN the server configures all connections with Genesys servers and clients as UTF-8.
- This does not change any JVM settings. To output non-ASCII characters to log files correctly, you must manually set the JVM property โDfile.encoding=UTF-8.
- ELSE the server does nothing.
Note also the following:
- To display data encoded as UTF-8 on Windows, you must adjust the Windows console as follows:
- Set it to a non-raster font capable of showing non-ASCII symbols.
- Set the console's code page with the command chcp 65001.
- If an application uses Platform SDK to connect with a Genesys server, and the application and the server have different localization settings, some manual adjustment of the configuration may be required.
eServices Manager
eServices Manager allows users to create and store Knowledge Management objects (categories, screening rules, standard responses, and so on) in a multi-tiered hierarchical structure. At the time of release, Composer and Orchestration Server do not support this new hierarchical structure. For this reason, customers using release 8.1.4. or earlier of Composer and Orchestration Server are advised to not design their screening rules in multilevel hierarchy trees.
Knowledge Manager (legacy)
This section deals with the Knowledge Manager component that is a standalone Windows: | https://docs.genesys.com/Documentation/ES/8.5.1/Admin/KMgr | 2019-06-16T05:32:41 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.genesys.com |
Usage:
{{navigation}}
Description
{{navigation}} is a template-driven helper which outputs formatted HTML of menu items defined in the Ghost admin panel (Settings > Navigation). By default, the navigation is marked up using a preset template.
Default template
By default, the HTML output by including
{{navigation}} in your theme, looks like the following:
<ul class="nav"> <li class="nav-home nav-current"><a href="/">Home</a></li> <li class="nav-about"><a href="/about">About</a></li> <li class="nav-contact"><a href="/contact">Contact</a></li> ... </ul>
Changing The Template
If you want to modify the default markup of the navigation helper, this can be achieved by creating a new file at
./partials/navigation.hbs. If this file exists, Ghost will load it instead of the default template. Example:
<div class="my-fancy-nav-wrapper"> <ul class="nav"> <!-- Loop through the navigation items --> {{#foreach navigation}} <li class="nav-{{slug}}{{#if current}} nav-current{{/if}}"><a href="{{url{{label}}</a></li> {{/foreach}} <!-- End the loop --> </ul> </div>
The up-to-date default template in Ghost is always available here.
List of Attributes
A navigation item has the following attributes which can be used inside your
./partials/navigation.hbs template file...
- {{label}} - The text to display for the link
- {{url}} - The URL to link to - see the url helper for more options
- {{current}} - Boolean true / false - whether the URL matches the current page
- {{slug} - Slugified name of the page, eg
about-us. Can be used as a class to target specific menu items with CSS or jQuery.
These attributes can only be used inside the
{{#foreach navigation}} loop inside
./partials/navigation.hbs. A navigation loop will not work in other partial templates or theme files.
Examples
The navigation helper doesn't output anything if there are no navigation items to output, so there's no need to wrap it in an
{{#if}} statement to prevent an empty list. However it's a common pattern, as in Casper, to want to output a link to open the main menu, but only if there are items to show.
The data used by the
{{navigation}} helper is also stored as a global variable called
@blog.navigation. You can use this global variable in any theme file to check if navigation items have been added by a user in the Ghost admin panel.
{{#if @blog.navigation}} <a class="menu-button" href="#"><span class="word">Menu</span></a> {{/if}} | https://docs.ghost.org/api/handlebars-themes/helpers/navigation/ | 2019-06-16T05:00:52 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.ghost.org |
Describes two
conflicting changes that were made on opposite sides of a merge.
When performing a merge, there an implicit third revision is used to produce a three-way merge. This third revision
is the most recent shared ancestor between the two branches being merged. In the revisions on the incoming branch
("their" side) and destination branch ("our" side) after the common ancestor it is possible that the same files have
been modified in different ways. When this happens, the resulting conflict consists of the
change made on the destination branch (
"our" change) and the change made on the incoming
branch (
getTheirChange() "their" change).
Because the
conflicting changes only describe the changed paths and the
type of change, it is possible for both changes to be the same. Such a case indicates that the same type of change
was made differently to the same files. For example, the same file may be
modified in
different ways on both branches.
merge(PullRequestMergeRequest)
Retrieves "our" change, which describes the change that was made on the destination branch relative to a shared ancestor with the incoming branch.
Retrieves "their" change, which describes the change that was made on the incoming branch relative to a shared ancestor with the destination branch. | https://docs.atlassian.com/bitbucket-server/javadoc/6.2.0/api/reference/com/atlassian/bitbucket/content/Conflict.html | 2019-06-16T04:29:11 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.atlassian.com |
Support for Intel Coleto SSL chip based platforms
The following appliances ship with Intel Coleto chips:
- MPX 5901/5905/5910
- MPX 8905/8910/8920/8930
- MPX/SDX 26100-100G/26160-100G/26100-200G
- MPX/SDX 15020-50G/15030-50G/15040-50G/15060-50G/15080-50G/15100-50G
Use the โshow hardwareโ command to identify whether your appliance has Coleto (COL) chips.
> sh hardware Platform: NSMPX-8900 8*CPU+4*F1X+6*E1K+1*E1K+1*COL 8955 30010 Manufactured on: 10/18/2016 CPU: 2100MHZ Host Id: 0 Serial no: CRAC5CR8UA Encoded serial no: CRAC5CR8UA Done
Note: Secure renegotiation is supported on the back end for these platforms.
Limitations:
- DH 512 cipher is not supported.
- SSLv3 protocol is not supported.
- GnuTLS is not supported.
- ECDSA certificates with ECC curves P_224 and P521 are not supported (Not supported on platforms with Cavium chips also.)
- DNSSEC offload is not supported. (DNSSEC is supported in software but offload to hardware is not supported.) | https://docs.citrix.com/en-us/citrix-adc/13/ssl/support-for-mpx-5900-8900-platforms.html | 2019-06-16T06:28:52 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.citrix.com |
This article covers:
- Preferred Exporter
- Payable Invoice Status and Date
- Reimbursable and Non-reimbursable Expenses
- Default Vendor
- Payable Invoice Mapping
Preferred Exporter
You can select any admin on the policy to receive notifications in their Inbox when reports are ready to be exported.
Payable Invoice Status and Date
Here you select if you want your reports to be exported as Complete or In Progress.
For the date, you have three options to select:
- Date of last expense
- Submitted date
- Exported date
Reimbursable and Non-reimbursable Expenses
Both reimbursable and non-reimbursable reports are exported as payable invoices. If you have both Reimbursable and Non-Reimbursable expenses on a single report, we will create a separate payable invoice for each type.
Default Vendor
Here is where you select the default vendor to be applied to the non-reimbursable payable invoices. The full list of vendors from your FinancialForce FFA account will be available in the dropdown.
Payable Invoice Mapping
Summary
Expense Detail
Still looking for answers? Search our Community for more content on this topic! | https://docs.expensify.com/articles/1263287-financialforce-ffa-export-options | 2019-06-16T05:02:49 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/37433933/9268a33af7b9ea54a79fb90a/B.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/37434268/f0cb70c112cbe6d1a86936c5/C.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/37434302/f6ec1adc240cf27ab0894ab4/D.png',
None], dtype=object) ] | docs.expensify.com |
gamma-catยถ
gamma-cat () is an open data collection
and source catalog for TeV gamma-ray astronomy.
As explained further in the
gamma-cat docs, it provides two data products:
- the full data collection
- a source catalog with part of the data
Catalogยถ
The gamma-cat catalog is available here:
$GAMMAPY_DATA/catalogs/gammacat/gammacat.fits.gz: latest version.
To work with the gamma-cat catalog from Gammapy, pick a version and create a
SourceCatalogGammaCat:
from gammapy.catalog import SourceCatalogGammaCat filename = '$GAMMAPY_DATA/catalogs/gammacat/gammacat.fits.gz' cat = SourceCatalogGammaCat(filename)
TODO: add examples how to use it and links to notebooks.
Data collectionยถ
The gamma-cat data collection consists of a bunch of files in JSON and ECSV format, and thereโs a single JSON index file summarising all available data and containing pointers to the other files. (we plan to make a bundled version with all info in one JSON file soon)
It is available here:
$GAMMAPY_DATA/catalogs/gammacat/gammacat-datasets.json: latest version
To work with the gamma-cat data collection from Gammapy, pick a version and
create a
GammaCatDataCollection class:
from gammapy.catalog import GammaCatDataCollection filename = '$GAMMAPY_DATA/catalogs/gammacat/gammacat-datasets.json' gammacat = GammaCatDataCollection.from_index(filename)
TODO: add examples how to use it and links to notebooks. | https://docs.gammapy.org/0.12/catalog/gammacat.html | 2019-06-16T05:29:03 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.gammapy.org |
Cachable VoiceXML Content Test Cases
In the following test cases, maximum capacity was achieved within the constraints of specific thresholds. However, the system was also tested beyond the recommended capacity to determine the extent of performance degradation.
GVP can cache internal, compiled VoiceXML objects. Caching VoiceXML objects saves a significant amount of compilation time, resulting in less CPU usage. The VoiceXML_App1 application (see VoiceXML Application Profiles) was used for the test case in Figure: Port Density vs. CPU (VoiceXML_App2) and was based on the peak capacity indicated in Table: GVP VOIP VXML/CCXML Capacity Testing.
The more complex the VoiceXML content, the greater the benefit of having cachable content. The test case in Figure: Port Density vs. CPU (VoiceXML_App2) (below) is similar to the one in Figure: Port Density vs. CPU (VoiceXML_App1) (above), except that the more complex VoiceXML_App2 application was used (see VoiceXML Application Profiles).
In Figure: Port Density vs. CPU (VoiceXML_App1) and Figure: PD vs. CPU (VoiceXML_App2), the processing of cachable and non-cachable content are compared with the Media Control Platform using the same level of CPU consumption for both applications. The following results show the benefits of using cachable content:
CPU ConsumptionโMedia Control Platform at peak capacity:
- 15% less consumption than non-cached content using VoiceXML_App1.
- ~30% less consumption than non-cached content using VoiceXML_App2.
Port DensityโCPU consumption at same level for both applications:
- ~30-35% greater than non-cached content using VoiceXML_App1.
- ~50% greater than non-cached content using VoiceXML_App2.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/GVP/85/GVP85HSG/PerfPlanScaleCVCTC | 2019-06-16T04:37:16 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['/images/7/7e/PD_vs_CPU_%28VoiceXML_App1%29.png', None],
dtype=object)
array(['/images/5/59/PD_vs_CPU_%28VoiceXML_App2%29.png', None],
dtype=object) ] | docs.genesys.com |
Corporate card surcharges
GOV.UK Pay supports adding a surcharge when a user pays with a corporate credit or debit card. You can contact GOV.UK Pay support to ask the team to set this up for you.
Each surcharge is a flat amount added to a payment. If youโd like to discuss adding surcharges as percentages, please contact us.
You can set a different surcharge for each of the following 4 corporate card types:
non-prepaid credit cards
non-prepaid debit cards
prepaid credit cards
prepaid debit cards
In a payment journey, when a user enters a card number the GOV.UK Pay platform checks to determine if itโs one of the 4 corporate card types. If a surcharge is applicable, the platform will inform them that a surcharge of a certain amount will be added to the total.
If the platform cannot determine the exact type of a card being used, it will not apply a surcharge.
You can sign in to the GOV.UK Pay admin tool to see the payment amount and the surcharge amount listed separately.
If a payment has a surcharge applied, the full amount available for refund includes the surcharge. You can also use the partial refund feature to only refund the total amount excluding the surcharge.
If a surcharge is applied to a particular payment, you can see this in API response fields: | https://docs.payments.service.gov.uk/optional_features/corporate_card_surcharges/ | 2019-06-16T04:35:02 | CC-MAIN-2019-26 | 1560627997731.69 | [] | docs.payments.service.gov.uk |
Yup, we Slack.
If your team is a part of the over eight million daily active users of Slack, the TINYpulse integration is definitely for you! Enable survey delivery to let employees respond to their pulse directly in Slack, connect the Wins feed to increase visibility of your efforts, and bring recognition to the forefront of your organization with a dedicated channel for Cheers. Setup only takes a few minutes so your Slack integration with TINYpulse will be rockinโ and rollinโ before you know it.
Note: You must be an TINYpulse Engage Admin or a Super Admin in order to complete the configuration.<<
In this article
- Features
- Configure Slack for TINYpulse
- Enable survey delivery to Slack
- Configure the Wins Feed
- Set up a live Cheers Feed
- Configure Slack for multiple TINYpulse orgs
- Limitations
Features
The TINYpulse for Slack integration has three main features.
Survey delivery
Want higher response rates and better engagement with TINYpulse? TINYpulse has a bot to send your employees their regular pulse survey directly to their Slack app. Once you enable the configuration, the TINYpulse-bot will let employees know when they have a new survey ready and they can anonymously respond without even having to visit TINYpulse.
Wins feed
As employees are submitting feedback to TINYpulse and you're taking action to address these items, are you keeping employees in the loop? The Wins feed for Slack improves visibility and transparency tenfold by streaming suggestions, initiatives, and Wins directly to Slack for everyone to view and add their thoughts with a comment. And any comments on these items that are posted from Slack are also recorded in TINYpulse so the conversation isn't lost.
Note that anonymous suggestions are only sent to the Slack Wins feed if the Stream anonymous suggestion to the LIVEpulse feed option is enabled in Settings. If this option is disabled, employees can't see suggestions anywhere unless you choose to share them in a summary report.
Cheers feed
Another option to boost engagement and recognition is to stream your Cheers to a dedicated Slack channel. Employees can interact by adding their comments and emojis, continuing to spread sunshine throughout your organization. Comments are even posted to the original Cheers in TINYpulse so the receiver collects even more recognition for their efforts.
Configure Slack for TINYpulse
The first step in setting started is to connect your Slack workspace with TINYpulse. From there, you can enable survey delivery, the Wins feed, and / or the Cheers feed as you like, but this first step is a must.
1. Log in to TINYpulse and go to Users and Settings -> Integrations.
2. Find Slack in the list, click to open the configuration page, and select Connect.
3. Slack will verify your email address and request access to TINYpulse. From here, a list will automatically appear with all Slack teams that you belong to. Authorize the default team or click Change Teams to add a different one.
You've now enabled TINYpulse to access your company's Slack team, but you still have a few more steps to take to enable different functionality.
Enable survey delivery to Slack
After you've allowed TINYpulse Engage to access your Slack team (above), you'll see the Slack is now connected in the Settings -> Apps section of Engage. Follow these steps to allow the TINYpulse-bot to notify employees when there's a new survey and let them respond directly in Slack.
- From the configuration page in Users and Settings ->Apps -> Slack, click the dropdown list and select TINYpulse-bot.
2. Click Add and you're done.
Employees will now be notified by the TINYpulse-bot when the next survey becomes available depending on your timezone and survey cadence.
* Note that if you integrated Slack with TINYpulse before August 2018, you'll have to disconnect the integration and then reenable it following steps in the two sections above.
Configure the Wins feed
Remember that the Wins feed is a live stream of anonymous suggestions, initiatives, and Wins to Slack, where employees can add their comments which post to TINYpulse. Now that you've got the gist of it, here's how to set it up!
Before you start, you'll need to think about where to stream these items. While you can stream the Wins feed to any one of your public Slack channels like #general or #random, we recommend you create a new one specifically for this integration. Some options to get you thinking are TINYpulse, TINYpulse-feed, TINYfeed, etc.
So go on and make that decision, create a new public channel in Slack if needed, and let's get started.
- Select Wins Feed from the dropdown list.
- Choose the Slack channel for this live-feed.
- Click Add.
And that's all there is to it! If you created a new channel, don't forget to notify employees that they'll need to join because there isn't an automatic notification for this integration.
Set up a live Cheers feed
The last option for ultra TINYpulse engagement is the real-time Cheers feed. Since Cheers is likely to be the most frequently used TINYpulse feature in your org, we recommend setting up a fully dedicated channel for just Cheers. We don't think it's a great idea to use the same channel that you used for the Wins Feed because those items could get drowned out by Cheers, but it's completely up to you.
Similar to the Wins feed, configuration takes 30 seconds or less.
- Select Cheers Feed.
- Choose the Slack channel.
- Click Add to finish.
Head on over to your Slack feed and you'll now see Cheers rolling in for everyone's viewing pleasure. Pin Cheers, add reactions, and comment directly in the channel to further recognize the Cheers recipient.
Configure Slack for multiple TINYpulse orgs
You can still take advantage of this integration if your company is broken down into multiple TINYpulse orgs but share one single Slack workspace. From each TINYpulse org, just follow the steps above to configure TINYpulse for Slack, enable survey delivery, and set up a real-time Cheers feed. Note that this configuration must be done individually in TINYpulse for each org. Apologies for the tedium, but we promise it will be worth the small effort!!
Users belonging to multiple TINYpulse orgs with one Slack workspace
This integration allows for multiple TINYpulse orgs to connect to one Slack workspace, but doesn't allow users to respond to multiple surveys from multiple orgs while connected to one workspace.
So if you meet all three of these criteria, creating aSlack perfect storm, you cannot respond to any of your TINYpulse surveys via Slack.
- Multiple TINYpulse orgs
- One Slack workspace
- Receive surveys from more than one of these orgs
Please visit the TINYpulse web application to continue responding to your pulses..
Please sign in to leave a comment. | https://docs.tinypulse.com/hc/en-us/articles/115004750954-Configure-Slack-for-TINYpulse | 2019-06-16T04:56:39 | CC-MAIN-2019-26 | 1560627997731.69 | [array(['https://downloads.intercomcdn.com/i/o/36149768/5f5444fa406741f9f19ce193/Screen+Shot+2017-10-10+at+2.59.11+PM.png',
None], dtype=object)
array(['/hc/article_attachments/360011155613/Screen_Shot_2018-08-24_at_11.41.23_AM.png',
'Screen_Shot_2018-08-24_at_11.41.23_AM.png'], dtype=object)
array(['/hc/article_attachments/360015867433/Screen_Shot_2018-10-15_at_12.07.36_PM.png',
'Screen_Shot_2018-10-15_at_12.07.36_PM.png'], dtype=object)
array(['/hc/article_attachments/360011220274/Screen_Shot_2018-08-27_at_16.03.56.png',
'Screen_Shot_2018-08-27_at_16.03.56.png'], dtype=object)
array(['/hc/article_attachments/360015867493/Screen_Shot_2018-10-15_at_12.10.41_PM.png',
'Screen_Shot_2018-10-15_at_12.10.41_PM.png'], dtype=object)
array(['/hc/article_attachments/115005433213/Screen_Shot_2017-12-28_at_12.13.05_PM.png',
'Screen_Shot_2017-12-28_at_12.13.05_PM.png'], dtype=object)
array(['https://d2mckvlpm046l3.cloudfront.net/3f9260d6cce5a9ad8a2421d8f156e104066df04a/http%3A%2F%2Fd33v4339jhl8k0.cloudfront.net%2Fdocs%2Fassets%2F5637af3cc697910ae05f001a%2Fimages%2F59390f8b04286305c68cead7%2Ffile-dyrr27HQMn.png',
None], dtype=object)
array(['/hc/article_attachments/360015014474/Screen_Shot_2018-10-12_at_12.08.05_PM.png',
'Screen_Shot_2018-10-12_at_12.08.05_PM.png'], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/33982288/73c033c7e7eec60dd7d9e9f7/image.png',
None], dtype=object) ] | docs.tinypulse.com |
Lithoxylยถ
Lithoxyl is a next-generation instrumentation toolkit for Python applications, offering a semantic, action-oriented approach to logging and metrics collection. Lithoxyl integration is compact and performant, minimizing impact on codebase readability and system performance.
Sectionsยถ
- Lithoxyl Overview
- The Action
- The Logger
- The Sink
- The Sensible Suite
- The Logging Tradition
- Frequently Asked Questions
- Glossary | http://lithoxyl.readthedocs.io/en/latest/ | 2017-10-17T07:36:06 | CC-MAIN-2017-43 | 1508187820930.11 | [] | lithoxyl.readthedocs.io |
Note: Most user interface tasks can be performed in Edge Classic or the New Edge experience. For an overview, getting started topics, and release notes specific to the New Edge experience, see the docs.
Cache internals an environment cache that you create.
- Shared cache -- By default, your proxies have access to a shared cache included for environments you create. The shared cache works well for basic use cases.
You can work with the shared cache only by using caching policies. It can't be managed using the management API. You can have a caching policy use the shared cache by simply omitting. There is no limit to the number of caches you can create. You can have a caching policy use the environment cache by specifying the cache name in the policy's
<CacheResource>element.
Object size limit in cache
The maximum size of each cached object is 512 kb. For more information, see Managing cache limits.
In-memory and persistent cache levels
Both the shared and environment caches are built on a two-level system made up of an in-memory level and a persistent level. Policies interact with both levels as a combined framework. The relationship between the levels is managed by the system.
- Level 1 is an in-memory cache (L1) for fast access. Each message processing node data store).
- This cache is limited in that only entries of a certain size may be cached. See Managing cache limits.
- There is no limit on the number of cache entries. The entries are expired in the persistent cache only on the basis of expiration settings.
How policies use the cache
The following describes how Apigee Edge handles cache entries as your caching policies do their work.
- When a policy writes a new entry to the cache (PopulateCache or ResponseCache policy):
- The entry is written to the in-memory L1 cache on only the message processor that handled the request. If the memory limits on the message processor are reached before the entry expires, the entry is removed from L1 cache.
- The entry is also written to L2 cache, as long as it is no larger than 512 kb. Any entry larger than that is not written to L2 cache.
- When a policy reads from the cache (LookupCache or ResponseCache policy):
- The system looks first for the entry in the in-memory L1 cache of the message processor handling the request.
- If there's no corresponding in-memory entry, the system looks for the entry in the L2 persistent cache.
- If the entry isn't in the persistent cache:
- LookupCache policy: No value is retrieved from the cache.
- ResponseCache policy: The actual response from the target is returned to the client, and the entry is stored expires or is removed when message processor memory limits are reached.
- The broadcast also updates or deletes the entry in L2 cache.
Managing cache limits
Through configuration, you can manage some aspects of the cache.
- In-memory (L1) cache. Memory limits for your cache are not configurable. Limits are set by Apigee for each message processor that hosts caches for multiple customers.
In a hosted. There are no limits on the number of entries in the cache, though the size of each object is limited to 512 kb. Entries evicted from the in-memory cache remain in the persistent cache in keeping with configurable time-to-live settings.
Configurable optimizations
The following table lists settings you can use to optimize cache performance. You can specify values for these settings when you create a new environment cache, as described in Creating and editing an environment cache.
Help or comments?
- If something's not working: Ask the Apigee Community or see Apigee Support.
- If something's wrong with the docs: Send Docs Feedback
(Incorrect? Unclear? Broken link? Typo?) | http://docs.apigee.com/api-services/content/cache-internals | 2017-10-17T07:44:32 | CC-MAIN-2017-43 | 1508187820930.11 | [array(['http://d3grn7b5c5cnw5.cloudfront.net/sites/docs/files/cache_internals_imagemap4.png',
None], dtype=object) ] | docs.apigee.com |
Cameo Business Modeler 18.4 Documentation
This is the home page of Cameo Business Modeler documentation.
Cameo Business Modeler is based on the award-winning MagicDraw modeling platform. The solution retains all the best diagramming, collaboration, persistence, and documentation capabilities while offering more customized capabilities tailored to business modeling needs.
The documentation of Cameo Business Modeler is a package that includes the documentation of these products and plugins:
Introduces the main features of modeling tool: working with projects, UML 2 modeling and diagramming, collaboration capabilities, and many more core features.
Provides descriptions of BPMN diagrams and elements, plus introduces BPMN specific features as well as gives guidelines for modeling business processes. | https://docs.nomagic.com/display/CBM184/Cameo+Business+Modeler+Documentation | 2017-10-17T07:56:35 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.nomagic.com |
Fill Data Gaps
(RapidMiner Studio Core)
SynopsisThis operator fills the gaps (based on the ID attribute) in the given ExampleSet by adding new examples in the gaps. The new example will have null values.
Description
The Fill Data Gaps operator fills the gaps (based on gaps in the ID attribute) in the given ExampleSet by adding new examples in the gaps. The new examples will have null values for all attributes (except the id attribute) which can be replenished by operators like the Replace Missing Values operator. It is ideal that the ID attribute should be of integer type. This operator performs the following steps:
- The data is sorted according to the ID attribute
- All occurring distances between consecutive ID values are calculated.
- The greatest common divisor (GCD) of all distances is calculated.
- All rows which would have an ID value which is a multiple of the GCD but are missing are added to the data set. gaps in the ExampleSet are filled with new examples and the resulting_gcd_for_step_sizeThis parameter indicates if the greatest common divisor (GCD) should be calculated and used as the underlying distance between all data points. Range: boolean
- step_sizeThis parameter is only available when the use gcd for step size parameter is set to false. This parameter specifies the step size to be used for filling the gaps. Range: integer
- startThis parameter can be used for filling the gaps at the beginning (if they occur) before the first data point. For example, if the ID attribute of the given ExampleSet starts with 3 and the start parameter is set to 1. Then this operator will fill the gaps in the beginning by adding rows with ids 1 and 2. Range: integer
- endThis parameter can be used for filling the gaps at the end (if they occur) after the last data point. For example, if the ID attribute of the given ExampleSet ends with 100 and the end parameter is set to 105. Then this operator will fill the gaps at the end by adding rows with ids 101 to 105. Range: integer
Tutorial Processes
Introduction to the Fill Data Gaps operator
This Example Process starts with the Subprocess operator which delivers an ExampleSet. A breakpoint is inserted here so that you can have a look at the ExampleSet. You can see that the ExampleSet has 10 examples. Have a look at the id attribute of the ExampleSet. You will see that certain ids are missing: ids 3, 6, 8 and 10. The Fill Data Gaps operator is applied on this ExampleSet to fill these data gaps with examples that have the appropriate ids. You can see the resultant ExampleSet in the Results Workspace. You can see that this ExampleSet has 14 examples. New examples with ids 3, 6, 8 and 10 have been added. But these examples have missing values for all attributes (except the id attribute) which can be replenished by using operators like the Replace Missing Values operator. | https://docs.rapidminer.com/studio/operators/cleansing/missing/fill_data_gaps.html | 2017-10-17T07:50:20 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.rapidminer.com |
The APWG is pleased to present the 2007 Fall General Meeting. Please join us at this opportunity to bring yourself up-to-date on phishing's evolution across the globe. Count on two full days of presentations, panel discussions and in-depth round-tables.
At this two-day members-only meeting, the APWG will examine crimeware's evolution, the roles of Registrars, Registries and DNS in managing phishing attacks, public health approaches to managing the Botnet scourge, behavioral vulnerabilities and human factors that contribute to phishing's success as well as breaking news on Counter-eCrime tools and resources..
APWG Members and Non-Members Please Note: The sessions at the Fall General Meeting are open to APWG Members Only. APWG organizers will vet all registrants
that sign up for the conference. Interlopers will not be accomodated. If you haven't already, check membership rules and benefits at: Membership is open to qualified financial institutions, online
retailers, ISPs, the law enforcement community, security solutions providers and research institutions. Members at the individual level and above are eligible to attend
conferences.
REGISTRATION NOTICE : Registration for this General Members meeting is for the ONE event on October 2 and 3 and does NOT entitle the registree to attend the eCrime
Researchers Summit on October 4 and 5. To also attend that event will require a separate registration, see this page for details.
Dave Jevans
Chairman
APWG
Peter Cassidy
Secretary-General
APWG
Bassam Khan
Cloudmark
Steve Martin
Australian High Tech Crime Centre
Economic & Special Operations
Australian Federal Police
Dave Woutersen
Security Specialist
GOVCERT.NL
Jason Milletary
Internet Security Analyst
CERT
Nick Ianelli
Internet Security Analyst
CERT
Jeff Gennari
Internet Security Analyst
CERT
Amit Klein
Trusteer, Inc.
Jacomo Piccolini
Brazilian Academic Research Network CSIRT
CAIS/RNP
Afternoon Break and Kaffee Klatsch
APWG Operational Resources Session
Working with Law Enforcement Session
Joel Yusim
IT Project Manager
The Cyberpol Proposal
An eScotland Yard for Cybercrime?
Cst. Kathy Macdonald, CPP
Crime Prevention Unit
Calgary Police Service
Industry Collaborations at the Speed of eCrime:
A Colloquy and Call to Action by the National Cyber-Forensics & Training Alliance
Ron Plesco
CEO NCFTA
SSA Tom Grasso
FBI CIRFU at NCFTA
A Special NCFTA Presentation
SSA Mike Eubanks
FBI CIRFU at NCFTA
David Bonasso
Program Director, NCFTA
Moderator:
Dave Jevans
APWG Chairman
Breakfast
Domain Name System Policy Working Group
Presentations and Panel
This session will give an update on the activities of the Domain Name System Policy Working Group (DNSPWG). The four teams on the DNSPWG will give reports on their respective areas. These updates will include the status of the changes proposed to WHOIS by ICANN, progress in working with registries to suspend domain names used for phishing, best practices being prescribed for registrars and registries, statistics on the use of domain tasting in the phishing industry, an overview of DNSPWG's participation in the June 2007 ICANN meeting, and plans for the October ICANN meeting. In addition, there will be an update on the various documents recently published by this sub-committee.
Moderator:
Laura Mather, Ph.D.
MarkMonitor & DNSPWG Co-chair
Panelists:
Rod Rasmussen
InternetIdentity & APWG DNSPWG Co-chair
Mario Maawad
LaCaixa CSIRT
Pat Cain
Cooper Cain, Inc.
APWG Resident Research Fellow
Greg Aaron
Afilias
John L. Crain
Chief Technical Officer
I.C.A.N.N.
Dave Piscitello
SSAC Fellow
ICANN
David Maher
Senior Vice President
Law & Policy
Public Interest Registry
Mike Rodenbaugh
ICANN
Councilor
Generic Names Supporting Organization
APWG Roundtable
Botnets, Network Forensics and the Diplomatic Aspects of the Private Sector/Law Enforcement Interface in eCrime Suppression
Following Botnet Controllers Home:
Infiltrating and Monitoring eCrime Communications
Geopolitical and Diplomatic Aspects of eCrime Networks
Moderator
Randy Vaughn
Baylor University
Panelists:
Gary Warner
University of Alabama at Birmingham
Andre DiMino
ShadowServer
Mike Collins
CERT/NetSA
Don Blumenthal
Infragard - Michigan Chapter
Moderator:
Dave Jevans
Chairman, APWG
Panelists:
Dan Schutzer
Executive Chairman
Financial Services Technology Consortium
Financial Institutions & Transaction Space Strategies and Priorities
Dr Randy Vaughn
Graduate Faculty
Baylor University
Networtk Protection: Hygiene Strategies and Priorities
Craig Spiezle
Director
Microsoft
Desktop Protection:
Hygiene Strategies and Priorities
John L. Crain
Chief Technical Officer
I.C.A.N.N.
Dr. Lorrie Cranor
Cylab
Carnegie Mellon University
Security Usability and User Behavior Strategies and Priorities
Gary Warner
University of Alabama at Birmingham
Private Sector Response Strategies and Priorities
SSA Tom Grasso
FBI CIRFU at NCFTA
APWG Field Trip to the NCFTA Labs
APWG Program National Cyber Forensics and Training Alliance has arranged for APWG conferees to tour their Pittsburg laboratories
Birds of a Feather Sessions at the NCFTA
Crimeware and Crimeware-Spreading URL Reporting and Data Sharing
Botnet Data Reporting and Data Sharing
APWG eCrime Fighters Night Out
This year's General Meeting.
Local Attractions
* Heinz Field & PNC Park
* Andy Warhol Museum
* Phipps Conservatory
* Kennywood Amusement Park
* Sandcastle Water Park
* Mellon Arena
* The Waterfront Shops & Restaraunts
* Station Square Freight House Shops
* Mt. Washington & Duquense Incline
* Shadyside Shopping District
* Soldier's & Sailors Hall
* Southside Nightlife District at 5:00 PM EST. Please mention
"APW" when registering for this special rate. After September 14,.
APWG is offering an opportunity to build relationships while marketing your company to a targeted audience of communications service, technology companies, security companies,
regulatory and law enforcement officials and financial institutions during its Fall 2007 General Meeting in October..
For more information on sponsoring oportunities please contact APWG Deputy Secretary-General Foy Shiver at [email protected]. 2500 members and more than 1600 member companies, police and government agencies worldwide was
officially established as an independent organization in June 2004, controlled by its executives and board of directors and an advisory steering committee. | http://docs.apwg.org/events/2007_generalMeeting.html | 2017-10-17T07:34:14 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.apwg.org |
_19<<
- From the top menu, select Scene > Scene Settings.
The Scene Settings dialog box opens.
- In the Resolution tab, Choose a resolution from the list or enter a new one.
- Set your frame rate.
- Click OK..
_26<<.
_30<<
Once the layout or animatic is set, you can import the background elements.
There are two types of background elements you can import: bitmap and vector-based.
.
_36<<
-.
_39<<
. | https://docs.toonboom.com/help/harmony-12/essentials/Content/_CORE/_Workflow/017_Paperless_Animation/001_H1_Preparation.html | 2017-10-17T07:49:55 | CC-MAIN-2017-43 | 1508187820930.11 | [array(['../../../Resources/Images/_ICONS/Home_Icon.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePremium.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageAdvanced.png',
'Toon Boom Harmony 12 Stage Advanced Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stageEssentials.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/controlcenter.png',
'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/scan.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/_Skins/Activation.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/_ICONS/download.png', None],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_scene_setting_res.png',
'Scene Setting dialog box'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_scene_setting_fps.png',
'Setting the frame rate'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_scene_length.png',
'Set Scene Length dialog box'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_scene_length2.png',
'Setting new scene length'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_animatic_concept.png',
'An imported animatic'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_importanimatic3.png',
'Select QuickTime Movie dialog box'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_background_concept.png',
'Imported background elements'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Sketch/HAR11/Sketch_addLayers.png',
None], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_createelementsconcept.png',
None], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Paperless/HAR11_paperless_drawingconcept.png',
'All drawings in the Xsheet with the same name are linked to the same original file'],
dtype=object)
array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif',
'Closed'], dtype=object) ] | docs.toonboom.com |
Controlling Mule From Startup
This topic describes how to control Mule ESB 3 using the Java Service Wrapper, as well as passing arguments to the JVM to control Mule.
Understanding the Java Service Wrapper
When you run the
mule command, it launches the
mule.bat or
mule.sh script in your
MULE_HOME/bin directory. These scripts invoke the Java Service Wrapper (), which controls the JVM from your operating system and starts Mule. The Wrapper can handle system signals and provides better interaction between the JVM and the underlying operating system. It also provides many advanced options and features that you can read about on the Wrapper website.
The Java Service Wrapper allows you to run Mule as a UNIX daemon, and it can install (or remove) Mule as a Windows NT Service.
Passing Additional Arguments to the JVM to Control Mule
The Wrapper provides several properties you can set as described here. will. | https://docs.mulesoft.com/mule-user-guide/v/3.2/controlling-mule-from-startup | 2017-10-17T07:27:26 | CC-MAIN-2017-43 | 1508187820930.11 | [] | docs.mulesoft.com |
N.
What is package ID prefix reservation?
Package owners are able to reserve and protect their identity by reserving ID prefixes. Package consumers are provided with additional information when consuming packages that the package they are consuming are not deceptive in their identifying properties. Read on to see how ID prefix reservation works in details..
The below section describes the feature in detail, as well as the more advanced functionality. will only appear will see the below visual indicators on the NuGet.org gallery and in Visual Studio 2017 version 15.4 or later:
NuGet.org Gallery
Visual Studio
ID prefix reservation application process
To apply for a prefix reservation, follow the below steps.
- Review the acceptance criteria for prefix ID reservation.
- Determine the namespaces will be notified of acceptance or rejection (with the criteria that caused rejection). We may need to ask additional identifying questions to confirm owner identity.
ID prefix reservation criteria
When reviewing any application for ID prefix reservation, the NuGet.org team will evalaute the application against the below criteria. Not all criteria needs to be met for a prefix to be reserved, but the application may be denied if there is not substantial evidence of the criteria being met (with an explanation given):
- Does the package ID prefix properly and clearly identify the package owner?
- Are a significant number of the packages that have already been submitted by the owner under the package ID prefix?
- Is the package ID prefix something common that should not belong to any individual owner or organization?
- Would not reserving the package ID prefix cause ambiguity and confusion for the community?
- Are the identifying properties of the packages that match the package ID prefix clear and consistent (especially the package author)?
3rd party feed provider scenarios
If a 3rd party feed provider is interested in implementing their own service to provide prefix reservations, you can do so by modifying the search service in the NuGet v3 feed providers. The addition in the feed search service is to add the verified property, with examples for the v3 feeds below. The NuGet client will not support the added property in the v2 feed.
v3 search service example
"data": [ { "@id": "", "@type": "Package", "registration": "", "id": "MySql.Data.Entity", "version": "6.9.9", "description": "Entity Framework 6.0 supported", "summary": "", "title": "MySql.Data.Entity", "iconUrl": "", "licenseUrl": "", "projectUrl": "", "tags": [], "authors": [], "totalDownloads": 434685, "verified": true, "versions": [] }, ] | https://docs.microsoft.com/en-us/nuget/reference/id-prefix-reservation | 2017-10-17T08:01:16 | CC-MAIN-2017-43 | 1508187820930.11 | [array(['media/nuget-gallery-reserved-prefix.png', 'NuGet.org Gallery'],
dtype=object)
array(['media/visual-studio-reserved-prefix.png', 'Visual Studio'],
dtype=object) ] | docs.microsoft.com |
Subform form field type
From Joomla! Documentation
The subform form field type provides a method for using XML forms inside one another or reuse forms inside an existing form. If attribute multiple is set to true then the included form will be repeatable.
The Field has two "predefined" layouts for displaying the subform as either a table or as a div container, as well as support for custom layouts.
An example XML field definition for single mode:
<field name="field-name" type="subform" formsource="path/to/exampleform.xml" label="Subform Field" description="Subform Field Description" />
An example XML field definition for multiple mode:
<field name="field-name" type="subform" formsource="path/to/exampleform.xml" multiple="true" label="Subform Field" description="Subform Field Description" />
Example XML of exampleform.xml
<?xml version="1.0" encoding="UTF-8"?> <form> <field name="example_text" type="text" label="Example Text" /> <field name="example_textarea" type="textarea" label="Example Textarea" cols="40" rows="8" /> </form>
An example XML of exampleform.xml with fieldsets
<?xml version="1.0" encoding="UTF-8"?> <form> <fieldset name="section1" label="Section1"> <field name="example_text" type="text" label="Example Text" /> <field name="example_textarea" type="textarea" label="Example Textarea" cols="40" rows="8" /> </fieldset> <fieldset name="section2" label="Section2"> <field name="example_list" type="list" default="1" class="advancedSelect" label="Example List"> <option value="1">JYES</option> <option value="0">JNO</option> </field> </fieldset> </form>
The subform XML may also be specified inline as an alternative to placing the subform XML in a separate file. The following example illustrates this:
<?xml version="1.0" encoding="UTF-8"?> <field name="field-name" type="subform" label="Subform Field" description="Subform Field Description" multiple="true" min="1" max="10" > <form> <field name="example_text" type="text" label="Example Text" /> <field name="example_textarea" type="textarea" label="Example Textarea" cols="40" rows="8" /> </form> </field>
Field attributes:
- type (mandatory) must be subform.
- name (mandatory) is the unique name of the field.
- label (mandatory) (translatable) is the descriptive title of the field.
- description (optional) (translatable) is text that will be shown as a tooltip when the user moves the mouse over the drop-down box.
- required (optional) The field must be filled before submitting the form.
- message (optional) The error message that will be displayed instead of the default message.
- default (optional) is the default value, JSON string.
- formsource (mandatory) the form source to be included. A relative path to the xml file (relative to the root folder for the installed Joomla site) or a valid form name which can be found by JForm::getInstance().
- multiple (optional) whether the subform fields are repeatable or not.
- min (optional) count of minimum repeating in multiple mode. Default: 0.
- max (optional) count of maximum repeating in multiple mode. Default: 1000.
- groupByFieldset (optional) whether to group the subform fields by its fieldset (true or false). Default: false.
- buttons (optional) which buttons to show in multiple mode. Default: add,remove,move.
- layout (optional) the name of the layout to use when displaying subform fields.
- validate (optional) should be set to SubForm (note that this is case-sensitive!) to ensure that fields in the subform are individually validated. Default: Fields in the subform are not validated, even if validation rules are specified.
Available layouts:
- joomla.form.field.subform.default render the subform in a div container, without support of repeating. Default for single mode.
- joomla.form.field.subform.repeatable render the subform in a div container, used for multiple mode. Support groupByFieldset.
- joomla.form.field.subform.repeatable-table render the subform as a table, used for multiple mode. Supports groupByFieldset. By default each field is rendered as a table column, but if groupByFieldset=true then each fieldset is rendered as a table column.
Be aware
If your field in the subform has additional JavaScript logic then it may not work in multiple mode, because do not see the fields which added by the subform field dynamically. If it happened then you need to adjust your field to support it. Next example may help:
jQuery(document).ready(function(){ ... here the code for setup your field as usual... jQuery(document).on('subform-row-add', function(event, row){ ... here is the code to set up the fields in the new row ... }) });
Because of this some extra Joomla! fields may not work for now.
Fields Validation and Filters
The subform form field does not provide the Validation and Filters for child fields.
Addition: Since a security fix in Joomla 3.9.7 the
filter="example" attributes in subform child fields are supported and the fields will be validated; but NOT in custom form fields that extend the
JFormFieldSubform class. You have to adapt such custom fields yourself!
Beware!
All extensions that use subform fields MUST add an attribute
filter to their subform child fields of type
editor,
textarea,
text (maybe others, too) since Joomla 3.9.7 like it's common for "normal" JForm fields, if you want to allow HTML input. Otherwise the validation falls back to STRING, which is the common behavior for "normal" JForm fields. Examples:
filter="safehtml" filter="JComponentHelper::filterText" filter="raw" (bad decision in most cases)
Example
Problem
After adding new rows selects are not "chosen".
Solution
Here is an example how to reinit jQuery Chosen on newly added repeated rows:
jQuery(document).ready(function(){ jQuery(document).on('subform-row-add', function(event, row){ jQuery(row).find('select').chosen(); }) });
Or a PHP snippet to be used in e.g. your plugin in **onBeforeCompileHead** method or in your component view.
$doc = JFactory::getDocument(); $js = ' jQuery(document).on(\'subform-row-add\', function(event, row){ jQuery(row).find(\'select\').chosen(); }) '; $doc->addScriptDeclaration($js);
So newly added rows now are "chosen" now | https://docs.joomla.org/Subform_form_field_type | 2021-01-15T21:19:37 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.joomla.org |
Some New Relic accounts are able to create a more complex account structure with master accounts and sub-accounts. switch between accounts in the UI, see Account access.
Requirements:
- Pro or Enterprise pricing tier.
- Your must be on our original user model (if you're on our New Relic One user model, you need assistance from New Relic).
If you're able to create sub-accounts, you can do that with these instructions:
- Go.
Set up accounts with SAML SSO.
Add sub-account users
Requirements:
- Pro or Enterprise pricing tier.
- Your must be on our original user model (if you're on our New Relic One user model, you need assistance from New Relic).
In general, permissions for a user on a master account are automatically inherited for that user on sub-accounts. However, in order to receive alert notifications or weekly email reports, or to hide specific applications from a sub-account, they must be explicitly added to the sub-account.: account dropdown > Account settings > Account > Summary.
- From the master account's Sub-accounts list, select the sub-account's link.
- From the sub-account's Active users section, select Add user, fill in the user's email, name, role, and title as appropriate, and then select Add this user.
- Optional: Change the Owner or Admin roles for the sub-account as appropriate.
Update sub-account applications
Requirements:
- Pro or Enterprise pricing tier.
- Your must be on our original user model (if you're on our New Relic One user model, you need assistance from New Relic).
- Must be Owner or Admin.: account dropdown > Switch account > (select a sub-account).
- Go to one.newrelic.com > APM.
- From the sub-account's Applications index, select the app's gear [gear icon] icon, then select Manage alert policy. | https://docs.newrelic.co.jp/docs/accounts/accounts-billing/account-structure/mastersub-account-structure | 2021-01-15T21:40:15 | CC-MAIN-2021-04 | 1610703496947.2 | [array(['https://docs.newrelic.co.jp/sites/default/files/thumbnails/image/Master-Sub_Hierarchy.png',
'Master-Sub_Hierarchy.png Master-Sub_Hierarchy.png'], dtype=object) ] | docs.newrelic.co.jp |
The Mobile Components Module for Game Creator provides additional features for Game Creator that are not available in the core product. This Module contains everything Mobile and supports Android and iOS devices. With this Module you get 38 new configurable Actions. โก 6 x Mobile Camera Actions โก 6 x Gyroscope/Accelerometer Actions for Movement and Camera โก 4 x Haptics (Vibration) Actions โก 15 x TouchStick Actions for Movements and Camera โก 7 x Mobile Utility Actions
You also get 8 new configurable Conditions.
โก 4 x General Mobile Conditions โก 2 x Android Conditions โก 2 x iOS Conditions
But wait, there is more. You also get 4 new configurable Triggers for touch Gestures.
โก On Pinch โก On Rotate โก On Swipe (8 way) โก On Tap
All features are activated and controlled using Game Creator, no coding is required. With this Asset, you also get: โก 9 x Detailed Example Scenes โก 9 x New TouchStick Prefabs โก 2 x Bonus Scripts Documentation and Tutorials can be found at docs.piveclabs.com. These additional Actions are an essential extension for using Game Creator with Mobile Devices. They will not work without Game Creator being installed first. Make your Mobile game different and exciting using these unique Actions. | https://docs.piveclabs.com/game-creator/mobile-components-for-game-creator | 2021-01-15T20:03:54 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.piveclabs.com |
[โ][src]Crate paris
Simple way to output beautiful text in your CLI applications. Only limit is your imagination.
How to use
use paris::Logger; let mut log = Logger::new(); log.info("It's that simple!");
Simple api
// You can have icons at the start of your message! log.info("Will add โน at the start"); log.error("Will add โ at the start"); // or as macros info!("Will add โน at the start"); error!("Will add โ at the start");
See the Logger struct for all methods
Chaining
All methods can be chained together to build more intricate log/message combinations, in hopes of minimizing the chaos that every log string becomes when you have to concatenate a bunch of strings and add tabs and newlines everywhere.
log.info("this is some info") .indent(4).warn("this is now indented by 4") .newline(5) .success("and this is 5 lines under all other messages");
Customisation
Outputting text is cool. Outputting text with a colored icon at the start is even cooler! But this crate is all about customisation, about making the logs feel like home, if you will. Included in the crate are a variety of keys you can use to colorize your logs just the way you want them to be.
log.info("I can write normal text or use tags to <red>color it</>"); log.warn("Every function can contain <on-green><black>tags</>"); log.info("If you don't write them <what>correctly</>, you just get an ugly looking tag");
There's a key for all colors supported by the terminal
(white, black, red, blue, magenta, etc.)
If you add the word
on to any of those colors, it becomes the
background color instead
(on red, on blue, on green).
// How useful... log.info("<on-red> This has red background </>");
Maybe you'd like to use your terminals brighter colors, if that's the case
you just have to add
bright to your tag. Makes sense.
log.info("<blue><on-bright-red> This text is blue on a bright red background</> it's a pain");
If you feel like writing a lot of colors by hand is too tedious, or if you know you're going
to be using the same combination of colors over and over again you can create a
custom style
that encapsulates all those colors.
log.add_style("lol", vec!["green", "bold", "on-bright-blue"]); // '<lol>' is now a key that you can use in your strings log.info("<lol>This is has all your new styles</>");
See the README for a full list of keys if you're not feeling confident in your ability to name colors. It happens.
Resetting
You've probably seen the
</> tag in the above logs. It's not there to
"close the previously opened tag" no no. You can open as many tags as you want
and only use
</> once, it's just the "reset everything to default" tag, You might
decide you don't ever want to use it. It's up to you.
However, resetting everything to default might not be what you want. Most of the time
it'll be enough, but for those times when it isn't there are a few other tags such as:
<///>only resets the background
<//>only reset the foreground
Macros
With the macros feature enabled, you get access to macro equivalents of the logger functions.
Advantages of using macros:
- You don't have to instantiate the logger
Logger::new()
- Simple to write
- Can format parameters like
print!and
println!
Disadvantages of using macros:
- Can't chain calls
- Manual newlines and tabs with
\nand
\t
- There's no loading animation for macros
You get to decide whether you want to use macros or not.
Every macro has the same functionality as its
Logger
equivalent. Colors and icon keys work just the same.
See the Logger struct for all methods and their macro equivalents | https://docs.rs/paris/1.5.7/paris/ | 2021-01-15T20:46:03 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.rs |
Monitoring and adjusting caching
Use nodetool to make changes to cache options and then monitor the effects of each change.
Make changes to cache options in small, incremental adjustments, then monitor the effects of each change using
In the event of high memory consumption, consider tuning data caches.
The location of the cassandra.yaml file depends on the type of installation: | https://docs.datastax.com/en/cassandra-oss/2.2/cassandra/operations/opsMonitoringCache.html | 2021-01-15T21:51:26 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.datastax.com |
Yes they can, these details can be accessed from right within the student portal. Students are able to view all absences that have been recorded against them by clicking โAbsencesโ under the โClassingโ navigation item in the student portal. This feature can be switched off if necessary via the settings page in the Student Portal Administration site. | https://docs.edmiss.com/faq/faqs/can-students-see-their-absence-details-2/ | 2021-01-15T20:54:20 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.edmiss.com |
EDMISS requires TCP access on Port 22. If you are running anti-virus or internet security software, this may block EDMISS from communicating on this port.
Here are some suggestions to work around this problem:
- Give the EDMISS application permission to update. Adding the EDMISS application to your security programโs โsafeโ or โallowedโ list will usually resolve any connectivity problems that youโre having. Consult your individual programโs software manual, FAQ, or help site for details on how to configure your safe list.
- Close background programs before running the EDMISS client. Disabling your security software can instantly cure connectivity issues, but it can also leave you open to actual threats. If you opt to turn off your antivirus or firewall program, be sure to re-enable it when youโre done using the EDMISS application.
Below is a list of programs that may cause interference with EDMISS updates. Please note that this is not an exhaustive list.
- Anti-Virus Programs
- Avast | https://docs.edmiss.com/faq/faqs/what-programs-might-conflict-with-edmiss/ | 2021-01-15T21:15:46 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.edmiss.com |
UIElement.
Drag Over Event
Definition
public: virtual event DragEventHandler ^ DragOver;
// Register event_token DragOver(DragEventHandler const& handler) const; // Revoke with event_token void DragOver(event_token const* cookie) const; // Revoke with event_revoker.
You can initiate a drag-drop action on any UIElement by calling the StartDragAsync method. Once the action is initiated, any UIElement in the app can potentially be a drop target so long as AllowDrop is true on that element. Any elements that the drag-drop action passes over can handle DragEnter, DragLeave or DragOver.
DragOver Events and routed events overview.. | https://docs.microsoft.com/en-us/windows/winui/api/microsoft.ui.xaml.uielement.dragover?view=winui-3.0-preview | 2021-01-15T22:14:04 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.microsoft.com |
our infrastructure monitoring. APM agents with AWS Elastic Beanstalk
See the appropriate procedure to install AWS Elastic Beanstalk with your infrastructure monitoring for AWS host monitoring
To install infrastructure monitoring APM agents with other AWS tools
To install an APM agent with Amazon EC2, S3, RDS, CloudWatch, or OpsWorks, follow New Relic's standard procedures:
- Go agent install
- Java agent install
- .NET agent install
- Node.js agent install
- PHP agent install
- Python agent install
- Ruby agent install
For additional options to monitor your AWS hosts, see our documentation for setting up AWS integrations with New Relic.
For more help
Additional documentation resources include New Relic's resource page for AWS, including case studies, tutorials, data sheets, and videos. | https://docs.newrelic.co.jp/docs/accounts/install-new-relic/partner-based-installation/new-relic-aws-amazon-web-services | 2021-01-15T20:16:08 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.newrelic.co.jp |
ip_pool.exe: IP Pools
The
ip_pool.exe utility allows managing IP addresses within
resellersโ IP pools. With this utility you can addremove IP addresses
tofrom resellerโs IP pool, and set the IP address type.
Note: You cannot add IP addresses to or remove them from a network interface with enabled DHCP. To add or remove an IP address, disable DHCP for a given network interface.
Location
%plesk_cli%
Usage
ip_pool.exe <command> [<IP address>] [ <option_1> [<param>] [<option_2> [<param>]] ]
Example
The following command adds the 192.0.2.94 shared IP address to the IP pool of the JDoe reseller account:
plesk bin ip_pool.exe --add 192.0.2.94 -type shared -owner JDoe | https://docs.plesk.com/en-US/onyx/cli-win/60272/ | 2021-01-15T21:51:55 | CC-MAIN-2021-04 | 1610703496947.2 | [] | docs.plesk.com |
AsyncMysqlConnectionPool
An asynchronous MySQL connection pool
This class provides a mechanism to create a pool of connections to a MySQL client that can be utilized and reused as needed.
When a client requests a connection from the pool, it may get one that already exists; this avoids the overhead of establishing a new connection.
This is the highly recommended way to create connections to a MySQL
client, as opposed to using the
AsyncMysqlClient class which does not give
you nearly the flexibility. In fact, there is discussion about deprecating
the
AsyncMysqlClient class all together.
Interface Synopsis
class AsyncMysqlConnectionPool {...}
->__construct(array $pool_options): void
Create a pool of connections to access a MySQL client
->connect(string $host, int $port, string $dbname, string $user, string $password, int $timeout_micros = -1, string $extra_key = ""): Awaitable<AsyncMysqlConnection>
Begin an async connection to a MySQL instance
->connectWithOpts(string $host, int $port, string $dbname, string $user, string $password, AsyncMysqlConnectionOptions $conn_options, string $extra_key = ""): Awaitable<AsyncMysqlConnection>
->getPoolStats(): array
Returns statistical information for the current pool | https://docs.hhvm.com/hack/reference/class/AsyncMysqlConnectionPool/ | 2018-05-20T15:47:33 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.hhvm.com |
The SoS tool takes periodic backups of the Cloud Foundation racks in your environment. After you insert the imaged management switch in the physical rack, you can restore the backup configuration on the switch.
Prerequisites
Retrieve the following files.
Backup file of the failed management switch's configuration. This file is named cumulus-192.168.100.1.tgz.
The hms.tar.gz backup file of the rack on which the management switch is to be replaced.
For the location of these file within the SoS tool's output, see Back Up ESXi and Physical Switch Configurations.
Procedure
- Retrieve the password of the management switch to be replaced and note it down.
- SSH to the SDDC Manager Controller VM.
- Run the following command.
#/home/vrack/bin/lookup-password
- Unplug the management switch you are replacing.
- Note down the current connections to TOR switches, rack-interconnect switches, and hosts in the rack.
- Disconnect all host OOB connections to management switch.
- Remove the management switch from the rack.
- Use PuTTY to log into the management switch IP address 192.168.100.1 with username cumulus and password CumulusLinux! .
- Add the following line to the end of the /etc/dhcp/dhcpd.config file:
ping-check false;
If this line already exists in the file, leave it as is.
- Use WinSCP to copy the hms.tar.gz and cumulus-192.168.100.1.tgz files to the /home/cumulus directory of the new management switch.
- Use PuTTY to log into the management switch IP 192.168.100.1 with username cumulus and password CumulusLinux! .
- Type the following command.
sudo su
- Restore the backup configuration to the new switch.
- Change to the root directory.
cd /
- Unpack the contents of the hms.tar.gz file.
tar zxvf /home/cumulus/hms.tar.gz
- Unpack the contents of the cumulus-192.168.100.1.tgz file.
tar zxvf /home/cumulus/cumulus-192.168.100.1.tgz
The /etc/dhcp/dhcpd.conf file is restored.
- To ensure that new hosts added to the rack are not assigned IP addresses reserved for existing hosts, edit the DYNAMIC LEASES section of the dhcpd.conf file. For example, if you have 8 servers in the rack, specify the range as follows.
# DYNAMIC LEASES option routers 192.168.0.1; range 192.168.0.60 192.168.0.100;
If you have 15 servers in the rack, specify the range as follows.
# DYNAMIC LEASES option routers 192.168.0.1; range 192.168.0.70 192.168.0.100;
- Navigate to the dhcpd.leases file and note down the host_id, oob ip, and idrac mac address mapping.
- Install the replacement management switch into the rack and wire it according to the wiring connections of the previous switch. Refer to your notes from step 3.
- Change the password of the new management switch to the current password for your Cloud Foundation system's management switches, as obtained from the
lookup-passwordcommand.
- Reboot the new management switch.
- The Error loading rack details. Follow these steps to resolve this error. page on the Dashboard displays a message
- Collect the following information.
- Using the root account, SSH to the SDDC Manager Controller VM.
- Run the following script with Python 2.7.
#/opt/vmware/bin/python2.7 /opt/vmware/sddc-support/fru-mgmtsw.py
Wait for the script to complete.
- Reboot the management switch and the SDDC Manager (VRM) VM.
- Confirm that the switches and hosts are displayed correctly on the Physical Resources page. | https://docs.vmware.com/en/VMware-Cloud-Foundation/2.2/com.vmware.vcf.admin.doc_22/GUID-40BFD399-2D2D-4C7F-8B3F-937D01496457.html | 2018-05-20T15:50:11 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.vmware.com |
Decision Insight 20180319 Precomputings 101 Precomputings are an optimization to efficiently compute all aggregations that meet the eligibility criteria. They reduce the load of the server(s) and split the data process between eligible computations. Precomputing aggregates data on the fly as it is being stored in the product database and produces elementary aggregation contexts. As a result, the final computation of a precomputed aggregate is simply a decrease of the right contexts. Goal and principles of precomputing In Axway Decision Insight, configured attributes (vs. collected ones) are all computed by the deployment and stored. When an attribute is displayed in a dashboard or used as an input to compute another attribute, the value of this first attribute has already been computed and stored previously. This way, long computation chains are avoided. Computations are scheduled by batch, frequently enough for dashboard content to always be available in live mode. Nevertheless, a batch still requires scanning every input data necessary for the computation, which may involve a huge amount of data to process. The overhead induced by batches is usually high for aggregations, whether they are structural or temporal (aka. consolidation). What's more, several computed attributes often share most of their configuration, and sometimes only the aggregation function changes (SUM, AVG, STDDEV, COUNT, etc...). Precomputings are an optimization targeting exactly this latter kind of computed attributes. By processing data on the fly as it is being stored and by sharing aggregation contexts when only the aggregating function changes, precomputings make batches less sensitive to data volume and smoothen the batches runtime load. The precomputing optimization relies on a structure internally called cube. Cubes hold and maintain aggregation contexts on the fly. For more information, see the section about settings. Eligibility of a computed attribute for precomputing To be eligible for precomputings, an attribute must meet the following criteria: It must be: an aggregation โ many inputs on an entity to one output on another. or a consolidation โ many inputs along time for one entity to one output on the same entity. Its function must be one of the following: AND, AVG, COUNT, OR, RATIO, STDDEV, SUM, COUNT. Indicator input: Value (monovalued only), Age (for age, only AVG, STDDEV and SUM are eligible) or Instance (only COUNT is available). Carried by one entity only. Arrhythmic. The input must be on the same entity as the dimension or on a remote entity provided that the path length to the dimension is 1 and the relations are carried by the same entity as the input, and they are not multivalued-head. Time configurations: Either at reference time or on time range, but with the same configuration for every time configuration. Or correlated with one of the relations from the input to aggregate's dimensions Filters: Only by attribute, using a boolean attribute, eventually compared with a boolean constant (EQUALS or NOT EQUALS) The boolean attribute must be on the same entity as the input The constraints on the boolean are the same as those on the input The computed attribute is rhythmic. Monitoring Manage precomputings from the Computings and Precomputings screens. Computings In this screen, precomputings information is in the precompute column. The values for an attribute may be: N/A โ the computed attribute is not eligible for precomputings. Stop โ the computed attribute is already plugged on a precomputing. Clicking on this action will trigger the recomputation of all already computed values. Precompute โ the computed attribute is eligible but not plugged on a precomputing. Clicking on this action will trigger the recomputation of all already computed values. Precomputings This screen provides monitoring for active precomputings: Attribute names โ the attributes eligible for the cube Entity โ the dimensions of the aggregates managed by the precomputing Status โ what is the cube doing (idle, bootstrapping, processing events) Pending โ number of transactions remaining to be processed by the cube Processing โ number of transactions processed by the cube and remaining to be flushed (precomputed values are only available once flushed) Actions โ the trash bin is to delete the precomputing. It is equivalent to the Stop action in the Computings screen. Behavior of precomputings When an indicator configuration is created/updated, if an attribute is eligible for precomputing, a new cube will be created unless an eligible one already exists and the attribute computation will be plugged on it. On application import, cubes will be automatically created for imported attributes that are eligible for them. Precomputing is fully compatible with recomputing on correction feature. Related Links | https://docs.axway.com/bundle/DecisionInsight_20180319_allOS_en_HTML5/page/precomputings_101.html | 2018-05-20T15:51:51 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.axway.com |
DataSet and XmlDataDocument Synchronization
The ADO.NET DataSet provides you with a relational representation of data. For hierarchical data access, you can use the XML classes available in the .NET Framework. Historically, these two representations of data have been used separately. However, the .NET Framework enables real-time, synchronous access to both the relational and hierarchical representations of data through the DataSet object and the XmlDataDocument object, respectively.
When a DataSet is synchronized with an XmlDataDocument, both objects are working with a single set of data. This means that if a change is made to the DataSet, the change will be reflected in the XmlDataDocument, and vice versa. The relationship between the DataSet and the XmlDataDocument creates great flexibility by allowing a single application, using a single set of data, to access the entire suite of services built around the DataSet (such as Web Forms and Windows Forms controls, and Visual Studio .NET designers), as well as the suite of XML services including Extensible Stylesheet Language (XSL), XSL Transformations (XSLT), and XML Path Language (XPath). You do not have to choose which set of services to target with the application; both are available.
There are several ways:
Dim dataSet As DataSet = New DataSet ' Add code here to populate the DataSet with schema and data. Dim xmlDoc As XmlDataDocument = New XmlDataDocument(dataSet)
DataSet dataSet = new DataSet(); // Add code here to populate the DataSet with schema and data. XmlDataDocument xmlDoc = new XmlDataDocument(dataSet); the entire XML document even though the DataSet only exposes a small portion of it. (For a detailed example of this, see Synchronizing a DataSet with an XmlDataDocument.)
The following code example shows the steps for creating a DataSet and populating its schema, then synchronizing it with an XmlDataDocument. Note that the DataSet schema only needs to match the elements from the XmlDataDocument that you want to expose using the DataSet.
Dim dataSet As DataSet = New DataSet ' Add code here to populate the DataSet with schema, but not data. Dim xmlDoc As XmlDataDocument = New XmlDataDocument(dataSet) xmlDoc.Load("XMLDocument.xml")
DataSet dataSet = new DataSet(); // Add code here to populate the DataSet with schema, but not data. XmlDataDocument xmlDoc = new XmlDataDocument(dataSet); xmlDoc.Load("XMLDocument.xml");.
Dim xmlDoc As XmlDataDocument = New XmlDataDocument Dim dataSet As DataSet = xmlDoc.DataSet ' Add code here to create the schema of the DataSet to view the data. xmlDoc.Load("XMLDocument.xml")
XmlDataDocument xmlDoc = new XmlDataDocument(); DataSet dataSet = xmlDoc.DataSet; // Add code here to create the schema of the DataSet to view the data. xmlDoc.Load("XMLDocument.xml");
Another advantage of synchronizing an XmlDataDocument with a DataSet is that the fidelity of an XML document is preserved. If the DataSet is populated from an XML document using ReadXml, when the data is written back as an XML document using WriteXml it may differ dramatically from the original XML document. This is because the DataSet does not maintain formatting, such as white space, or hierarchical information, such as element order, from the XML document. The DataSet also does not contain elements from the XML document that were ignored because they did not match the schema of the Dataset. Synchronizing an XmlDataDocument with a DataSet allows the formatting and hierarchical element structure of the original XML document to be maintained in the XmlDataDocument, while the DataSet contains only data and schema information appropriate to the DataSet.
When synchronizing a DataSet with an XmlDataDocument, results may differ depending on whether or not your DataRelation objects are nested. For more information, see Nesting DataRelations.
In This Section.
Related Sections
Using XML in a DataSet
Describes how the DataSet interacts with XML as a data source, including loading and persisting the contents of a DataSet as XML data.
Nesting DataRelations
Discusses the importance of nested DataRelation objects when representing the contents of a DataSet as XML data, and describes how to create these relations.
DataSets, DataTables, and DataViews
Describes the DataSet and how to use it to manage application data and to interact with data sources including relational databases and XML.
XmlDataDocument
Contains reference information about the XmlDataDocument class.
See Also
ADO.NET Managed Providers and DataSet Developer Center | https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/dataset-datatable-dataview/dataset-and-xmldatadocument-synchronization | 2018-05-20T16:21:14 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Database Mirroring Operating Modes
This topic describes the synchronous and asynchronous operating modes for database mirroring sessions.
Note
For an introduction to database mirroring, see Database Mirroring (SQL Server).
Terms and Definitions
This section introduces a few terms that are central to this topic. whether to initiate an automatic failover. Unlike the two failover partners, the witness does not serve the database. Supporting automatic failover is the only role of the witness.
Asynchronous Database Mirroring (High-Performance Mode)
This section describes how asynchronous database mirroring works, when it is appropriate to use high-performance mode, and how to respond if the principal server fails.
Note
Most editions of SQL Server 2017 support only synchronous database mirroring ("Safety Full Only"). For information about editions that fully support database mirroring, see "High Availability (Always On)" in Editions and Supported Features for SQL Server 2016...
Note
Log shipping can be a supplement to database mirroring and is a favorable alternative to asynchronous database mirroring. For information about the advantages of log shipping, see High Availability Solutions (SQL Server). For information on using log shipping with database mirroring, see Database Mirroring and Log Shipping (SQL Server).
The Impact of a Witness on High-Performance Mode.
Note
For information about the types of quorums, see Quorum: How a Witness Affects Database Availability (Database Mirroring).
Responding to Failure of the Principal.
Note
If the tail-log backup failed and you cannot wait for the principal server to recover, consider forcing service, which has the advantage of maintaining the session state. Role Switching During a Database Mirroring Session (SQL Server).
Synchronous Database Mirroring (High-Safety Mode)..
Note
To monitor state changes in a database mirroring session, use the Database Mirroring State Change event class. For more information, see Database Mirroring State Change Event Class..
In This Section:
High-Safety Mode Without Automatic Failover
The following figure shows the configuration of high-safety mode without automatic failover. The configuration consists of only the two partners.
>>IMAGE Role Switching During a Database Mirroring Session (SQL Server).
High-Safety Mode with Automatic Failover_6<< Database Mirroring Witness and Quorum: How a Witness Affects Database Availability (Database Mirroring). Role Switching During a Database Mirroring Session (SQL Server)..
Note
If you expect the witness to remain disconnected for a significant amount of time, we recommend that you remove the witness from the session until it becomes available.
Transact-SQL Settings and Database Mirroring Operating Modes
This section describes a database mirroring session in terms of the ALTER DATABASE settings and states of the mirrored database and witness, if any. The section is aimed at users who manage database mirroring primarily or exclusively using Transact-SQL, rather than using SQL Server Management Studio.
Tip
As an alternative to using Transact-SQL, you can control the operating mode of a session in Object Explorer using the Mirroring page of the Database Properties dialog box. For more information, see Establish a Database Mirroring Session Using Windows Authentication .
*If the witness becomes disconnected, we recommend that you set WITNESS OFF until the witness server instance becomes available.
*:
SELECT mirroring_safety_level_desc, mirroring_witness_name, mirroring_witness_state_desc FROM sys.database_mirroring
For more information about this catalog view, see sys.database_mirroring (Transact-SQL).
Factors Affecting Behavior on Loss of the Principal Server
The following table summarizes the combined effect of the transaction safety setting, the state of the database, and the state of the witness on the behavior of a mirroring session on the loss of the principal server.
Related Tasks)
See Also
Monitoring Database Mirroring (SQL Server)
Database Mirroring Witness | https://docs.microsoft.com/en-us/sql/database-engine/database-mirroring/database-mirroring-operating-modes?view=sql-server-2017 | 2018-05-20T16:06:04 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['media/dbm-high-performance-mode.gif?view=sql-server-2017',
'Partner-only configuration of a session Partner-only configuration of a session'],
dtype=object)
array(['media/dbm-high-protection-mode.gif?view=sql-server-2017',
'Partners communicating without a witness Partners communicating without a witness'],
dtype=object)
array(['media/dbm-high-availability-mode.gif?view=sql-server-2017',
'The witness and two partners of a session The witness and two partners of a session'],
dtype=object) ] | docs.microsoft.com |
Decision Insight 20180319 How to create multi-dimensional datagrids Case You want to aggregate messages which depend on 2 dimensions that have no links between them. In this example, we want to look for anomalies which depend on โSiteโ and โCodeโ. How-to do that Create a datagrid and put the 2 dimensions, either as โRowsโ, โColumnsโ or โGlobal dimensionโ : Then click on โAdd valueโ, and โWould you like to bring attributes from another entityโ. In the next window, select each of your โPath from dimensionโ, and reach โAnomalieโ : Click on โDoneโ, you will be on the list of attributes of โAnomalieโ. You may then aggregate the appropriate attribute : After aggregations, you will have something similar : Tips If you find yourself with 3 dimensions on a datagrid, and only want to aggregate 2 dimensions, you just need to only specify 2 out of the 3 possible paths and this will create an aggregate on only the specified dimensions. Related Links | https://docs.axway.com/bundle/DecisionInsight_20180319_allOS_en_HTML5/page/how_to_create_multi-dimensional_datagrids.html | 2018-05-20T15:28:04 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.axway.com |
Fuji and Earlier Releases Product documentation for Fuji and earlier releases was provided on a Wiki site. The information previously provided on the Wiki has been saved into PDF files. Some examples and graphics depicted herein are provided for illustration only. No real association or connection to ServiceNow products or services is intended or should be inferred. You can download these PDFs to view and share in your environment. Content Map4 MB Get Started30 MB Use15 MB Administer75 MB Script13 MB Build10 MB Deliver111 MB Integrate25 MB Release Notes5 MB Customer Support5 MB All PDFs280 MB | https://docs.servicenow.com/bundle/archive/page/archive/fuji-archive.html | 2018-05-20T15:39:58 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.servicenow.com |
To use App Volumes with VHD In-Guest Operation mode, the machines where the App Volumes Manager and agents are installed require special permissions on the CIFS file share. Volumes Manager
Agents: Machines that receive App Volumes and writable volumes assignments
Capture Agents: Machines that are used for provisioning new App Volumes agents | https://docs.vmware.com/en/VMware-App-Volumes/2.12.1/com.vmware.appvolumes.user.doc/GUID-D402CC33-F3E1-46A6-9999-F0B3D0FB01B4.html | 2018-05-20T15:27:18 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.vmware.com |
Know more about your documents.
Highlight what matters.
When Memphisโ Commercial Appeal published Ernest Withers: Exposed, an extensive investigation which revealed that the civil rights photographer had been working as an FBI informant, they were able to point readers right to the passage that revealed his true identity.. | http://docs.ontario.ca/home | 2018-05-20T15:16:09 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['/images/home/entities_small.png', None], dtype=object)
array(['/images/home/timeline2.png', None], dtype=object)
array(['/images/home/note.png', None], dtype=object)
array(['/images/home/viewer.png', None], dtype=object)] | docs.ontario.ca |
- JetApps Documentation/
- WHMCS Addons /
- WHMCS Reply Above This Line /
- General Information /
- WHMCS Reply Above This Line :: Installation
NOTICE: Our WHMCS Addons are discontinued and not supported anymore.
Most of the addons are now released for free in github -
You can download them from GitHub and conribute your changes ! :)
WHMCS Reply Above This Line :: Installation
Step one โ download the hook.
First, you will need to login to your client area and download your copy of โWHMCS Reply Above This Lineโ.โ).
Step two โ Set your string
Our default string is โ*** Reply above this line ***โ, you can change that by editing the code under the settings section โ
/********************* Reply Above This Line Settings *********************/ function serverReplyAboveThisLine_settings() { return array( 'breakrow' => "*** Reply above this line ***", ); } /********************/
* This creates a smarty tag โ {$breakrow} that returns the defined value (*** Reply above this line***).
You will be able to use this smarty tag in the email tempaltes.
Step three โ Add our string to your email templates.
Use WHMCS Email template editor and add our smarty tag โ {$breakrow} to the first line of the email tempalate (this should come first, before everything).
A โSupport ticket replyโ template for example, should look like this โ
{$breakrow} {$ticket_message} ---------------------------------------------- Ticket ID: #{$ticket_id} Subject: {$ticket_subject} Status: {$ticket_status} Ticket URL: {$ticket_link} ----------------------------------------------
Thatโs all ! | https://docs.jetapps.com/whmcs-reply-above-this-line-installation | 2018-05-20T15:22:22 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.jetapps.com |
Overview of Windows Server 2003, Web Edition
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Overview of Windows Server 2003, Web Edition
Microsoftยฎ Windows Serverโข 2003, Web Edition, is a part of the Microsoftยฎ
Revolutionary Microsoft ASP.NET, for deploying Web services and applications rapidly
Remotely administered management, with an easy-to-use, task-driven, internationalized Web user interface (UI)
Web Distributed Authoring and Versioning (WebDAV), so that users can easily publish, manage, and share information over the Web
Additional wizards to make it easier for administrators to set up and manage secure authentication and Secure Sockets Layer (SSL) security
Improved CPU throttling, which helps organizations to allocate CPU resources on a per-site basis
Scalability through Network Load Balancingยฎ Windows Serverยฎ 2003, Standard Edition, or Microsoftยฎ Windows Serverโข 2003, Enterprise Edition, the feature limitations in this topic do not apply.
For the latest product information, see Windows Update at the Microsoft Web site. Windows Update is a catalog of items, such as drivers, patches, the latest Help files, and Internet products, that you can download to keep your computer up to date. | https://docs.microsoft.com/en-us/previous-versions/orphan-topics/ws.10/cc755761(v=ws.10) | 2018-05-20T16:55:06 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
Use ClusterCopy to Move Files to and from Nodes in a Windows HPC Cluster
Applies To: Microsoft HPC Pack 2008, Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, Windows HPC Server 2008, Windows HPC Server 2008 R2
This topic describes how to use ClusterCopy to rapidly copy files to on-premises compute nodes on a Windows HPC cluster. You can also use ClusterCopy to help collect files from cluster nodes. ClusterCopy is included in the HPC Pack Tool Pack. For more information, see Microsoft HPC Pack: Tools and Extensibility.
This topic includes the following sections:
Requirements
How ClusterCopy works
Using ClusterCopy
ClusterCopy configuration parameters
Requirements
To use ClusterCopy, you need the following software and permissions:
HPC Pack client utilities installed on the client computer.
ClusterCopy installed on the head node or on the client computer that you are using to access the head node.
User permissions on the HPC cluster.
Read permissions to the source folder.
Read/write permissions to the destination folder.
How ClusterCopy works
File distribution
When distributing files to cluster nodes, ClusterCopy creates a series of tasks that define the copy behavior (if you specify the
UseMutlipleJobs parameter, ClusterCopy creates a series of jobs instead). The first task copies files from the source folder to a set of nodes. The number of nodes in each set is defined by the
ParentSpan parameter (by default, this has a value of 3).
After the first task completes, the files exist in
ParentSpan + 1 locations. Each of these locations is used as a source for the next round of copy tasks, until the files have been copied to all of the specified nodes. If files are not copied successfully to a particular node, all copy jobs or tasks that depend on that node fail. ClusterCopy creates a log file in the install directory that lists failed nodes.
The following diagram illustrates an example of how ClusterCopy distributes files to cluster nodes:
.jpg)
File collection
When collecting files from cluster nodes, ClusterCopy creates a series of tasks or jobs that define the copy behavior. Each task copies files to the destination folder from the number of nodes that are specified by the
ParentSpan parameter. When the first task is finished (successfully or not), the next task begins, and so on until ClusterCopy has attempted to collect files from all of the specified nodes. ClusterCopy creates a log file in the install directory that lists failed nodes.
Using ClusterCopy
You can configure and run ClusterCopy by using a Command Prompt window or the ClusterCopy dialog box. To use a Command Prompt window, run ClusterCopyConsole.exe. To use the dialog box, run ClusterCopy.exe.
The following procedures describe how to use ClusterCopy. For detailed information about the configuration parameters, see ClusterCopy configuration parameters.
Note
The default installation folder is C:\Program Files\Microsoft HPC Pack <Version> Tool Pack\ClusterCopy.
To use ClusterCopy at a command prompt
Open a Command Prompt window and go to the ClusterCopy installation folder. For example, type a command similar to the following:
cd C:\Program Files\Microsoft HPC Pack 2012 Tool Pack\ClusterCopy
Run ClusterCopyConsole, and then specify the parameters for the head node, source folder, destination folder, and the compute nodes that you want files copied to or from. To collect files from the nodes, include the
/Collectparameter.
The following code samples provides examples with and without the
/Collectparameter:
To copy all files from C:\hpc\data on the head node to C:\hpc\scratch on all nodes in the ComputeNodes and WorkstationNodes groups, type:
clustercopyconsole /HeadNode:HEADNODE /SourceFolder:C:\hpc\data /DestinationFolder:C:\hpc\scratch /NodeGroups:ComputeNodes;WorkstationNodes
To collect file1.dat and file2.dat from C:\hpc\scratch on all nodes to C:\hpc\results on the head node:
clustercopyconsole /HeadNode:HEADNODE /SourceFolder:C:\hpc\scratch /TargetFiles:file1.dat;file2.dat /DestinationFolder:C:\hpc\data /Collect
When the job is complete, a message appears that indicates the number of successful and failed copies. You can read the list of failed nodes in a log file in the ClusterCopy install folder.
To use the ClusterCopy dialog box
Navigate to the location where you installed ClusterCopy, and then double-click ClusterCopy.exe.
Specify the head node, source folder, destination folder, and the compute nodes that you want files copied to or from.
To distribute files to nodes, click Copy to Nodes (Distribute). To collect files from nodes, click Copy from Nodes (Collect).
The ClusterCopy dialog box displays the progress of your copy job.
When the job is complete, a message box appears that indicates the number of successful and failed copies. You can read the list of failed nodes in a log file in the ClusterCopy install folder.
The following screenshot illustrates the ClusterCopy dialog box:
.jpg)
ClusterCopy configuration parameters
The following table describes the parameters that you can use to configure ClusterCopyConsole (you can configure the same parameters in the ClusterCopy dialog box). | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-hpc-server-2008/ff972646(v=ws.10) | 2018-05-20T16:03:28 | CC-MAIN-2018-22 | 1526794863626.14 | [array(['images%5cff972646.198b11d1-a841-41b2-a6e2-9c2d7a88bd71(ws.10',
'How ClusterCopy distributes files to nodes How ClusterCopy distributes files to nodes'],
dtype=object)
array(['images%5cff972646.4abe64ee-e82f-499f-b59e-c77e476d5102(ws.10',
'Dialog box for ClusterCopy Dialog box for ClusterCopy'],
dtype=object) ] | docs.microsoft.com |
Security Enhancements (Database Engine)
Security enhancements in the Database Engine include new encryption functions, the addition of the transparent data encryption, auditing,).
Auditing).
See Also | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/cc645578(v=sql.100) | 2018-05-20T16:35:25 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.microsoft.com |
PhysicsMaterial2D that is applied to this collider.
If no PhysicsMaterial2D is specified then the Rigidbody2D.sharedMaterial on the Rigidbody2D that the collider is attached to is used. If the collider is not attached to a Rigidbody2D or no Rigidbody2D.sharedMaterial is specified then the global PhysicsMaterial2D is used. If no global PhysicsMaterial2D is specified then the defaults are: PhysicsMaterial2D.friction = 0.4 and PhysicsMaterial2D.bounciness = 0.
In other words, a PhysicsMaterial2D specified on the Collider2D has priority over a PhysicsMaterial2D specified on a Rigidbody2D which has priority over the global PhysicsMaterial2D.
See Also: Rigidbody2D.sharedMaterial & PhysicsMaterial2D.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/Collider2D-sharedMaterial.html | 2018-05-20T15:51:49 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.unity3d.com |
Changelog:WHMCS V5.1
Contents
- 1 Version 5.1.15
- 2 Version 5.1.14
- 3 Version 5.1.13
- 4 Version 5.1.12
- 5 Version 5.1.11
- 6 Version 5.1.10
- 7 Version 5.1.9
- 8 Version 5.1.8
- 9 Version 5.1.7
- 10 Version 5.1.6
- 11 Version 5.1.5
- 12 Version 5.1.4
- 13 Version 5.1.3
- 14 Version 5.1.2
- 14.1 Admin Area
- 14.2 Affiliates
- 14.3 API
- 14.4 Billing
- 14.5 Client Area
- 14.6 General
- 14.7 Domains
- 14.8 Fraud
- 14.9 Payment Gateways
- 14.10 Hooks
- 14.11 Invoicing
- 14.12 Licensing Addon
- 14.13 Modules
- 14.14 Ordering
- 14.15 Products
- 14.16 Project Management
- 14.17 Quotes
- 14.18 Domain Registrars
- 14.19 Reports
- 14.20 Security
- 14.21 Support Tools
- 14.22 Bug Fixes
- 15 Version 5.1.1
- 16 Version 5.1.0
Version 5.1.15
- Release Type: MAINTENANCE RELEASE
- Release Date: 27th November 2013
Bug Fixes
Case #3075 - 5.3 Backport: Update to ECB Exchange Rates Data Feed URL
Case #3482 - Currency type must be calculated prior to feed aggregation
Case #3662 - Improved the emptying of template cache
Case #3663 - Client area additional currency selection not working
Case #3665 - Improved HTML quoting to handle all character sets in admin logs
Case #3666 - Added required token to Block Sender Action in Admin Ticket View
Case #3670 - Update to WHOIS Lookup Links to use form submission for lookups
Case #3672 - Add Predefined Product link in Quotes leads to invalid token error
Case #3674 - Updated plain-text email generation to strip entity encoding
Case #3675 - Admin Order failing at Configurable Options with Token Error
Case #3676 - Admin Ticket Merge via Options Tab resulting in Invalid Token.1.14
- Release Type: TARGETED RELEASE
- Release Date: 21th November 2013
General
Case #2989 - Downgrade orders failing when no payment due #3586 -- Redacted --
Case #3587 -- Redacted --
Case #3589 -- Redacted --
Case #3603 -- Redacted --
Case #3605 -- Redacted --
Case #3606 -- Redacted --
Version 5.1.13
- Release Type: SECURITY PATCH
- Release Date: 25th October 2013
General
Case #3444 - Improved validation of monetary amounts.1.12
- Release Type: SECURITY PATCH
- Release Date: 20th October 2013
General
Case #3431 - Resolved SQL error in getting ticket departments
Case #2566 - Resolved admin clients list displaying duplicates in certain conditions
Security
Case #3246 - Enforce privilege bounds for ticket actions
Case #3426 - Additional CSRF Protection Added to Product Configuration
Case #3232 - Added additional input validation to SQL numeric manipulation routines
Version 5.1.11
- Release Type: SECURITY PATCH
- Release Date: 18th October 2013
Security
Case #3100 - Remove exposure of SQL from user interface
Case #3364 - Additional validation on user IP
Case #3425 - Potential SQL Injection Fix
Case #3428 - Added password verification requirement to admin user management operations
Case #3430 - Potential SQL Injection Fix
Version 5.1.10
- Release Type: SECURITY PATCH
- Release Date: 3rd October 2013
Security
Case 3353 - Add sanitization for pre-formatted AES_Encrypt in queries
Version 5.1.9
- Release Type: MAINTENANCE RELEASE
- Release Date: 26th July 2013
Bug Fixes
Case #2949 - Bad function name "db_escaoe_string"
Case #2950 - Invalid token on Mass Mailer steps
Case #2951 - Fix for PayPal callback returning HTTP 406 error on no amount
Case #2953 - Duplicate admin log entries upon login
Case #2955 - Invalid Entity Requested for Support Page/Module
Case #2960 - Improve installer logic
Case #2963 - Additional Domain Fields not saving input
Case #2965 - Correct SQL statement for Ticket Escalations Cron routine
Case #2967 - Domain registrar module command not running via order accept routine
Case #2974 - Fix for invoices with a zero total not being auto set to paid on generation
Case #2975 - Fix for Calendar Entry Type Checkboxes not retaining selection
Case #2977 - Calendar Entries Missing Addon Name for Predefined Addons
Version 5.1.8
- Release Type: SECURITY PATCH
- Release Date: 23rd July 2013
Security
Case #2755 - Audit & Code refactor backport
Version 5.1.7
- Release Type: SECURITY PATCH
- Release Date: 16th May 2013
Security
Case #2620 - Improved sanitization in client area
Version 5.1.6
- Release Type: SECURITY PATCH
- Release Date: 23rd April 2013
Security
- Details to be released in due course
Version 5.1.5
- Release Type: MAINTENANCE
- Release Date: 15th March 2013
Bug Fixes
- Added CSRF Token Management User Configurable Settings to General Settings > Security
Version 5.1.4
- Release Type: SECURITY PATCH
- Release Date: 12th March 2013
Security
- Details to be released in due course
Version 5.1.3
- Release Type: SECURITY PATCH
- Release Date: 3rd December 2012
Security
- Update for Google Checkout Module
Version 5.1.2
- Release Type: STABLE
- Release Date: 6th July 2012
Version 5.1.1
- Release Type: RELEASE CANDIDATE
- Release Date: 15th June 2012
Version 5.1.0
- Release Type: BETA
- Release Date: 11th May 2012 | https://docs.whmcs.com/Changelog:WHMCS_V5.1 | 2018-05-20T15:29:05 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.whmcs.com |
glCreateFramebuffers โ create framebuffer objects
n
Number of framebuffer objects to create.
framebuffers
Specifies an array in which names of the new framebuffer objects are stored.
glCreateFramebuffers returns
n previously unused framebuffer names in
framebuffers, each representing a new framebuffer object initialized to the default state.
GL_INVALID_VALUE is generated if
n is negative.
glGenFramebuffers, glBindFramebuffer, glFramebufferRenderbuffer, glFramebufferTexture, glFramebufferTexture1D, glFramebufferTexture2D, glFramebufferTexture3D, glFramebufferTextureFace, glFramebufferTextureLayer, glDeleteFramebuffers, glIsFramebuffer
Copyright ยฉ 2014 Khronos Group. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999.. | http://docs.gl/gl4/glCreateFramebuffers | 2018-05-20T15:55:01 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.gl |
Calculating Statistics on Demand
During installation of Plesk, several scheduled tasks are automatically created. One of such tasks,
statistics, collects statistical information about resources used by sites, such as inbound and outbound traffic, disk space occupied by web content, log files, databases, mailboxes, web applications, mailing list archives, and backup files.
You can adjust which data the
statistics task should count and run the statistics calculation on demand. To do this, run the
statistics task with a necessary combination of options specifying the parts of statistics you want to collect.
To run the statistics task with required options, follow these steps:
- Using Remote Desktop, log in as administrator to the Panel-managed server.
- Start
cmd.exe.
- Change directory to
%plesk_dir%\admin\bin(where
%plesk_dir%is the system variable defining the folder where Plesk is installed).
- Run the
statistics.exetask with required options. See the list of options and their descriptions in the tables below.
For example, to calculate statistics in the mode that will skip all FTP logs, you can use the following command:
statistics.exe --http-traffic --disk-usage --mailbox-usage --mail-traffic --notify --update-actions
The resource usage information kept in the Panel's database and shown in reports in Panel will be updated with the new data.
Main options
Each main option defines the part of statistics to be calculated. When only main options are used, the specified statistics will be collected for all subscriptions and sites.
Additional options
Additional options allow you to specify the set of domain names for which statistics will be calculated. Domain names or masks specified in these options should be separated by the '
,' or '
;' symbol. You may combine additional options and use them without main options. If you use additional options without main ones, complete statistics will be calculated only for selected domain names. Domains specified directly have higher priority than those specified by masks. Also, the 'skip' list has higher priority than the 'process'. | https://docs.plesk.com/en-US/12.5/advanced-administration-guide-win/statistics-and-logs/calculating-statistics-on-demand.57685/ | 2018-05-20T15:38:51 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.plesk.com |
Changelog:WHMCS V5.2
Contents
- 1 Version 5.2.17
- 2 Version 5.2.16
- 3 Version 5.2.15
- 4 Version 5.2.14
- 5 Version 5.2.13
- 6 Version 5.2.12
- 7 Version 5.2.10
- 8 Version 5.2.9
- 9 Version 5.2.8
- 10 Version 5.2.7
- 11 Version 5.2.6
- 12 Version 5.2.5
- 13 Version 5.2.4
- 14 Version 5.2.3
- 15 Version 5.2.2
- 16 Version 5.2.1
- 17 Version 5.2.0
Version 5.2.17
- #2622 - Use localized date format for domain renewal reminders
Case #3375 - Improved Redundancy for License Verification
Case #3977 - Allows promo code recurring cycles to be edited after checkbox is ticked
Case #3333 - Delete custom field entries for a product when the order is deleted
Case #4113 - Add extended TLD attributes for new .UK TLD
Version 5.2.16
- Release Type: SECURITY RELEASE
- Release Date: 21st January 2014
Maintenance
Case #2557 - 2Checkout Gateway: Update to currency variable
Security
Case #3637 - Improve Access Controls in Project Management Addon
Version 5.2.15
- Release Type: SECURITY RELEASE
- Release Date: 23rd December 2013
Security
Case #3785 - SQL Injection via Admin Credit Routines
Bug Fixes
Case #3706 - Some graphs failing after recent Google Graph API Update
Case #3711 - CSV Export content should not contain HTML entities
Case #3726 - PDF Line Items failing to output some specific characters
Case #3727 - Admin password reset process failing to send new password email
Case #3738 - Sub-account password field's default text must be removed on focus/click events
Version 5.2.14
- Release Type: MAINTENANCE RELEASE
- Release Date: 27th November 2013
Bug Fixes
Case #2555 - 5.3 Backport: Select all checkboxes not working in manage orders & invoices
Case #2941 - 5.3 Backport: Company ID Number being ignored via Nominet
Case #3075 - 5.3 Backport: Update to ECB Exchange Rates Data Feed URL
Case #3482 - Currency type must be calculated prior to feed aggregation
Case #3662 - Improved the emptying of template cache
Case #3663 - Client area additional currency selection not working
Case #3664 - Alingment correction for intelli-search in V4 admin theme
Case #3665 - Improved HTML quoting to handle all character sets in admin logs
Case #3666 - Added required token to Block Sender Action in Admin Ticket View
Case #3670 - Update to WHOIS Lookup Links to use form submission for lookups
Case #3671 - Admin New Ticket Insert Predefined Reply lacking required token
Case #3672 - Add Predefined Product link in Quotes leads to invalid token error
Case #3674 - Updated plain-text email generation to strip entity encoding
Case #3676 - Remove debug die statement from Admin Ticket Merge via Options Tab.2.13
- Release Type: TARGETED RELEASE
- Release Date: 21th November 2013
General
Case #2989 - Downgrade orders failing when no payment due
Case #3325 - Credit card processing fails with weekly retries enabled #3587 -- Redacted --
Case #3589 -- Redacted --
Case #3603 -- Redacted --
Case #3605 -- Redacted --
Case #3606 -- Redacted --
Version 5.2.12
- Release Type: SECURITY PATCH
- Release Date: 25th October 2013
General
Case #3444 - Improved validation of monetary amounts
Case #3313 - Moneris Vault Gateway compatibility update
Case #3323 - Credit cards not processing under certain conditions
Case #3138 - Correction to internal logic for testing Authorize.net payment gateway.2.10
- Release Type: SECURITY PATCH
- Release Date: 20th October 2013
General
Case #3433 - Mass mail client filter for default language not working
Case #2566 - Resolved admin clients list displaying duplicates in certain conditions
Security
Case #3246 - Enforce privilege bounds for ticket actions
Case #3426 - Additional CSRF Protection Added to Product Configuration
Case #3232 - Added additional input validation to SQL numeric manipulation routines
Case #3437 - Prevent user input from manipulating IP Ban logic
Version 5.2.9
- Release Type: SECURITY PATCH
- Release Date: 18th October 2013
Security
Case #2978 - Fix for improper logging of admin login IP
Case #3100 - Remove exposure of SQL from user interface
Case #3364 - Additional validation on user IP
Case #3425 - Potential SQL Injection Fix
Case #3428 - Added password verification requirement to admin user management operations
Case #3430 - Potential SQL Injection Fix
Version 5.2.8
- Release Type: SECURITY PATCH
- Release Date: 3rd October 2013
Security
Case #3353 - Add sanitization for pre-formatted AES_Encrypt in queries
Version 5.2.7
- Release Type: MAINTENANCE RELEASE
- Release Date: 26th July 2013
Bug Fixes
Case #2950 - Invalid token on Mass Mailer steps
Case #2951 - Fix for PayPal callback returning HTTP 406 error on no amount
Case #2953 - Duplicate admin log entries upon login
Case #2954 - Repair link for Admin Clients Services Add New Addon
Case #2955 - Invalid Entity Requestd for Support Page/Module
Case #2956 - Revert SQL changes introduced by build 5.2.6.3
Case #2963 - Additional Domain Fields not saving input
Case #2965 - Correct SQL statement for Ticket Escalations Cron routine
Case #2960 - Improve installer logic
Case #2969 - Do not encode 3rd-party TCPDF
Case #2970 - Do not encode 3rd-party PHP Mailer
Case #2971 - Do not encode 3rd-party Google QR code library
Case #2974 - Fix for invoices with a zero total not being auto set to paid on automated generation
Case #2975 - Fix for Calendar Entry Type Checkboxes not retaining selection
Case #2977 - Calendar Entries Missing Addon Name for Predefined Addons
Version 5.2.6
- Release Type: TARGETED RELEASE
- Release Date: 23rd July 2013
Security
Case #2755 - Internal Security Audit & Code Refactor
Version 5.2.5
- Release Type: SECURITY PATCH
- Release Date: 16th May 2013
Security
Case #2633 - Correct security enhancement regression
Version 5.2.4
- Release Type: RELEASE
- Release Date: 23rd April 2013
General
Case #2139 - Updates to cron report email format to make it easier to read
Case #2045 - Added Affiliates Overview Report
Case #2053 - Added amount filter option to admin orders list
Case #2134 - Update to Transactions CSV Export to show Currency Code rather than Currency ID
Case #2045 - Domain Sync Cron Updated to not keep re-attempting to connect to the same registrar if a connection error occurs
Case #2045 - Update to Yubico module to remove hard-coded WHMCS references
Case #2115 - Fix for hard-coded text Manage and Disable in Client Area Domain Details Template File
Case #1880 - Update to Client Area Module Change Password in Default Template to return to password tab on submit
Case #1567 - Update to invoice generation process to not invoice billable items on new orders
Case #2127 - Added Support to Force Two-Factor Auth for Clients
Case #1852 - Fixed Missing Language Vars in Two-Factor Activation Process
Case #2058 - Within Last Month filter on transactions list updated to maintain between pages
Case #2020 - Language update to credit log to remove manual adjustment reference
Case #2057 - Admin manual attempt CC captures process updated to display processing results
Case #2064 - Update to continuous invoice generation logic to not invoice pending items on a recurring basis
Case #1950 - Update to admin Remember Me cookie name to resolve issues some are experiencing with remember me not working
Case #2123 - Added 30 minute time expiry to login failures IP logging
Case #2122 - Added support for wildcards in whitelisted IPs
Case #2118 - Update to Ticket Close routine to check ticket is not closed already before performing actions
Case #2117 - Update to logic of Ticket Notification Emails to only send to the assigned admin for a flagged ticket
Case #2045 - Update to admin ticket interface to not show replying message to own admin
Case #2045 - Updated clients summary view orders link to use new clientid variable
Case #2113 - Added access restriction to files that generate an error when visited directly
Case #2045 - Updates to Client Side Arabic, Farsi and Norweigan language files
Case #2045 - Update for jquery dialog to use new admin js variable
Case #2110 - Admin Area Homepage widget adjustments to optimise load times
Case #2063 - Updated Admin Credit Card Info Window to not allow viewing/input when credit card storage is disabled
Case #2109 - Updated Disable Credit Card Storage Security Setting to auto remove all existing card data
Case #2108 - Changed admin post login redirect variable to avoid possible confusion with client area redirect urls
Case #2107 - Update to auto focus cursor to first input box in login/two-factor verification and setup/disable two-factor processes
Case #2067 - Updated Default template to use a template include to remove code duplication
Case #1961 - Update to domain validation rules when IDN domains are enabled to perform stricter checks
Case #2098 - Cron update to allow cancellation requests to process for free products (those with no next due date)
Case #2039 - Updated module change password input field names in Classic & Portal templates + added backwards compatibility
Case #2047 - Update to ticket flagging logic to not send email notification when assigning a ticket to yourself
Case #2045 - Updated clients, orders, tickets & invoices filter lists to maintain filters on mass actions
Case #2044 - Update to support ticket department deletion routine to remove custom fields & their values
Case #2045 - Updated Admin Support Ticket Flagged Email Notification Template to link directly to the flagged ticket
Case #2016 - Added client name field to all data export reports
Case #2045 - Update to export reports to display friendly payment method name
Case #2018 - Added Registration Date field to Domains Data Export Report + Capitalised first letter of Registrar
Case #2007 - Cleaned up client area product details HTML output
Case #2045 - Added refresh protection to the client area affiliates withdrawal request
Case #2019 - Update to surpress Support Ticket Flagged notification for those admins it's not enabled for
Module Updates
Case #2104 - Skrill Gateway: Re-branded MoneyBookers module to Skrill
Case #2036 - TPPWholesale Registrar: Fixes + Added support for registrar lock, private ns registration & epp code requests
Case #2045 - WebNIC Registrar: Updates to resolve problem with transfers & contact editing
Case #2045 - Plesk 10: Update to allow API packet version overriding
Case #2138 - CentovaCast: Update from them themselves for CentovaCast V3.x
Case #2137 - Project Management: Bug fix for staff log report always displaying a whole year rather than selected date range
Case #1941 - Project Management: Fix for status being empty for newly created projects
Case #2136 - Project Management: Update to replace .live discontinued jQuery functionality
Case #2135 - IPMirror Registrar: Version 2.1 Module Update from them themselves
Case #2032 - 2CheckOut: Update to language detection to work with new lowercase names
Case #2022 - DirectAdmin: Update to disk/bw usage stats importing to handle URL encoding being applied in DirectAdmins latest update
Case #2045 - PayPal Payments Pro Reference Payments: Implemented 3D Secure Functionality
Case #2015 - Enom: Added new extension field requirements for .es, .au, .sg, .pro & .it
Case #2015 - Enom: Updated to prevent WHOIS contacts being edited when disallowed by registry rules
Case #2015 - Enom: Updated TransferSync function to use more call effective method of retrieving transfer info
Case #2006 - WHOIS Server Additions: .rs, .co.rs, .org.rs, .edu.rs, .in.rs, .ae, .pw
Bug Fixes
Case #2141 - Fix for JavaScript Error Occurring in Product Domain Config Step of Modern & Slider Order Forms
Case #2140 - Correction to language variable for Bundle Items in Bundle Product Configuration
Case #2045 - Fix for domain renewals page in cart not listing all renewal term options
Case #2089 - Fix for Project Management Activity Log Pagination not working
Case #2133 - Fix for admin page field alignment when custom fields share same name as default fields
Case #1955 - Fix for javascript error in Original admin view ticket template
Case #2132 - Fix for Admin Initiated Currency Update displaying update results
Case #1930 - Fix for KB Category display articles shifting up to beside to sub-categories
Case #2131 - Fix for Vertical Steps Order Form Template Complete Step missing formatting
Case #2023 - API AddTicketReply command ignoring passed in adminusername variable
Case #2045 - Banned Emails Config Page always displaying an empty table
Case #2000 - Fix for one time fixed amount promo codes giving a zero discount on invoice under certain conditions
Case #2059 - Client area cancellation request cancel domain option non-functional
Case #2045 - Cookie unset not applying WHMCS prefix to cookie name
Case #2004 - Fix for client area support ticket list returning no results under certain conditions
Case #2070 - Fix for contact sub-account activation client side for existing contacts
Case #2009 - Fix for Mass Domain Enable Auto-Renew leaving auto-renew disabled & WHOIS Contact Info returning error
Case #2116 - Fix for CVV Number not being passed into 3D Secure process on new card entry
Case #2112 - Fix for contact ID setting being lost on admin ticket options save
Case #2024 - Fix for $invoice_html_contents email merge field displaying double line breaks in item descriptions
Case #2092 - Fix for product group order form template override not taking effect for all cases
Case #1972 - Anniversary Prorata not working correctly under certain conditions
Case #2061 - Admin notification emails being sent to disabled administrator users
Case #2031 - Cron notification email not listing service ID used in terminations list due to incorrect var
Case #2038 - Fix for payment gateway ordering in new invoice view
Case #2072 - Credit card remote token storage being called before new name/address info was saved
Case #2068 - Fix for predefined product price not being loaded correctly in quotes
Case #2068 - Update to states dropdown javascript to support tab index value being defined
Case #2010 - Fix for Client Area Two-Factor Backup Code Login Input Field Restriction
Case #2066 - Update to Default Client Area Products listing to not show dropdown menu if no menu items available
Case #2021 - MyIDEAL gateway module referencing incorrect path
Case #2005 - Product bundle display order not being honoured
Case #1289 - Added addon status change hook function calls to UpdateClientAddon API function
Case #2099 - Fix for link type custom fields saving values in an HTML link format
Case #2045 - Suspension Reason was not always being cleared on unsuspend
Case #2045 - Client stats for number of refunded/collections invoices were incorrect
Case #2027 - Fix for API GetClientsDetails function causing iPhone/Android App Failure
Case #2025 - Fix for invoice not displaying tax names under certain conditions
Case #2026 - Client area not displaying login incorrect message when login form submitted blank
Case #2046 - Update to prevent Support Ticket Flagged admin notification email sending upon unflagging
Case #2045 - Custom module action success language variable named incorrectly
Version 5.2.3
- Release Type: RELEASE
- Release Date: 28th March 2013
Bug Fixes
Case #1999 - Added the ability to disable two-factor auth for a client from the admin profile page
Case #1980 - Fix for DirectAdmin Module having fatal error in certain conditions
Case #1997 - SagePay Tokens: Update to fix incorrect CVV number parameter name and to force skip 3D Secure on recurring transaction captures
Case #1980 - Project Management Addon: Update to handle no due date better and display message instead of long time days overdue
Case #1980 - Boleto Gateway: Update to ensure bank value is one of the supported options
Case #1913 - Update to custom fields validation logic in Validate class to only enforce rules on non admin only fields
Case #1980 - Correction to language used in Send Message & Email Marketer re clients who have opted out of marketing emails
Case #1980 - Added tag search option to admin area advanced search
Case #1988 - Update to admin invoice view to make invoice payment methods clearer with notices re no transactions, full paid by credit and/or partially paid by credit
Case #1839 - Update to WHOIS Servers for new response formats
Case #1923 - Fix for Two-Factor Auth failing to enable within the client area
Case #1914 - Fix in Income by Product Report for negative value on discounts
Case #1980 - Clients chosen language was not being loaded for addon modules client area output
Case #1980 - Update to Licensing Addon to auto clean up orphaned mod_licensing records where product table entry is deleted and to optimise licensing log via daily cron
Case #1980 - Update to Ticket Escalations page to make auto reply box bigger by default
Case #1978 - Update to client email sendMessage() function to override default X-Mailer PHPMailer value with company name
Case #1919 - Fix for client area WHOIS edit always erroring out re empty details if not using a contact + fix for child nameservers management missing variables
Case #1995 - Fix for date filters in Client Statement not working due to new toMySQLDate() formatting and filter not including end date
Case #1993 - Fix for PDF Invoices showing raw HTML in notes with multiple lines and adding double line spacing to line items
Case #1816 - Update to addon suspensions via cron to adhere to the parent products override suspension settings also
Case #1794 - Updated Auto-Termination via cron to apply to Addon Products also
Case #1693 - Update to Support Ticket email sending routine to use client area language setting if a guest
Case #1680 - Update to omit Recurring Amount line from Order Confirmation Email for One-Time products
Case #1620 - Custom Invoice Number not being set by EU VAT Addon Hook before invoice payment confirmation is sent if invoice is auto paid by credit
Case #1987 - Fix for invoice data amountpaid variable not being formatted as currency
Case #1939 - Fix for addon item calendar links linking to old file and with incorrect parameters
Case #1920 - Fix for adding calendar event mangling date/time
Case #1921 - Fix for Time Based Tokens displaying WHMCS company name to clients
Case #1903 - Fix to prevent systpl or carttpl template override parameters validating when empty
Case #1796 - TransIP Registrar: Major update to module for improved reliability and functionality
Case #1802 - VentraIP Registrar: Update to only perform remote callout to their API if module is activated
Case #1980 - Fix for warning error being generated by domain $params not being passed into domain modules AdminCustomButtonArray function
Case #1817 - Added TPP Wholesale Domain Registrar Module which replaces DistributeIT, PlanetDomain & TPPInternet
Case #1968 - Implemented all new methodology for admin services page ajax module commands to resolve issues with certain areas of the page not updating following actions
Case #1916 - Update to client summary mass update logic to only run SQL queries if there is at least one update to perform
Case #1980 - Fix for Admin Area On Demand Invoice Generation no longer displaying number of invoices generated
Case #1870 - Added trim to custom ticket statuses to avoid erroneous spaces at the beginning or end of a status
Case #1757 - GoCardless Gateway: Update to replace a linked button which doesn't work in IE with a standard form
Case #1980 - Update to logic of hidden configurable options to ensure they only show up within the admin area
Case #1994 - Fixed bug where modules containing underscores in their names would not be loaded
Case #1942 - RRPProxy Registrar Module: Updated to handle curl connection errors better
Case #1931 - Correction to charset encoding of Arabic language file + additional translations
Case #1974 - Fix for Client Two-Factor Auth Login Processing
Case #1992 - Integrated Enom New TLDs Addon Module as a bundled addon
Case #1980 - Update to admin side quotes creation page to only load line items if ID is set to prevent new quotes ever showing orphaned line item records
Case #1911 - Fixed LocalAPI validation warning errors occurring in AddClient request
Case #1944 - Fix for fatal error occurring due to missing function in API GetAdminDetails function
Case #1983 - Fix for check all box not working on support tickets list when a user has assigned tickets
Case #1952 - Reverted change to .de whois server which was causing lookups to fail
Case #1982 - Fix for ajax ticket flag/assign not sending ticket flagged notification email
Case #1967 - Updated the admin ticket list to remember and return to previous filters after replying to a ticket
Case #1909 - Fix for client area applying credit to invoice failing
Case #1991 - Fix for auto recalculate on save using old packageid and promoid values and therefore not re-calculating price correctly
Case #1989 - Fix for registrar lock not enabling due to missing input name in the Default template
Case #1956 - Update to MyIdeal payment gateway certificate file
Case #1980 - Update to 3D Secure template file iframe to increase default width for newer wider 3D Secure processes
Case #1986 - Fix for Client Profile checkbox settings change logging not working correctly and added No Changes notice when form submitted without any changes
Case #1989 - Switched positioning of Add Response and Insert Predefined Replies/KB Article buttons in new admin View Ticket interface in Blend and V4 themes
Case #1980 - Added quick Close and Assign to Me links to new Blend and V4 Admin Theme View Ticket Pages + removed extra div causing extended blank space in V4 version
Case #1980 - Updated administrator roles admin page to show disabled users as greyed out
Case #1980 - Fixed bug where in use admin roles were being allowed to be deleted
Case #1980 - Fixed assigned departments listing for disabled administrators
Case #1953 - Update to billable items edit/save logic to work for decimal quantities of less than 1 and zero
Case #1958 - Correction to last reply field label in ticket feedback template in both classic and portal templates
Case #1984 - Update to license expiry date formatting in admin area
Case #1840 - Removed duplicate client area contact navigation client area language file variable
Case #1845 - Replaced hardcoded text in admin support tickets list and Blend admin homepage template
Case #1980 - Update to support tickets admin assignment/flag list to only show active admins (plus the one a ticket is actually flagged to if not active)
Case #1977 - Update to MoneyBookers Gateway Module
Case #1976 - Update to client details change notification email to fix missing client name and admin area profile link
Case #1965 - Fix for department names and emails not being loaded in Tickets Management
Case #1962 #1963 - Added disk and bandwidth percent usage return values to getDiskUsageStats function
Case #1938 - Bulk Domain Transfer in Default client area template displaying registration pricing and periods not even enabled for transfers
Case #1883 - Update to Ticket Closure routine to only send Feedback Request Email if feedback not already provided for a ticket
Case #1461 - Added userid variable to AdminAreaClientSummaryPage hook point
Case #1904 - API AddClientNote Command inverting sticky attribute
Case #1940 - Userid not being populated when admin clients domains page linked to with only an id
Case #1943 - Fix for admin client profile page always selecting English in client language dropdown when none set due to validateLanguage validation function
Case #1910 - Fix for product/service modules _ClientArea function not passing returned vars to template correctly
Case #1928 - Fix for Ticket Tags not saving initial delete change
Case #1954 - Updated payment gateway descriptions to use invoice number if set rather than invoice id
Case #1934 - Updated Email Prompt in Expired & No Connection License Error Messages + Some Minor Text Adjustments/Improvements
Case #1935 - Fix for Service Class not passing vars into buildParams function correctly
Case #1937 - Gateway ID not being passed into token gateway modules storeremote delete function when clearing card
Case #1948 & #1945 - Captcha input not being shown on client area homepage when enabled + update to naming language in default template
Case #1932 - Fix for client status update setting not being saved in Automation Settings
Case #1933 - Update to Affiliate Signup Button Code in Classic & Portal Templates
Case #1936 - Update to make admin side transaction list filter use a like match on description field
Case #1935 - Fix for client area change password function not passing new password into modules because module params already loaded prior
Case #1905 - Fix for client area product upgrade process fatal error on checkout step order confirmation
Case #1929 - Default language select option was being duplicated in mass mail
Case #1925 - Fix for product name email template var empty in New Cancellation Request admin notification & type not being sanitized prior to email
Case #1907 - Fixed missing include in API UpdateTicket function causing ticket closure to fail
Case #1900 - Project Management Addon Editing Task Times formatting error leading to empty value
Case #1899 - Fix for cron not adhering to Exchange Rates & Product Pricing Update Automation Config Settings
Case #1902 - Fix for currency update failing
Case #1901 - Fix for override auto suspend setting being ignored in cron
Case #1908 - Fix for cancellation request reason being overwritten by type, and type always being set to End of Billing Period
Version 5.2.2
- Release Type: RELEASE
- Release Date: 14th March 2013
Bug Fixes
Case #1896 - Domain registrar modules reporting function not found erroneously
Case #1855 - Added CSRF Token Management User Configurable Settings to General Settings > Security
Case #1855 - Updated Domain Checker to default to no token check
Case #1895 - Updates to allow for Smarty Backwards Compatability in Third Party Pages & Addons
Case #1890 - Fix for total balance always showing as zero
Case #1865 - Reverted upgrade process changes temporarily to resolve upgrade process debug output & errors
Case #1857 - Update client area chage of default payment method not passed into ClientEdit hook
Case #1861 - Update shopping cart header redirect to CC Processing page logout due to lack of token
Case #1893 - Update JS Class for Yubico Key Setup Process
Case #1868 - Quotes PDF File missing notes
Case #1881 - Email Registrar module displaying Function not Found on admin side due to missing GetNameservers function
Case #1891 - Admin side domain management function calls refactored to include $params array
Case #1869 - Fix Client area module template output failing when custom template is utilized
Case #1853 - Client area ticket search causing logout due to token check failure
Case #1873 - Correct SQL query to use selected server for server revenue forecast report
Case #1887 - Admin profile language not being stored during logout
Case #1871 - Update Domain Sync functions for license handling
Case #1876 - Invoice payment link variable not populated in invoice related email templates
Case #1888 - Implement new dbconnect.php file to maintain backwards compatibility with files that rely on it.
Case #1886 - Added handling of pattern matching for custom fields
Case #1882 - Product Group Re-Ordering due to SQL order keyword not escaped
Case #1874 - Remove second duplicate invoice button from admin invoice list
Case #1848 - Update to captcha variable name
Case #1886 - Revert smart class customizations to not error out blank page upon syntax errors
Case #1884 - Credit Card details cannot be cleared form the admin area
Case #1885 - Client Area Credit Card process attempting to validate custom fields
Case #1850 - Password reset failing due to email not passing to templates
Case #1879 - Update client area module change function not updating displayed password until page reload
Case #1878 - Update client area module change password function calls
Case #1877 - Return from registrar modules not being handled correctly when not an array
Case #1875 - Fix for failing domain management actions due to incorrect function call params
Case #1822 - Two Factor SQL Fields updates
Case #1856 - Fix Domain checker attempting to validate captcha input even when not enforced
Case #1864 - Fix PHP Fatal Error occurring when registrar module saving name server returns an error admin side
Case #1863 - Fix admin side filtering order list by date
Case #1862 - Update auto-recalc reoccurring amount and logging calculation
Case #1862 - Correct servers losing ID in array_merge causing selected server to be lost on Admin Client Profile
Case #1849 - Fix for API Allowed IPs being cleared when settings are saved
Case #1860 - Update PHPMailer class to address bug with email validation logic
Case #1822 - 5.2.0 SQL update skipped when updating from 5.1.4
Module Updates
Case #1858 - [ Live Chat ] - Update license checking mechanism
Case #1859 - [ Live Chat ] - Updated Client Side Hook file to be compatible with 5.2.x
Case #1889 - [ ResellerClub ] - Update module to return friendly error when API is missing
Version 5.2.1
- Release Type: RELEASE
- Release Date: 12th March 2013
New Features
Case #1772 - Update to log date & ip with ticket feedback submissions
Case #1772 - Added New Reports: Ticket Feedback Scores & Ticket Feedback Comments
Case #1418 - Added New Client Sources Report (aka How Did You Find Us)
Case #1779 - Updated VAT Number validation hook to use the SOAP service provided at VIES directly
Case #1746 - Re-factored invoice display logic
Case #1768 - Update to support ticket bounce email to add global header/footer email wrapper
Case #1768 - Added graceful exit handling to admin side clients domains page when no domains found for user
Case #1788 - Updated invoice totals to show as total+credit in all invoice lists both client & admin side
Case #1418 - Updated order details view to show exact invoice payment status and disable Cancel & Refund option once refunded
Case #1662 - Licensing mechanism updates to add further license server redundancy supportlicense server redundancy support
Case #1768 - Update to clients services page to immediately change status dropdown value (both main status and license status when licensing module in use) upon success result from new ajax module commands
Case #1795 - Update to conditionally include payment modules in cart for integrated checkout
Case #1255 - Implemented Two-Factor Authentication Logic & Support to Admin Login Process
Case #1418 - Updated transactions & gateway log query logic & added default date range filters to speed up initial page load on larger installations
Case #1586 - Fixed ticket tagging JS code double calls on load and incorrect saving on update with class update and function call changes
Case #1586 - Optimised admin support ticket page loads by separating JavaScript code into separate JS file
Case #1418 - Updated old wiki/docs link in all locations and added new comment format to open sample files
Case #1586 - Implemented Tag Cloud to admin ticket list & created ticket tag report/chart
Case #1803 - Redesigned admin reports interface to display reports in groups, removed legacy CSV export options, converted transactions and pdf export methods into report modules, and updated admin templates to display most used reports in reports menu dropdown
Case #1804 - Implemented line graph to Daily Performance Report
Case #1255 - Added Staff Management & Two Factor Authentication management links
Case #1803 - Fix for reports dropdown menu list in original and v4 templates
Case #1768 - Reverted TCPDF Class to previous version due to memory leak issue in latest update
Case #1586 - Implemented support for ticket tags with auto-complete suggestions
Case #1586 - Optimised & improved admin side handling of JavaScript code
Case #1586 - Optimised blend template loading by moving common JS into separate file
Case #1811 - Began re-factoring of client area
Case #1798 - Ported new admin view ticket styling from Blend template into V4 template
Case #1797 - Added checkbox to allow for splitting replies to Blend & V4 admin templates
Case #1815 - [API] Added Windows 8 App Addon Licensing Status return to GetAdminDetails API Function for use in upcoming Windows 8 App
Case #1818 - Refactored Session Handling product wide and updated to apply HTTPOnly attribute
Case #1819 - Refactored cookie handling and updated to apply HTTPOnly attribute by default for all cookies + updated affiliate & link cookies
Case #1822 - Added gridlines and minorgridlines count options support to graph class and updated head output for new admin interface array method
Case #1822 - Added protection against sending of blank emails to customers when email processing fails
Case #1824 - Added an option to enable showing client only departments to non logged in users visiting the ticket submission department selection page
Case #1822 - Various minor improvements and fixes to new code
Case #1825 - Updated get user ip function to use X-Forwarded-For value from apache request headers if available - primarily for our server setup
Case #1827 - Update to language of both ResellerClub and Enom modules account signup promo
Case #1811 - Created New Client Area & Service Classes & Re-factored frontend client side code
Case #1681 - Updated shopping cart to use localised status name in domain renewals
Case #1409 - Added extra conditional link parameters for affiliates and domain reg options and updated all client area templates to show/hide affiliate and domain reg menu options based on conditional status
Case #1761 - Updated domain breadcrumb links to include link back to domains details
Case #1823 - Moved admin homepage optimize image tag call from after closing HTML tag to bottom of the page body using AdminAreaFooterOutput hook point
Case #1751 - Updated ticket submission page in all client area templates to display a no departments found error msg when no support departments are configured
Case #1822 - Added check to 2FA time based tokens module to ensure GD image library is available before attempting to display QR image
Case #1830 - Updated Request Support page to provide additional help links and to provide customised message to reseller customers
Case #1822 - Removed sidebar workaround for Blend template in admin internal browser page since Blend template now has a sidebar
Case #1832 - Refactored cron process to make it possible to not only skip certain actions, but also to request only specific actions are performed
Case #1832 - Added CLI Output & Debugging flag options to make troubleshooting cron issues easier
Case #1822 - Added the ability to link to the internal browser page with a link pre-selected (?link=x)
Case #1806 - Update to WebsitePanel module to use hostname instead of IP for control panel links when hostname is specified
Case #1768 - Update to automatically grant access permissions to new functionality to default admin role groups as appropriate
Case #1768 - Added missing ticket notifications language file variable and additional variable for when no support departments exist
Case #1768 - Added the ability to specify a different department and/or priority for split ticket & updated to hide split tickets button when no replies available to split
Case #1822 - Added label tags to many more of the admin interface config fields/settings
Case #1649 - Added new escalation rules text to language file and previously missing priorities
Case #1822 - Added dedicated isLoggedIn function for checking for active client login
Case #1822 - Update to admin ticket departments config page to prevent refresh resubmits and to remove empty space displaying for admins with only a first name specified
Case #1822 - Updated in product links to use our go.whmcs.com link tracking for MaxMind, Enom, ResellerClub, Licensing & Project Management modules
Case #1822 - Added Premium badge to paid addon modules and improved/streamlined license enforcing/purchase/refreshing process
Case #1768 - Update for contact permissions error not working on pages using the new client area class
Case #1822 - Added new permissions for viewing/managing credits
Module Updates
Case #1755 - [ResellerClub] Implemented New API Key Auth Method for Improved Security
Case #1822 - [ResellerCamp] Removed old ResellerCamp sync module file and replaced with domain sync cron functions
Case #1822 - [Enkompass] Removed x3 theme from Enkompass login links
Bug Fixes
Case #1768 - Fix for endless redirects on shopping cart when no product groups have been setup
Case #1768 - Correction to image path in Original and V4 admin templates for dropdown menu popout icon
Case #1768 - Install process confirm password field type corrected to hide password, automatic url detection fixed to exclude step variable, and validation added to prevent install form being submitted with blank admin details
Case #1768 - Shopping Cart checkout step is grabbing IP directly from REMOTE_ADDR value instead of using get_user_ip function which was resulting in IP displaying incorrectly in certain scenarios
Case #1768 - Update to installer to create admin user under utf-8 charset like rest of app runs under
Case #1746 - Update to ticket department reassignment emails to obey ticket notification settings per admin
Case #1768 - Adds the missing closing </a> tag for Edit Product Icon image on configproducts.php
Case #1768 - V5.2 Upgrade was not working for users of V5.1.3 Patch Release
Case #1768 - Ticket Duration calculating incorrectly when ticket contains no replies & generic comments row being created even when no comments submitted
Case #1768 - Update to menu expand icon to be black by default for lighter menu backgrounds, and white expand icon made blend template specific only
Case #1791 - Provide a valid return value (the PDF object) in the createPDF method of the WHMCS_Invoice class
Case #1709 - Sorting My Domains list by Auto Renew wasn't working
Case #1763 - Missing "Success" message when domain contacts are edited
Case #1793 - NetworkIssueClose should run when editing network issue status to closed
Case #1768 - Invoice related emails not sending due to userid not being populated correctly
Case #1768 - Fixed admin homepage popup not hiding until next content update correctly
Case #1812 - Added missing login to enkompass language file variable and updated module to use it
Case #1418 - Corrected SQL query for calculating addons ordered in the Monthly Orders report - was previously giving total for entire year
Case #1808 - Correction to gid int casting which was causing cart to permanently redirect to domain registration step on initial visit
Case #1822 - Default template KB search not remembering search term and returning to homepage on 2nd search if empty catid parameter
Case #1768 - Fix to client area details validation routine giving error relating to email and uneditable profile fields
Case #1768 - SQL Error Occurring in specific admin email send routine + Optimization to logActivity function to only query username once per runtime
Case #1822 - No addons message in Default client area product details template incorrect colspan
Case #1768 - Suspension reason stops being recorded after & character due to missing url encoding
Case #1768 - Incorrect billing cycle variable for when adding a new addon & Services dropdown menu showing last rows color for active services
Case #1768 - Service edit form not being closed when addons are edited causing send message to fail
Case #1771 - Update to support ticket billing entry to auto prune any non numerical chars from amount
Case #1821 - Replaced hardcoded word "Go" with language variable in 2 client area & 4 order form template files
Case #1822 - Fixed create new project dialog not saving ticket number
Case #1822 - Stats query optimisations & bug fix for SQL error that was being generated every time support ticket page was accessed when admin not assigned to any departments
Case #1768 - Some addon downloads were not being displayed in the client area product details downloads tab
Case #1768 - No Totals to Display text was not being shown on empty Transactions list page
Case #1768 - Added support ticket notification customisation settings back to admin users My Account page
Case #1768 - Changed email encoding from 8bit to quoted-printable to resolve issue of erroneous characters/spaces on long lines of text
Case #1768 - Update to automatic ticket close logic to only send Support Ticket Auto Close Notification email template if Ticket Feedback is not enabled since it already sends it's own email on closure
Case #1768 - Update to invoice loadData function which was failing in some situations due to subquery for gateway name returning more than 1 row
Case #1833 - Update to various third party classes to remove deprecated assigning of return value by reference
Case #1768 - Update to init file to prevent it erroring or failing with a blank page during upload of the new version
Case #1768 - File download page erroring out blank when login was required due to missing var
Case #1768 - Include product downloads in directory setting being displayed twice in General Settings
Case #1773 - Update to predefined search box so that field doesn't expand past the edge of the box when no predefined replies exist + added search icon to search box as background
Case #1800 - Affiliates commission list showing incorrect amount if no payment made yet and has a different first payment amount
Case #1768 - Configurable Options Radio Button was echoing checkbox checked rather than appending to input code HTML
Case #1768 - Free addons generating invoice upon adding from admin side due to no exclusion on free billing cycles in specific items invoicing routine
Case #1768 - Addon products on services page using wrong variable for ID in edit and delete links rendering them unmanagable
Version 5.2.0
- Release Type: BETA
- Release Date: 1st February 2013
New Features
Case #1585 โ Implemented new ticket listing interface which separates flagged tickets from others
Case #1644 โ Added friendly warning if adding payments to an already paid invoice
Case #1760 โ Admin side WHMCS news/notification popup for release announcements & special offers
Case #1626 โ Introduced IP Whitelisting Support from Bans
Case #1719 โ Updated provisioning modules to return rather than echo
Case #1756 โ Introduced dedicated product news feed
Case #1756 โ Updated news widget to use dedicated product news feed
Case #1756 โ Updated check for updates page to use dedicated product news feed
Case #1418 โ Update module command buttons to use ajax to avoid page reload
Case #1719 โ Add additional logging for admin services actions to activity log
Case #1505 โ Allow client to enter desired new password when visiting reset verification URL
Case #1449 โ Add logic for API addorder for invoices paid by credits
Case #1418 โ Add pagination to spam control page
Case #1418 โ Optimize ticket counts query for admin pages
Case #1418 โ Language Case update for admin account page
Case #1418 โ Redraw charts for when no chart widgets are active
Case #1418 โ Update to Knowledge Base categories listing
Case #1726 โ Implemented search for predefined replies management
Case #1725 โ Added Arabic client area language file
Case #1573 โ Improvements to tblcontacts
Case #1725 โ Added Catalan client area language file
Case #1725 โ Added Croatian client area language file
Case #1725 โ Added Farsi client area language file
Case #1725 โ Added Hungarian client area language file
Case #1725 โ Improvements to Portugese & Portuguese Brazil client area language files
Case #1725 โ Improvements to Spanish client area language file
Case #1725 โ Implemented new Spanish admin language file
Case #1481 โ Improvements to French Language file
Case #1612 โ Added HTML stripping to default template client area homepage news snippet
Case #1585 โ Added the ability to split support ticket replies out to new tickets
Case #688 โ Added the ability to enter transaction ID for manual refund
Case #688 โ Updated invoice interface to disable refund button if unavailable
Case #1754 โ Implement code to obtain custom fields and update data based on values posted
Case #1728 โ Created WHMCS API Helper File v1.0
Case #1672 โ Implemented Email Marketing Unsubscribe Option for Clients
Case #1575 โ Additional logging relating to quote management & quick links from log itself
Case #1651 โ Added custom fields display to printable version of support tickets
Case #1649 โ Add memory of ticket list filter selections between page loads
Case #1418 โ Removed arbitrary credit balance edit field and added dedicated Remove Credit option
Case #1599 โ Added LicensingAddonReissue hook
Case #1345 โ Ability to edit security questions
Case #1440 โ Added email template merge field for product description
Case #1556 โ Log date/time to ticket logs when auto-closing ticket
Case #1536 โ Allow knowledge base articles to be available when opening a new ticket for client
Case #1437 โ Added the ability to duplicate an existing invoice and line item(s)
Case #1418 โ Allow mass mails to be sent from services listing
Case #1537 โ Allow company name in client sort filters for admin invoice list
Case #1418 โ Add variable to load template dropdown ensuring that Send Multiple is carried across
Case #1565 โ Ability to disable admin accounts
Case #1418 โ Refactoring of system wide page structure to use new single initialisation file
Case #1565 โ Hiding of deactivated admin users from Tickets & To-Do Lists
Case #1474 โ Ability to disable auto-status change to inactive for clients without products/services
Case #1312 โ Introduce duplicate bundle function
Case #1582 โ Introduce ability to restrict subdomains when offering free subdomains
Case #1743 โ Admin ticket notification system now works independently from department assignments
Case #1418 โ Introduced credit card info full clear function for admin usage for local & remote storage
Case #1449 โ Introduced API function AffiliateActivate
Case #1449 โ Introduced API function GetAffiliates
Case #1449 โ Introduced API function GetCancelledPackages
Case #1449 โ Updated API function AddOrder
Case #1449 โ Updated API function AddProduct
Case #1449 โ Updated API function GetInvoices
Case #1449 โ Updated API function UpdateClient
Case #1449 โ Updated API function UpdateProject
Case #1465 โ Introduce autolinking of urls to client & ticket notes
Case #1418 โ Introduce permission check to admin invoicing within ticket
Case #1752 โ Introduce new global validation logic & implemented throughout
Case #1398 โ Added AfterFraudCheck Action Hook
Module Updates
Case #1742 โ [ VentraIP ] โ Commit updates to latest version v1.5.2
Case #1212 โ [ 2CheckOut ] โ Updated transaction callbacks logging for refund processing on reoccurring payments
Case #1669 โ [ WeNIC ] โ Add handling for .asia & .tw specific field requirements
Case #1418 โ [ BizCN ] โ UTF-Bytecode fix for handling IDN domains
Case #1602 โ [ cPanel ] โ Not retaining dedicated IP on package change
Case #1418 โ [ Amazon Simple Pay ] โ Updated to allow proper refund processing
Case #1698 โ [ IPPay ] โ Update for new transaction processing URLs
Case #1686 โ [ FreeRadius ] โ Introduced Free Radius module
Case #1687 โ [ Ahsay Backups ] โ Introduced Ahsay Backups Module
Case #1694 โ [ Helm ] โ Updated class to resolve login button in clientarea
Case #1692 โ [ CCAvenue ] โ Allow display notice at Invoice Payment to client informing a manual review is required
Case #1470 โ [ VPS.Net ] โ Added missing images folder
Case #1594 โ [ SecureTrading ] โ Update to latest version
Case #1593 โ [ ResellerClub SSL Module ] โ Strip URL prefixing from domains when generating approval emails
Case #1600 โ [ Stargate ] โ Update domain sync functionality
Case #1600 โ [ NetEarthOne ] โ Update domain sync functionality
Case #1418 โ [ Plesk ] โ Packet version loaded from configuration file
Case #1690 โ [ ResellerClub ] โ Improve handling for >64 Character Addressโ
Case #1460 โ [ ResellerClub] โ Transfer function not defining the full state value
Bug Fixes
Case #441 โ License check code to now show branding for branding free live chat
Case #1396 โ Numerous WHOIS Server definition updates
Case #1418 โ Ticket Masks containing โ%iโ failed to generate
Case #1746 โ Refactor class design for future expansion & optimization
Case #1623 โ Improve duplicate TLD Routine to automatically at โ.โ prefix if missing
Case #1722 โ Update cart.php to not redirect when confdomains exists
Case #1418 โ Updated API variables to allow separation of send to registrar and autosetup
Case #1418 โ Update API Variables in AcceptOrder function
Case #1418 โ Clean up second renewals SQL Query
Case #1449 โ Clean up if statements in updateclientdomain
Case #1583 โ Prevent gateway from being disabled if only 1 is enabled
Case #1540 โ Split permissions for Manage Predefined Replies
Case #1577 โ Update CVV Fields
Case #1433 โ Addclientnote & Addticketnote API function not parsing carriage returns
Case #1449 โ Improve autorecalc section to include promotion codes that were passed in update
Case #1590 โ Added autoauthkey to configuration.php when key is updated
Case #1584 โ Mail in Payment option now redirects straight to invoice
Case #1553 โ Printable Version within Quotes unavailable by default
Case #1684 โ Update function to use existing next due date for incrementing nextinvoicedate
Case #1388 โ Check if admin has โAdd โTransactionโ permission when applying payments to invoices
Case #1418 โ Remove hard coded text in KB Search box default template
Case #1542 โ Upgrade/Downgrade section in client area shows free domain is offered โ misleading
Case #1585 โ Improve split ticket functions
Case #1474 โ Expand logic around auto-status change for clients
Case #1418 โ Introduce error message to all error checks as not all contain โresponse_textโ
Case #1431 โ Add logging of changed fields to activity log
Case #1677 โ Prevent admins deleting themselves
Case #1743 โ Updated Smarty class to latest 2.x release
Case #1743 โ Updated PHPMailer class to latest stable v5.2.2
Case #1418 โ Added custom fields array to clients detail
Case #1555 โ Complete refactor of language handling system
Case #1418 โ Implement nl2br formatting to admin client notes
Case #1577 โ Add CVV input field for Admin & Client side cart update forms
Case #1673 โ Prevent unknown editing of client side card data
Case #1431 โ Add logging on ticket boxes with status as Enabled or Disabled
Case #1418 โ Correct language for billable items invoice confirmation dialog
Case #1431 โ Improve logic for logging change fields
Case #1609 โ Bulk domain check may result in unformatted return
Case #1418 โ Password input field type to hide input in Web 2.0 Cart login template
Case #1549 โ Added โemptyโ to configurableoptions variable in recalcRecurringProductProce function
Case #1433 โ Carriage returns not parsed by client notes
Case #1538 โ Unable to filter tickets in client area
Case #1532 โ Server revenue forecast includes inactive servers
Case #1086 โ Ajax cart domain addons not refreshing cart summary
Case #1418 โ Complete button changed to please wait upon click
Case #1294 โ Message preview stopping at โ&โ character
Case #1485 โ Re-introduce TinyMCE rich text editor for admin area text fields that support HTML input
Case #1418 โ Introduce delete transaction permission check to admin invoice transaction deletion
Case #1569 โ Billing Cycle & Configurable options not updating price summary
Case #1535 โ Ensure that multiple partial refunds donโt exceed the original transaction fee amount
Case #1644 โ Allow admin to add payments to a paid invoice
Case #1645 โ Next Due Date not being emptied when changing from reoccurring to free in products & product addons
Case #1626 โ Refactor Whitelisting IP logic to remove unnecessary queries and improve logic
Case #1418 โ Reintroduce SMTP Debug flag for configuration.php
Case #1672 โ Reintroduce portral template files
Case #1626 โ Whitelisted IP address shown never be banned
Case #1754 โ Revert naming of AddtoLog function to addTicketLog
Case #1701 โ Remove duplicate pwstrength JS code from clientregister on default theme
Case #1418 โ Correct language whitespace output
Case #1675 โ Adjust sales tax liability report to tax + credit = total
Case #1675 โ Switch Tax & Credits column for more human friendly readability
Case #1565 โ Fix SQL errors caused by no department set for admin
Case #1418 โ Update variable โtypeโ to โlisttypeโ to avoid conflict
Case #1702 โ Update income by products report to work with currency selector
Case #1702 โ Resolve units sold column always empty
Case #1730 โ Remove duplicate Cleint ID field in admin lang file
Case #1412 โ Domain Renewals Grace Period & Minimums loading from config file
Case #1417 โ Correct Admin invoice number search
Case #1479 โ Client area Tasks โDue Inโ corrected
Case #1419 โ Update MySQL list tables function
Case #1418 โ Mass domain management auto renew improvements
Case #1191 โ Backups failing due to database name unavailable
Case #1418 โ Message preview button returns no results with rich text editor
Case #1418 โ Update dbconnect.php for API Access & Disable Vars
Case #1418 โ Fix new admin session vars for mobile login
Case #1425 โ Lanugage change in client area not being retained between page loads
Case #1418 โ Currency ID has been updated to show currency code
Case #1430 โ Resolve CSV download error when reports contain a graph or geo-chart
Case #1429 โ Resolve load problems with graph or geo-chart data when viewing print report version
Case #1435 โ Update important field to sticky on orders detail view for client notes
Case #1418 โ Knowledge base searches fail with syntax error when no term is specified
Case #1418 โ Cron user status switch generating MySQL error due to incorrect function call
Case #1448 โ Resolved support ticket edits when attachments are present
Case #1454 โ Introduced human readable error message for graphs when JSON is not available in PHP
Case #1301 โ Resolved Slider Order Form only accepting lowercase domain input
Case #1439 โ If due date is 0000-00-00 thens suppress auto suspend/terminate
Case #1428 โ Resolved Support Ticket overview widget JS error when a department contains a special character
Case #1747 โ Custom order status removing provisioning/welcome email option from order details page
Case #1682 โ Annual income reporting adding previous years fees & amount out instead of sutracting
Case #1740 โ changeOrderStatus() now savesPending Transfer correctly for Domains with type=Transfer
Case #1446 โ Support Ticket Ratings Review Report update to apply message formatting to ticket replies
Case #1727 โ Paid invoice process attempting to combat multiple invoices being assigned the same number
Case #1418 โ Add payment button in admin invoice page not greyed out when disabled
Case #1723 โ Require admin addon modules to validate module name
Case #1418 โ Cart should pre-select stored country when passed via session
Case #1707 โ Implement smarty variables to allow template mods to query more info on affiliate referrals
Case #1418 โ Better handling of invalid affiliate ID passed into manage affiliates page
Case #1418 โ Reimplement TinyMCE Rich Text Editor for network issues
Case #1746 โ Ensure gateway module callback files reference correct include path
Case #1439 โ Reverted unneeded query change for optimization
Case #1683 โ Resolved TinyMCE converting URLs
Case #1558 โ Password password checking not working on comparison template
Case #1552 โ Updated POP Import field names to be more conscience
Case #1666 โ .DE Domains registration showing text field rather than yes|no ticket box
Case #1638 โ Client notes area does expand correctly caused by additional divs end
Case #1423 โ Admin support ticket widget not handling zero ticket count present on new installs correctly
Case #1566 โ Update billable items logic to automatically recalculate amount when qty/hrs is updated
Case #1739 โ Require ID presence in URL when downloading PDF
Case #1449 โ addcancelrequest to return error if a cancellation requests exists
Case #1444 โ Add Local API Support for custom provisioning module functions
Case #441 โ Prevent conflicts for branding removal
Case #1731 โ Show enabled WHMCS addons on licensing info page
Case #1418 โ Added VentraIP Registrar Logo
Case #1726 โ Update if statement to not show predefines on root category
Case #1418 โ Add missing global declaration for ICONV disable
Case #1418 โ Add backwards compatable getValidLanguages function
Case #1418 โ Update admin knowledge base config page to use getlanguages function
Case #1697 โ Ensure email addressโ cannot be empty
Case #1697 โ Ensure sub-accounts do not retain the same address
Case #1474 โ Invalid select query in cron during client status change
Case #1646 โ Validate invoice ID exists when passing into PDF download link
Case #1657 โ _GetEmailForwarding error message is treaded as forwarding record when listing forwarders
Case #1641 โ Convert config servers page to use language file variables
Case #1601 โ Ticket list sorting by department ID instead of name
Case #1420 โ Remove hardcoded text in template files
Case #1463 โ New customers report export generates invalid data
Case #1621 โ Typo in English language file
Case #1469 โ Curacao missing from countries list
Case #1639 โ Admin ticket log should have URLs converted to links
Case #1575 โ Modifying Quotes does not log to the Activity Log
Case #1653 โ _GetDNS error message is treated as DNS Record when listing dnsrecords
Case #1650 โ View quotes failing ifTOS accept not enabled
Case #1485 โ TinyMCE Editor not loading for announcements
Case #1643 โ Enkompass using archaic API for IPs
Case #1642 โ Affiliate payments on renewal generated regardless if one time option is select
Case #1557 โ Corrected typo โoccuredโ throughout.
Case #1591 โ Logging in as client resets admin session token
Case #1589 โ CSS scaling issues in 5.1 v4 template
Case #1588 โ Associated invoices displaying unrelated invoices when an associated ticket is not found in project view
Case #1581 โ Deleting invoice doesnโt pass through vars
Case #1415 โ Affiliates template extra TD Colspan
Case #1432 โ API Cancelation request calling undefined function if canceled service is on joint invoice
Case #1597 โ Fees returned from gateway modules not handled correctly
Case #1598 โ Prevent addons from doubling invoices when invoice selected items is used in the client summary
Case #1418 โ Prevent warning error from genTicketMask when creating new tickets
Case #1418 โ Resolve failure to locate KnowledgeBase suggestions when no existing k articles are passed
Case #1418 โ Updated IP not being set in core class
Case #1418 โ Improve session handling for cart.tpl override
Case #1418 โ Improve Product config listing page to reset product group order values for consistency
Case #1563 โ Update usage stats in ServerUsageUpdate function
Case #1755 โ Improve EU Transfer process to always use the client account details
Case #1676 โ [Security] Google Checkout update
Case #1631 โ [Security] Improve logic of widgets
Case #1731 โ [Security] Improve logic of license info page
Case #1759 โ [Security] Improve logic of carts
Case #1631 โ [Security] Improve logic of widgets | https://docs.whmcs.com/Changelog:WHMCS_V5.2 | 2018-05-20T15:29:26 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.whmcs.com |
Changelog:WHMCS V7.0
Contents
- 1 Version 7.0.2
- 2 Version 7.0.1
- 3 Version 7.0.0 GA
- 4 Version 7.0.0 RC 1
- 5 Version 7.0.0 Beta 3
- 6 Version 7.0.0 Beta 2
- 7 Version 7.0.0 Beta 1
Version 7.0.2
Version 7.0.1
Version 7.0.0 GA
Version 7.0.0 RC 1
Version 7.0.0 Beta 3
Version 7.0.0 Beta 2
Version 7.0.0 Beta 1 | https://docs.whmcs.com/Changelog:WHMCS_V7.0 | 2018-05-20T15:31:38 | CC-MAIN-2018-22 | 1526794863626.14 | [] | docs.whmcs.com |
Difference between revisions of "Developing a MVC Component/Adding backend actions"
From Joomla! Documentation
Latest revision as of 15:00, 5 June 2013)
Joomla 3.x)
-) | https://docs.joomla.org/index.php?title=J2.5_talk:Developing_a_MVC_Component/Adding_backend_actions&diff=99919&oldid=88606 | 2016-04-29T05:11:30 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.joomla.org |
Difference between revisions of "JHtmlTabs::end"
From Joomla! Documentation
Latest revision as of::end
Description
Close the current pane.
Description:JHtmlTabs::end [Edit Descripton]
public static function end ()
See also
JHtmlTabs::end source code on BitBucket
Class JHtmlTabs
Subpackage Html
- Other versions of JHtmlTabs::end
SeeAlso:JHtmlTabs::end [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JHtmlTabs::end&diff=cur&oldid=57072 | 2016-04-29T05:48:59 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.joomla.org |
5270 downloads @ 2841 KB/s5270 downloads @ 2841 KB/s
Play Kitty Pics - Free DownloadPlay Kitty Pics - Free Download
5812 downloads @ 6410 KB/s5812 downloads @ 6410 KB/s
[Verified] Play Kitty Pics[Verified] Play Kitty Pics
Sponsored Downloads
Related documents, manuals and ebooks about Play Kitty Pics
Sponsored Downloads
5270 downloads @ 2841 KB/s5270 downloads @ 2841 KB/s
Play Kitty Pics - Free DownloadPlay Kitty Pics - Free Download
5812 downloads @ 6410 KB/s5812 downloads @ 6410 KB/s
[Verified] Play Kitty Pics[Verified] Play Kitty Pics
Largemouth Bass This large-mouthed fish lives in many types of water habitats, but it prefers quiet warm rivers, lakes of ponds. It is seldom found in water deeper ...
mrsutahamerica.com/backe...ds/2015/03/2015-app.pdf Author: Matt Created Date: 3/23/2015 9:45:41 AM ...
Fun items that families can play together. Down ... Ready Kids Activity Book Author: Ready Kids - FEMA Subject: Kids activity book to prepare, plan and be informed ...
1 Describe Pete the Cat Pictures Blind kids benefit from picture descriptions every bit as much as sighted kids do from seeing the pictures. Don't forget to share
Peer pressure or curiosity can play a large role, and often teens don [t know what ... parents who want to talk to someone about their childs drug use and drinking.
Cat Pictures: Tokyo Cat Cafe ... Trying to decide whether to play or stay in the comfy bowl ... Petting a cat while he munches away on kitty grass
Raging Rhino Slot Machine Online ... 4 pics 1 word two pens slot machine ... how to play kitty glitter slot machine
This coloring book is designed for adults and children to work on together. ... When you play outside in the snow, what will you do to make sure you
QUIZZES and PICTURES TO COLOR. ... learn to play an instrument too. It is ... KITTY The kitty on Farmer Jasonโs farm can jump
- 01-Warning! This is an adult story based on erotic fantasies involving children, if you do not like this do then do not read any further. This kind of stories are ...
MiaBella is smart and funny and loves to play. ... our 10-year-old Persian, from a shelter. She is a very friendly old kitty with lots of personality. Excalibur
Alex is afraid they will make him dress up like a Kitty Fritter and dance or do something that will make him look stupid. 10. ... What sport does Alex play?
Relay for Life Games/Activities ... can do thatโ? Well here is your chance to play five games that are played regularly on the show
possibly play a role as vectors of hepatitis B virus. Biology Bedbugs have a ๏ฌat, oval-shaped body with no wings, and are 4โ7mm long. Their
at Kitty Hawk, North Carolina An ... TIMELINE OF BABE RUTHโS LIFE George Herman Ruth, Jr., is born on February 6 ... The Yankees play in their first World Series
Choose to play by the classic rules for buying, renting and selling properties or use the Speed Die to get into the action faster. If you've never played the classic
gene protein in the virus is believed to play a role in causing cancer and tumors, possibly through its interac-tion with the P53 tumor-suppressor gene.
PUPPY BOWL X and the BISSELLยฎ KITTY HALF-TIME SHOW are produced by Discovery Studios for Animal Planet. For Discovery Studios, John Tomlin is the executive
I just rede๏ฌned the kitty silhouette AND, ... You'll need to play with the ... pics below, it's posible to get ...
4 pics one word 6 letters slot machine ... juegos de casino gratis kitty glitter play thai paradise slots for free slot machine more chilli slot machine for play
RS Lilly Starlight Play Zana Express Lil Moneypenny 5656612 Shorty Lena Short Oak Oakadot ... $9,206; Kitty Cats Mirage $9,085; Haidas Royal Cat $2,804 GRADUATE.
180 reads 4 Pics 1 Word Answer List ... 145 reads Knight Amp Play 1 Kitty French 224 reads Section 8 1 Chromosomes Answers 477 reads Asus Nexus 7 User Manual
Sukumar Ray - poems - Publication Date: 2012 Publisher: ... short story collection "Pagla Dashu" and play ... From dawn till dusk Aunt Kitty sings a string of motley airs
The โTeam Nutrition Fruits and Vegetables Lessons for Preschool Childrenโ was ... He went out to play with his ... this is a crazy dream for a kitty cat!
Continue with a document on another device 91 Calendar 92 Make a calculation 93 Use your work phone 93 ... favorite music, and play games. Watch and listen
539 reads Guilty Pleasures Kindle Edition Kitty Thomas ... If you are looking for Kids Play Swing Sets, our library is free for you. We provide copy of Kids
Say: โDid you hear the kitty? Thank You, God, for kittens. ... Play a cassette tape or CD of quiet background music as infants and toddlers rest or sleep.
Star Struck Kitty (WR This Cats Smart) ... Reys Play (Playdox) ... Whizkey N Gunz Hip No. Consigned by Rhodes River Ranch Hip No. Title: 7300 LEGACY CATALOG 2015.indd
What To Do With Mismatched & Old Socks! ... less expensive and more fun as your child can play in their ... around these and run through the house with my kitty
Kitty Contract Agreements Templates ... Giovanni Nude Pics in digital format, ... [PDF] Learn How To Play Piano Free
Custom Order Cake Form Rev 7/29/2013. Title: Custom Order Cake Form.pub Author: kbaczko Created Date: 8/29/2013 4:52:48 PM ...
Sample Character Traits able active adventurous affectionate afraid alert ambitious angry annoyed anxious apologetic arrogant attentive average
LITTLE HOUSE IN THE BIG WOODS by Laura Ingalls Wilder ... Pa liked to play a game with Laura and Mary. What did Laura call the game? a. mad bull b. howling cat
Like the ancient form of shadow play to entertain and tell a story, Shadow Inserts enhance the Linen Shade and Frosted Shade Warmers with shadows and light.
Photo Analysis Worksheet B. Use the chart below to list people, objects, and activities in the photograph. A. Step 2. Inference
How to recite the Holy Rosary 1. SAY THESE PRAYERSโฆ IN THE NAME of the Father, and of the Son, and of the Holy Spirit. Amen. (As you say this, with your right hand ...
Housebreaking a Miniature Pinscher Option 1: ... We tried kitty litter under it but that seemed to encourage them to dig. ... after play, after any kind of ...
WORD LIST: ANIMALS AND THEIR SOUNDS Apes gibber Asses bray Bears growl Bees hum, buzz, murmur Beetles drone Birds sing Bitterns boom Blackbirds whistle Bulls bellow
Kitty Colliflower 16 5 6 5 4 Medal Play Caledonia Peggy Doody 1 1 Milly Eldredge 1 1 ... Championship Winner and Runnerup are automatic pics for the Captains Cup Team.
ยฉ2014 Microsoft Page 1 Meet Surface Pro 3 Surface Pro 3 is the tablet that can replace your laptop. Connect to a broad variety of accessories, printers, and networks ...
Hello kitty! โ by Y. Moriya. Fig. 8. โ Hello chibi-marukochan! โ by T. Moriue. Pictures by Conformal Mapping 9 Fig. 9. โ Finger play ...
play on the 9th. This meant that I ... few pics with you (don't worry - not too many!). ... Mackenzie and his Indian wife, Kitty, whereas my descendency
4 A Teacherโs Guide to the Signet Classics Edition of Jane Austenโs Pride and Prejudice Mr. Collinsโthe Bennet girlsโ overbearing cousin, a priggish clergyman ...
Ownerโs Manual A-Type, Cruiser & ProModelTM Scooters Version: 1_01_10 NOTE: Manual illustrations are for demonstration purposes only. Illustrations
Club Pics ... format best ball play. ... especially Kitty Van ortel for putting two cars in the show and for trailering obโs 1982 Vette.
Jeep โขโข Please read this manual Please read this manual ... adjustable seat belts are designed to be a play feature and do not function as protective safety ...
destructive act can be anything but detrimental ... purring โkitty,โ curled upon the cushion, ... served a variety of โplay behaviorsโ in free-
Aug 13, 2012 ยท Papers on Play-Making IV ... The Disgrace of Kitty Grey 1st Edition Viewed 469 times Last updated 04 May 2013 4 Pics 1 Word Answer List Viewed 240 times ...
A very useful and powerful tool is the color wheel's Hue, Saturation and Bright mode. It's really helpful to use the color picker to select a tone and only change one ... | https://docs.askiven.com/play-kitty-pics.html | 2016-04-29T04:01:02 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.askiven.com |
Configuring Eclipse IDE for PHP development/Linux
From Joomla! Documentation
< Configuring Eclipse IDE for PHP views you want and remove the others you don't.
- Arrange the views the way you like and make you feel comfortable.
- Go to "Tool bar --> Window --> Save perspective as" and set a name for your custom perspective, ex: "pimp-My-IDE" and save it. You can also overwrite an existing view name.
So far you should be able to play with Eclipse IDE and understand its interface philosophy. If you want to see a video demonstration about Eclipse IDE to get a preview and taste some of its powers, check this out Webinar: Using Eclipse for Joomla! Development.
Understanding the folder structure.
Configuration
Installing More Extensions
For your Eclipse IDE you will need to install more extensions for PHP development and other tools to help you in your project development journey. Follow these steps and instructions:
- In Eclipse, go to: "Toolbar --> help --> install new software".
- You are now on the "Install" window. Click the drop-down list "Work with" and set it to "--All Available Sites--".
- In the list, expand the nodes and find the following packages to install:
- Go to "Web, XML, and Java EE Development" and select
- PHP Development Tools (PDT) SDK Feature
- JavaScript Development Tools
- Eclipse Web Developer Tools
- Go to "General Purpose Tools" and select
- Remote System Explorer End-User Runtime
- Press "Next >" and follow the installation wizard. Restart Eclipse if needed.
-. Restart Eclipse if needed.
Those are all the extensions to install for now. They should be enough to do local and remote PHP development. Nonetheless you can experiment and try remove empty lines, delete unnecessary trailing spaces, format the code and more. To activate these features:
- Go to "Toolbar --> Window --> Preferences --> PHP --> Editor --> Save Actions".
- Find and enable "Remove trailing whitespace" and also select "all lines".
- Go to "Toolbar --> Window --> Preferences --> JavaScript --> Editor --> Save Actions".
- Find and enable "Perform the selected actions on save".
- Find and enable "Additional actions".
- Press the button "Configure...".
- Select the tab "Code Organizing".
- Find and enable "Remove trailing whitespace".
- Find and select "All lines".
- Press "OK" to continue.
- When you are done," | https://docs.joomla.org/index.php?title=Configuring_Eclipse_IDE_for_PHP_development/Linux&oldid=76577 | 2016-04-29T05:05:45 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.joomla.org |
Send Docs Feedback
Update Company App Key with API Product
updateCompanyKeyWithAPIProduct
Update Company App Key with API Product
Updates an existing company app key to add additional API products or attributes. Note that only a single API product can be resolved per app key at runtime. API products are resolved by name, in alphabetical order. The first API product found in the list will be returned.
Resource URL /organizations/{org_name}/companies/{company_name}/apps/{app_name}/keys/{consumer_key}?) | http://docs.apigee.com/management/apis/post/organizations/%7Borg_name%7D/companies/%7Bcompany_name%7D/apps/%7Bapp_name%7D/keys/%7Bconsumer_key%7D?rate=oppLtjZpY88NCrZcHgHmtMGfh-A2z2UhJFgTjRxTwc8 | 2016-04-29T04:05:15 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.apigee.com |
One or more of your procedures contain so many string constant expressions that they exceed the compiler's internal storage limit. This can occur in code that is automatically generated. To fix this, you can shorten your procedures or declare contant identifiers instead of using so many literals in the code. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devcommon/cm_too_many_consts_xml.html | 2016-04-29T04:00:48 | CC-MAIN-2016-18 | 1461860110372.12 | [] | docs.embarcadero.com |
State-validation Protocol-based Rewards
Subject to change.
Validator-clients have two functional roles in the Solana network:
- Validate (vote) the current global state of that PoH.
- Be elected as โleaderโ on a stake-weighted round-robin schedule during which time they are responsible for collecting outstanding transactions and incorporating them into the PoH, thus updating the global state of the network and providing chain continuity.
Validator-client rewards for these services are to be distributed at the end of each Solana epoch. As previously discussed, compensation for validator-clients is provided via a protocol-based annual inflation rate dispersed in proportion to the stake-weight of each validator (see below) along with leader-claimed transaction fees available during each leader rotation. I.e. during the time a given validator-client is elected as leader, it has the opportunity to keep a portion of each transaction fee, less a protocol-specified amount that is destroyed (see Validation-client State Transaction Fees).
The effective protocol-based annual interest rate (%) per epoch received by validation-clients is to be a function of:
- the current global inflation rate, derived from the pre-determined dis-inflationary issuance schedule (see Validation-client Economics)
- the fraction of staked SOLs out of the current total circulating supply,
- the up-time/participation [% of available slots that validator had opportunity to vote on] of a given validator over the previous epoch.
The first factor is a function of protocol parameters only (i.e. independent of validator behavior in a given epoch) and results in a global validation reward schedule designed to incentivize early participation, provide clear monetary stability and provide optimal security in the network.
At any given point in time, a specific validator's interest rate can be determined based on the proportion of circulating supply that is staked by the network and the validator's uptime/activity in the previous epoch. For example, consider a hypothetical instance of the network with an initial circulating token supply of 250MM tokens with an additional 250MM vesting over 3 years. Additionally an inflation rate is specified at network launch of 7.5%, and a disinflationary schedule of 20% decrease in inflation rate per year (the actual rates to be implemented are to be worked out during the testnet experimentation phase of mainnet launch). With these broad assumptions, the 10-year inflation rate (adjusted daily for this example) is shown in Figure 1, while the total circulating token supply is illustrated in Figure 2. Neglected in this toy-model is the inflation suppression due to the portion of each transaction fee that is to be destroyed.
Figure 1: In this example schedule, the annual inflation rate [%] reduces at around 20% per year, until it reaches the long-term, fixed, 1.5% rate.
Figure 2: The total token supply over a 10-year period, based on an initial 250MM tokens with the disinflationary inflation schedule as shown in Figure 1. Over time, the interest rate, at a fixed network staked percentage, will reduce concordant with network inflation. Validation-client interest rates are designed to be higher in the early days of the network to incentivize participation and jumpstart the network economy. As previously mentioned, the inflation rate is expected to stabilize near 1-2% which also results in a fixed, long-term, interest rate to be provided to validator-clients. This value does not represent the total interest available to validator-clients as transaction fees for state-validation are not accounted for here. Given these example parameters, annualized validator-specific interest rates can be determined based on the global fraction of tokens bonded as stake, as well as their uptime/activity in the previous epoch. For the purpose of this example, we assume 100% uptime for all validators and a split in interest-based rewards between validators nodes of 80%/20%. Additionally, the fraction of staked circulating supply is assumed to be constant. Based on these assumptions, an annualized validation-client interest rate schedule as a function of % circulating token supply that is staked is shown in Figure 3.
Figure 3: Shown here are example validator interest rates over time, neglecting transaction fees, segmented by fraction of total circulating supply bonded as stake.
This epoch-specific protocol-defined interest rate sets an upper limit of protocol-generated annual interest rate (not absolute total interest rate) possible to be delivered to any validator-client per epoch. The distributed interest rate per epoch is then discounted from this value based on the participation of the validator-client during the previous epoch. | https://docs.solana.com/implemented-proposals/ed_overview/ed_validation_client_economics/ed_vce_state_validation_protocol_based_rewards | 2020-10-23T21:07:20 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['/img/p_ex_schedule.png', None], dtype=object)
array(['/img/p_ex_supply.png', None], dtype=object)
array(['/img/p_ex_interest.png', None], dtype=object)] | docs.solana.com |
Developers are the target users of Kubernetes. Once a Tanzu Kubernetes cluster is provisioned, you can grant developer access using vCenter Single Sign-On authentication.
Authentication for Developers
A cluster administrator can grant cluster access to other users, such as developers. Developers can deploy pods to clusters directly using their user accounts, or indirectly using service accounts. For more information, see Using Pod Security Policies with Tanzu Kubernetes Clusters.
- For user account authentication, Tanzu Kubernetes clusters support vCenter Single Sign-On users and groups. The user or group can be local to the vCenter Server, or synchronized from a supported directory server.
- For service account authentication, you can use service tokens. For more information, see the Kubernetes documentation.
Adding Developer Users to a Cluster
To grant cluster access to developers:
- Define a Role or ClusterRole for the user or group and apply it to the cluster. For more information, see the Kubernetes documentation.
- Create a RoleBinding or ClusterRoleBinding for the user or group and apply it to the cluster. See the following example.
Example RoleBinding
To grant access to a vCenter Single Sign-On user or group, the subject in the RoleBinding must contain either of the following values for the
nameparameter.
The following example RoleBinding binds the vCenter Single Sign-On local user named Joe to the default ClusterRole named
edit. This role permits read/write access to most objects in a namespace, in this case the
default namespace.
kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rolebinding-cluster-user-joe namespace: default roleRef: kind: ClusterRole name: edit #Default ClusterRole apiGroup: rbac.authorization.k8s.io subjects: - kind: User name: sso:[email protected] #sso:<username>@<domain> apiGroup: rbac.authorization.k8s.io | https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-6DE4016E-D51C-4E9B-9F8B-F6577A18F296.html | 2020-10-23T21:41:14 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
You are viewing documentation for Kubernetes version: v1.18
Kubernetes v1.18 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Field Selectors
Field selectors let you select Kubernetes resources based on the value of one or more resource fields. Here are some examples of field selector queries:
metadata.name=my-service
metadata.namespace!=default
status.phase=Pending
This
kubectl command selects all Pods for which the value of the
status.phase field is
Running:
kubectl get pods --field-selector status.phase=Running
Note: Field selectors are essentially resource filters. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the
kubectlqueries
kubectl get podsand
kubectl get pods --field-selector ""equivalent.
Supported fields"
Supported operators
You can use the
=,
==, and
!= operators with field selectors (
= and
== mean the same thing). This
kubectl command, for example, selects all Kubernetes Services that aren't in the
default namespace:
kubectl get services --all-namespaces --field-selector metadata.namespace!=default
Chained selectors
Multiple resource types
You use field selectors across multiple resource types. This
kubectl command selects all Statefulsets and Services that are not in the
default namespace:
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default | https://v1-18.docs.kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ | 2020-10-23T22:29:40 | CC-MAIN-2020-45 | 1603107865665.7 | [] | v1-18.docs.kubernetes.io |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.