You can change the App Visibility collector settings, such as the host name of the computer or the port numbers, in the collector.properties file located in the following directory:
Note
This topic presents the following procedures:
By default, the App Visibility collector uses the operating system’s configured host name, called the Listening Address during installation, to communicate between components.
Override the default host name configuration for any of the following situations:
The property value directs end-user browser data through the load balancing or reverse proxy server and must be accessible from all App Visibility agents. You must configure the load balancing or reverse proxy server to forward requests to the assigned collectors.
Perform the following procedure to override the default host name configuration:
Modify the App Visibility collector's listening port.
The default port is 8200.
tomcat.listening.port=8200
Set the App Visibility portal connection:
portal.connection.ip=localhost
portal.connection.port=8100
Note
When entering IPv6 addresses, you must enter the IP address in square brackets. For example:
portal.connection.ip=[::1]
Use this procedure to modify the database location, data retention size, and maximum database size. To change the database password, see Changing the App Visibility database password.
If you change the location of the database, or if you need to change the database port, modify the database location:
database.url=jdbc\:postgresql\://localhost\:8800/avdb?currentSchema\=collector
Modify the period of time to retain data in the database for App Visibility agents (retention.time) and for TEA Agents (retention.time.synthetic_transactions). The retention time is in days.
retention.time=35
retention.time.synthetic_transactions=35
Disabled (defect TOM-31342). The db.max.size value is ignored.
Modify the database size. The database size is in MB.
db.max.size=102400
Restart the App Visibility collector service.
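All of the changes above are plain key=value edits to collector.properties. As an illustration only, here is a small, hypothetical Python helper for making such an edit; it is not a BMC-supplied tool, and it does not handle commented-out duplicates or line continuations:

```python
# Demo file mirroring a fragment of collector.properties (illustrative values).
with open("collector.properties", "w") as f:
    f.write("tomcat.listening.port=8200\nretention.time=30\n")

def set_property(path, key, value):
    """Rewrite key=value in a Java-style properties file, appending the key if absent."""
    lines, found = [], False
    with open(path) as f:
        for line in f:
            # Compare only the part before the first '=' so values containing '=' are safe.
            if line.split("=", 1)[0].strip() == key:
                lines.append(f"{key}={value}\n")
                found = True
            else:
                lines.append(line)
    if not found:
        lines.append(f"{key}={value}\n")
    with open(path, "w") as f:
        f.writelines(lines)

set_property("collector.properties", "retention.time", "35")
print(open("collector.properties").read())
```

After running this, the file keeps its other properties untouched and retention.time is updated in place. Remember that, as described above, the collector service must still be restarted for the change to take effect.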
If the collector uses a proxy to connect to the App Visibility portal, configure the proxy details. If you do not need to configure a proxy, leave the values empty.
Set the proxy details for HTTP:
http.proxy.host=
http.proxy.port=
Set the proxy details for HTTPS:
https.proxy.host=
https.proxy.port=
Set the proxy details for SOCKS:
socks.proxy.host=
socks.proxy.port=
Note
If you configure the proxy settings, you must make additional configuration changes as described in Changing App Visibility portal settings.
Changing App Visibility portal settings
Changing App Visibility proxy settings
Starting and stopping the App Visibility server services
Changing security certificates in App Visibility components
The Moviri ETL modules are provided by the BMC partner, Moviri.
To obtain the latest ETL module packages, see the Download the Feature Pack section in 10.7.01: Feature Pack 1.
Note
Verify that the version of ETL modules is compatible with the version of TrueSight Capacity Optimization that is installed on your system.
Out-of-the-box ETL modules
Sentry ETL modules
License entitlements
For more information, see the following topics: | https://docs.bmc.com/docs/display/btco107/Moviri+ETL+modules | 2020-11-23T20:21:58 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.bmc.com |
Defines the overall approach to Open Services for Lifecycle Collaboration (OSLC) based specifications and capabilities that extend and complement W3C Linked Data Platform [LDP]. OSLC Core 3.0 constitutes the approach outlined in this document and capabilities referenced in other documents.
OSLC Core Version 3.0. Part 1: Overview. Edited by Jim Amsden. 31 May 2018. OASIS Committee Specification Draft 03 / Public Review Draft 03...
Information Technology (IT) enterprises are constantly addressing demands to do more with less. To meet this demand, they need more efficient development processes and supporting tools. This has resulted in demand for better support of integrated system and software processes. Enterprises want solutions (such as software or hardware development tools) from different vendors, open source projects and their own proprietary components to work together. This level of integration, however, can become quite challenging and unmanageable. In order to enable integration between a heterogeneous set of tools and components from various sources, there is a need for a sufficient supporting architecture that is loosely coupled, minimal, and standardized. OSLC is based on World Wide Web and Linked Data principles, such as those defined in the W3C Linked Data Platform [LDP], to create a cohesive set of specifications that can enable products, services, and other distributed network resources to interoperate successfully [LDP].
The OSLC Core specifications provide additional capabilities that expand on the W3C LDP capabilities, as needed, to enable key integration scenarios. These capabilities define the essential and common technical elements of OSLC domain specifications and offer guidance on common concerns for creating, updating, retrieving, and linking to lifecycle resources based on W3C [LDP]. These specifications have emerged from the best practices and other work of other OSLC Member Section (MS)-affiliated Technical Committees (TCs), sometimes referred to as OSLC domain TCs. OSLC domain TCs focus on a certain domain or topic. The OSLC Core TC develops technical specifications, creates best practices documents and formulates design principles that can be leveraged by other OSLC MS-affiliated TCs to enable them to focus on domain-specific concerns.
As seen in Fig. 2 OSLC Core 3.0 Overview, there are a number of capabilities developed in different standards organizations, TCs and working groups. The arrows represent either dependencies or extensions to some specifications or capabilities. OSLC MS-affiliated TC developed specifications may depend on OSLC Core 3.0 specifications as scenarios motivate. However, a leading goal is to minimize and eliminate unnecessary dependencies to simplify adoption, which may result in no dependency on OSLC Core 3.0 specifications for some OSLC domains.
This work is an evolution from the OSLC Core 2.0 [OSLCCore2] efforts, taking the experience gained from that effort along with the common foundation on W3C LDP, to produce an updated set of specifications that are simpler, built on layered capabilities, and easier to adopt.
This specification uses and extends the terminology and capabilities of W3C Linked Data Platform [LDP], W3C's Architecture of the World Wide Web [WEBARCH] and Hypertext Transfer Protocol [HTTP11].
Some industry terms that are often referred to (not exhaustive):
Previous revisions of OSLC-based specifications [OSLCCore2] used terminology that may no longer be relevant, accurate, or needed. Some of those deprecated terms are:
OSLC Core defines a namespace URI with the namespace prefix oslc.

The primary goal of OSLC is to enable integration of federated, shared information across tools that support different, but related domains. OSLC was initially focused on development of Information Technology (IT) solutions involving processes, activities, work products and supporting tools for Application Lifecycle Management (ALM). However, OSLC capabilities could be applicable to other domains. The specific goals for OSLC Core 3.0 are to build on the existing OSLC Core 2.0 specifications to further facilitate the development and integration of domains and supporting tools that address additional integration needs. Specifically:
The following guiding principles were used to govern the evolution of OSLC and guide the development of the OSLC Core 3.0 specifications.

Scenario-driven
Every capability should be linked back to key integration scenarios that motivate its need. These are important not only for knowing that the correct specification content is being developed, but also to assist implementers in understanding the intended usage and in developing relevant test cases.

Incremental
Specifications should be developed in an incremental fashion that not only validates the technical approaches but also delivers integration value sooner.

Loose-coupling
Specifications should support a model where clients have little to no knowledge about server implementation-specific behaviors in order to support key integration scenarios. As a result, clients should be unaffected by any server application software or data model implementation changes. Similarly, client software should be able to be independently changed without changes to server software.

Minimalistic
Specification authors should strive to find not only the simplest solution that would work for a given scenario, but one that allows for easy adoption. Authors should avoid solutions that offer additional capabilities which may inhibit adoption of necessary capabilities.

Capability Based
A capability is the ability to perform actions to achieve outcomes described by scenarios through the use of specific technologies. Capabilities should be incrementally defined as independent, focused specifications and independently discoverable at runtime. Even though there may be some generally agreed upon best practices for capability publication and discovery, each capability should define how it is published and discovered. The Core OSLC capabilities are defined in this specification.

Vocabularies
Various OSLC MS-affiliated TCs, or any specification development body that is authoring specifications for specific domains of knowledge, should minimally define vocabularies and the semantics behind the various terms. Some consideration should be given to global reuse when terms are used for cross-domain queries and within other domain resource shape definitions. Domain specifications are the definition of an OSLC capability, and of how those vocabulary terms are used in LDP interactions by both the clients and servers of that capability. The specification should include defining resource shapes that describe resources based on a set of vocabulary terms, which introduces any domain-specific constraints on the vocabulary's usage.
OSLC domain vocabularies should follow the recommended best practices for managing RDF vocabularies described at [LDBestPractices].
This section is non-normative.
In support of the previously stated goals and motivation, it is desired to have a consistent and recommended architecture. The architecture needs to support scenarios requiring a protocol to access, create, update and delete resources. [LDP] is the foundation for this protocol. Resources need to relate, or link, to one another utilizing a consistent, standard and web-scale data model. Resource Description Framework (RDF) [rdf11-concepts] is the foundation for this. The ability to work with these data models over HTTP protocols is based on [LDP].
Some scenarios require the need to integrate user interface components: either within a desktop or mobile web-browser, mobile device application, or rich desktop applications. For these scenarios the technology is rapidly evolving and changing. Priority should be based on existing standards such as HTML5, with use of iframe and postMessage() [HTML5].
OSLC Core specification documents elaborate on the conformance requirements leveraging these various technologies and approaches.
As the primary goals have been outlined around lifecycle integration, some scenarios may require exploration of new (or different) approaches and technologies. As with all specification development efforts, the OSLC Core TC will advise, develop and approve such efforts through the established processes for cross-TC and organization coordination.
This section is non-normative.
The following sections and referenced documents define the capabilities for OSLC Core 3.0. These documents comprise the multi-part specification for OSLC Core 3.0. They represent common capabilities that servers MAY provide and that may be discovered and used by clients. Although OSLC Core could be useful on its own, it is intended to specify capabilities that are common across many domains. Servers will generally specify conformance with specific domain specifications, and those domain specifications will describe what parts of OSLC Core are required for conformance. This allows servers to implement the capabilities they need in a standard way without the burden of implementing capabilities that are not required. The purpose of the OSLC Core Discovery capability is to allow clients to determine what capabilities are provided by a server. Any provided capability must meet all the conformance criteria for that capability as defined in the OSLC Core 3.0 specifications.
This implies that any capability that is discoverable is essentially optional, and once discovered, the capability is provided as defined in the applicable OSLC specifications. Servers should support OSLC Discovery, but Discovery itself is also an optional capability as servers could provide other means of informing specific clients of supported OSLC capabilities that could be utilized directly. For example, a server might provide only preview dialogs on specific resources and nothing else.
Constraints on OSLC Core and Domain resources SHOULD be described using [OSLCShapes], which is included as part of the OSLC Core multi-part specifications. Servers MAY use other constraint languages such as [SHACL] to define resource constraints. Such constraints can also support tools that handle RDF data, such as form and query builders.
OSLC Domain specifications SHOULD use the following URI pattern when publishing each individual resource shape: [vocab short name]/shapes/[version]/[shape-name]

For example, for Change Management 3.0, a shape describing the base Change Request resource type might have the shape URI: [vocab short name]/shapes/[SPEC-version]

For example, for Change Management 3.0, there should be a container at: with members such as:
Authentication determines how a user of a client identifies themselves to a server to ensure the user has sufficient privileges to access resources from that server, and provides a mechanism for servers to control access to resources.
Resource Discovery defines a common approach for HTTP/LDP-based servers to be able to publish their RESTful API capabilities and how clients can discover and use them.
OSLC resource representations come in many forms and are subject to standard HTTP mechanisms for content negotiation.
OSLC domain specifications specify the representations needed for the specific scenarios that they are addressing, and should recognize that different representations are appropriate for different purposes. For example, browser oriented scenarios might be best addressed by JSON or Atom format representations.
OSLC domain specifications are also expected to follow common practices and conventions that are in concert with existing industry standards and which offer consistency across domains. All of the OSLC specifications are built upon the standard RDF data model, allowing OSLC to align with the Linked-Data Platform [LDP]. In addition, all OSLC specifications have adopted the convention to illustrate most examples using Turtle and/or JSON-LD representations and will typically require these representations to enable consistency across OSLC implementations.
Common Vocabulary Terms defines a number of commonly used RDF vocabulary terms and resources (shapes), that have broad applicability across various domains.
Resource Operations specify how clients create, read, update and delete resources managed by servers. For example, servers typically require a client updating a resource to supply a previously obtained entity tag in an If-Match header on a PUT request, so that lost updates can be detected.
By adding the key=value pair oslc.properties, specified below, to a resource URI, a client can request a representation of the resource with a subset of the original resource's values. An additional key=value pair, oslc.prefix, can be used to define the prefixes used to identify the selected properties.
The oslc.properties key=value pair selects the properties to include. For example, suppose a bug report's dcterms:creator property refers to a person resource that has properties such as foaf:givenName and foaf:familyName, and you want a representation of the bug report that includes its dcterms:title and the foaf:givenName and foaf:familyName of the person referred to by its dcterms:creator. The following URL illustrates the use of the oslc.properties query value to include those properties:

?oslc.properties=dcterms:title,dcterms:creator{foaf:givenName,foaf:familyName}

The syntax of the oslc.properties value is defined by the oslc_properties term.
In our examples of oslc.properties, property names include a URI prefix, i.e. dcterms: or foaf:. The oslc.prefix value lets you specify the URI prefixes used in property names. For example, suppose the foaf: prefix was not predefined. The following URL illustrates the use of the oslc.prefix value to define it:

?oslc.prefix=foaf=<>&oslc.properties=foaf:lastName,...

The syntax of the oslc.prefix value is defined by the oslc_prefix term.
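Taken together, the two parameters are ordinary URL query components. The following Python sketch shows one way a client might assemble such a selective-retrieval URL; the resource URI and prefix IRI are illustrative placeholders, not values defined by this specification:

```python
from urllib.parse import quote

def oslc_selective_url(resource_uri, properties, prefixes=None):
    """Build a URL asking an OSLC server for a subset of a resource's properties."""
    params = []
    if prefixes:
        # oslc.prefix takes comma-separated prefix=<IRI> definitions.
        defs = ",".join(f"{p}=<{iri}>" for p, iri in prefixes.items())
        params.append("oslc.prefix=" + quote(defs, safe="=<>,:/"))
    # oslc.properties takes comma-separated property names, with {} for nesting.
    params.append("oslc.properties=" + quote(",".join(properties), safe=",:{}"))
    return resource_uri + "?" + "&".join(params)

url = oslc_selective_url(
    "http://example.com/bugs/4242",  # hypothetical resource URI
    ["dcterms:title", "dcterms:creator{foaf:givenName,foaf:familyName}"],
    prefixes={"foaf": "http://xmlns.com/foaf/0.1/"},
)
print(url)
```

The nested braces request properties of the linked person resource, mirroring the worked example above.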
OSLC Core specifies a number of predefined PrefixDefinitions for convenience. OSLC Domain specifications may specify additional pre-defined PrefixDefinitions for their purposes. The following prefixes SHOULD be predefined:
Resource Preview specifies a technique to get a minimal HTML representation of a resource identified by a URL. Applications often use this representation to display a link with an appropriate icon, a label, or display a small or large preview when a user makes some gesture over a link.
Delegated Dialogs allow one application to embed a creation or selection UI into another using HTML iframe elements and JavaScript code. The embedded dialog notifies the parent page of events using HTML5 postMessage.
OSLC servers will often manage large amounts of potentially complex linked-data entities. Practical use of this information will require some query capability that minimally supports selection of matching elements, filtering of desired properties and ordering. OSLC Core defines a query capability that is relatively simple, can be implemented on a wide range of existing server architectures, and provides a standard, data source independent query mechanism. The purpose of this query capability is to support tool integration through a common query mechanism.
Resource Paging specifies a capability for servers to make the state of large resources available as a list of smaller subset resources (pages) whose representation is easier to produce by the server and consume by clients. Resource paging is particularly useful in handling results from the query capability or the contents of an LDP container.
Attachments describes a minimal way to manage attachments related to web resources using LDP-Containers and Non-RDF Source [LDP].
OSLC defines a Tracked Resource Set capability that allows servers to expose a set of resources in a way that enables clients to discover the exact set of resources in the set, and to track ongoing changes affecting resources in the set. This allows OSLC servers to expose a live feed of linked data in a way that permits clients to build, aggregate, and maintain live, searchable information based on that linked data.
OSLC defines a Configuration Management capability for managing versions and configurations of linked data resources from multiple domains. Using client and server applications that implement the configuration management capability, a team administrator can create configurations of versioned resources contributed from tools and data sources across the lifecycle. These contributions can be assembled into aggregate (global) configurations that are used to resolve references to artifacts in a particular and reproducible context.
Error responses returned by servers in response to requests are defined in Common Vocabulary Terms, Errors.
This section is non-normative.
OSLC is intended to provide a foundation for (lifecycle) application interoperability. A significant number of OSLC domains, and client and server implementations already exist and are in common use. Interoperability issues between applications on incompatible OSLC versions could result in negative impact to end users. One of the goals of the OSLC initiative is to mitigate or eliminate the need for lock-step version upgrades, where clients or servers target one version of a specification and break when new versions are introduced -- requiring all services to be upgraded simultaneously.
OSLC Core and domain specifications will each be versioned independently, and may specify version numbers. Existing OSLC 2.0 clients and servers use the OSLC-Core-Version header described in OSLC Core 2.0 Specification Versioning to indicate what OSLC version they expect or support. But exposing version numbers in OSLC implementations could lead to interoperability issues. Ultimately each domain will decide its compatibility needs. OSLC Core 3.0 does not introduce any changes that would break existing OSLC 2.0 clients. Because of this, there is no need for OSLC Core 3.0 to require servers or clients to utilize an OSLC-Core-Version header with a value of 3.0.
If an OSLC 2.0 client accesses an OSLC 3.0 server, the 3.0 server will always respond to the client in a manner that is compatible with 2.0. The response may include additional headers and entity request or response body information defined by OSLC Core 3.0, but this information will be simply ignored by the 2.0 clients. There will be no missing or invalid information since OSLC Core 3.0 is designed to be compatible with OSLC Core 2.0.
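A client that wants to advertise the version it expects can simply set the header on its requests. The sketch below uses Python's standard library; the endpoint URL is a placeholder, and the request is only constructed, not sent:

```python
import urllib.request

# Hypothetical OSLC resource endpoint; no network I/O is performed here.
req = urllib.request.Request(
    "http://example.com/oslc/changeRequests/42",
    headers={
        "OSLC-Core-Version": "2.0",  # the version this client expects
        "Accept": "text/turtle",     # ask for an RDF representation
    },
)
# urllib normalizes header names when storing them:
print(req.get_header("Oslc-core-version"))  # 2.0
```

Per the compatibility rules above, a 3.0 server receiving such a request responds in a 2.0-compatible manner, so the client can ignore any additional 3.0 headers or body content.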
For OSLC clients that access 2.0 servers:
OSLC Core 3.0 does not address compatibility with versions of OSLC prior to [OSLCCore2]. Servers wishing to support compatibility with versions prior to 2.0 should follow OSLC Core 2.0 Specification Versioning.
This section is non-normative.
The following individuals have participated in the creation of this specification and are gratefully acknowledged:
Participants:
James Amsden, IBM (Chair)
Nick Crossley, IBM
Jad El-khoury, KTH Royal Institute of Technology
Ian Green, IBM
David Honey, IBM
Jean-Luc Johnson, Airbus Group SAS
Harish Krishnaswamy, Software AG, Inc.
Arnaud LeHors, IBM
Sam Padget, IBM
Martin Pain, IBM
Arthur Ryman, IBM
Martin Sarabura, PTC (Chair)
Steve Speicher, IBM
This section is non-normative. | https://docs.oasis-open.org/oslc-core/oslc-core/v3.0/oslc-core-v3.0-part1-overview.html | 2020-11-23T20:14:18 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.oasis-open.org |
# Create a workflow

<a name="top"></a>

You can edit [workflows](the-pipeline-editor) hosted on the CGC in the **Workflow Editor**. You can also use the **Workflow Editor** to build a workflow from scratch.

* [Create a workflow](#section-create-a-workflow)
* [Insert tools](#section-insert-tools)
* [Connect tools](#section-connect-tools)
* [Add input and output nodes to workflows](#section-add-input-and-output-nodes-to-workflows)
* [Relabel tools](#section-relabel-tools)

## Create a workflow

Workflows are created within projects. So, to create a workflow, first [navigate to the dashboard for the project](view-a-project) that you want to work in.

1. Go to the **Apps** tab, and click **+ Add app**.
2. Click **Create New App** and then click **Create a workflow** in the pop-up.
3. Name your workflow, and click **Create**.

Once you have named your new workflow, you will be taken to the Workflow Editor.

<div align="right"><a href="#top">top</a></div>

## Insert apps

Use the right hand panel on the **Workflow Editor** to find apps, which will be the nodes in your workflow. You have the following options:

* **Search apps** - enter the desired keyword(s) to search for an app.
* **Category** - filter apps based on the category (type of analysis) they belong to (e.g. "Variant-Calling").
* **Toolkit** - filter apps based on their toolkit (e.g. "SAMTools"). Click the app name to open the app page with further information.

Apps are divided into:

* **My projects** - apps that you have described using the Tool Editor (for tools) or Workflow Editor (for workflows).
* **Public apps** - publicly available apps on the CGC.

Drag-and-drop your chosen tools onto the canvas. Tools are graphically represented in the **Workflow Editor** as blue circular nodes.

> **Removing a tool from the Workflow Editor:** If you accidentally drag-and-drop the wrong tool onto the canvas, click on the node and then click on the 'x' in the red circle to remove the tool from the workflow.

<div align="right"><a href="#top">top</a></div>

## Connect tools

There are circles on the perimeter of each node in the **Workflow Editor**. These represent the tool's ports, used for data to flow in and out of. Circles on the left of the node represent input ports, whereas the ones on the right indicate output ports.

Clicking on a port and dragging will reveal a smart connector. Use this to connect tools into workflows.

<div align="right"><a href="#top">top</a></div>

## Add input and output nodes to workflows

To add an input node and connect it to a tool, drag the smart connector from the tool's input port to the far left of the canvas. An input node will be added. To add an output node to the workflow, drag the smart connector from a tool's output port to the far right of the canvas.

<div align="right"><a href="#top">top</a></div>

## Relabel tools

To rename a tool, click on the tool, and then select the pencil icon that appears next to the tool name.

When you've finished building a workflow, click **Save** in the upper right corner.

## Add suggested files for an input

When creating

<div align="right"><a href="#top">top</a></div>
PropertyMetadata.CoerceValueCallback Property

Definition
Gets or sets a reference to a CoerceValueCallback implementation specified in this metadata.
public: property System::Windows::CoerceValueCallback ^ CoerceValueCallback { System::Windows::CoerceValueCallback ^ get(); void set(System::Windows::CoerceValueCallback ^ value); };
public System.Windows.CoerceValueCallback CoerceValueCallback { get; set; }
member this.CoerceValueCallback : System.Windows.CoerceValueCallback with get, set
Public Property CoerceValueCallback As CoerceValueCallback
Property Value
A CoerceValueCallback implementation reference.
Exceptions
Cannot set a metadata property once it is applied to a dependency property operation.
Remarks. | https://docs.microsoft.com/en-gb/dotnet/api/system.windows.propertymetadata.coercevaluecallback?view=netframework-4.7.2 | 2020-11-23T19:53:05 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.microsoft.com |
lcd.line Method
Details
The line is drawn with the specified line width (lcd.linewidth) and "pen" color (lcd.forecolor). Drawing horizontal (lcd.horline) or vertical (lcd.verline) lines is more efficient than drawing generic lines, and should be used whenever possible.
The display panel must be enabled (lcd.enabled= 1- YES) for this method to work. | https://docs.tibbo.com/taiko/lcd_line | 2020-11-23T19:38:01 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.tibbo.com |
Player Settings
The Player Settings (menu: Edit > Project Settings > Player) let you set various options for the final game built by Unity. There are a few settings that are the same regardless of the build target but most are platform-specific and divided into the following sections:
The general settings are covered below. Settings specific to a platform can be found separately in the platform’s own manual section.
See also Unity Splash Screen settings. | https://docs.unity3d.com/ru/2018.2/Manual/class-PlayerSettings.html | 2020-11-23T20:16:45 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.unity3d.com |
rotaryio – Support for reading rotation sensors

The rotaryio module contains classes to read different rotation encoding schemes. See Wikipedia's Rotary Encoder page for more background.

class rotaryio.IncrementalEncoder(pin_a: microcontroller.Pin, pin_b: microcontroller.Pin)

    IncrementalEncoder determines the relative rotational position based on two series of pulses.

    position: int

        The current position in terms of pulses. The number of pulses per rotation is defined by the specific hardware.

    __exit__(self)

        Automatically deinitializes the hardware when exiting a context. See Lifetime and ContextManagers for more info.
How to create a list of all Opportunities that have a very low Engagement
Creating a list of contacts is a fundamental task for Marketing and Sales departments. Often marketing and sales not only want to know which accounts are engaged, but also which accounts are NOT engaged.
As an example imagine it's the end of the quarter and your sales team is calling out which opportunities are going to close. You bring up CaliberMind and pull up a list of the Accounts that have current opportunities in the quarter, but have little or no engagement. It allows the sales team to focus in very quickly on accounts and understand if they are really valid for the quarter or not.
Once you have this list of low engaged accounts you can then easily see what is going on. Has the account received a LOT of emails but has not responded? Have the wrong people been engaged in the dialog and therefore it's scoring much lower than it should? There are numerous questions that can be asked, and answered regarding the account once you start to look deeply at what is occurring across all the different platforms (Marketing, Sales, etc.).
Building out the list in CaliberMind is very easy.
- Head to the CaliberMind List Builder
- Select the Accounts Tab, in the drop-down select Create New, type in the name of your new list and press OK.
- Once we are in the Account List builder we can easily add the filters to give us just the list we are interested in. In this example we are going to use the following filters.
cm_engagement - cm_engagement contains the account score for all accounts within the system for all of the different scoring models deployed. We want all accounts whose score is less than 10 for this example. We also want to limit the results to a specific model, so we need to filter on the Model name.
cm_engagement.account_score < 10
cm_engagement.model_name = Inbound180
sf_opportunity - sf_opportunity is a complete record of all opportunities from your Salesforce system. We want to make sure we are only looking at open opportunities. So here we will select the stage_name values for just open Opportunities. Note these are the values in our demo environment and need to be matched to your stages.
sf_opportunity.stage_name = 0. Pre-demo/Discovery, 1. Demo/Discovery, 2. Selling, 3 - Selling / Proposal, 3. Free Trial
- Test and Save the List by selecting Test in the upper right. Make sure you see results (it should say the number of rows returned). Don't worry if it's more than what you expect, as the total is the number of Contacts at all Accounts that match the criteria. Specific reports will filter this down to just the Account. If you are good with the list, select Save.
- Now you can see the list as a filter on any Account based report like ABM - Engagement Trends or Attribution - Attribution Overview.
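The filter logic above can be sketched in plain Python over exported rows. The dict field names here are hypothetical stand-ins for the cm_engagement and sf_opportunity columns; CaliberMind evaluates the real filters server-side:

```python
# Sketch of the list's two filters applied to exported records.
OPEN_STAGES = {
    "0. Pre-demo/Discovery", "1. Demo/Discovery", "2. Selling",
    "3 - Selling / Proposal", "3. Free Trial",
}

def low_engagement_open_opps(engagements, opportunities,
                             model="Inbound180", threshold=10):
    """Return open opportunities on accounts scoring below the threshold."""
    low = {e["account_id"] for e in engagements
           if e["model_name"] == model and e["account_score"] < threshold}
    return [o for o in opportunities
            if o["account_id"] in low and o["stage_name"] in OPEN_STAGES]

engagements = [
    {"account_id": "A1", "model_name": "Inbound180", "account_score": 4},
    {"account_id": "A2", "model_name": "Inbound180", "account_score": 55},
]
opportunities = [
    {"account_id": "A1", "stage_name": "2. Selling"},
    {"account_id": "A2", "stage_name": "2. Selling"},
    {"account_id": "A1", "stage_name": "Closed Won"},
]
print(low_engagement_open_opps(engagements, opportunities))
# only the open A1 opportunity survives both filters
```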
Check out the complete walk-through on how to create this list here. | https://docs.calibermind.com/article/5pcrdpijf8-how-to-create-a-list-of-all-opportunities-that-have-a-very-low-engagement | 2020-11-23T18:44:30 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.calibermind.com |
appRole resource type
Namespace: microsoft.graph
Represents an application role that can be requested by (and granted to) a client application, or that can be used to assign an application to users or groups in a specified role.
The appRoles property of the application and servicePrincipal entities is a collection of appRole objects.
With appRoleAssignments, app roles can be assigned to users, groups, or other applications' service principals.
Properties
JSON representation
The following is a JSON representation of the resource.
{ "allowedMemberTypes": ["string"], "description": "string", "displayName": "string", "id": "guid", "isEnabled": true, "origin": "string", "value": "string" } | https://docs.microsoft.com/en-us/graph/api/resources/approle?view=graph-rest-1.0 | 2020-11-23T20:44:00 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.microsoft.com |
You can control access to entities (e.g., Cloud Actions, Edge Actions, Octave edge devices, and Streams) for the users in your company by assigning users to certain user groups. This topic describes how to define those groups.
Note
Octave includes built-in Administrators and Users groups, and these groups have the following properties:
- The Administrators group grants read and write permissions to all entities in Octave.
- All users are automatically included in the Users group and cannot be removed from that group.
- The Users group permissions can be edited. This can be used as a way to set common permissions to all company members.
Configuring a Group
Follow the steps below to configure a group:
- Navigate to Manage > Groups.
- Click New Group to create a new group or click the edit button on the existing group to configure it; the Groups screen is displayed:
The main elements of the Groups screen are:
- Group Name: the name of the group.
Note
The name of Octave's built-in Administrators and Users groups cannot be modified.
- Temporary Permissions: enable this field so that the permissions defined in this group are only available temporarily. The duration of these permissions can then be set as described in the next point. When temporary permissions are set, the group name will be prefixed with Temporary:
- Temporary Permissions Duration: when Temporary Permissions is enabled you can then define the duration as:
- Relative: the amount of time the permissions are available starting from when you save the group.
- Absolute: an exact day and time in the future when the permissions are to expire.
- Creation and Update dates: indicates when the group was created and last updated.
- Entities Tab: defines the read/write permissions for the various entities in Octave.
- Devices Tab: specifies which devices or Tags can be accessed and defines the types of access allowed to the entities (Read, Write, Event read, and Event write). A Tag-based permission sets the permissions to all devices belonging to that Tag. You can specify permissions for a device or Tag by selecting the respective radio button from the entity dropdown:
- Streams Tab: specifies which Streams can be accessed and defines the types of access allowed to the Streams (Read, Write, Event read, and Event write). Streams inherit permissions from their parent Stream and cannot have fewer permissions than their parents. In the following screenshot, devices mangoh_d2 and mangoh_s4 both inherit the read and event read permissions from their parent group. In this case, those permissions cannot be disabled per device.
Notes
- A general rule is to grant the Read permission to an entity type, device, or Stream whenever the Write permission has been granted. Otherwise, your group members won't be able to see the entity to be updated.
- Granting read access to a device Stream (e.g., "/COMPANY/devices/DEVICE") grants read access to the device object as well. As a result, the Devices tab only lists permissions given to device Streams, and to Tags.
- The "event read/write" permissions of a specific device/Stream grant access to the Events of that Stream.
- Permissions: defines the read/write permissions for the currently selected tab.
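The Stream inheritance rule described above, that a child cannot have fewer permissions than its parent, can be modeled as a simple union of grants. This is an explanatory sketch, not Octave's implementation:

```python
# A child stream's effective permissions are the union of its own
# grants and everything inherited from its parent.
PERMS = ("read", "write", "event_read", "event_write")

def effective(own, parent):
    """Inherited permissions cannot be dropped by the child."""
    return {p: own.get(p, False) or parent.get(p, False) for p in PERMS}

parent = {"read": True, "event_read": True}
child = {"write": True}
print(effective(child, parent))
# read/event_read inherited, write granted directly, event_write off
```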
Scenarios for Backing Up and Restoring TKGI Workloads
Page last updated:
This section summarizes the scenarios and considerations for workload backup and restore using Velero and the Restic plugin.
Kubernetes Workload Backup and Restore Scenarios Using Velero
The following scenarios and considerations summarize the various aspects of workload backup and restore using Velero for TKGI.
Please send any feedback you have to [email protected]. | https://docs.pivotal.io/tkgi/1-9/velero-scenarios.html | 2020-11-23T20:03:52 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.pivotal.io |
Please follow this guide to learn how to migrate to SingleStore tools.
MemSQL Helios does not support this command.
Starts a MemSQL node given a MemSQL node ID.
FILENUM (File Numbers) Library
The FILENUM library automates file number assignment. The fd. object supports up to fd.maxopenedfiles opened files. Using the FILENUM library, your code can get an unused file number, work with the file "on" this number, then release the file number into a pool of free file numbers. | https://docs.tibbo.com/taiko/lib_filenum | 2020-11-23T19:40:08 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.tibbo.com |
About Vicon Nexus 2.6
Nexus 2.6.6
If you are planning to use MATLAB with Nexus 2.6, ensure that, in addition to installing MATLAB, you install the .Net Framework version 4.5.
Systems supported for Nexus 2
Before you install Vicon Nexus 2.6, note the following limitations on supported systems:
- Nexus captures data only from Vicon systems (including Vicon Vero and Vicon Vue, Vicon Vantage, Vicon Bonita, Vicon T-Series, and MX+ and MX cameras and units).
- Nexus 2.6.
For more information on the installation and licensing process, see Installing and licensing Vicon Nexus.
The following details existing device issues that have been discovered with other releases. A resolution is included to address the issue, if available.
Because this is a menu-based device, only discovery using SNMP is supported.
There is no post operation as the device reboots itself after a configuration push.
Interfaces pull is supported using SNMP only. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/ncm-dsr-support-matrix-27HF1/GUID-F0EA6BC9-A32F-43A7-B7C6-0B91C0E83358.html | 2020-11-23T20:25:20 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.vmware.com |
Any application, including Otto, can be loosely divided into two parts:
Considerations of the Otto user interface include, for example, how the products are laid out on the page, how the selectors look, how the checkout button is labelled, what sort of fonts and colors are used to display the text, and so on. Considerations of Otto’s application logic includes how Otto adjusts product price based on discount coupons, and how it records that information to be displayed in the future.
Theming consists of changing the user interface without changing the application logic. When you set up an E-Commerce website for use with your Open edX site, you probably want to use your organization’s own logo, modify the color scheme, change links in the header and footer for SEO (search engine optimization) purposes, and so on.
However, although the user interface might look different, the application logic must remain the same so that Otto continues to function correctly.
The default Open edX theme is named “Comprehensive Theming”. You can disable
Comprehensive Theming by setting
ENABLE_COMPREHENSIVE_THEMING to
False, as shown in this example, then applying your custom theme.
ENABLE_COMPREHENSIVE_THEMING = False
From a technical perspective, theming consists of overriding core templates, static assets, and Sass with themed versions of those resources.
Every theme must conform to a directory structure that mirrors the Otto directory structure.
my-theme
├── README.rst
├── static
│   └── images
│       └── logo.png
├── sass
│   └── partials
│       └── utilities
│           └── _variables.scss
└── templates
    └── oscar
        ├── dashboard
        │   └── index.html
        └── 404.html
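A quick way to sanity-check that a theme follows this layout is to look for the expected top-level subdirectories. This is a helper sketch (the required names are taken from the structure above, and the function name is made up):

```python
import os
import tempfile

REQUIRED_DIRS = ("static", "sass", "templates")  # from the layout above

def missing_theme_dirs(theme_root):
    """Return the required subdirectories a theme is missing."""
    return [d for d in REQUIRED_DIRS
            if not os.path.isdir(os.path.join(theme_root, d))]

# Demo on a throwaway directory that only has two of the three dirs.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "static", "images"))
os.makedirs(os.path.join(root, "templates", "oscar"))
print(missing_theme_dirs(root))  # -> ['sass']
```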
Any template included in
ecommerce/templates directory can be “themed”.
However, make sure not to override class names or ID values of HTML elements
within a template, as these are used by JavaScript or CSS. Overriding these
names and values can cause unwanted behavior.
Any static asset included in
ecommerce/static can be overridden except for
the CSS files in the
ecommerce/static/css directory. CSS styles can be
overridden via Sass overrides explained below.
Caution
Theme names must be unique. The names of static assets or directories must not be same as the theme’s name, otherwise static assets will not work correctly.
Sass overrides are a little different from static asset or template overrides.
There are two types of styles included in
ecommerce/static/sass:
Caution
Styles present in
ecommerce/static/sass/base should not be
overridden as overriding these could result in an unexpected behavior.
Any styles included in
ecommerce/static/sass/partials can be overridden.
Styles included in this directory contain variable definitions that are used
by main Sass files. Elements of the user interface such as header/footer,
background, fonts, and so on, can be updated in this directory.
To enable a theme, you must first install your theme onto the same server that
is running Otto. If you are using devstack or fullstack to run Otto, you must
be sure that the theme is present on the Vagrant virtual machine. It is up to
you where to install the theme on the server, but a good default location is
/edx/app/ecommerce/ecommerce/themes.
Note
All themes must reside in the same physical directory.
In order for Otto to use the installed themes, you must specify the location
of the theme directory in Django settings by defining
COMPREHENSIVE_THEME_DIRS in your settings file, as shown in the example,
where
/edx/app/ecommerce/ecommerce/themes is the path to where you have
installed the themes on your server.
COMPREHENSIVE_THEME_DIRS = ["/edx/app/ecommerce/ecommerce/themes", ]
You can list all theme directories using this setting.
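Conceptually, themed resource lookup checks each configured theme directory first and falls back to the stock resource, roughly like this sketch (an assumption-level model of the mechanism, not Otto's actual loader):

```python
import os
import tempfile

def resolve_resource(relative_path, theme_dirs, theme_name, default_root):
    """Return the themed copy of a resource if one exists, else the default."""
    for theme_dir in theme_dirs:  # mirrors COMPREHENSIVE_THEME_DIRS
        candidate = os.path.join(theme_dir, theme_name, relative_path)
        if os.path.exists(candidate):
            return candidate
    return os.path.join(default_root, relative_path)

# Build a throwaway theme tree containing one override.
themes = tempfile.mkdtemp()
default = tempfile.mkdtemp()
themed = os.path.join(themes, "my-theme", "templates", "oscar")
os.makedirs(themed)
open(os.path.join(themed, "404.html"), "w").close()

print(resolve_resource(os.path.join("templates", "oscar", "404.html"),
                       [themes], "my-theme", default))  # themed copy wins
print(resolve_resource(os.path.join("templates", "index.html"),
                       [themes], "my-theme", default))  # falls back to default
```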
After you install a theme, you associate it with your site by adding appropriate entries to the following tables.
Site
Site Themes
For local devstack, if the Otto server is running at localhost:8002, you can enable a theme named my-theme by following these steps.
- Add a Site with the domain localhost:8002 and the name "Otto My Theme".
- Add a Site Theme with the theme name my-theme, selecting localhost:8002 from the site dropdown.
The Otto server can now be started, and you should see that
my-theme has
been applied. If you have overridden Sass styles and you are not seeing those
overrides, then you need to compile Sass files as described in Compiling
Theme Sass.
A theme can be disabled by removing its corresponding Site Theme entry using the Django admin.
If you have already set up
COMPREHENSIVE_THEME_DIRS, you can use the
management command for adding
Site and
SiteTheme directly from the
terminal.
python manage.py create_or_update_site_theme --site-domain=localhost:8002 --site-name=localhost:8002 --site-theme=my-theme
The
create_or_update_site_theme command accepts the following optional
arguments, listed below with examples.
--settings: the Django settings file to use, which defaults to ecommerce.settings.devstack.
python manage.py create_or_update_site_theme --settings=ecommerce.settings.production
# update domain of the site with id 1 and add a new theme # ``my-theme`` for this site python manage.py create_or_update_site_theme --site-id=1 --site-domain=my-theme.localhost:8002 --site-name=my-theme.localhost:8002 --site-theme=my-theme
python manage.py create_or_update_site_theme --site-domain=localhost:8002 --site-theme=my-theme
--site-name: the name of the site, which defaults to ''.
python manage.py create_or_update_site_theme --site-domain=localhost:8002 --site-name=localhost:8002 --site-theme=my-theme
python manage.py create_or_update_site_theme --site-domain=localhost:8002 --site-name=localhost:8002 --site-theme=my-theme
You use the management command
update_assets to compile and collect themed
Sass.
python manage.py update_assets
The
update_assets command accepts the following optional arguments, listed
below with examples.
--settings: the Django settings file to use, which defaults to ecommerce.settings.devstack.
python manage.py update_assets --settings=ecommerce.settings.production
--theme: all to compile Sass for all themes, or no to skip Sass compilation for themes. The default option is all.
# compile Sass for all themes
python manage.py update_assets --theme=all

# compile Sass for only given themes, useful for situations if you have
# installed a new theme and want to compile Sass for just this theme
python manage.py update_assets --themes my-theme second-theme third-theme

# skip Sass compilation for themes, useful for testing changes to system
# Sass, keeping theme styles unchanged
python manage.py update_assets --theme=no
--output-style: the Sass output style to use. Valid options are nested, expanded, compact and compressed. The default option is nested.
python manage.py update_assets --output-style='compressed'
# useful in cases where you have updated theme Sass, and system Sass is
# unchanged
python manage.py update_assets --skip-system
python manage.py update_assets --enable-source-comments
--skip-collect: skip the collectstatic call after Sass compilation.
# useful if you just want to compile Sass, and call ``collectstatic`` later,
# possibly by a script
python manage.py update_assets --skip-collect
If you have gone through the preceding procedures and you are not seeing theme overrides, check the following areas.
- COMPREHENSIVE_THEME_DIRS must contain the path of the directory containing themes. For example, if your theme is /edx/app/ecommerce/ecommerce/themes/my-theme, then the correct value for COMPREHENSIVE_THEME_DIRS is ['/edx/app/ecommerce/ecommerce/themes'].
- The domain name for the site is the name that users will put in the browser to access the site, and includes the port number. For example, if Otto is running on localhost:8002, then the value for domain should be localhost:8002.
- Make sure my-theme is the correct theme directory name.
Source code for kedro.runner.thread_runner

"""``ThreadRunner`` is an ``AbstractRunner`` implementation. It can be used
to run the ``Pipeline`` in parallel groups formed by toposort using threads.
"""
import warnings
from collections import Counter
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait
from itertools import chain
from typing import Set

from kedro.io import AbstractDataSet, DataCatalog, MemoryDataSet
from kedro.pipeline import Pipeline
from kedro.pipeline.node import Node
from kedro.runner.runner import AbstractRunner, run_node


class ThreadRunner(AbstractRunner):
    """``ThreadRunner`` is an ``AbstractRunner`` implementation. It can
    be used to run the ``Pipeline`` in parallel groups formed by toposort
    using threads.
    """

    def __init__(self, max_workers: int = None, is_async: bool = False):
        """
        Instantiates the runner.

        Args:
            max_workers: Number of worker processes to spawn. If not set,
                calculated automatically based on the pipeline configuration
                and CPU core count.
            is_async: If True, set to False, because ``ThreadRunner`` doesn't
                support loading and saving the node inputs and outputs
                asynchronously with threads. Defaults to False.

        Raises:
            ValueError: bad parameters passed
        """
        if is_async:
            warnings.warn(
                "`ThreadRunner` doesn't support loading and saving the "
                "node inputs and outputs asynchronously with threads. "
                "Setting `is_async` to False."
            )
        super().__init__(is_async=False)

        if max_workers is not None and max_workers <= 0:
            raise ValueError("max_workers should be positive")

        self._max_workers = max_workers

    def _get_required_workers_count(self, pipeline: Pipeline):
        """
        Calculate the max number of processes required for the pipeline
        """
        # Number of nodes is a safe upper-bound estimate.
        # It's also safe to reduce it by the number of layers minus one,
        # because each layer means some nodes depend on other nodes
        # and they can not run in parallel.
        # It might be not a perfect solution, but good enough and simple.
        required_threads = len(pipeline.nodes) - len(pipeline.grouped_nodes) + 1

        return (
            min(required_threads, self._max_workers)
            if self._max_workers
            else required_threads
        )

    def _run(  # pylint: disable=too-many-locals,useless-suppression
        self, pipeline: Pipeline, catalog: DataCatalog, run_id: str = None
    ) -> None:
        """The abstract interface for running pipelines.

        Args:
            pipeline: The ``Pipeline`` to run.
            catalog: The ``DataCatalog`` from which to fetch data.
            run_id: The id of the run.

        Raises:
            Exception: in case of any downstream node failure.
        """
        nodes = pipeline.nodes
        load_counts = Counter(chain.from_iterable(n.inputs for n in nodes))
        node_dependencies = pipeline.node_dependencies
        todo_nodes = set(node_dependencies.keys())
        done_nodes = set()  # type: Set[Node]
        futures = set()
        done = None
        max_workers = self._get_required_workers_count(pipeline)

        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            while True:
                ready = {n for n in todo_nodes if node_dependencies[n] <= done_nodes}
                todo_nodes -= ready
                for node in ready:
                    futures.add(
                        pool.submit(run_node, node, catalog, self._is_async, run_id)
                    )
                if not futures:
                    assert not todo_nodes, (todo_nodes, done_nodes, ready, done)
                    break
                done, futures = wait(futures, return_when=FIRST_COMPLETED)
                for future in done:
                    try:
                        node = future.result()
                    except Exception:
                        self._suggest_resume_scenario(pipeline, done_nodes)
                        raise
                    done_nodes.add(node)

                    # decrement load counts and release any data sets we've
                    # finished with; this is particularly important for the
                    # shared datasets we create above
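The worker bound computed in _get_required_workers_count, node count minus (number of layers minus one), can be checked on toy numbers. A small standalone sketch of that arithmetic:

```python
def required_workers(num_nodes, num_layers, max_workers=None):
    """Upper bound on threads that can ever run concurrently:
    nodes minus (layers - 1), optionally capped by max_workers."""
    required = num_nodes - num_layers + 1
    return min(required, max_workers) if max_workers else required

# 6 nodes arranged in 3 sequential layers: at most 4 can overlap.
print(required_workers(6, 3))                 # -> 4
print(required_workers(6, 3, max_workers=2))  # -> 2
```

A fully sequential pipeline (as many layers as nodes) collapses the bound to a single worker, which matches the intuition that nothing can run in parallel.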
Rocket.Chat allows for the usage of a CDN to server static assets such as JS, CSS and images such as avatars.
If you provide a CDN prefix that is not live or is incorrect, you may lose access to your Rocket.Chat instance because the required assets will not be found.
By navigating to the General section of the Administration system in Rocket.Chat there are the options to provide a CDN for all assets and optionally set a separate CDN for just JS & CSS assets.
This is a string that, depending on the value provided, will generate different outcomes.
Enable this for serving all assets from the same CDN.
This option takes the same style input as CDN Prefix. The value provided will be applied only to JS and CSS assets.
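The prefix selection can be pictured as follows: the JS/CSS prefix, if set, wins for .js and .css assets, the general prefix covers everything else, and an empty prefix means assets are served same-origin. A sketch of that logic (the domains are hypothetical, and this is an explanatory model rather than Rocket.Chat's code):

```python
def asset_url(path, cdn_prefix="", jscss_prefix=""):
    """Pick the prefix for an asset path: JS/CSS prefix wins for .js/.css,
    the general prefix covers other assets, empty means same-origin."""
    prefix = cdn_prefix
    if jscss_prefix and path.rsplit(".", 1)[-1] in ("js", "css"):
        prefix = jscss_prefix
    return "%s/%s" % (prefix.rstrip("/"), path.lstrip("/")) if prefix else path

print(asset_url("/assets/app.js", "https://cdn.example.com",
                "https://fast.example.com"))  # JS/CSS prefix applied
print(asset_url("/avatars/u1.png", "https://cdn.example.com"))
print(asset_url("/avatars/u1.png"))           # no CDN: served same-origin
```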
If the CDN stops working or the provided values are incorrect, there are a few workarounds for fixing the settings, since the Administration area itself may be inaccessible.
As the front end of Rocket.Chat may be inaccessible, the backend Mongo database can be updated to remove the CDN. The following Mongo commands should reset the value to the default state.
db.rocketchat_settings.update({_id:"CDN_PREFIX"},{$set:{"value":""}})
db.rocketchat_settings.update({_id:"CDN_JSCSS_PREFIX"},{$set:{"value":""}})
A browser extension can be used to rewrite URLs from the CDN address to the same location as where Rocket.Chat is running. Please take care when selecting an appropriate extension for your browser. | https://docs.rocket.chat/guides/administrator-guides/cdn | 2020-11-23T18:55:51 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.rocket.chat |
The Mecanim Animation System is particularly well suited for working with animations for humanoid skeletons. Since humanoid skeletons are used extensively in games, Unity provides a specialized workflow, and an extended tool set for humanoid animations.
Because of the similarity in bone structure, it is possible to map animations from one humanoid skeleton to another, allowing retargeting and inverse kinematics. With rare exceptions, humanoid models can be expected to have the same basic structure, representing the major articulate parts of the body, head and limbs. The Mecanim system makes good use of this idea to simplify the rigging and control of animations. A fundamental step in creating a animation is to set up a mapping between the simplified humanoid bone structure understood by Mecanim and the actual bones present in the skeleton; in Mecanim terminology, this mapping is called an Avatar. The pages in this section explain how to create an Avatar for your model. | https://docs.unity3d.com/cn/560/Manual/AvatarCreationandSetup.html | 2020-11-23T19:56:08 | CC-MAIN-2020-50 | 1606141164142.1 | [] | docs.unity3d.com |
Welcome to SCSGate’s documentation!¶
This python module allows to interact with a SCSGate device.
The module has been written to manage a SCSGate device with home-assistant.
Monitoring the SCS bus¶
The scsgate pip package provides a script named
scs-monitor that has
two purposes:
- interactively create a configuration file for home-assistant
- sniff all the messages going over the SCS bus
Creation of a home-assistant configuration file¶
scs-monitor can be used to create a home-assistant configuration
file defining all the available devices.
This can be done by using the
--homeassistant-config flag.
Once started
scs-monitor will start sniffing all the events going
over the SCS bus. For each captured message it will extract the ID of
the relevant device and will ask the user to enter an ID for
home-assistant and the name of the device.
By pressing
CTRL-C the program will exit and generate the
home-assistant configuration file.
Sniffing messages¶
By default scs-monitor will print all the messages going over the SCS bus.
It’s possible to filter the messages related with a list of known
devices. This can be done using the
-f flag followed by the name of
the file containing the devices to ignore. The file is a yaml document
like the one created in the previous step.
It’s also possible to redirect all the output to a text file by using
the
-o flag. | https://scsgate.readthedocs.io/en/latest/?badge=latest | 2020-11-23T19:17:43 | CC-MAIN-2020-50 | 1606141164142.1 | [] | scsgate.readthedocs.io |
Automatic Install Using go-xcat¶
go-xcat is a tool that can be used to fully install or update xCAT.
go-xcat will automatically download the correct package manager repository file from xcat.org and use the public repository to install xCAT. If the xCAT management node does not have internet connectivity, use the process described in the Manual Installation section of the guide.
Download the go-xcat tool using wget:

wget -O - >/tmp/go-xcat
chmod +x /tmp/go-xcat
Run the go-xcat tool:

/tmp/go-xcat install           # installs the latest stable version of xCAT
/tmp/go-xcat -x devel install  # installs the latest development version of xCAT
pyinfra.pseudo_modules module¶
These three pseudo modules (state, inventory, host) are used throughout pyinfra and provide the magic that means "from pyinfra import host" inside a deploy file always represents the current host being executed, i.e. these modules are dynamic and change during execution of pyinfra.
Although these modules are only used in CLI mode, they are bundled into the main pyinfra package as they are utilised throughout (to determine the current state/host when executing in CLI mode).
- class pyinfra.pseudo_modules.PseudoModule¶
Bases: object
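The "dynamic module" trick, an object whose attribute lookups are forwarded to whatever host or state is current, can be sketched as a small proxy. This is a generic illustration of the pattern, not pyinfra's actual implementation:

```python
class PseudoModuleSketch:
    """Forwards attribute access to a swappable target, so `host.name`
    always reflects the host currently being executed."""

    def __init__(self):
        self._target = None

    def set(self, target):
        self._target = target

    def __getattr__(self, name):
        # Only called when normal lookup fails, so _target/set resolve
        # normally and everything else is delegated.
        if self._target is None:
            raise AttributeError("no current target set")
        return getattr(self._target, name)

class FakeHost:
    def __init__(self, name):
        self.name = name

host = PseudoModuleSketch()
host.set(FakeHost("web-1"))
print(host.name)  # -> web-1
host.set(FakeHost("db-1"))
print(host.name)  # -> db-1 (same object, new current host)
```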
An Act to create 16.314 of the statutes; Relating to: employment screening of and employability plans for residents in public housing. (FE)
Amendment Histories
Bill Text (PDF: )
LC Amendment Memo
Fiscal Estimates and Reports
SB4 ROCP for Committee on Public Benefits, Licensing and State-Federal Relations On 2/15/2018 (PDF: )
SB4 ROCP for Committee on Senate Organization (PDF: )
LC Bill Hearing Materials
Wisconsin Ethics Commission information
January 2018 Special Session Assembly Bill 4 - A - Enacted into Law | https://docs.legis.wisconsin.gov/2017/proposals/jr8/sb4 | 2021-02-25T00:26:18 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.legis.wisconsin.gov |
New-Azure
ADApplication
Creates an application.
Syntax
New-Azure
ADApplication [-AddIns <System.Collections.Generic.List`1[Microsoft.Open.AzureAD.Model.AddIn]>] [-AllowGuestsSignIn <Boolean>] [-AllowPassthroughUsers <Boolean>] [-AppLogoUrl <String>] [-AppRoles <System.Collections.Generic.List`1[Microsoft.Open.AzureAD.Model.AppRole]>] [-AvailableToOtherTenants <Boolean>] -DisplayName <String> [-ErrorUrl <String>] [-GroupMembershipClaims <String>] [-Homepage <String>] [-IdentifierUris <System.Collections.Generic.List`1[System.String]>] [-InformationalUrls <InformationalUrl>] [-IsDeviceOnlyAuthSupported <Boolean>] [-IsDisabled <Boolean>] []>] [-Oauth2RequirePostResponse <Boolean>] [-OrgRestrictions <System.Collections.Generic.List`1[System.String]>] [-OptionalClaims <OptionalClaims>] [-ParentalControlSettings <ParentalControlSettings>] [-PasswordCredentials <System.Collections.Generic.List`1[Microsoft.Open.AzureAD.Model.PasswordCredential]>] [-PreAuthorizedApplications <System.Collections.Generic.List`1[Microsoft.Open.AzureAD.Model.PreAuthorizedApplication]>] [-PublicClient <Boolean>] [-PublisherDomain <String>] [-RecordConsentConditions <String>] [-ReplyUrls <System.Collections.Generic.List`1[System.String]>] [-RequiredResourceAccess <System.Collections.Generic.List`1[Microsoft.Open.AzureAD.Model.RequiredResourceAccess]>] [-SamlMetadataUrl <String>] [-SignInAudience <String>] [-WwwHomepage <String>] [.
{{ Fill AllowGuestsSignIn Description }}
{{ Fill AllowPassthroughUsers Description }}
{{ Fill AppLogoUrl Description }}
The collection of application roles that an application may declare. These roles can be assigned to users, groups or service principals.
Indicates whether this application is available in other tenants.
Specifies the display name of the application.
{{ Fill InformationalUrls Description }}
{{ Fill IsDeviceOnlyAuthSupported Description }}
{{ Fill IsDisabled Description }}
{{ Fill OptionalClaims Description }}
{{ Fill OrgRestrictions Description }}
{{ Fill ParentalControlSettings Description }}
The collection of password credentials associated with the application.
{{ Fill PreAuthorizedApplications Description }}
Specifies whether this application is a public client (such as an installed application running on a mobile device). Default is false.
{{ Fill PublisherDomain Description }}.
{{ Fill SignInAudience Description }}
{{ Fill WwwHomepage Description }} | https://docs.microsoft.com/en-us/powershell/module/azuread/new-azureadapplication?view=azureadps-2.0 | 2021-02-25T01:04:12 | CC-MAIN-2021-10 | 1614178349708.2 | [] | docs.microsoft.com |
Attacker Behavior Analytics
Attacker Behavior Analytics are pre-built detections modeled around our wide array of threat intelligence. Attacker Behavior Analytics expose the finite ways in which attackers gain persistence on an asset and send and receive commands to victim machines.
Read more about this feature in the Rapid7 blog post.
ABA Alerts
Each ABA detection hunts for a unique attacker behavior, which you can toggle to an alert, whitelist, or track as notable behavior. To manage these settings, go to Settings > Alert Settings > Attacker Behavior Analytics. Find the indicator or threat you want to manage and change its state in the provided dropdown menu.
Threat and Indicator Expiration
Attacker behaviors are constantly evolving and will become stale, so InsightIDR will expire old behaviors once they are past their value.
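The expiry idea, dropping indicators once they are too old to be useful, can be pictured as a cutoff filter. The 90-day window, field names, and sample data below are illustrative; InsightIDR's actual expiration policy is not documented here:

```python
from datetime import datetime, timedelta

def active_indicators(indicators, now, max_age_days=90):
    """Sketch of expiry: drop indicators older than a cutoff date."""
    cutoff = now - timedelta(days=max_age_days)
    return [i for i in indicators if i["added"] >= cutoff]

now = datetime(2018, 6, 1)
indicators = [
    {"name": "credential editor", "added": datetime(2018, 5, 20)},
    {"name": "stale C2 domain", "added": datetime(2017, 1, 5)},
]
print([i["name"] for i in active_indicators(indicators, now)])
# -> ['credential editor']
```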
Threats
Threats are known malicious indicators that appear together during specific attacks. In the provided example, the Credential Harvester threat has several MimiKatz indicators including credential editors and others. However, this group would not appear together in a different threat, such as Ransomware.
On the "Threats" tab, you can expand each threat to see more details about the individual indicators.
Indicators
Indicators are individual behaviors that are used in attacks. Each of the indicators includes recommended actions to remediate any harm done by the indicator.
Because there are so many indicators, InsightIDR will allow you to display more indicators at the bottom of the page.
Search
You can search through threats and indicators for specific malicious actions and behaviors.
For example, if you believe that you are vulnerable through SSH, you can use InsightIDR to search for attacker behavior that might be utilized against you.
jnpr.junos.utils

jnpr.junos.utils.config

class jnpr.junos.utils.config.Config(dev, mode=None, **kwargs)

Bases: jnpr.junos.utils.util.Util
Overview of Configuration Utilities.
commit(): commit changes
commit_check(): perform the commit check operation
diff(): return the diff string between running and candidate config
load(): load changes into the candidate config
lock(): take an exclusive lock on the candidate config
pdiff(): prints the diff string (debug/helper)
rescue(): controls “rescue configuration”
rollback(): perform the load rollback command
unlock(): release the exclusive lock
__init__(dev, mode=None, **kwargs)

    # mode can be private/dynamic/exclusive/batch/ephemeral
    with Config(dev, mode='exclusive') as cu:
        cu.load('set system services netconf traceoptions file xyz', format='set')
        print(cu.diff())
        cu.commit()
Warning
Ephemeral databases are an advanced Junos feature which if used incorrectly can have serious negative impact on the operation of the Junos device. We recommend you consult JTAC and/or you Juniper account team before deploying the ephemeral database feature in your network.
commit(**kvargs)
Commit a configuration.
Warning
If the function does not receive a reply prior to the timeout a RpcTimeoutError will be raised. It is possible the commit was successful. Manual verification may be required.
commit_check(**kvargs)
Perform a commit check. If the commit check passes, this function will return True. If the commit check results in warnings, they are reported and available in the Exception errs.
diff(rb_id=0, ignore_warning=False, use_fast_diff=False)
Retrieve a diff (patch-format) report of the candidate config against either the current active config, or a different rollback.
load(*vargs, **kvargs)
Loads changes into the candidate configuration. Changes can be in the form of strings (text,set,xml, json), XML objects, and files. Files can be either static snippets of configuration or Jinja2 templates. When using Jinja2 Templates, this method will render variables into the templates and then load the resulting change; i.e. “template building”.
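As a sketch of the load-then-commit flow described above (the device name, credentials, and syslog host below are illustrative, not part of the PyEZ API):

```python
def build_syslog_config(host, port=514):
    """Build a small 'set'-format snippet; Config.load() accepts such
    strings when called with format='set'."""
    return f"set system syslog host {host} port {port}"


# Against a live device (requires a reachable Junos box):
# from jnpr.junos import Device
# from jnpr.junos.utils.config import Config
#
# with Device(host="router1", user="admin", password="secret") as dev:
#     with Config(dev, mode="exclusive") as cu:
#         cu.load(build_syslog_config("10.0.0.5"), format="set")
#         cu.pdiff()                           # print candidate-vs-active diff
#         cu.commit(comment="add syslog host")
```

The same `load()` call also accepts `template_path`/`template_vars` keyword arguments for Jinja2 template building, as noted above.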
lock()
Attempts an exclusive lock on the candidate configuration. This is a non-blocking call.
pdiff(rb_id=0, ignore_warning=False, use_fast_diff=False)

Helper method that calls diff() and prints the resulting patch-format report.
rollback(rb_id=0)
Rollback the candidate config to either the last active or a specific rollback number.
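A sketch of a rollback workflow built on the methods above. The guard encodes the fact that Junos retains rollback configurations numbered 0-49; the `cu` argument is assumed to be an open Config instance:

```python
def valid_rollback_id(rb_id):
    """Junos keeps up to 50 rollback configurations, numbered 0-49."""
    return isinstance(rb_id, int) and 0 <= rb_id <= 49


def rollback_and_commit(cu, rb_id):
    """Load a rollback into the candidate config and activate it.

    `cu` is an open jnpr.junos.utils.config.Config instance.
    """
    if not valid_rollback_id(rb_id):
        raise ValueError(f"rollback id must be 0-49, got {rb_id!r}")
    cu.rollback(rb_id=rb_id)   # load the rollback into the candidate
    cu.pdiff()                 # show what would change
    cu.commit()                # make it the active configuration
```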
jnpr.junos.utils.fs

class jnpr.junos.utils.fs.FS(dev)

Bases: jnpr.junos.utils.util.Util
Filesystem (FS) utilities:
cat(): show the contents of a file
checksum(): calculate file checksum (md5,sha256,sha1)
cp(): local file copy (not scp)
cwd(): change working directory
ls(): return file/dir listing
mkdir(): create a directory
pwd(): get working directory
mv(): local file rename
rm(): local file delete
rmdir(): remove a directory
stat(): return file/dir information
storage_usage(): return storage usage
directory_usage(): return directory usage
storage_cleanup(): perform storage cleanup
storage_cleanup_check(): returns a list of files which will be removed at cleanup
symlink(): create a symlink
tgz(): tar+gzip a directory
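The listing helpers above return plain dictionaries, so they compose easily with ordinary Python. A sketch (the dict shape assumes ls() returns a "files" mapping with per-file "size" entries; the device and path in the commented usage are hypothetical):

```python
def large_files(ls_result, min_size):
    """Given an FS.ls() result dict, return the names of files whose
    reported size is at least min_size bytes, sorted by name."""
    files = ls_result.get("files", {}) or {}
    return sorted(
        name for name, info in files.items()
        if info.get("size", 0) >= min_size
    )


# Against a live device:
# from jnpr.junos.utils.fs import FS
# fs = FS(dev)
# print(large_files(fs.ls("/var/log"), min_size=1_000_000))

# Works the same on a canned result:
sample = {"files": {"messages": {"size": 2_000_000}, "snmpd": {"size": 10}}}
print(large_files(sample, 1_000_000))
```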
checksum(path, calc='md5')

Performs the checksum command on the given file path using the required calculation method and returns the string value. If the path is not found on the device, then None is returned.

cp(from_path, to_path)

Perform a local file copy where from_path and to_path can be any valid Junos path argument. Refer to the Junos “file copy” command documentation for details.

directory_usage(path='.', depth=0)

Returns the directory usage, similar to the unix “du” command.

ls(path='.', brief=False, followlink=True)

File listing, returns a dict of file information. If the path is a symlink, then by default followlink will recursively call this method to obtain the symlink-specific information.

mv(from_path, to_path)

Perform a local file rename function, same as the “file rename” Junos CLI.

stat(path)

Returns a dictionary of status information on the path, or None if the path does not exist.

storage_cleanup()

Perform the ‘request system storage cleanup’ command to remove files from the filesystem. Return a dict of file name/info on the files that were removed.

storage_cleanup_check()

Perform the ‘request system storage cleanup dry-run’ command to return a dict of files/info that would be removed if the cleanup command was executed.

symlink(from_path, to_path)

Executes the ‘ln -sf from_path to_path’ command.

Warning

REQUIRES SHELL PRIVILEGES
jnpr.junos.utils.scp

class jnpr.junos.utils.scp.SCP(junos, **scpargs)

Bases: object

The SCP utility is used in conjunction with jnpr.junos.utils.sw.SW when transferring the Junos image to the device. The SCP utility can be used for other secure-copy use-cases as well; it is implemented to support the python context-manager pattern. For example:

    from jnpr.junos.utils.scp import SCP

    with SCP(dev, progress=True) as scp:
        scp.put(package, remote_path)
open(**scpargs)

Creates an instance of the scp object and returns it to the caller for use.

Note

This method uses the same username/password authentication credentials as used by jnpr.junos.device.Device. It can also use the ssh_private_key_file option if provided to the jnpr.junos.device.Device.
jnpr.junos.utils.start_shell

class jnpr.junos.utils.start_shell.StartShell(nc, timeout=30)

Bases: object

Junos shell execution utility. This utility is written to support the “context manager” design pattern. For example:

    def _ssh_exec(self, command):
        with StartShell(self._dev) as sh:
            got = sh.run(command)
        return got
open()

Open an ssh-client connection and issue the ‘start shell’ command to drop into the Junos shell (csh). This process opens a paramiko.SSHClient instance.
run(command, this='(%|#|\\$)\\s', timeout=0)

Run a shell command and wait for the response. The return is a tuple: the first item is True/False depending on whether the exit code was 0, and the second item is the output of the command.

    with StartShell(dev) as ss:
        print(ss.run('cprod -A fpc0 -c "show version"', timeout=10))
Note

As a side-effect this method will set the self.last_ok property. This property is set to True if $? is “0”, indicating that the last shell command was successful; otherwise it is set to False. If this is set to None, last_ok will be set to True if there is any content in the result of the executed shell command.
jnpr.junos.utils.sw

class jnpr.junos.utils.sw.SW(dev)

Bases: jnpr.junos.utils.util.Util

Software Utility class, used to perform a software upgrade and associated functions. These methods have been tested on simple deployments. Refer to install() for restricted use-cases for software upgrades.
- Primary methods:
install(): perform the entire software installation process
reboot(): reboots the system for the new image to take effect
poweroff(): shutdown the system
- Helpers: (Useful as standalone as well)
put(): SCP put package file onto Junos device
pkgadd(): performs the ‘request’ operation to install the package
validate(): performs the ‘request’ to validate the package
- Miscellaneous:
- rollback: same as ‘request software rollback’
- inventory: (property) provides file info for current and rollback images on the device
halt(in_min=0, at=None, all_re=True, other_re=False)
Perform a system halt, with optional delay (in minutes) or at a specified date and time.
install(package=None, pkg_set=None, remote_path='/var/tmp', progress=None, validate=False, checksum=None, cleanfs=True, no_copy=False, issu=False, nssu=False, timeout=1800, cleanfs_timeout=300, checksum_timeout=300, checksum_algorithm='md5', force_copy=False, all_re=True, vmhost=False, **kwargs)
Performs the complete installation of the package that includes the following steps:

1. If :package: is a URL, or :no_copy: is True, skip to step 8.
2. Computes the checksum of :package: or :pkg_set: on the local host if :checksum: was not provided.
3. Performs a storage cleanup on the remote Junos device if :cleanfs: is True.
4. Attempts to compute the checksum of the :package: filename in the :remote_path: directory of the remote Junos device if the :force_copy: argument is False.
5. SCP or FTP copies the :package: file from the local host to the :remote_path: directory on the remote Junos device under any of the following conditions:
   - The :force_copy: argument is True.
   - The :package: filename doesn't already exist in the :remote_path: directory of the remote Junos device.
   - The checksum computed in step 2 does not match the checksum computed in step 4.
6. If step 5 was executed, computes the checksum of the :package: filename in the :remote_path: directory of the remote Junos device.
7. Validates that the checksum computed in step 2 matches the checksum computed in step 6.
8. Validates the package if :validate: is True.
9. Installs the package.
Warning
This process has been validated on the following deployments.
Tested:
- Single RE devices (EX, QFX, MX, SRX).
- MX dual-RE
- EX virtual-chassis when all same HW model
- QFX virtual-chassis when all same HW model
- QFX/EX mixed virtual-chassis
- Mixed mode VC
Known Restrictions:
- SRX cluster
- MX virtual-chassis
You can get a progress report on this process by providing a progress callback.
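A minimal sketch of such a callback; install() invokes it with the Device instance and a human-readable status string (the host, credentials, and package path in the commented usage are hypothetical):

```python
def install_progress(dev, report):
    """Callback invoked by SW.install() as the install advances."""
    line = f"[{getattr(dev, 'hostname', dev)}] {report}"
    print(line)
    return line


# Against a live device:
# from jnpr.junos import Device
# from jnpr.junos.utils.sw import SW
#
# with Device(host="router1", user="admin", password="secret") as dev:
#     sw = SW(dev)
#     ok = sw.install(package="/var/tmp/junos-install.tgz",
#                     progress=install_progress, validate=True)
#     if ok:
#         sw.reboot()
```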
inventory

Returns a dictionary of file listing information for current and rollback Junos install packages. This information comes from the /packages directory.

Warning

Experimental method; may not work on all platforms. If you find this not working, please report the issue.
classmethod local_checksum(package, algorithm='md5')
Computes the checksum value on the local package file.
classmethod local_sha1(package)
Computes the SHA1 checksum value on the local package file.
pkgadd(remote_package, vmhost=False, **kvargs)

Issue the RPC equivalent of the ‘request system software add’ command or the ‘request vmhost software add’ command on the package. If vmhost=False, the <request-package-add> RPC is used and the “no-validate” option is set. If you want to validate the image, do that using the specific validate() method. If vmhost=True, the <request-vmhost-package-add> RPC is used.
If you want to reboot the device, invoke the
reboot()method after installing the software rather than passing the
reboot=Trueparameter.
pkgaddISSU(remote_package, vmhost=False, **kvargs)

Issue the RPC equivalent of the ‘request system software in-service-upgrade’ command or the ‘request vmhost software in-service-upgrade’ command on the package. If vmhost=False, the <request-package-in-service-upgrade> RPC is used. If vmhost=True, the <request-vmhost-package-in-service-upgrade> RPC is used.
pkgaddNSSU(remote_package, **kvargs)
Issue the ‘request system software nonstop-upgrade’ command on the package.
poweroff(in_min=0, at=None, on_node=None, all_re=True, other_re=False)

Perform a system shutdown, with an optional delay (in minutes).
If the device is equipped with dual-RE, then both RE will be shut down. This code also handles EX/QFX VC.
Todo
need to better handle the exception event.
put(package, remote_path='/var/tmp', progress=None)
SCP or FTP ‘put’ the package file from the local server to the remote device.
reboot(in_min=0, at=None, all_re=True, on_node=None, vmhost=False, other_re=False)
Perform a system reboot, with optional delay (in minutes) or at a specified date and time.
If the device is equipped with dual-RE, then both RE will be rebooted. This code also handles EX/QFX VC.
remote_checksum(remote_package, timeout=300, algorithm='md5')
Computes a checksum of the remote_package file on the remote device.
rollback()
Issues the ‘request’ command to do the rollback and returns the string output of the results.
safe_copy(package, remote_path='/var/tmp', progress=None, cleanfs=True, cleanfs_timeout=300, checksum=None, checksum_timeout=300, checksum_algorithm='md5', force_copy=False)
Copy the install package safely to the remote device. By default this means to clean the filesystem to make space, perform the secure-copy, and then verify the checksum.
validate(remote_package, issu=False, nssu=False, **kwargs)
Issues the ‘request’ operation to validate the package against the config.
jnpr.junos.utils.util
Junos PyEZ Utility Base Class
jnpr.junos.utils.ftp

FTP utility

class jnpr.junos.utils.ftp.FTP(junos, **ftpargs)

Bases: ftplib.FTP

The FTP utility can be used to transfer files to and from the device.

__init__(junos, **ftpargs)

Supports the python context-manager pattern. For example:

    from jnpr.junos.utils.ftp import FTP

    with FTP(dev) as ftp:
        ftp.put(package, remote_path)
get(remote_file, local_path='/home/docs/checkouts/readthedocs.org/user_builds/junos-pyez/checkouts/latest/docs')
This function is used to download file from router to local execution server/shell. | https://junos-pyez.readthedocs.io/en/latest/jnpr.junos.utils.html | 2021-02-24T23:43:38 | CC-MAIN-2021-10 | 1614178349708.2 | [] | junos-pyez.readthedocs.io |
There are two main approaches to how you can control the use of ActiveX controls in your company. For more info about ActiveX controls, including how to manage the controls using Group Policy, see Group Policy and ActiveX installation in the Internet Explorer 11 (IE11) - Deployment Guide for IT Pros.
Note
ActiveX controls are supported in Internet Explorer for the desktop for Windows 7 and Windows 8.1. They are not supported on the immersive version of Internet Explorer for Windows 8.1.
Scenario 1: Limited Internet-only use of ActiveX controls
While you might not care about your employees using ActiveX controls while on your intranet sites, you probably do want to limit ActiveX usage while your employee is on the Internet. By specifying and pre-approving a set of generic controls for use on the Internet, you’re able to let your employees use the Internet, but you can still limit your company’s exposure to potentially hazardous, non-approved ActiveX controls.
For example, your employees need to access an important Internet site, such as for a business partner or service provider, but there are ActiveX controls on their page. To make sure the site is accessible and functions the way it should, you can visit the site to review the controls, adding them as new entries to your
<system_drive>\Windows\Downloaded Program Files folder. Then, as part of your browser package, you can enable and approve these ActiveX controls to run on this specific site; while all additional controls are blocked.
To add and approve ActiveX controls
In IE, click Tools, and then Internet Options.
On the Security tab, click the zone that needs to change, and click Custom Level.
Go to Run ActiveX controls and plug-ins, and then click Administrator approved.
Repeat the last two steps until you have configured all the zones you want.
When you run the IEAK 11 Customization Wizard to create a custom package, you'll use the Additional Settings page, clicking each folder to expand its contents. Then select the check boxes for the controls you want to approve.
Scenario 2: Restricted use of ActiveX controls
You can get a higher degree of management over ActiveX controls by listing each of them out and then allowing the browser to use only that set of controls. The biggest challenge to using this method is the extra effort you need to put into figuring out all of the controls, and then actually listing them out. Because of that, we only recommend this approach if your complete set of controls is relatively small.
After you decide which controls you want to allow, you can specify them as approved by zone, using the process described in the first scenario. | https://docs.microsoft.com/en-us/internet-explorer/ie11-ieak/add-and-approve-activex-controls-ieak11 | 2017-05-22T22:56:33 | CC-MAIN-2017-22 | 1495463607120.76 | [] | docs.microsoft.com |
Transactional Publisher¶
The following example uses RabbitMQ’s Transactions feature to send the message, then roll it back:
    import rabbitpy

    # Connect to RabbitMQ on localhost, port 5672 as guest/guest
    with rabbitpy.Connection('amqp://guest:guest@localhost:5672/%2f') as conn:

        # Open the channel to communicate with RabbitMQ
        with conn.channel() as channel:

            # Start the transaction
            tx = rabbitpy.Tx(channel)
            tx.select()

            # Create the message to publish & publish it
            message = rabbitpy.Message(channel, 'message body value')
            message.publish('test_exchange', 'test-routing-key')

            # Rollback the transaction
            tx.rollback()
Storing:
You could then update the object from somewhere else:
This approach is useful for storing an object that is needed by multiple components across your application. | https://docs.mulesoft.com/mule-user-guide/v/3.3/storing-objects-in-the-registry | 2018-07-16T02:24:57 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.mulesoft.com |
Problem Management release notes

ServiceNow® Problem Management product enhancements and updates in the Kingston release.

Activation information

Active by default.

New in the Kingston release

- Business Service Active task: Added the active tasks icon to view a list of all active tasks affecting the business service that you have selected as a configuration item.
- Baseline ITSM dashboards (Performance Analytics Solutions): View IT Manager, IT Agent, and IT Executive ITSM dashboards that contain actionable data visualizations to monitor ongoing ITSM operations to help improve your business processes.
What's in the Release Notes
These release notes cover the following topics:
What's New
This release introduces the following new and enhanced capabilities of the vCloud Availability Installer appliance:
- New script for moving replications between datastores.
- New script for moving replications between vSphere Replication Servers.
- New script for consolidating virtual machines for a whole vCloud Director instance, a given organization, a given vApp or a given virtual machine.
- New script for deleting replications.
- New --type argument in the vcav vcd get-cloud-proxy script. The script now returns the to-the-cloud URL and the from-the-cloud URL. If you run the script without specifying a value for --type, you receive the to-the-cloud URL.
- New --folder argument in all vcav * create scripts. You can use the option to deploy a virtual machine into an existing folder. Use --folder for standard command deployment and placement-folder for deployment using a registry file.
- Updated SLES packages.
- The vcav cassandra create script now works with Cassandra 2.2.9.
- Enhanced vcav hcs add-rights-to-role script that supports the new mechanism for assigning rights in vCloud Director 8.20.
- Enhanced vSphere Replication Cloud Service configuration script. The vcav hcs configure command now enables the use of TLS 1.2 during the SSL handshake process.
- Enhanced vcav cassandra import-hcs-certificate script. You can now run the script on a dedicated Cassandra host.
- Enhanced error and warning messages for the vcav trust test, vcav trust add, vcav hms configure, vcav vcd check, vcav org-vdc enable-replication, vcav cassandra register, and vcav hcs configure scripts.
Product Documentation
In addition to the current release notes, you can use the documentation set for vCloud Availability 1.0.1 that includes the following deliverables.
- vCloud Availability for vCloud Director 1.0.1 Documentation
- Interoperability Pages for vCloud Availability for vCloud Director 1.0.1
Resolved Issues
Running vcav vcd [operation] commands or commands that contain the vcd-address argument on your vCloud Availability Installer may return an error

An issue that caused an API login request to fail, logging an error of the form:

ERROR - Unable to login to VCD VCD-IP-Address

is now resolved.
Certificates using a self-signed Intermediate Certificate Authority may result in an SSL Error during configuration
An issue that can cause configuration script failures, logging an error of the form:
bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)
is now resolved. | https://docs.vmware.com/en/vCloud-Availability-for-vCloud-Director/1.0.1.1/rn/vCloud-Availability-for-vCloud-Director-1011-Release-Notes.html | 2018-07-16T03:20:21 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.vmware.com |
Third-Party Usage¶
Running your own Add-ons server will likely require a few changes. There is currently no easy way to provide custom templates and since Firefox Accounts is used for authentication there is no way to authenticate a user outside of a Mozilla property.
If you would like to run your own Add-ons server you may want to update addons-server to support custom templates and move the Firefox Accounts management to a django authentication backend.
Another option would be to add any APIs that you required and write a custom frontend. This work is already underway and should be completed at some point but help is always welcome. You can find the API work in this project and the frontend work in addons-frontend. | http://addons-server.readthedocs.io/en/latest/topics/third-party.html | 2018-07-16T02:59:49 | CC-MAIN-2018-30 | 1531676589172.41 | [] | addons-server.readthedocs.io |
System administrators create a global outbound email server to handle outbound email notifications. You can create only one.
- Click Test Connection.
- Click Add. | https://docs.vmware.com/en/vRealize-Automation/7.4/com.vmware.vra.prepare.use.doc/GUID-598D8785-840B-46E6-8C35-1FC97AA446D4.html | 2018-07-16T03:15:49 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.vmware.com |
Release Notes
0.1. 2.6.1
0.1.1. Upgraded Features
- Accordion: Added expanded prop to accordion.
- ActionSheet: Fixed as per design guidelines.
- Date Picker: Added onDateChange callback support for Android.
- Picker: Fixed Header Left Button alignment as per design guidelines.
- Theme:
  - Card:
    - Replaced listItemPadding for cards with new variable cardItemPadding. This lets you customize the space between Card and CardItem.
    - Updated transparent prop to render without elevation and border.
- Input: Added Picker support with Input. Introduced picker prop with <Item>.
0.1.2. Bug Fixes
- Accordion: Added expanded parameter to renderHeader callback method.
- Font: Added Fonts/MaterialCommunityIcons.ttf.
- Header: Added Statusbar color support for transparent Header on Android.
- Input: Fixed FloatingLabel's float issue onFocus of Input.
0.2. 2.6.0
0.2.1. New Features
- Added Vue-Native plugin for NativeBase.
0.3. 2.5.2
0.3.1. Upgraded Features
- Accordion: Added border style to accordion, customisable from the theme.
- Card: Added card borderRadius to theme.
- DatePicker:
  - Exposed onDateChange method for iOS.
  - Added placeHolderTextStyle prop to DatePicker.
- Header: Added transparent prop to Header.
- Typescript: Added definitions for Accordion and DatePicker.
0.3.2. Bug Fixes
- General: NativeBase passes flow check.
- Header: Fixed header padding issue on iPhone X in case of inline styles.
- Input:
  - StackedLabel supports multiline prop.
  - Fixed back StackedLabel input scroll.
  - FloatingLabel supports multiline prop.
  - Added check to filter out Input.
- Tabs: Tab button text font size is customizable from theme.
- Typescript:
- Fixed typo.
0.4. 2.5.1
0.4.1. Upgraded Features
- Changes in package.json to improve install and jest performance.
0.5. 2.5.0
0.5.1. New Features
- Added Accordion component.
- Added Date Picker component.
- Added Jest test cases to components.
0.5.2. Upgraded Features
- Upgraded dev dependencies to support Jest test cases.
- Safearea implementation for Header, Content and Footer.
- Icon: Added type Icon PropTypes.
- Picker:
  - Added back modalStyle for iOS picker.
  - Added enabled prop to picker for iOS.
- Segment: Added icon support with segments.
- Typescript:
- Added Icon typing to Button.
  - Added noIndent typing to ListItem.
  - Updated with new types which support the latest react-native types (16.3+).
0.5.3. Bug Fixes
- ActionSheet:
  - Update ActionSheet refs if application root is reinitialized.
  - Title space added to ActionSheet only in presence of title prop.
- Button: View renders Block style button in Form for Android.
- Input: Floating Label resets its position when input is cleared.
- ListItem:
  - Added touchableHighlightStyle prop for listItem.
  - ListItem supports all TouchableNativeFeedback props.
- Picker:
  - Picker renders with a single item defined in its Item list.
  - Picker radio button aligned to right.
- Segment: Removed segment button horizontal padding.
- Toast:
  - Update Toast refs if application root is reinitialized.
  - Set Toast without timeout along with reason for onClose.
- Typescript: CheckBox interface extends TouchableOpacity props.
0.6. 2.4.5
Fixed Header as per iOS and material design guidelines
0.7. Upgraded Features
- Added noLeft prop to Header. Irrespective of whether a Left component is defined, Android will display the Body's Title component to the left of the display. Applicable for the Android platform; no changes when used for iOS.
0.8. Bug Fixes
- Fixed lineHeight for Left, Body, Right components
- Fixed alignment of child elements of Left, Body and Right components
- Fixed fontSize (text, icon) for child elements of Left and Right components
- Fixed hasText for Header's Button used with Left and Right components
- Fixed Title fontSize and fontWeight
- Fixed Subtitle fontSize and fontWeight
- Header's Left component will not render Text button for Android
0.9. 2.4.4
0.9.1. Upgraded Features
- Font: Default font size changed to 16.
0.9.2. Bug Fixes
- Card: Fixed CardItem bordered from displaying randomly for Android.
- Font: Added Font rubicon-icon-font.ttf.
- Footer: Fixed Button text color for all color shades when used in <Footer>.
- Fixed floating label input text from going onto a second line.
- Removed lineHeight dependency of StackedLabel input.
- ListItem:
  - Added noBorder style for ListItem with props namely icon, avatar and thumbnail.
  - Added noIndent prop to listItem.
  - Added listItemSelected to theme variables.
- Added support to style iosIcon of picker (iOS).
- Fixed Picker from disappearing for Android when displayed along with icon.
- Radio: Added active and inactive color props to radio.
0.10. 2.4.3
0.10.1. General
- Folder Structure: Renamed
Utilsto
utils.
0.10.2. Bug Fixes
- Button: Removed
lineHeightdependency in button.
- Card:
- Added
noShadowto card theme.
- Fixed CardItem header and footer border for Android.
- Header: Reduce space between left button and title for Android.
- Input:
- FloatingLabel renders icon, label and input in its order of definition.
- Added missing ref to Input in Item.js.
- Removed
lineHeightdependency of Input.
- Picker: Removed Content warapping Flatlist.
- Typescript:
- Moved listview properties of interface ReactListViewProperties to Card interface.
- Added missing props to list interface.
- Added
getRefto Input interface.
- Added
spanand
hasSubtitleto Header interface.
- Added missing props of Picker, Header, SwipeRow, Toast.
0.11. 2.4.2
0.11.1. General
- Upgraded react-native-vector-icons.
- Upgraded react-native-keyboard-aware-scroll-view.
0.11.2. Bug Fixes
- Button:
  - Fixed icon alignment with small button for iOS.
  - Added transparent theme with disabled button.
  - Added maxHeight to rounded button.
  - Aligned icons with title text.
- Content: Moved padding from Content's style to contentContainerStyle.
- Input:
  - Fixed Input scrolling issue for stackedLabel.
  - Input fixed to fetch variables from theme context.
  - inputColorPlaceholder reflects changes when customized in ejected theme.
- Searchbar: Fixed searchbar input lineHeight when Input is passed with value.
- TypeScript:
- Added contentProps to Tabs.
- Overrode picker style type.
0.12. 2.4.1
0.12.1. Bug Fixes
- General: Updated npm-ignore to fix install issue.
0.13. 2.4.0
0.13.1. New Features
- NativeBase is now available for web.
0.13.2. Upgraded Features
- Theme:
- Removed excess marginLeft with List.
0.13.3. Bug Fixes
- Button: Fixed ripple / highlight effect on a rounded button with respect to its border radius.
- CardItem: Fixed Text color of CardItem with bordered. Fixed Footer text when used with bordered CardItem.
- FAB: Fixed FAB container flexDirection code.
- Label: Fixed usage of StyleSheet with Label.
- ListItem: ListItem passes down delayPressOut & other TouchableHighlight props.
- Theme:
  - Fixed FooterTab variables for Android.
- Typescript:
  - Added refreshing and refreshControl to Content definition.
  - Added textStyle field to Picker.
0.14. 2.3.10
0.14.1. Upgraded Features
- General: Included NativeBase support for Ignite in ReadMe.
- Icons: Added EvilIcons support for Icons.
0.14.2. Bug Fixes
- General: NativeBase components resolved in PhpStorm / WebStorm.
- FAB: Added stylesheet support for styling Fab child buttons.
- Icons: Icons render wrt type when fetching names across different font families.
- Input: Fix for Input underline color (Android).
- ListItem: Fixed selected style for ListItem.
- Picker: Changed FlatList keytype from number to string.
- Tabs: Keyboard double click issue fixed with Tabs.
- Theme:
  - Fixed platform, material and common theme for iOS and Android.
  - Theme files support fontVariant (array of enum).
- TypeScript:
- Added missing props for SwipeRow.
- Added the style property to the Checkbox.
0.15. 2.3.9
0.15.1. General
- Button: TouchableNativeFeedback supports Android Platform Version 21 onwards.
0.15.2. Upgraded Features
- Theme: Fix/remove platform dependency/materialjs.
- Toast:
  - Refactored ToastContainer to DRY-up calls to Animated.timing.
  - Save a timeout when fading the toast out by using the Animated.timing completion callback.
- Typescript:
  - Added thumbnail prop in ListItem Typescript.
  - Added small and large properties to Thumbnail.
  - Added leftOpenValue property to interface List.
  - Added a few Card and CardItem types.
  - Added type prop to Icon.
  - Added button icon types for ActionSheet options.
0.15.3. Bug Fixes
- ActionSheet:
- Fixed warning issue. Changed Flatlist keytype from number to string.
- Defined bounds of ActionSheet modal to restrict within the Root container in case of huge list of options for ActionSheet modal.
- Footer: Styled child components of Footer.
- Input: Fixed overlapping of Stack label with Input text field when wrapped without Content.
- Typescript:
  - Changed SubTitle to Subtitle.
0.16. 2.3.8
0.16.1. General
- Dev-dependencies: Upgraded react-native-easy-grid from 0.1.15 to 0.1.17.
0.16.2. Upgraded Features
- Button: Improved Button theme structure to remove code redundancies.
- CardItem: Improved CardItem theme structure to remove code redundancies.
- Icon: Accept Icon Type as a prop.
- List: Added enable EmptySections flag to List to render empty section headers.
- Toast:
- Toast component improvements with
onClosecallback.
- Fixed Toast timeout bug. Save the timeout ID when a toast is shown so that we can clear any existing timeout when a new toast is shown so that an old timeout doesn't close a new toast prematurely.
- TypeScript: Added optional
SwipeRowproperties to prevent tslint error.
0.16.3. Bug Fixes
- Input: FLoating Label is cropped from top while it floats on top.
- H1, H2, H3:
H1,
H2,
H3now takes number along with string as input.
- Segment: Fixed segment overlapping with Right element in Header.
- Theme: Fixed menu icon color for Android.
- TypeScript:
- Added TypeScript support for Picker
placeholderStyle.
ViewStylesto accept array.
- Typescript declaration file missing
ScrollableTab.
- Fixed Header
Titletype.
0.17. 2.3.7
0.17.1. Upgraded Features
- Packages: Replaced git URL by release versions for
react-native-drawerand
react-native-keyboard-aware-scroll-view.
- Theme: Updated some of the theme variables.
0.18. 2.3.6
0.18.1. New Features
- Font:
- Adding support for Feather Font.
- Added support for EvilIcons.
0.18.2. Upgraded Features
- ActionSheet: Replaced ListView with FlatList in ActionSheet.
- CardItem: Added activeOpacity prop for CardItem.
- Picker: Replaced ListView with FlatList in Picker.
- SwipeRow: Added style implementation for SwipeRow.
- Theme:
- Updated Shoutem theme from 0.2.1 to 0.2.2.
- Removed unused theme variables.
- Sorted variables component-wise alphabetically.
- Type definition:
- Updated type definition for ActionSheet. Title optional.
- Added Btn, Tabs and Tabs missing types.
0.18.3. Bug Fixes
- General:
- Removes unused and broken var declaration.
- Added missing property style to interface separator in index.d.ts.
- FAB:
- Fixed buttongroup popping out initially on bottomLeft.
- Proper spacing between FAB and buttongroup for all positions.
- Tab: Tab's initialPage and tab indicator issue fixed.
- Type definition: SwipeRow not exported in TypeScript definition. Added missing export SubTitle in typescript declaration file.
0.19. 2.3.5
0.19.1. Bug Fixes
- Release Crash: Fixed PropTypes issue, which caused Release Crash for iOS and Android.
- Actionsheet:
- Actionsheet for Android returns
buttonIndexas number.
- Fixed Actionsheet returning different buttonIndex for different platforms when on touch outside
- Card: Fixed UI breakage with Card for iPhoneX view.
- Icon: Wrong icon name mapping for Android.
- Picker: Added placeholderStyle to Picker. Customizable color for Icon with Picker
- Searchbar: Text vertically centered in Header SearchBar.
- Tabs: Fixed
overlayTopposition for Tabs.
- Toast: Text and Button-text supports empty string. Added default duration of 1500.
- Theme:
- Removed unused variable
listItemHeight.
- Use theme to turn off
uppercasebuttons on Android
0.20. 2.3.4
0.20.1. Bug Fixes
- Keyboard behaviour: Added
keyboardShouldPersistTapsprop with handled as default value.
0.21. 2.1.4
0.21.1. Upgraded Features
- Upgraded react-native-vector-icons to 4.1.1
- Upgraded native-base-shoutem-theme to 0.1.4
- Excluded react-native-scrollable-tab-view dependency
0.21.2. Bug Fixes
- Button: Made button text uppercase by default for Android
- Text: Added uppercase prop to Text
- View: Fixed warning for View proptype
0.22. 2.1.3
0.22.1. Upgraded Features
- Fixed StaticContainer issue
- Update Typescript definitions
0.22.2. Bug Fixes
- ActionSheet: Fixed issue on pressing Android Back button while an ActionSheet is displayed
- Segment: Fixed Segment issue to render at the center of Header
- CheckBox: Fixed property for changing CheckBox background color
- CardItem: Fixed issue with borderBottomWidth for last CardItem
- Icons: Renders SimpleLineIcons
- List:
- List re-renders when children change on iOS
- Dynamic List refreshes when data is refreshed
- FAB: FAB expands with icons
- Toast:
- Fixed Button text on Toast
- Works in landscape mode
- Picker: Works in landscape mode
- Fixed native-base-shoutem-theme
0.23. 2.1.2
0.23.1. Upgraded Features:
- Changed package name for Shoutem theme
- Touchable effect to Card
- Updated Doc URL for Customize section in ejectTheme.js
0.23.2. Bug Fixes
- Fixed Icon color with Form item Floating labels
- Fixed property for changing CheckBox background color
- Fixed Checkbox Size attribute
0.24. 2.1.0
0.24.1. Bug Fixes
- Picker: Fixed related picker issue.
- General: Performance issue resolved.
0.25. 2.0.0-alpha1
0.25.1. New Features
- Tab: Uncontrolled Tabs similar to FooterTabs.
- Icon: Gives platform specific icons.
- Form Components
- Item: Much like InputGroup with added features of inline label, stacked label, and floating label.
- Left, Body, Right: Views which aligns its content to the left, center, right respectively.
- Smart Components:
- Header
- Button
- Tabs
- StyleProvider: To apply themes and customize any components.
0.25.2. Upgraded Features
- With
StyleProvider, all components are fully customizable.
- CardItem, Header and ListItem: Use of
Left,
Bodyand
Rightcomponents for proper alignments and customization.
0.26. 0.5.21
0.26.1. Bug Fixes
- FABs: Fixed scrolling issue.
- Footer Tabs: Fixed TabBar text size issue.
0.26.2. Upgraded Features
- List: Added refreshControl feature.
- Vector Icons: Fixed installation dependency of React Native Vector Icons.
0.27. 0.5.20
0.27.1. Bug Fixes
- Button: Fixed for null children condition.
- Vector Icons: Fixed installation dependency of React Native Vector Icons.
- Input: InlineLabel Input fixed.
0.28. 0.5.19
0.28.1. New Features
- Gravatar: Thumbnail like feature which pulls out avatar of user if registered globally.
0.28.2. Upgraded Features
- Picker:
- Added new props.
- Support inside an InputGroup when used with ListItem.
0.28.3. Bug Fixes
- InputGroup: Fixed issue when Button inside a InputGroup without Icon.
0.29. 0.5.17
0.29.1. Bug Fixes
0.30. 0.5.16
0.30.1. Upgraded Features
- Picker:
- Support inline label for Picker(Android).
- Picker fills the view when defined as inlineLabel.
0.30.2. Bug Fixes
- General:
- Changed import for ReactNativePropRegistry.
- Fixed React and RN dependencies.
- Deck Swiper: Fixed issues with RN 0.37
- Spinner: Updated dimensions for spinner.
0.31. 0.5.15
0.31.1. Upgraded Features
- Added Linting rules to missing files
0.31.2. Bug Fixes
- General: Widgets support null or undefined children.
- DeckSwiper: Fixed onSwiping feature.
0.32. 0.5.14
0.32.1. New Features
- Added Typings
0.32.2. Upgraded Features
- DeckSwiper: Added new props onSwiping, renderTop, renderBottom.
0.32.3. Bug Fixes
- Button: Fixed toUpperCase error on Android.
- FAB: Fixed alignment for FAB in all four direction.
- Header: Support Title and/or Button.
0.33. 0.5.13
0.33.1. Upgraded Features
- Ref: Added _root ref to all components.
0.33.2. Bug Fixes
- ListItem: Fixed bug in case of null child.
0.34. 0.5.12
0.34.1. New Features
- Badge with FooterTab Button: NativeBase Badge with Buttons in FooterTab; much like Facebook notifications.
- FABs: A special type of promoted action, which when clicked may contain more related actions.
- Shallow Merge: NativeBase switched to shallow merge.
- Top TabBar
0.34.2. Upgraded Features
- CardItem: Fixed padding between card items for iOS and Android.
0.34.3. Bug Fixes
- Tabs: Updated Tabs background color.
- Vector Icons: Fixed installation dependency of React Native Vector Icons.
0.35. 0.5.11
0.35.1. Upgraded Features
- Header: Improved alignment.
0.36. 0.5.10
0.36.1. Upgraded Features
- Badge: Updated Text style for Badge.
- Button: Improved alignment.
- Card: Button in Card renders with proper size.
- FooterTab: Improved alignment.
- Header:
- Supports
Titleto be used as single component in Header with proper alignment for iOS and Android.
- Updated Buttons in Header.
- ListItem:
- Improved alignment.
- Alignment for Badge.
- Alignment for Button.
- Picker: Improved alignment.
- Searchbar: Alignment of Icon and placeholder text in Searchbar.
0.37. 0.5.9
0.37.1. New Features
- Deck Swiper: Tinder-like swipe cards to select/reject data set with features to swipe left and right.
- Generate Image from Icon: Genrates an Image resource for NativeBase Icons.
- filter() for null value: Usually if a null value is passed as a child to Component, it throws few errors. This .filter() removes all falsey values from this.props.children, preventing the errors, and returning the correct result.
0.37.2. Upgraded Features
- FooterTab: Added onPress support for elements of FooterTab.
- InputGroup: Allows null block inside InputGroup Component.
- Tabs:
- Help to switch between the Tabs component programatically. Say page = 1.
- Ensures that Tabs component's props.children is an array, else creates a single item array if it is not. Thus allows calls to .filter() and .map().
- ES Lint: Config ESLint (airbnb) to enforce coding style.
0.37.3. Bug Fixes
- Picker: Updates Picker.Item value dynamically.
- Keyboard-aware-scroll-view:
- resetScrollToCoords: This is user definable prop. Coordinates that will be used to reset the scroll when the keyboard hides. Also restores scroll position after keyboard hides if resetScrollToCoords is not set.
- disableKBDismissScroll: Disables automatic scroll on focus.
- Content: Eliminates margin on the top of Content which includes any fields inside of it.
0.38. 0.5.8
0.38.1. New Features
- FooterTab: Button Tabs in Footer.
0.38.2. Upgraded Features
- Upgraded react-native-keyboard-aware-scroll-view from 0.1.2 to 0.2.0
- Button: Supports prop
capitalize(only Android)
- Dynamic Card: Render data in chunks with large set of data.
- Content: Added ref_scrollview to Content.
- Theme: Theme variables added for
- Toolbar for android
- StatusBar for android
0.38.3. Bug Fixes
- Badge: Supports font-size, lineHeight, width and color.
- Checkbox: Responds on user's click.
- H1, H2, H3 LineHeight: Added lineHeight to H1, H2, H3 components.
- LisItem:
- LisItem height removed and scrollView added to container.
- LisItem supports both
iconLeftand
iconRightused together.
- Radio Button: Responds on user's click.
0.39. 0.5.7
0.39.1. Upgraded Features
- Theme applicable to iconFontFamily
- Fixed Button issues for Android
- List now supports Picker as ListItem
0.40. 0.5.6
0.40.1. Upgraded Features
- Picker
- Styling Picker
- Custom buttons in header for Picker.
- Card includes new prop : button
- List includes new prop : button
0.41. 0.5.5
0.41.1. New Features
- Button includes a new prop, disabled.
0.41.2. Upgraded Features
- Revised Anatomy
- Fixed bugs when adding props with components to NativeBase widgets.
- InputGroup
- Input accepts defined Stylesheet.
- Added reference to InputGroup to fetch input value.
0.42. 0.5.4
0.42.1. New Features
- InputGroup
- Provides three types of textbox: success, disabled and error.
- Customize border color using Theme file.
0.42.2. Upgraded Features
- Header component now takes even a single element, Button or Title.
0.43. 0.5.3
0.43.1. New Features
- Added CardSwiper
- Added DeckSwiper
- Integrated Clamp 1.0.1
- Prop types added to all components of NativeBase.
0.43.2. Upgraded Features
- Fixed alignment for TextInput
- Thumbnail supports styles using Theme file.
0.44. 0.5.2
0.44.1. New Features
- Added Picker (dropdown)
- Integrated Keyboard aware scrollview 0.1.2
0.44.2. Upgraded Features
- Header component for Android
- Button
- Card Header and Footer component
- Form
- InputGroup
- List
- Search bar
- Tabs
0.45. 0.5.0
0.45.1. New Features
- Platform specific components with single codebase.
- Added a set of Fonts.
- Added Check Box.
- Added Radio Button.
- Added Search Bar.
- Added Spinner.
- Added Tabs.
- Added Dynamic List to render data in chunks for app with large set of data.
0.46. 0.4.6
0.46.1. New Features
- Supports upgraded version of React: v15.0.2 to v15.1.0
- Supports upgraded version of React Native: v0.26.0 to v0.27.1
- Supports upgraded version of React Native Vector Icons: v2.0.x
0.47. 0.3.1
0.47.1. New Features
- Added Form
- Added ProgressBar
- Added Spinner
0.48. 0.3.0
0.48.1. New Features
- Added Card types.
- Card with Header and Footer
- Card List
- Card Image
- Card Showcase
- Added Layout.
- Added Theme customization.
- Added Thumbnail component.
0.49. 0.2.1
0.49.1. Upgraded Features
- Styling for Icon component.
0.49.2. New Features
- Added Badge component.
- Added Button types.
- Button Theme
- Block Button
- Round Button
- Icon Button
- Button Size
- Added List types.
- List Divider
- List Icon
- List Avatar
- List Thumbnail
0.50. 0.2.0
0.50.1. New Features
- Added List component.
0.50.2. Upgraded Feature
- Toolbar is recreated as two new components: Header and Footer.
- Added props to Button.
0.51. 0.1.1
0.51.1. New Features
- Added layout for screen, separating header and footer from the body.
- Added Header and Footer using layout.
- Added button that includes: text, icons, text-with-icon.
- Added bootstrap themes for button.
- Added InputGroup to include various styles of textbox. | http://docs.nativebase.io/docs/release-notes/Release.html | 2018-07-16T02:26:58 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.nativebase.io |
tab, click the Show deleted files icon. The deleted files and folders appear.
- Select the file or folder that you want to restore.
If you want to select multiple files or folders, click the check box before a file or folder name to select that file or folder.
- Click Restore. The selected files and folders are restored. | https://docs.druva.com/005_inSync_Client/inSync_Client_5.8/Share_and_Sync/Share_and_Sync_for_inSync_Cloud/Work_with_inSync_Share_content/050_Restore_deleted_content | 2018-07-16T02:40:58 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.druva.com |
AccountingIntegrator Enabler 2.2.1 User Guide Add entire publication Select collection Cancel Create a New Collection Set as my default collection Cancel This topic and sub-topics have been added to MyDocs. Ok No Collection has been selected. --> Audit-Rule: Audit Trace tab What the Audit Trace tab shows How to use the Audit Trace tab What the Audit Trace tab shows Use the Audit Trace tab to define the Business-Document fields to extract and audit. The Audit Trace tab is organized into three parts separated by splitter bars as illustrated schematically in the following diagram: Icon barprovides buttons to delete an audit trace or reorder the list of defined audit traces. Business-Document tree structure you selected in the Business-Document field in the General tab. Field definition displays a table containing the Business-Document fields selected from the right pane to extract and audit. How to use the Audit Trace tab Using the icon bar ClickToDelete an audit trace.AccountingIntegrator Enabler removes the Business-Document from the audit trace list, and places it in the Business-Document tree structure in the right pane.Move the selected audit trace up in the list. This enables you to set the execution sequence for a set of Audit-Rules.Move the selected audit trace down in the list. This enables you to set the execution sequence for a set of Audit-Rules.Defining fields to auditSelecting fields to auditFrom the Business-Document tree structure in the right pane, drag-and-drop nodes to the fields in the field definition table in the left pane.You can use Shift-Click to select a set of fields between the first and last click or Ctrl-click to select a set of non-adjacent fields.Fields you cannot auditThe following fields are not available for selection:Fields already defined as audit fieldsFields hierarchically inferior or superior to audit fields already definedThese fields are marked with the icon. 
If you try to use a field that is restricted from use, the application displays the following error message:"The field <field name> cannot be added since this field or one its hierarchy has already been added."For exampleTo explain this constraint, consider the example of a Business-Document that comprises the following fields: FIELD-1 FIELD-XFIELD-AFIELD-BFIELD-YFIELD-ZFIELD-2Each time you select a field, that selection changes the set of possible remaining fields. For example, if you select FIELD-X, then the set of potential fields that you could select becomes:FIELD-YFIELD-ZFIELD-2This is because the fields hierarchically superior (FIELD-1) and inferior (FIELD-A and FIELD-B) can no longer be selected. However, if you select FIELD-Y, then the set of potential fields that you could select becomes:FIELD-BFIELD-YFIELD-ZFIELD-2Completing the field definition tableSpecify the following attributes for each Business-Document field extracted for audit.FieldContentsFieldFrom the right pane, drag-and-drop the Business-Document field that contains the field you want to audit.AccountingIntegrator Enabler displays the Field with the following syntax: Field Name + ( DML Type ( Definition ) )LabelAccountingIntegrator Enabler displays the description you entered when you defined the selected Business-Document field.Back to top Related Links | https://docs.axway.com/bundle/AccountingIntegratorEnabler_221_UserGuide_allOS_en_HTML5/page/Content/UserGuide/AIEnabler/Rules/Audit/Audit-Rule__Audit_Trace_tab.htm | 2018-07-16T02:47:55 | CC-MAIN-2018-30 | 1531676589172.41 | [] | docs.axway.com |
Solidity Gradle Plugin¶
Simple Gradle plugin used by the Web3j plugin to compile Solidity contracts, but it can be used in any standalone project for this purpose.
Plugin configuration¶
To configure the Solidity Gradle Plugin using the plugins DSL or the legacy plugin application,
check the plugin page.
The minimum Gradle version to run the plugin is
5.+.
Then run this command from your project containing Solidity contracts:
./gradlew build
After the task execution, the base directory for compiled code (by default
$buildDir/resources/solidity) will contain a directory for each source set
(by default
main and
test), and each of those a directory with the compiled code.
Code generation¶
The
solidity DSL allows to configure the generated code, e.g.:
solidity { outputComponents = [BIN, ABI, ASM_JSON] optimizeRuns = 500 }
The properties accepted by the DSL are listed in the following table:
Notes:
- Setting the
executableproperty will disable the bundled
solcand use your local or containerized executable:
solidity { executable = "docker run --rm -v $projectDir/src:/src -v $projectDir/build:/build ethereum/solc:0.6.4-alpine" version = '0.4.15' }
- Use
versionto change the bundled Solidity version. Check the Solidity releases for all available versions.
allowPathscontains all project's Solidity source sets by default.
Source sets¶
By default, all
.sol files in
$projectDir/src/main/solidity will be processed by the plugin.
To specify and add different source sets, use the
sourceSets DSL. You can also set your preferred
output directory for compiled code.
sourceSets { main { solidity { srcDir { "my/custom/path/to/solidity" } output.resourcesDir = file('out/bin/compiledSol') } } }
Gradle Node Plugin¶
The plugin makes use of the Node plugin to resolve third-party contract dependencies. It currently supports:
When importing libraries from
@openzeppeling/contracts in your Solidity contract the plugin will use the task
resolveSolidity to generate
a
package.json file in order to be used by the Node plugin. By default,
package.json will be generated under the
build/ directory.
If you with do define your own
package.json you need to add the following snippet in your
build.gradle file.
node { nodeProjectDir = file("my/custom/node/directory") }
The plugin will look for the
package.json file in the directory set and will also download the node modules under the same directory.
Plugin tasks¶
The Java Plugin
adds tasks to your project build using a naming convention on a per source set basis
(i.e.
compileJava,
compileTestJava).
Similarly, the Solidity plugin will add a:
resolveSoliditytask for all project Solidity sources.
compileSoliditytask for the project
mainsource set.
compile<SourceSet>Solidityfor each remaining source set. (e.g.
compileTestSolidityfor the
testsource set, etc.).
To obtain a list and description of all added tasks, run the command:
./gradlew tasks --all | http://docs.web3j.io/plugins/solidity_gradle_plugin/ | 2020-11-24T06:15:19 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.web3j.io |
ro the ROS package management tool. The rospack package contains a single binary, called rospack.
rospack is part dpkg, part pkg-config. The main function of rospack is to crawl through the packages in ROS_ROOT and ROS_PACKAGE_PATH, read and parse the manifest.xml for each package, and assemble a complete dependency tree for all packages.
Using this tree, rospack can answer a number of queries about packages and their dependencies. Common queries include:
rospack is intended to be cross-platform.
rospack crawls in the following order: the directory ROS_ROOT, followed by the colon-separated list of directories ROS_PACKAGE_PATH, in the order they are listed.
During the crawl, rospack examines the contents of each directory, looking for a file called manifest.xml. If such a file is found, the directory containing it is considered to be a ROS package, with the package name equal to the directory name. The crawl does not descend further once a manifest is found (i.e., packages cannot be nested inside one another).
If a manifest.xml file packages by the same name exist within the search path, the first one found wins. It is strongly recommended that you keep packages.
rospack re-parses the manifest.
rospack's performance can be adversely affected by the presence of very broad and/or deep directory structures that don't contain manifest files. If such directories are in rospack's search path, it can spend a lot of time crawling them only to discover that there are no packages to be found. You can prevent this latency by creating a rospack_nosubdirs file in such directories. If rospack seems to be running annoyingly slowly, you can use the profile command, which will print out the 20 slowest trees to crawl (or use profile --length=N to print the slowest N trees).
Because rospack is the tool that determines dependencies, it cannot depend on anything else. Thus rospack contains a copy of the TinyXML library, instead of using the copy available in 3rdparty. For the same reason, unit tests for rospack, which require gtest, are in a separate package, called rospack_test.
rospack is used entirely as a command-line tool. While the main functionality within rospack is built as a library for testing purposes, it is not intended for use in writing other applications. Should this change, the rospack library API should be cleaned up and better documented.
For now, the user-visible API is:
See main.cpp for example usage
rospack does not expose a ROS API.
rospack is the command-line tool that provides package management services.
rospack crawls the directory ROS_ROOT and the colon-separated directories in ROS_PACKAGE_PATH, determining a directory to be package if it contains a file called manifest.xml. | http://docs.ros.org/en/electric/api/rospack/html/ | 2020-11-24T07:36:34 | CC-MAIN-2020-50 | 1606141171126.6 | [] | docs.ros.org |
Get data from APIs and other remote data interfaces through scripted inputs
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
Contents
Get data from APIs and other remote data interfaces through scripted inputs
Splunk can accept events from scripts that you provide. Scripted input is useful in conjunction with command-line tools, such as
vmstat,
iostat,
netstat,
top, etc. You can use scripted input to get data from Dashboards, Views, and Apps/freebsd).
Starting with release 4.2, any
stderr messages generated by scripted inputs are logged = <integer>|<cron schedule>
- Indicates how often to execute the specified command. Specify either an integer value representing seconds or a valid cron schedule.
- Defaults to 60 seconds.
- When a
cron scheduleis specified, the script is not executed on start up.
->
- Set the index where events from this input will be stored.
- The
<string>is prepended with 'index::'.
- Defaults to
main, or whatever you have set as your default index.
- For more information about the index field, see "How indexing works" in the Admin: Overriding the source key is generally not recommended. Typically, the input layer will provide a more accurate string to aid in problem analysis and investigation, accurately recording the file from which the data was retreived. Consider use of source types, tagging, and search wildcards before overriding this value.
- The
<string>is prepended Dashboards,/default/:
. | http://docs.splunk.com/Documentation/Splunk/4.3.1/Data/Setupcustominputs | 2015-02-27T05:57:26 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.splunk.com |
This page represents the current plan; for discussion please check the tracker link above.
Description
We have both some text parsing (CQL and WKT) and XML parsing (filter, gml) incoming. This proposal outlines how we can be consistent in terms of package structure.
- CQL - Catalogue Query Language - currently an unsupported module
- XML - Extensible Markup Language - we are forced to run our own parser due to scaling issues
- WKT - Well Known Text - we already support CRS definition, we need one for Parsing Geometry
How can we be consistent:
- Make a "default" useful parser available as a utility class
- CQL
- CRS.parseWKT
- JTS.parseWKT
- Spatial.parseWKT (should be method compatible with JTS utility class)
- Hide details in a parser package (that casual users do not need to import)
- Be consistent with package names (not sure how to handle version differences)
- org.geotools.filter.text.cql2 - parser code for CQL
- org.geotools.geometry.text.wkt - parser code for WKT
- org.geotools.filter.xml.filter1_0 - filter 1.0 bindings (requires gml2)
- org.geotools.filter.xml.filter1_1 - Filter 1.1 bindings (requires gml3)
- org.geotools.geometry.xml.gml2 - geometry bindings for latest GML2
- org.geotools.geometry.xml.gml3 - geometry bindings for latest GML3
Of the format org.geotools.SUBJECT.PARSER.SPECIFICATION:
- SUBJECT - is the output being produced (ie style, geometry, referencing, feature, filter, etc...)
- PARSER - is the kind of "input" being considered (ie text or xml )
- SPECIFICATION - (optional) if you need to get more specific on the kind of "input" you can quote the specification here.
Please note this is for the gory details only; your users should not have to import anything from these packages. You may be stuck making some of the content public (especially for XML callbacks) - but none of your example/user code should be forced into an import.
Status
Voting in process - closes April 2nd.
Discussed on the email list and in a weekly IRC meeting ( 2007/03/19/IRC Meeting - 19 March 2007).
Votes are currently being collected:
- Andrea Aime +1
- Chris Holmes
- Ian Turton
- Jody Garnett +1
- Martin Desruisseaux
- Richard Gould +1
- Simone Giannecchini
It would be nice to have this approved this week so the work can be in the release
Resources
Tasks
A target release is also provided for each milestone.
API Changes
BEFORE
This change introduces new API:
AFTER
Documentation Changes
Website:
- Update Upgrade to 2.4 instructions (if xml parsing packages are changed)
Developers Guide:
- Update description package convention to describe how additional parsers can be added.
User Guide:
- CQL examples look great!
- CRS examples needed
- Spatial examples needed
User Manual:
- Check with Acuster to see if demo can be updated
Issue Tracker:
- Close jira when completed | http://docs.codehaus.org/display/GEOTOOLS/Provide+Parsers+in+a+consistent+fashion?sortBy=name&sortOrder=ascending | 2015-02-27T06:03:35 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.codehaus.org |
Download
Install Composite C1!
- Ensure you have Microsoft .NET Framework 4.5 installed on your computer.
- Install Composite C1 on Microsoft WebMatrix:
- Run your website in WebMatrix and complete the Composite C1 Setup wizard.
- For an intro on how to edit your pages and media, visit our "Getting Started" guide for users.
- For an intro on how to customize and publish your website, visit our "Getting Started" guide for web pros.
For more details on how to complete the installation and setup visit our Installation and Setup guide.
Visual Studio Developers and IIS Administrators
If you know your way around IIS or Visual Studio 2010 (or later), download a ZIP containing the website directory and a Visual Studio solution file.
How to set up Composite C1 on IIS and Visual Studio
Source code download
Composite C1 is free open source software, it is fully featured and the source code is available at. | http://docs.composite.net/Getting-started/Download | 2015-02-27T05:55:38 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.composite.net |
The Pyramid Web Framework, Version 1.1 as "as-is" basis. The author and the publisher shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book. No patent liability is assumed with respect to the use of the information contained herein.. | http://docs.pylonsproject.org/projects/pyramid/en/master/copyright.html | 2015-02-27T06:00:17 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.pylonsproject.org |
Definition
Start a container that is not already running
Explanation
First you need to create a
Container instance. This can be done using the container factory or directly by instating a container implementation class.
Once you have this container instance, starting the container is as simple as calling the
start() method. Before doing this though you'll need to ensure you have defined the container's
homeDir (if you're using a container in standalone mode - It's not required for containers in embedded mode). You'll also need to ensure you've also created and assigned a container installation.
Of course it you wish to statically deploy archives, you'll need to add deployables to the container.
It is important to note that the
Container.
Starting Resin 3.x with no deployables | http://docs.codehaus.org/pages/viewpage.action?pageId=13159 | 2015-02-27T06:09:43 | CC-MAIN-2015-11 | 1424936460576.24 | [] | docs.codehaus.org |
Ways to populate the Product Catalog
You can populate the Product Catalog in the following ways:
- Manually add product entries
You can create new companies or products, versions, and patches from the Product Catalog Console, and define their status.
For more information, see Adding Product Catalog entries.
- Import data using Atrium Integrator jobs
Based on your input source type (for example CSV, XML, or DB ), you must design and run an Atrium Integrator job to import the data. For more information, see Transferring data from external data stores to BMC CMDB.
- Import data using UDM (Unified Data Management )
Use the Data Management tool which includes predefined templates to import data into the Product Catalog. See Data Management in Remedy ITSM suite online documentation.
- Use Normalization Engine to create product entries
If you import data from Atrium Integrator or any other external sources, then use Normalization Engine to populate the Product Catalog. For guidelines and procedures, see Configuring the Product Catalog for normalization.
When using Normalization Engine to create product entries, verify that the CIs in the BMC CMDB dataset contain the appropriate values for the
ManufacturerName,
Model,
Category,
Type, and
Item attributes. Make sure that all CIs for a specific product have the same values for these attributes; otherwise, Normalization Engine is likely to create duplicate entries for a product in the Product Catalog. If you have duplicates in the Product Catalog, you must remove them manually.
If you implement software license management, evaluate the VersionNumber and MarketVersion values of software CIs. Before normalizing the CIs and creating new Product Catalog entries, create Version Rollup rules to map related VersionNumber values to a single MarketVersion value. For more information, see Managing software licenses in your organization using normalization.
Management packs allow you to deploy a range of services to your Ambari-managed cluster. You can use a management pack to deploy a specific component or service, or to deploy an entire platform, like HDF.
In general, when working with management packs, you perform the following tasks in this order:
Install the management pack.
Update the repository URL in Ambari.
Start the Ambari Server.
Launch the Ambari Installation Wizard. | https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-installation/content/ch_working-with-mpacks.html | 2021-07-24T02:38:57 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.cloudera.com |
BaseRichField.Render(HtmlTextWriter) Method.
protected: override void Render(System::Web::UI::HtmlTextWriter ^ output);
protected override void Render (System.Web.UI.HtmlTextWriter output);
override this.Render : System.Web.UI.HtmlTextWriter -> unit
Protected Overrides Sub Render (output As HtmlTextWriter)
Parameters
- output: HtmlTextWriter
Reserved for internal use. | https://docs.microsoft.com/en-us/dotnet/api/microsoft.sharepoint.publishing.webcontrols.baserichfield.render?view=sharepoint-server | 2021-07-24T02:44:54 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.microsoft.com |
New shares can enter error state during the creation process. To find out the reason, look through /etc/var/log/manila-share.log.
If a share type contains invalid extra specs, the scheduler will not be
able to locate a valid host for the shares.
To diagnose this issue, make sure that the scheduler service is running in
debug mode. Try to create a new share and look for the message Failed to
schedule create_share: No valid host was found. in
/etc/var/log/manila-scheduler.log.
To solve this issue, look carefully through the list of extra specs in
the share type and the list of capabilities reported by the share services.
Make sure that the extra specs are specified correctly.
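The matching between extra specs and reported capabilities can be illustrated with a simplified sketch. This is not the actual manila scheduler code — the real scheduler also supports scoped keys and comparison operators — but the idea is the same: a host is valid only if every extra spec in the share type is satisfied by the capabilities its share service reports.

```python
def host_is_valid(extra_specs, capabilities):
    """Simplified filter: every extra spec must be met by the host's capabilities."""
    return all(capabilities.get(key) == wanted for key, wanted in extra_specs.items())

# Extra specs from a share type (these two names follow common manila capabilities).
share_type_specs = {"driver_handles_share_servers": "True", "snapshot_support": "True"}

# Capabilities reported by each share service host.
hosts = {
    "backend1": {"driver_handles_share_servers": "True", "snapshot_support": "True"},
    "backend2": {"driver_handles_share_servers": "False", "snapshot_support": "True"},
}

valid = [name for name, caps in hosts.items() if host_is_valid(share_type_specs, caps)]
print(valid)  # → ['backend1']; a typo in a spec would leave this list empty,
              # producing "Failed to schedule create_share: No valid host was found."
```

A misspelled key or a value with the wrong case filters out every host, which is exactly the "No valid host was found" symptom described above.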
By default, a new share does not have any active access rules.
To provide access to a new share, create an appropriate access rule
with the right value; the value defines the access.
After upgrading the Shared File Systems service from version v1 to version
v2.x, you must update the service endpoint in the OpenStack Identity service.
Otherwise, the service may become unavailable.
To get the service type related to the Shared File Systems service, run:
# openstack endpoint list
# openstack endpoint show <share-service-type>
You will get the endpoints expected from running the Shared File Systems
service.
Make sure that these endpoints are updated. Otherwise, delete the outdated
endpoints and create new ones.
The Shared File System service manages internal resources effectively.
Administrators may need to manually adjust internal resources to
handle failures.
Some drivers in the Shared File Systems service can create service entities,
like servers and networks. If it is necessary, you can log in to
project service and take manual control over it.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
The Bad Man
Country:
- Taiwan
Introduction
The Bad Man is a documentary of the life story of a violent and bloodthirsty young man from Kachin, Myanmar, who casually discusses killing as if it were not his own story. His past of involuntary military life has cost him a great deal and completely changed his post-military life. With all the wounds and experiences, he is now considering which path to take for the future.
Director Statement
I have chosen to document the life of a Burmese Kachin soldier, as I endeavour to manipulate things happening at the moment through filming. Myanmar has a history of the longest civil war in the world, which has lasted for more than seven decades.
The documentary, The Bad Man, outlines a story that is similar to my childhood experience. The protagonist in the film was captured by the Kachin Independence Army to serve as a soldier, which completely changed his life forever. I can remember one night as a child when the Kachin Independence Army broke into my house and conscripted me to serve in the army. I was only eight years old at that time. I was frightened and cried continuously. My parents tried by every means possible to prevent my being taken away by the Army. It was not until the soldiers were satisfied by the food and wine prepared by my parents that they conceded to letting me stay with my family.
Based on these experiences, when I encountered the protagonist, I couldn’t help but ask myself, "If I had failed to escape that night when the Army came for me, would my life have turned out the same as this man's? Would my mind have become as numb as his in relation to deeds that seem cruel by the standards of normal people?"
Every individual holds their own answer to the question “Is human nature inherently good or inherently evil?” However, I believe that everything we encounter in life will add small changes to our original nature.
Awards
2021 Taipei Film Awards - Best Documentary Nomination
Team
- Director
Jupyter Notebook Tutorials¶
Below you will find Jupyter Notebook tutorials organized by usage/subject. Each notebook is grouped by one library/sub-library of IRAF tasks, which corresponds to the title of the notebook. Putting together these tutorial notebooks is an ongoing task, and contributions are welcome.
If you are new to Python or Astropy, we recommend starting with the introduction page to see some Python and Astropy information that will be used extensively in these tutorials.
For questions or comments please see our github or you can visit the STScI help page.
Contents¶
Image Manipulation:
Fits Tools:
Cosmic Ray Rejections:
Other / General Tools:
Index¶
To search IRAF tasks by task name, see the index linked below. | https://stak-notebooks.readthedocs.io/en/latest/ | 2021-07-24T00:14:32 | CC-MAIN-2021-31 | 1627046150067.87 | [] | stak-notebooks.readthedocs.io |
The welcome module provides a checklist of a few tasks to complete in order to configure SproutCMS for initial use. For example, making sure it can connect to the database, and setting up the initial operator account.
Note: the welcome module is enabled by default when first installing Sprout, and is subsequently disabled. If you ever need to re-enable it, just uncomment or re-add the following lines in the file config/config.php:
/**
* Remove these three lines once SproutCMS has been set up
**/
Sprout\Helpers\Register::modules([
'Welcome',
]);
The welcome module should now be active again. To access it, just browse to which should automatically redirect you to
The interface should like something like this:
The steps simply need to be taken in order until they're all the happy bright green colour. Once they have been completed, clicking the 'reload' button at the bottom will send you to the home page of your new SproutCMS site.
How to Access Campaign Analytics?
By
updated 3 months ago
While the main Analytics page showcases the metrics of all Display Campaigns present in that particular Airtory Studio Account, you can access individual Campaign Analytics to assess their respective performance.
- Once a campaign has been created, you can click on the campaign name to see the Placements present there.
- You can then toggle to the Analytics tab to view the statistics of all Placements under that campaign.
- The default date range for which the Analytics are shown
- Impressions: The number of times the Campaign ads have loaded on the page for the date range selected
- Invalid Impressions: Impressions that do not meet Airtory’s quality standard
- the campaign.
- Interaction, Click, and Engagement Stats: Graphical representation of the interaction/engagement and Click metrics of a particular creative. To view the interaction stats of a different placement, you can select the same from the drop-down menu.
- Along with selecting a particular Placement to view the graph, you can view this for each creative that is part of the campaign by selecting it from the drop-down menu.
Hope this article was helpful. If you have any queries, please write to [email protected].
Details on individual Creative/Placement Analytics are covered in this article. | https://docs.airtory.com/article/13-campaign-analytics-airtory-studio | 2021-07-24T01:22:26 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.airtory.com |
Returns list of acceleration measurements which occurred during the last frame. (Read Only) (Allocates temporary variables).
// Calculates weighted sum of acceleration measurements which occurred during the last frame
// Might be handy if you want to get more precise measurements

function Update () {
    var acceleration : Vector3 = Vector3.zero;
    for (var accEvent : AccelerationEvent in Input.accelerationEvents) {
        acceleration += accEvent.acceleration * accEvent.deltaTime;
    }
    print (acceleration);
}
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void Update() {
        Vector3 acceleration = Vector3.zero;
        foreach (AccelerationEvent accEvent in Input.accelerationEvents) {
            acceleration += accEvent.acceleration * accEvent.deltaTime;
        }
        print(acceleration);
    }
}
EventNotification¶
The EventNotification object corresponds to a unique EventSubscription within the Carvoyant system. The EventNotification will be delivered to the postURL provided by the EventSubscription via an HTTP POST message whose JSON body contains the details specific to the notification type. Examples for the supported event types can be found in this document's children.
The Event model that we have implemented is based off of the Evented API Spec. This is a generic specification that helps define the transport of API events between two systems.
See the EventType page for details on the different events that can be notified.
Note
You will only be able to get notifications that have been created for your client Id. Specifically, we will look at the access token specified in the request, determine the client Id that was authorized with that access token, and only return notifications for EventSubscription s for that client Id. You do not have access to notifications for subscriptions that have been created by other client Ids.
Common Properties
Supported Verbs
- GET
GET¶
Returns one or more event notifications. By default, the first 50 results are returned.
Query Paths
- /account/{account-id}/eventNotification/{notification-id}
- /account/{account-id}/eventNotification/{event-type}/{notification-id}
- /account/{account-id}/eventSubscription/{subscription-id}/eventNotification/{notification-id}
- /account/{account-id}/eventSubscription/{subscription-id}/eventNotification/{event-type}/{notification-id}
- /vehicle/{vehicle-id}/eventNotification/{notification-id}
- /vehicle/{vehicle-id}/eventNotification/{event-type}/{notification-id}
- /vehicle/{vehicle-id}/eventSubscription/{subscription-id}/eventNotification/{notification-id}
- /vehicle/{vehicle-id}/eventSubscription/{subscription-id}/eventNotification/{event-type}/{notification-id}
Query Parameters
Call Options
Sample JSON Response
Note
This response only includes the properties that are common to all EventType . It is not a complete response. Refer to the EventType page for the detailed list of what properties are returned for the notification.
{
  "notifications": [
    {
      "id": 315931,
      "subscriptionId": 1647,
      "_domain": "carvoyant.com",
      "_type": "VEHICLEDISCONNECTED",
      "_name": "VEHICLEDISCONNECTED",
      "_timestamp": "20140912T010246+0000",
      "minimumTime": 0,
      "httpStatusCode": 200,
      "notificationPeriod": "INITIALSTATE",
      "dataSetId": 4795420,
      "creatorClientId": "hasa2czfebhsj6XXXXXXXXXX",
      "vehicleId": 123
    },
    {
      "id": 315932,
      "subscriptionId": 1646,
      "_domain": "carvoyant.com",
      "_type": "VEHICLECONNECTED",
      "_name": "VEHICLECONNECTED",
      "_timestamp": "20140912T010303+0000",
      "minimumTime": 0,
      "httpStatusCode": 200,
      "notificationPeriod": "INITIALSTATE",
      "dataSetId": 4795435,
      "creatorClientId": "hasa2czfebhsj6XXXXXXXXXX",
      "vehicleId": 123
    }
  ],
  "totalRecords": 2
}
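A notification consumer might unpack such a response as in the following standard-library sketch. The field names are taken from the sample above; a real client would fetch the body over HTTPS using its access token, which is omitted here.

```python
import json

# Abbreviated copy of the sample response body shown above.
body = """
{"notifications": [
   {"id": 315931, "subscriptionId": 1647, "_type": "VEHICLEDISCONNECTED",
    "_timestamp": "20140912T010246+0000", "vehicleId": 123},
   {"id": 315932, "subscriptionId": 1646, "_type": "VEHICLECONNECTED",
    "_timestamp": "20140912T010303+0000", "vehicleId": 123}],
 "totalRecords": 2}
"""

payload = json.loads(body)

# Route notifications to handlers by event type.
by_type = {}
for note in payload["notifications"]:
    by_type.setdefault(note["_type"], []).append(note["id"])

print(by_type)
# → {'VEHICLEDISCONNECTED': [315931], 'VEHICLECONNECTED': [315932]}
```

Grouping by the `_type` field is a natural first step because, as described above, each notification's detailed properties depend on its EventType.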
This service-level alert is triggered if the configured percentage of RegionServer processes cannot be determined to be up and listening on the network for the configured critical threshold. The default setting is 10% to produce a WARN alert and 30% to produce a CRITICAL alert. It uses the check_aggregate plugin to aggregate the results of RegionServer process down checks.
Look at the configuration files (/etc/hbase/conf)
If the failure was associated with a particular workload, try to understand the workload better
Restart the RegionServers | https://docs.cloudera.com/HDPDocuments/Ambari-1.5.1.0/bk_Monitoring_Hadoop_Book/content/ch03s05s06s01.html | 2021-07-24T01:17:21 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.cloudera.com |
Logs
Network Traffic Analyzer Logs
Network Traffic Analyzer includes the following logs:
See Also
Logs
About Logs
Logging Quick Start
Survey of Frequently Used Logs
Consolidated Logs
Action Log
Actions Applied Log
Activity Log
Actions Activity Log
Blackout Summary Log
Scan History
General Error Log
Hyper-V Event Log
Logger Health Messages
Passive Monitor Error Log
Performance Monitor Error Log
Policy Audit
Recurring Action Log
Scheduled Report Log
SNMP Trap Log
Syslog
Task Log
VMware Event Log
Web User Activity Log
Windows Event Log
Wireless
Alert Center Log View
Applications State Change Log
APM-Resolved Items Log
Quick Help Links | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/42456.htm | 2021-07-24T01:52:16 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.ipswitch.com |
You're reading the documentation for an older, but still supported, version of ROS 2. For information on the latest version, please have a look at Galactic.
Understanding ROS 2 actions¶
Goal: Introspect actions in ROS 2.
Tutorial level: Beginner
Time: 15 minutes
Contents
Background¶
Actions are one of the communication types in ROS 2 and are intended for long running tasks. They consist of three parts: a goal, feedback, and a result.
Actions are built on topics and services. Their functionality is similar to services, except actions are preemptable (you can cancel them while executing). They also provide steady feedback, as opposed to services which return a single response.
Actions use a client-server model, similar to the publisher-subscriber model (described in the topics tutorial). An “action client” node sends a goal to an “action server” node that acknowledges the goal and returns a stream of feedback and a result.
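The goal/feedback/result lifecycle, including preemption, can be simulated without ROS in a few lines of plain Python. This is a conceptual sketch only — real ROS 2 actions are asynchronous and use the rclpy/rclcpp client and server classes — but it shows the three parts of an action and why cancellation matters:

```python
import threading

class RotateActionServer:
    """Toy action server: accepts one goal, streams feedback, supports cancel."""

    def __init__(self):
        self._cancel = threading.Event()
        self.feedback = []   # stream of "remaining rotation" values
        self.result = None   # (status, rotation actually performed)

    def send_goal(self, theta, step=0.5):
        """Rotate toward theta in fixed steps, publishing feedback as we go."""
        remaining = theta
        while abs(remaining) > 1e-9:
            if self._cancel.is_set():            # client preempted the goal
                self.result = ("CANCELED", theta - remaining)
                return
            move = max(-step, min(step, remaining))
            remaining -= move
            self.feedback.append(remaining)      # feedback: rotation still to go
        self.result = ("SUCCEEDED", theta)

    def cancel_goal(self):
        self._cancel.set()

server = RotateActionServer()
server.send_goal(1.5)          # nobody cancels: the goal runs to completion
print(server.result)           # → ('SUCCEEDED', 1.5)
print(server.feedback)         # → [1.0, 0.5, 0.0]

canceled = RotateActionServer()
canceled.cancel_goal()         # preempt before execution begins
canceled.send_goal(1.5)
print(canceled.result)         # → ('CANCELED', 0.0)
```

Unlike a service call, the client receives intermediate feedback values and can abandon the goal partway through — the two properties that distinguish actions from services.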
Prerequisites¶
This tutorial builds off concepts, like nodes and topics, covered in previous tutorials.
This tutorial uses the turtlesim package.

2 Use actions¶
When you launch the
/teleop_turtle node, you will see the following message in your terminal:
Use arrow keys to move the turtle. Use G|B|V|C|D|E|R|T keys to rotate to absolute orientations. 'F' to cancel a rotation.
Let’s focus on the second line, which corresponds to an action. (The first instruction corresponds to the “cmd_vel” topic, discussed previously in the topics tutorial.)
Notice that the letter keys
G|B|V|C|D|E|R|T form a “box” around the
F key on your keyboard.
Each key’s position around
F corresponds to that orientation in turtlesim.
For example, the
E will rotate the turtle’s orientation to the upper left corner.
Pay attention to the terminal where the
/turtlesim node is running.
Each time you press one of these keys, you are sending a goal to an action server that is part of the
/turtlesim node.
The goal is to rotate the turtle to face a particular direction.
A message relaying the result of the goal should display once the turtle completes its rotation:
[INFO] [turtlesim]: Rotation goal completed successfully
The
F key will cancel a goal mid-execution, demonstrating the preemptable feature of actions.
Try pressing the
C key, and then pressing the
F key before the turtle can complete its rotation.
In the terminal where the
/turtlesim node is running, you will see the message:
[INFO] [turtlesim]: Rotation goal canceled
Not only can the client-side (your input in the teleop) preempt goals, but the server-side (the
/turtlesim node) can as well.
When the server-side preempts an action, it “aborts” the goal.
Try hitting the
D key, then the
G key before the first rotation can complete.
In the terminal where the
/turtlesim node is running, you will see the message:
[WARN] [turtlesim]: Rotation goal received before a previous goal finished. Aborting previous goal
The server-side aborted the first goal because it was interrupted.
3 ros2 node info¶
To see the
/turtlesim node’s actions, open a new terminal and run the command:
ros2 node info /turtlesim
Which will return a list of
/turtlesim’s subscribers, publishers, services, action servers and action clients:
/turtlesim Subscribers: /parameter_events: rcl_interfaces/msg/ParameterEvent /turtle1/cmd_vel: geometry_msgs/msg/Twist Publishers: /parameter_events: rcl_interfaces/msg/ParameterEvent /rosout: rcl_interfaces/msg/Log /turtle1/color_sensor: turtlesim/msg/Color /turtle1/pose: turtlesim/msg/Pose Services: /turtlesim/describe_parameters: rcl_interfaces/srv/DescribeParameters /turtlesim/get_parameter_types: rcl_interfaces/srv/GetParameterTypes /turtlesim/get_parameters: rcl_interfaces/srv/GetParameters /turtlesim/list_parameters: rcl_interfaces/srv/ListParameters /turtlesim/set_parameters: rcl_interfaces/srv/SetParameters /turtlesim/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically Action Servers: /turtle1/rotate_absolute: turtlesim/action/RotateAbsolute Action Clients:
Notice that the
/turtle1/rotate_absolute action for
/turtlesim is under
Action Servers.
This means
/turtlesim responds to and provides feedback for the
/turtle1/rotate_absolute action.
The
/teleop_turtle node has the name
/turtle1/rotate_absolute under
Action Clients meaning that it sends goals for that action name.
ros2 node info /teleop_turtle
Which will return:
/teleop_turtle Subscribers: /parameter_events: rcl_interfaces/msg/ParameterEvent Publishers: /parameter_events: rcl_interfaces/msg/ParameterEvent /rosout: rcl_interfaces/msg/Log /turtle1/cmd_vel: geometry_msgs/msg/Twist Services: /teleop_turtle/describe_parameters: rcl_interfaces/srv/DescribeParameters /teleop_turtle/get_parameter_types: rcl_interfaces/srv/GetParameterTypes /teleop_turtle/get_parameters: rcl_interfaces/srv/GetParameters /teleop_turtle/list_parameters: rcl_interfaces/srv/ListParameters /teleop_turtle/set_parameters: rcl_interfaces/srv/SetParameters /teleop_turtle/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically Action Servers: Action Clients: /turtle1/rotate_absolute: turtlesim/action/RotateAbsolute
4 ros2 action list¶
To identify all the actions in the ROS graph, run the command:
ros2 action list
Which will return:
/turtle1/rotate_absolute
This is the only action in the ROS graph right now.
It controls the turtle’s rotation, as you saw earlier.
You also already know that there is one action client (part of
/teleop_turtle) and one action server (part of
/turtlesim) for this action from using the
ros2 node info <node_name> command.
4.1 ros2 action list -t¶
Actions have types, similar to topics and services.
To find
/turtle1/rotate_absolute’s type, run the command:
ros2 action list -t
Which will return:
/turtle1/rotate_absolute [turtlesim/action/RotateAbsolute]
In brackets to the right of each action name (in this case only
/turtle1/rotate_absolute) is the action type,
turtlesim/action/RotateAbsolute.
You will need this when you want to execute an action from the command line or from code.
5 ros2 action info¶
You can further introspect the
/turtle1/rotate_absolute action with the command:
ros2 action info /turtle1/rotate_absolute
Which will return
Action: /turtle1/rotate_absolute Action clients: 1 /teleop_turtle Action servers: 1 /turtlesim
This tells us what we learned earlier from running
ros2 node info on each node:
The
/teleop_turtle node has an action client and the
/turtlesim node has an action server for the
/turtle1/rotate_absolute action.
6 ros2 interface show¶
One more piece of information you will need before sending or executing an action goal yourself is the structure of the action type.
Recall that you identified
/turtle1/rotate_absolute’s type when running the command
ros2 action list -t.
Enter the following command with the action type in your terminal:
ros2 interface show turtlesim/action/RotateAbsolute
Which will return:
# The desired heading in radians float32 theta --- # The angular displacement in radians to the starting position float32 delta --- # The remaining rotation in radians float32 remaining
The first section of this message, above the
---, is the structure (data type and name) of the goal request.
The next section is the structure of the result.
The last section is the structure of the feedback.
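These three sections map directly onto the goal, result, and feedback messages. A plain-Python mirror (a sketch, not the generated ROS interface code) makes the relationship concrete, and matches the result seen when sending a goal from the command line: rotating to theta 1.57 from a start of 0 reports a delta of roughly -1.57.

```python
from dataclasses import dataclass

@dataclass
class Goal:           # section 1: the request
    theta: float      # desired heading in radians

@dataclass
class Result:         # section 2: returned once, at the end
    delta: float      # angular displacement to the starting position

@dataclass
class Feedback:       # section 3: streamed while the goal runs
    remaining: float  # rotation still to perform, in radians

def plan_rotation(start, goal):
    """What the server would report for a rotation from `start` to `goal.theta`."""
    first_feedback = Feedback(remaining=goal.theta - start)
    final_result = Result(delta=start - goal.theta)  # back toward the start
    return first_feedback, final_result

fb, res = plan_rotation(start=0.0, goal=Goal(theta=1.57))
print(fb.remaining, res.delta)  # → 1.57 -1.57
```

Keeping the three message parts separate in your head — what you request, what streams back, and what you get at the end — is the key to reading any action definition.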
7 ros2 action send_goal¶
Now let’s send an action goal from the command line with the following syntax:
ros2 action send_goal <action_name> <action_type> <values>
<values> need to be in YAML format.
Keep an eye on the turtlesim window, and enter the following command into your terminal:
ros2 action send_goal /turtle1/rotate_absolute turtlesim/action/RotateAbsolute "{theta: 1.57}"
You should see the turtle rotating, as well as the following message in your terminal:
Waiting for an action server to become available... Sending goal: theta: 1.57 Goal accepted with ID: f8db8f44410849eaa93d3feb747dd444 Result: delta: -1.568000316619873 Goal finished with status: SUCCEEDED
All goals have a unique ID, shown in the return message.
You can also see the result, a field with the name
delta, which is the displacement to the starting position.
To see the feedback of this goal, add
--feedback to the last command you ran.
First, make sure you change the value of
theta.
After running the previous command, the turtle will already be at the orientation of
1.57 radians, so it won’t move unless you pass a new
theta.
ros2 action send_goal /turtle1/rotate_absolute turtlesim/action/RotateAbsolute "{theta: -1.57}" --feedback
Your terminal will return the message:
Sending goal: theta: -1.57 Goal accepted with ID: e6092c831f994afda92f0086f220da27 Feedback: remaining: -3.1268222332000732 Feedback: remaining: -3.1108222007751465 … Result: delta: 3.1200008392333984 Goal finished with status: SUCCEEDED
You will continue to receive feedback, the remaining radians, until the goal is complete.
Summary¶
Actions are like services that allow you to execute long running tasks, provide regular feedback, and are cancelable.
A robot system would likely use actions for navigation. An action goal could tell a robot to travel to a position. While the robot navigates to the position, it can send updates along the way (i.e. feedback), and then a final result message once it’s reached its destination.
Turtlesim has an action server that action clients can send goals to for rotating turtles.
In this tutorial, you introspected that action,
/turtle1/rotate_absolute, to get a better idea of what actions are and how they work.
Next steps¶
Now you’ve covered all of the core ROS 2 concepts. The last few tutorials in the “Users” set will introduce you to some tools and techniques that will make using ROS 2 easier, starting with Using rqt_console. | https://docs.ros.org/en/ros2_documentation/foxy/Tutorials/Understanding-ROS2-Actions.html | 2021-07-24T01:26:12 | CC-MAIN-2021-31 | 1627046150067.87 | [array(['../_images/Action-SingleActionClient.gif',
'../_images/Action-SingleActionClient.gif'], dtype=object)] | docs.ros.org |
Resizes the column widths of the columns that have auto-resize enabled to make all the columns fit to the width of the MultiColumnHeader render rect.
If no columns have MultiColumnHeaderState.Column.autoResize enabled then this method does nothing. This method is also called when selecting the 'Resize To Fit' context menu of the MultiColumnHeader. | https://docs.unity3d.com/es/2018.2/ScriptReference/IMGUI.Controls.MultiColumnHeader.ResizeToFit.html | 2021-07-24T02:52:21 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.unity3d.com |
Deploying a Symfony2 Application to Elastic Beanstalk
This section walks you through deploying a sample application to Elastic Beanstalk using the Elastic Beanstalk Command Line Interface (EB CLI) and Git, and then updating the application to use the Symfony2 framework.
Sections
Set Up Your Git Repository
EB CLI is a command line interface that you can use with Git to deploy applications quickly and more easily. EB is available as part of the Elastic Beanstalk command line tools package. For instructions to install EB CLI, see Install the Elastic Beanstalk Command Line Interface (EB CLI).
Initialize your Git repository. After you run the following command, when you run eb init, the EB CLI will recognize that your application is set up with Git.
git init .
Set Up Your Symfony2 Development Environment
Set up Symfony2 and create the project structure. The following walks you through setting up Symfony2 on a Linux operating system. For more information, go to.
To set up your PHP development environment on your local computer
Download and install composer from getcomposer.org. For more information, go to.
curl -s | php
Install Symfony2 Standard Edition with Composer. Check for the latest available version. Using the following command, composer will install the vendor libraries for you.
php composer.phar create-project symfony/framework-standard-edition symfony2_example/ <version number>
cd symfony2_example
Note
You may need to set the date.timezone in the php.ini to successfully complete installation. Also provide parameters for Composer, as needed.
Initialize the Git repository.
git init
Update the .gitignore file to ignore vendor, cache, logs, and composer.phar. These files do not need to get pushed to the remote server.
cat > .gitignore <<EOT app/bootstrap.php.cache app/cache/* app/logs/* vendor composer.phar EOT
Generate the hello bundle.
php app/console generate:bundle --namespace=Acme/HelloBundle --format=yml
When prompted, accept all defaults. For more information, go to Creating Pages in Symfony2.
Next, configure Composer. Composer dependencies require that you set the HOME or COMPOSER_HOME environment variable. Also configure Composer to self-update so that you always use the latest version.
To configure Composer
Create a configuration file with the extension .config (e.g., composer.config) and place it in an .ebextensions directory at the top level of your source bundle. You can have multiple configuration files in your .ebextensions directory. For information about the file format of configuration files, see Advanced Environment Customization with Configuration Files (.ebextensions).
Note
Configuration files should conform to YAML or JSON formatting standards. For more information, go to or, respectively.
In the .config file, type the following.
commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update 1.0.0-alpha11

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
Replace 1.0.0-alpha11 with your preferred version of composer. See getcomposer.org/download for a list of available versions.
Configure Elastic Beanstalk
The following instructions use the Elastic Beanstalk command line interface (EB CLI) to configure an Elastic Beanstalk application repository in your local project directory.
To configure Elastic Beanstalk
From the directory where you created your local repository, type the following command:
eb init
When you are prompted for the Elastic Beanstalk region, type the number of the region. For information about this product's regions, go to Regions and Endpoints in the Amazon Web Services General Reference. For this example, we'll use US West (Oregon).
When you are prompted for the Elastic Beanstalk application name, type the name of the application. Elastic Beanstalk generates an application name based on the current directory name if an application name has not been previously configured. In this example, we use symfony2app.
Enter an AWS Elastic Beanstalk application name (auto-generated value is "windows"): symfony2app
Note
If you have a space in your application name, make sure you do not use quotation marks.
Type
yif Elastic Beanstalk correctly detected the correct platform you are using. Type
nif not, and then specify the correct platform.
When prompted, type
yif you want to set up Secure Shell (SSH) to connect to your instances. Type
nif you do not want to set up SSH. In this example, we will type
n.
Do you want to set up SSH for your instances? (y/n): n
Create your running environment.
eb create
When you are prompted for the Elastic Beanstalk environment name, type the name of the environment. Elastic Beanstalk automatically creates an environment name based on your application name. If you want to accept the default, press Enter.
Enter Environment Name (default is HelloWorld-env):
Note
If you have a space in your application name, make sure you do not have a space in your environment name.
When you are prompted to provide a CNAME prefix, type the CNAME prefix you want to use. Elastic Beanstalk automatically creates a CNAME prefix based on the environment name. If you want to accept the default, press Enter.
Enter DNS CNAME prefix (default is HelloWorld):
After configuring Elastic Beanstalk, you are ready to deploy a sample application.
If you want to update your Elastic Beanstalk configuration, you can use the init command again. When prompted, you can update your configuration options. If you want to keep any previous settings, press the Enter key.
To deploy a sample application
From the directory where you created your local repository, type the following command:
eb deploy
This process may take several minutes to complete. Elastic Beanstalk will provide status updates during the process. If at any time you want to stop polling for status updates, press Ctrl+C. Once the environment status is Green, Elastic Beanstalk will output a URL for the application. You can copy and paste the URL into your web browser to view the application.
View the Application
Update the Application
After you have deployed a sample application, you can update it with your own application. In this step, we update the sample application with a simple "Hello World" Symfony2 application.
To update the sample application
Add your files to your local Git repository, and then commit your change.
git add -A && git commit -m "Initial commit"
Note
For information about Git commands, go to Git - Fast Version Control System.
Create an application version matching your local repository and deploy to the Elastic Beanstalk environment if specified.
eb deploy
You can also configure Git to push from a specific branch to a specific environment. For more information, see "Using Git with EB CLI" in the topic Managing Elastic Beanstalk Environments with the EB CLI.
After your environment is Green and Ready, append /web/hello/AWS to the URL of your application. The application should write out "Hello AWS!"
You can access the logs for your EC2 instances running your application. For instructions on accessing your logs, see Instance Logs.
Clean Up
If you no longer want to run your application, you can clean up by terminating your environment and deleting your application.
Use the
terminate command to terminate your environment and the
delete command to delete your application. | http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP_symfony2.html | 2016-12-03T00:18:31 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.aws.amazon.com |
DataPager Overview
RadDataPager can be used to display paging navigation controls for other data-bound controls that implement the IPageableItemContainer or IRadPageableItemContainer interface (like the RadListView and MS ListView). The RadDataPager control lets users view large sets of data in small chunks for faster loading and easier navigation. It also provides a set of events, helper methods and properties for custom intervention.
You can easily add the RadDataPager control to a Web Form within Visual Studio. The paging interface appears wherever you place the RadDataPager control on the page. You may place it before or after the RadListView control, as well as within its LayoutTemplate element.
The RadDataPager control has the following properties for using it in its default state:
PagedControlID is the ID of the control that implements one of the following interfaces: IPageableItemContainer or IRadPageableItemContainer. This is the control that will be paged by the RadDataPager control. If RadDataPager is placed in the Controls collection of an IPageableItemContainer / IRadPageableItemContainer, setting this property is optional. In case PagedControlID is not set, RadDataPager will attempt to find its container automatically.
PageSize is the number of items and rows to display on each page.
StartRowIndex gets the index of the first record that is displayed on a page of data.
TotalRowCount gets the total number of records that are displayed in the underlying data source.
MaximumRows gets the maximum number of records that are displayed for each page of data.
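The relationships between these properties reduce to a little page arithmetic. A sketch (Python purely for illustration; the function names below are not part of the Telerik API):

```python
import math

def page_count(total_row_count: int, page_size: int) -> int:
    # Total pages needed to show all records.
    return math.ceil(total_row_count / page_size) if page_size > 0 else 1

def current_page_index(start_row_index: int, page_size: int) -> int:
    # Zero-based page containing the first displayed record (StartRowIndex).
    return start_row_index // page_size

def rows_on_page(total_row_count: int, start_row_index: int, page_size: int) -> int:
    # Rows actually shown on the current page; the last page may be short.
    return min(page_size, total_row_count - start_row_index)
```

For instance, 95 records at a page size of 10 give 10 pages, and the final page (starting at row 90) shows only 5 rows.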
RadDataPager field types
The RadDataPager control has a number of fields you can use, including template support for designing your own pager. The following table lists the different RadDataPager fields:
Events and Methods
Telerik RadDataPager control contains the following server-side events and methods:
RadDataPager command names and arguments
- PageCommandName represents the Page command name which fires the RadDataPager.PageIndexChanged event. It can be raised by buttons residing in the RadDataPager body. Their CommandName should be set to Page and CommandArgument must match one of the values from the table below:
- PageSizeChangeCommandName represents the PageSizeChange command name which fires RadDataPager.PageSizeChanged event. It can be raised by buttons residing in the RadDataPager body. Their CommandName should be set to PageSizeChange and CommandArgument must be the actual number representing the new page size that will be set. | http://docs.telerik.com/devtools/aspnet-ajax/controls/datapager/overview | 2016-12-03T00:20:34 | CC-MAIN-2016-50 | 1480698540798.71 | [array(['images/DataPager_Overview.png', 'RadDataPager'], dtype=object)] | docs.telerik.com |
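A hedged sketch of how such command names and arguments might be dispatched. The accepted Page argument values come from the product's table, which is not reproduced here; `First`, `Prev`, `Next`, `Last`, and a 1-based page number are shown as assumptions, and the Python below is illustrative rather than Telerik code:

```python
def resolve_page_argument(pager: dict, argument: str) -> int:
    # Assumed argument values for a Page command (illustrative subset).
    last = pager["page_count"] - 1
    named = {
        "First": 0,
        "Prev": max(pager["page_index"] - 1, 0),
        "Next": min(pager["page_index"] + 1, last),
        "Last": last,
    }
    if argument in named:
        return named[argument]
    return int(argument) - 1  # interpret as a 1-based page number

def handle_command(pager: dict, name: str, argument: str) -> dict:
    # "Page" changes the page index; "PageSizeChange" sets a new page size.
    new = dict(pager)
    if name == "Page":
        new["page_index"] = resolve_page_argument(pager, argument)
    elif name == "PageSizeChange":
        new["page_size"] = int(argument)  # argument is the new page size
    else:
        raise ValueError(f"unknown command: {name}")
    return new
```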
This is the AWS CodeCommit API Reference. This reference provides descriptions of the operations and data types for the AWS CodeCommit API.
GetBranch, which returns information about a specified branch
ListBranches, which lists all branches for a specified repository
UpdateDefaultBranch, which changes the default branch for a repository
Information about committed code in a repository, by calling the following:
This document was last published on November 21, 2016. | http://docs.aws.amazon.com/codecommit/latest/APIReference/Welcome.html | 2016-12-03T00:16:25 | CC-MAIN-2016-50 | 1480698540798.71 | [] | docs.aws.amazon.com |
Set the trackpad sensitivity
You can set how the trackpad responds to your touch. A high sensitivity level requires less pressure than a lower sensitivity level.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/25326/Set_trackball_trackpad_sensitivity_60_1123546_11.jsp | 2014-10-20T13:20:15 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.blackberry.com |
java.lang.Object
org.modeshape.graph.connector.base.Repository<DiskNode,DiskWorkspace>
org.modeshape.connector.disk.DiskRepository
@ThreadSafe public class DiskRepository
The representation of a disk-based repository and its content.
protected static final Logger LOGGER
protected final FileChannel lockFileChannel
protected AtomicInteger readLockCount
public DiskRepository(DiskSource source)
protected void initialize()
Repository
Due to the ordering restrictions on constructor chaining, this method cannot be called until the repository is fully initialized. This method MUST be called at the end of the constructor by any class that implements MapRepository.
initialize in class Repository<DiskNode,DiskWorkspace>
public DiskWorkspace createWorkspace(Transaction<DiskNode,DiskWorkspace> txn, String name, CreateWorkspaceRequest.CreateConflictBehavior existingWorkspaceBehavior, String nameOfWorkspaceToClone) throws InvalidWorkspaceException
Repository.
createWorkspace in class Repository<DiskNode,DiskWorkspace>
Set<String> getWorkspaceNames()
getWorkspaceNames in class Repository<DiskNode,DiskWorkspace>
Repository.getWorkspaceNames()
protected File getRepositoryRoot()
public DiskTransaction startTransaction(ExecutionContext context, boolean readonly)
committed or rolled back.
startTransaction in class Repository<DiskNode,DiskWorkspace>
context - the context in which the transaction is to be performed; may not be null
readonly - true if the transaction will not modify any content, or false if changes are to be made
Repository.startTransaction(org.modeshape.graph.ExecutionContext, boolean) | http://docs.jboss.org/modeshape/2.6.0.Final/api-full/org/modeshape/connector/disk/DiskRepository.html | 2014-10-20T13:01:54 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.jboss.org |
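The begin/commit-or-rollback contract described for startTransaction can be sketched generically — changes recorded through a transaction either all take effect or none do, and a read-only transaction must not modify content. The Python below is an illustrative stand-in, not the ModeShape API:

```python
class SketchTransaction:
    # Minimal model of the transaction contract described above.
    def __init__(self, readonly: bool = False):
        self.readonly = readonly
        self.pending = []    # changes made inside this transaction
        self.committed = []  # changes that have taken effect

    def record(self, change):
        if self.readonly:
            raise RuntimeError("read-only transaction cannot modify content")
        self.pending.append(change)

    def commit(self):
        # All pending changes take effect as a unit.
        self.committed.extend(self.pending)
        self.pending = []

    def rollback(self):
        # Pending changes are discarded; nothing takes effect.
        self.pending = []
```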
the former Quality and Testing. That is absolutely true because the main discussion here is that we talk on.
<image goes here>
The process is started in one of two ways: the bug is added to the tracker, or a user reports the bug in the Quality and Testing forum for the given major/minor release.
If an issue is reported on the forums, it is the Bug Squad team's responsibility to verify that the issue is in fact a bug. | http://docs.joomla.org/index.php?title=Joomla!_Maintenance_Procedures&diff=537&oldid=536 | 2014-10-20T13:08:14 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.joomla.org
other
This template is used inside other templates that need to behave differently (usually look differently) depending on what type of page they are on. It detects and groups all the different namespaces used on Joomla! Documentation into four types:
If this template is used without any parameters it returns the type name that the page belongs to: main, talk, file or other.
This template can also take four parameters and then return one of them depending on which type a page belongs to:
For a more specific alternative see {{thingamabob}}
main talk other is based on: Wikipedia:Main talk other (edit|talk|history|links|watch|logs) | http://docs.joomla.org/index.php?title=Template:Main_talk_file_other&oldid=6356 | 2014-10-20T13:02:00 | CC-MAIN-2014-42 | 1413507442900.2 | [] | docs.joomla.org |
Modifying site settings
You can configure the general, global, and feature-specific settings for your site in the Settings panel (the menu items contained in the Settings section) of the Orchard dashboard. This topic describes these site-level settings.
In the Settings panel in the dashboard, the settings are arranged into categories, including General, Gallery, Comments, Media, Users, and Cache.
General Settings
To access general settings, click Settings in the Settings panel. This opens the following screen:
Note The general settings screen also displays options that are specific to the features that are enabled for your site.
In the general settings category, you can modify the following global and site-wide settings:
- Site Name. The name of your site, which is usually displayed by the applied theme.
- Default Site Culture. The default culture for the site. You can also add culture codes here. For more information, see Creating Global-Ready Applications.
- Page title separator. The character that is used to separate sections of a page title. For example, the default separator character for the en-US locale is a hyphen (-).
- Super user. A user who has administrative capability on the site, regardless of configured roles. This is usually the user who ran the Orchard installation and setup. The default user is the admin account.
- Resource Debug Mode. The mode that determines whether scripts and style sheets are loaded in a "debuggable" form or in their minimal form.
- Default number of items per page. Determines the default number of items that are shown per page.
- Base URL. The base URL for your site.
- Maximum number of items per page. Determines the maximum number of items that are shown per page. Leave 0 for unlimited.
- Default Time Zone. Determines the default time zone used when displaying and editing dates and times.
- Default Site Calendar. Determines the default calendar used when displaying and editing dates and times. The 'Culture calendar' option means the default calendar for the culture of the current request will be used (not necessarily the configured default site culture).
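The default and maximum page-size settings interact in a simple way: the default applies when no size is requested, and a maximum of 0 means "unlimited". A small sketch (illustrative Python; `effective_page_size` is a hypothetical helper, not Orchard code):

```python
def effective_page_size(requested, default, maximum):
    # A maximum of 0 means "unlimited", per the setting's description.
    size = default if requested is None else requested
    if maximum > 0:
        size = min(size, maximum)
    return size
```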
Gallery Settings
To access settings for the gallery, click Gallery in the Settings panel. This opens the following screen:
In the gallery feed settings, you can add or delete a feed using the following settings:
- Add Feed. Lets you specify the URL to a gallery feed.
- Delete. Lets you remove an existing gallery feed.
For more information about how to add feeds to the gallery, see Registering Additional Gallery Feeds.
To access settings for comments, click Comments in the Settings panel. This opens the following screen:
In the comments settings, you can enable or disable the following features:
- Comments must be approved before they appear. Requires user comments to be approved by an administrator or moderator before they become visible on the site.
- Automatically close comments after. The number of days after which comments are automatically closed. Leave at 0 to have them always available.
For more information about how to work with comments, see Moderating Comments.
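The auto-close rule above can be expressed as a tiny predicate (illustrative Python, not Orchard's implementation):

```python
from datetime import date, timedelta

def comments_open(published: date, close_after_days: int, today: date) -> bool:
    # 0 means comments never auto-close, per the setting's description.
    if close_after_days == 0:
        return True
    return today <= published + timedelta(days=close_after_days)
```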
User Settings
To access user settings, click Users in the Settings panel. This opens the following screen:
In the user settings, you can enable or disable the following settings in order to customize user registration:
- Users can create new accounts on the site. Configures the site to let users create a new account.
- Display a link to enable users to reset their password. Provides users with a way to reset their password.
- Users must verify their email address. Requires users to confirm their email address during registration.
- Users must be approved before they can log in. Requires administrative approval of new accounts before users can log in.
Cache
To access cache settings, click Cache in the Settings panel. Here you can configure settings such as Default Cache Duration, Max Age, Vary Query String Parameters, Vary Request Headers, and Ignored URLs.
Change History
- Updates for Orchard 1.8
- 9-8-14: Updated screen shots for dashboard settings. Added cache section.
- Updates for Orchard 1.1
- 3-29-11: Added sections for new screens in the dashboard. Updated existing screen shots. | http://docs.orchardproject.net/Documentation/Modifying-site-settings | 2014-10-20T13:00:21 | CC-MAIN-2014-42 | 1413507442900.2 | [array(['/Upload/screenshots/dashboard_sitewide_settings.png', None],
dtype=object)
array(['/Upload/screenshots_675/manage_general_settings_675.png', None],
dtype=object)
array(['/Upload/screenshots_675/manage_gallery_feed_settings_675.png',
None], dtype=object)
array(['/Upload/screenshots_675/manage_site_comments_settings_675.png',
None], dtype=object)
array(['/Upload/screenshots_675/manage_site_user_settings_675.png', None],
dtype=object)
array(['/Upload/screenshots_675/cachesettings_675.png', None],
dtype=object) ] | docs.orchardproject.net |
From the Patch Repository, you can include available as well as recently downloaded patches and extensions in a baseline of your choice.
Prerequisites
Connect the vSphere Client to a vCenter Server system with which Update Manager is registered, and on the Home page, click Update Manager under Solutions and Applications.
Procedure
- Click the Patch Repository tab to view all the available patches and extensions.
- Click the Add to baseline link in the Baselines column for a selected patch.
- In the Edit containing baselines window, select the baselines in which you want to include this patch or extension and click OK.
If your vCenter Server system is connected to other vCenter Server systems by a common Single Sign-On domain, you can add or exclude the patches from baselines specific to the selected Update Manager instance. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.update_manager.doc/GUID-EC57653D-1D94-459F-B2E6-F4697411BB43.html | 2018-06-18T00:18:59 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.vmware.com |
You can use baseline groups to apply upgrade and patch baselines together for upgrading and updating hosts in a single remediation operation.
You can upgrade all ESX/ESXi hosts in your deployment system by using a single upgrade baseline. You can apply patches to the hosts at the same time by using a baseline group containing one upgrade baseline and multiple host patch baselines.
This workflow describes how to upgrade and patch the hosts in your vSphere inventory at the same time. You can upgrade hosts and apply patches to hosts at the folder, cluster, or datacenter level. You can also upgrade and patch a single host. This workflow describes the process to patch and upgrade.
Import an ESXi image (which is distributed as an ISO file) and create a host upgrade baseline.
You must import an ESXi image, so that you can upgrade the hosts in your vSphere inventory. You can import ESXi images from the ESXi Images tab of the Update Manager Administration view.
For a complete procedure about importing ESXi images, see Import Host Upgrade Images and Create Host Upgrade Baselines.
Create fixed or dynamic host patch baselines.
Dynamic patch baselines contain a set of patches, which updates automatically according to patch availability and the criteria that you specify. Fixed baselines contain only patches that you select, regardless of new patch downloads.
You can create patch baselines from the Baselines and Groups tab of the Update Manager Administration view. For more information about creating fixed patch baselines, see Create a Fixed Patch Baseline. The detailed instructions about creating a dynamic patch baseline are described in Create a Dynamic Patch Baseline.
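The fixed-versus-dynamic distinction amounts to data filtering: a dynamic baseline is re-evaluated against the current set of available patches, while a fixed baseline keeps only the explicitly selected patches. A sketch (illustrative Python, not Update Manager code):

```python
def dynamic_baseline(available_patches, criteria):
    # Re-evaluated whenever patch availability changes: a filter, not a list.
    return [p for p in available_patches if criteria(p)]

def fixed_baseline(selected_ids, available_patches):
    # Contains only explicitly selected patches, regardless of new downloads.
    return [p for p in available_patches if p["id"] in selected_ids]
```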
Create a baseline group containing the patch baselines as well as the host upgrade baseline that you created.
You can create baseline groups from the Baselines and Groups tab of the Update Manager Administration view. For more information about creating baseline groups for hosts, see Create a Host Baseline Group.
Attach the baseline group to a container object.
To scan and remediate the hosts in your environment, you must first attach the host baseline group to a container object containing the hosts that you want to remediate. You can attach baseline groups to objects from the Update Manager Compliance view. For more information about attaching baseline groups to vSphere objects, see Attach Baselines and Baseline Groups to Objects.
Scan the container object.
After you attach the baseline group, scan the container object to view the compliance state of the hosts it contains.
Remediate the container object.
Remediate the hosts that are in Non-Compliant state to make them compliant with the attached baseline group. For more information about remediating hosts against baseline groups containing patch, extension, and upgrade baselines, see Remediate Hosts Against Baseline Groups.
During the remediation, the upgrade is performed first. Hosts that need to be both upgraded and updated with patches are first upgraded and then patched. Hosts that are upgraded might reboot and disconnect for a period of time during remediation.
Hosts that do not need to be upgraded are only patched.
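The ordering rule described above — upgrade first, then patch, and skip the upgrade for hosts that don't need it — can be captured in a few lines (illustrative Python):

```python
def remediation_steps(host):
    # Hosts needing both are upgraded first, then patched;
    # hosts that don't need an upgrade are only patched.
    steps = []
    if host["needs_upgrade"]:
        steps.append("upgrade")
    if host["needs_patches"]:
        steps.append("patch")
    return steps
```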
The hosts in the container object become compliant with the attached baseline group. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.update_manager.doc/GUID-CDFD68F2-E0EC-451D-BEA5-D33E381A3EC1.html | 2018-06-18T00:01:01 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.vmware.com |
Warning! This page documents an earlier version of InfluxDB, which is no longer actively developed. InfluxDB v1.5 is the most recent stable version of InfluxDB.
8083 and InfluxDB ignores the [admin] section in the configuration file if that section is present. Chronograf replaces the web admin interface with improved tooling for querying data, writing data, and database management. See Chronograf's transition guide for more information. | https://docs.influxdata.com/influxdb/v1.3/tools/web_admin/ | 2018-06-17T23:34:51 | CC-MAIN-2018-26 | 1529267859904.56 | [] | docs.influxdata.com
public abstract class ConcurrencyThrottleSupport extends java.lang.Object implements java.io.Serializable
Designed for use as a base class, with the subclass invoking the beforeAccess() and afterAccess() methods at appropriate points of its workflow. Note that afterAccess should usually be called in a finally block!
The default concurrency limit of this support class is -1 ("unbounded concurrency"). Subclasses may override this default; check the javadoc of the concrete class that you're using.
setConcurrencyLimit(int),
beforeAccess(),
afterAccess(),
ConcurrencyThrottleInterceptor,
Serializable, Serialized Form
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static final int UNBOUNDED_CONCURRENCY
public static final int NO_CONCURRENCY
protected transient org.apache.commons.logging.Log logger
public ConcurrencyThrottleSupport()
public void setConcurrencyLimit(int concurrencyLimit)
In principle, this limit can be changed at runtime, although it is generally designed as a config time setting.
NOTE: Do not switch between -1 and any concrete limit at runtime, as this will lead to inconsistent concurrency counts: A limit of -1 effectively turns off concurrency counting completely.
public int getConcurrencyLimit()
public boolean isThrottleActive()
true if the concurrency limit for this instance is active
getConcurrencyLimit()
protected void beforeAccess()
This implementation applies the concurrency throttle.
afterAccess()
protected void afterAccess()
beforeAccess() | http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/util/ConcurrencyThrottleSupport.html | 2015-11-25T04:31:58 | CC-MAIN-2015-48 | 1448398444228.5 | [] | docs.spring.io |
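The beforeAccess()/afterAccess() contract — block further callers once the concurrency limit is reached, with -1 meaning unbounded (UNBOUNDED_CONCURRENCY) and 0 denying all access (NO_CONCURRENCY) — can be sketched as follows. This is an illustrative Python port, not the Spring implementation:

```python
import threading

class ConcurrencyThrottle:
    UNBOUNDED = -1  # counterpart of UNBOUNDED_CONCURRENCY

    def __init__(self, limit: int = UNBOUNDED):
        self.limit = limit
        self.count = 0
        self.cond = threading.Condition()

    def before_access(self):
        if self.limit == 0:
            # Counterpart of NO_CONCURRENCY: deny all access.
            raise RuntimeError("access denied: concurrency limit is 0")
        if self.limit < 0:
            return  # throttle inactive: no counting at all
        with self.cond:
            while self.count >= self.limit:
                self.cond.wait()
            self.count += 1

    def after_access(self):
        # Call this from a finally block, mirroring the note above.
        if self.limit < 0:
            return
        with self.cond:
            self.count -= 1
            self.cond.notify()
```

Note that with an unbounded limit the counter is never touched, matching the caveat that switching between -1 and a concrete limit at runtime would leave counts inconsistent.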
Difference between revisions of "JApplication::getMessageQueue"
From Joomla! Documentation
Revision as of 13MessageQueue
Description
Get the system message queue.
public function getMessageQueue ()
- Returns The system message queue.
- Defined on line 412 of libraries/joomla/application/application.php
- Since
See also
JApplication::getMessageQueue source code on BitBucket
Class JApplication
Subpackage Application
- Other versions of JApplication::getMessageQueue
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=API17:JApplication::getMessageQueue&diff=89406&oldid=55855 | 2015-11-25T04:16:05 | CC-MAIN-2015-48 | 1448398444228.5 | [] | docs.joomla.org |
Difference between revisions of "How to use the Template Manager"
From Joomla! Documentation
Revision as of 14:23, 29 November 2013
<translate> To edit or copy a template's files with Template Manager: Customise Template, you must first access the Template Manager.</translate>
<translate>
Contents
Access the Template Manager
</translate> {{:J3.x:To access the Template Manager/<translate> en</translate>}}
<translate>
Access the Template Manager Customisation Feature
</translate> {{:J3.x:Access_Template_Manager_Customisation/<translate> en</translate>}} | http://docs.joomla.org/index.php?title=J3.3:How_to_use_the_Template_Manager&diff=105886&oldid=105804 | 2015-11-25T05:15:49 | CC-MAIN-2015-48 | 1448398444228.5 | [] | docs.joomla.org
Changes related to "Help34:Menus Menu Item Contact Category"
← Help34:Menus Menu Item Contact Category
16 November 2015
12:54 Help34:Components Content Categories Edit (2 changes; hist; +16) [MATsxm×2]
15 November 2015
15:08 MATsxm (Talk | contribs) moved page File:Help30-pagination.png to File:Help30-pagination-en.png
15:05 MATsxm (Talk | contribs) moved page File:Help30-article-category-list-display-select.png to File:Help30-article-category-list-display-select-en.png
29 October 2015
08:34 (Page translation log) MATsxm (Talk | contribs) marked Help34:Menus Menu Item Manager Edit for translation
08:32 Help34:Menus Menu Item Manager Edit (diff; hist; +165) Abulafia
| https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&days=30&from=&target=Help33%3AMenus_Menu_Item_Contact_Category | 2015-11-25T05:40:40 | CC-MAIN-2015-48 | 1448398444228.5 | [array(['/extensions/CleanChanges/images/Arr_r.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_d.png', 'Hide details -'],
dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/showuserlinks.png',
'Show user links Show user links'], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/showuserlinks.png',
'Show user links Show user links'], dtype=object)
array(['/extensions/CleanChanges/images/Arr_r.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_d.png', 'Hide details -'],
dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/Arr_.png', None], dtype=object)
array(['/extensions/CleanChanges/images/showuserlinks.png',
'Show user links Show user links'], dtype=object) ] | docs.joomla.org |
Security Checklist/Testing and Development
From Joomla! Documentation
Revision as of 11:32, 24 October 2008 by Dextercowley (Talk | contribs)
The More Suggested Tools link appears to be out of date and not working. I don't know if there is an updated one or not. Mark Dexter 13:32, 24 October 2008 (EDT) | https://docs.joomla.org/index.php?title=Talk:Security_Checklist/Testing_and_Development&direction=prev&oldid=76453 | 2015-11-25T04:54:36 | CC-MAIN-2015-48 | 1448398444228.5 | [] | docs.joomla.org |
, Database, and Web – Snapshot, Disks or Both.
Note: By default, all the resource types will be selected.
Here’s how the Activation screen looks:
– Virtual Machines. You can set an alert stating that CPU utilization above a threshold limit of, say, 90%, should trigger a notification.
Tags help to organize Azure cloud resources, and simplify the billing process.
Append Tags
Using CoreStack, you can add tags and the corresponding values will be appended for all the resources provisioned hereafter either through the Azure portal or through CoreStack.
For example, click Add to append Release as Tag Key and 4.5 as Tag Value.
Enforced Tags
Enforced tags are tags whose associated resources are actively monitored, with any non-compliance reported in the Compliance dashboard.
Policies
Here, select the policies that you want to be applicable for your cloud account. There are different types of policies you can select from – Standards, Security, Cost Optimization and Availability.
Schedules
This is to provide rules for scheduling auto shutdown of the virtual machine associated with the cloud account.
The options available are:
Another key scheduling feature that CoreStack offers is the AutoBackup. Follow these steps to activate the backup:
- Check the Virtual Machine Backup and Retention option.
- A set of fields appears, to help you provide the backup details:
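The backup fields themselves are not listed here; assuming they include a retention period in days (a hypothetical reading, not confirmed CoreStack behavior), the retention rule could be sketched as pruning any backup older than the window:

```python
from datetime import date, timedelta

def backups_to_delete(backup_dates, retention_days, today):
    # Keep backups taken within the retention window; prune the rest.
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in backup_dates if d < cutoff)
```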
Consumption
This section highlights the settings for VMs specific to this cloud account in the Self Service Catalog. Here you can select the Operating Systems, Resource Groups, Preferred Regions and Defined
Users can define their own budget and enter it manually in the User Defined section.
Auto Calculated
Select the account for which you want to view the details. For example, if you want to look at an Azure account, click on AzureRM available on the left side, and a list will appear on the right. From that list, click on the Account Name or select "View Settings" to view more options.
The complete account details, including the configuration settings, appear as shown:
This shows a summary of all the information provided during the four-step onboarding process: Authentication, Activation, Configuration and Authorization. | https://docs.corestack.io/getting-started/account-onboarding-process-azure/ | 2019-04-18T14:18:41 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_1.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_2.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_3.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_4.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_5.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_6.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_7.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_8.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_9.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_10.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_11.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_18.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_12.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/10/acc-cost-userdefined.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/10/acc-cost-autocalculate.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_14.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_15.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/10/azure_account_view_menu_lists.jpg',
None], dtype=object)
array(['https://docs.corestack.io/wp-content/uploads/2018/06/onboarding_add_account_screenshot_16.jpg',
None], dtype=object) ] | docs.corestack.io |
How to Create & Edit Slideouts
Creating Slideouts
-
Step-level
You can make changes to individual steps in your slideout flow that will not affect the entire flow.
Changing button text
To change the text of the buttons in your slideouts (that your end-users will see), simply click on the button and start editing the text.
Deleting & Reordering Steps
To edit individual steps within your slideout flow, you'll have to click on the specific step.
- To reorder your steps: drag and drop them into place.
- To delete a step: click the cog icon and then the trash bin of the step you'd like to remove.
By default, the modal shows an X button in the top-right corner to let your users opt out. If you uncheck the Skippable box, the X button will be removed and users will have to click through your entire flow before it closes.
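The reorder/delete operations and the Skippable flag can be modeled minimally (illustrative Python, not the Appcues implementation):

```python
class SlideoutFlow:
    # Minimal model of the editing operations described above.
    def __init__(self, steps, skippable=True):
        self.steps = list(steps)
        # When False, no X button: users must finish the whole flow.
        self.skippable = skippable

    def reorder(self, old_index, new_index):
        step = self.steps.pop(old_index)
        self.steps.insert(new_index, step)

    def delete(self, index):
        del self.steps[index]
```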
Theme
You can apply any of the themes you create at your Styles page to your flow. Select a theme from the Theme dropdown and it will automatically apply its styling to every step in your flow.
Learn more about creating themes here..
More Advanced Guides
For even more advanced guides on slideouts, see: | https://docs.appcues.com/article/184-creating-slideouts | 2019-04-18T14:30:26 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/589e3074dd8c8e73b3e98175/file-kKVvYGpNwK.gif',
None], dtype=object) ] | docs.appcues.com |
Data Visualization
Creating an Activity Chart
Okay! Let's create a new Activity Chart.
Select the Data Visualization module from the Modules page in the Form Tools administration section. There, on the first page you get redirected to, click the "Create New Visualization" button. A dialog window will appear that looks like the screenshot to the right. To make it as clear as possible, the two visualization types are clearly marked. Click on the "Activity Chart" image on the left.
After you click it, you will be redirected to a single page where you can configure your new Activity Chart. Like with the default Activity Chart settings, the page contains two sample visualizations: thumbnail and full size.
The screenshot to the right shows how the page looks before setting any values. Here's what each of the settings mean. | https://docs.formtools.org/modules/data_visualization/activity_charts/creating/ | 2019-04-18T14:24:15 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
from the ray's origin to the impact point.
In the case of a ray, the distance represents the magnitude of the vector from the ray's origin to the impact point.
In the case of a swept volume or sphere cast, the distance represents the magnitude of the vector from the origin point to the translated point at which the volume contacts the other collider.
Note that RaycastHit.point represents the point in space where the collision occurs.
using UnityEngine;

public class Example : MonoBehaviour
{
    // Movable, levitating object.

    // This works by measuring the distance to ground with a raycast,
    // then applying a force that decreases as the object reaches the
    // desired levitation height.

    // Vary the parameters below to get different control effects. For
    // example, reducing the hover damping will tend to make the object
    // bounce if it passes over an object underneath.

    // Forward movement force.
    float moveForce = 1.0f;

    // Torque for left/right rotation.
    float rotateTorque = 1.0f;

    // Desired hovering height.
    float hoverHeight = 4.0f;

    // The force applied per unit of distance below the desired height.
    float hoverForce = 5.0f;

    // The amount that the lifting force is reduced per unit of upward speed.
    // This damping tends to stop the object from bouncing after passing
    // over something.
    float hoverDamp = 0.5f;

    // Rigidbody component.
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();

        // Fairly high drag makes the object easier to control.
        rb.drag = 0.5f;
        rb.angularDrag = 0.5f;
    }

    void FixedUpdate()
    {
        // Push/turn the object based on arrow key input.
        rb.AddForce(Input.GetAxis("Vertical") * moveForce * transform.forward);
        rb.AddTorque(Input.GetAxis("Horizontal") * rotateTorque * Vector3.up);

        RaycastHit hit;
        Ray downRay = new Ray(transform.position, -Vector3.up);

        // Cast a ray straight downwards.
        if (Physics.Raycast(downRay, out hit))
        {
            // The "error" in height is the difference between the desired
            // height and the height measured by the raycast distance.
            float hoverError = hoverHeight - hit.distance;

            // Only apply a lifting force if the object is too low (ie, let
            // gravity pull it downward if it is too high).
            if (hoverError > 0)
            {
                // Subtract the damping from the lifting force and apply it
                // to the rigidbody.
                float upwardSpeed = rb.velocity.y;
                float lift = hoverError * hoverForce - upwardSpeed * hoverDamp;
                rb.AddForce(lift * Vector3.up);
            }
        }
    }
}
See Also: Physics.Raycast, Physics.Linecast, Physics.RaycastAll.
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/RaycastHit-distance.html | 2019-04-18T15:00:29 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.unity3d.com |
Differences between Motor and PyMongo¶
Important
This page describes using Motor with Tornado. Beginning in version 0.5 Motor can also integrate with asyncio instead of Tornado.
Major differences¶
Connecting to MongoDB¶
Motor provides a single client class,
MotorClient. Unlike PyMongo’s
MongoClient, Motor’s client class does
not begin connecting in the background when it is instantiated. Instead it
connects on demand, when you first attempt an operation.
Coroutines¶
Motor supports nearly every method PyMongo does, but Motor methods that do network I/O are coroutines. See Tutorial: Using Motor With Tornado.
Threading and forking¶
Multithreading and forking are not supported; Motor is intended to be used in a single-threaded Tornado application. See Tornado’s documentation on running Tornado in production to take advantage of multiple cores.
Minor differences¶
GridFS¶
File-like
PyMongo’s
GridInand
GridOutstrive to act like Python’s built-in file objects, so they can be passed to many functions that expect files. But the I/O methods of
MotorGridInand
MotorGridOutare asynchronous, so they cannot obey the file API and aren’t suitable in the same circumstances as files.
Setting properties
In PyMongo, you can set arbitrary attributes on a
GridInand they’re stored as metadata on the server, even after the
GridInis closed:
fs = gridfs.GridFSBucket(db) grid_in, file_id = fs.open_upload_stream('test_file') grid_in.close() grid_in.my_field = 'my_value' # Sends update to server.
Updating metadata on a
MotorGridInis asynchronous, so the API is different:
@gen.coroutine def f(): fs = motor.motor_tornado.MotorGridFSBucket(db) grid_in, file_id = fs.open_upload_stream('test_file') yield grid_in.close() # Sends update to server. yield grid_in.set('my_field', 'my_value')
See also’s) | https://motor.readthedocs.io/en/stable/differences.html | 2019-04-18T15:16:55 | CC-MAIN-2019-18 | 1555578517682.16 | [] | motor.readthedocs.io |
CodeRed CMS 0.10.0 release notes¶
New features¶
NEW event & calendar pages! See Events.
NEW “Embed Media” block replacing the “Embed Video” block. Embed Media supports YouTube, Vimeo, Tweets, Facebook posts, GitHub gists, Spotify, Etsy, Tumblr, and dozens of other sources.
NEW tags on all
CoderedPagepages. Tags provide a global and flexible way of organizing and categorizing content. More tagging features coming soon.
NEW color picker available in the wagtail admin. Use
coderedcms.fields.ColorFieldon your models or
coderedcms.widgets.ColorPickerWidgeton Django form fields.
NEW official documentation! More user and how-to guides coming soon. Available at.
Updated to Wagtail 2.3.
Added support for Django 2.1. Supports Django 1.11 through 2.1.
Added additional template blocks in
coderedcms/pages/base.htmlfor easier extending.
Maintenance¶
Replace Wagtail version with CodeRed CMS version in admin.
Replace built-in cache with wagtail-cache.
Upgrade considerations¶
Some template blocks added to
coderedcms/pages/web_page.htmlin 0.9 were removed and replaced with more general blocks in
coderedcms/pages/base.html.
“Formatted Code” block (
coderedcms.blocks.CodeBlock) was removed. If needed, use a 3rd party block instead such as wagtailcodeblock, or use the new “Embed Media” block with GitHub gists.
CODERED_CACHE_PAGESsetting replaced with
WAGTAIL_CACHE.
CODERED_CACHE_BACKENDsetting replaced with
WAGTAIL_CACHE_BACKEND.
Existing projects must add
wagtailcacheto
INSTALLED_APPSin project settings.
Existing projects must make and apply new migrations. | https://docs.coderedcorp.com/cms/stable/releases/v0.10.0.html | 2019-04-18T15:06:01 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.coderedcorp.com |
Custom Fields
Saving tab
This is an optional section. If the data for your form field type is just passed along in the POST request like the "native" HTML field types (textboxes, textareas etc), you don't need to specify anything here. The code will simply look at the POST request, pull the appropriate value out and insert it into the database. This is very quick. However, it doesn't handle all cases. The "Saving" tab lets you add custom PHP code that executes on fields of this type, prior to adding to the database, so you can do whatever you want to doctor the field ata.
Form Tools stores all data for a form field in the fields's [prefix]form_[form ID] database column. In some cases, this may require serializing the data, then de-serializing it when viewing / editing.
As a simple example, the Phone Number field type lets users create a phone number field of an arbitrary format. They enter a string like "(xxx) xxx-xxxx" and the field type (on the Displaying -> Edit Field tab) converts that into three textboxes. So, thinking as a developer, that means that when a form containing these phone numbers is submitted, the POST request will include an arbitrary number of textboxes (depending on the string they entered) - each containing a part of the phone number field. So in order to store the entire phone number in the (single) database field, you need to piece it all together and then submit the concatenated - or serialized - result. And that's precisely what's being done in this field type, shown in the screenshot above.
The phone number stores all the phone number parts, separated by the pipe character, so it's easy to separate and re-create when viewing / editing.
The $vars variable contains all the information about the POST request and field - including whatever arbitrary settings were specified for the field type. To see what's available, the simplest thing to do is just do a print_r($vars); exit; in that section, then in the Form Tools interface, submit the Edit Submission page on a View that contains this field type. That will display on the screen the entire contents of the variable.
You MUST define a $value var
One last tip: if you enter custom PHP code here, you must define a $value variable that contains the value to store in the database. It should be a string or number, nothing else. Arrays will just be converted to "Array()" which won't help anybody! | https://docs.formtools.org/modules/custom_fields/saving_tab/ | 2019-04-18T14:56:49 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
PayWhirl allows you to add shipping charges to every purchase. This can be based on the customers location or total price of the order.
In this tutorial we'll cover how to charge shipping rates based on order price.
To get started navigate to the Shipping & Taxes menu item:
If you haven't setup any shipping rules you can click the "Create a Shipping Rule" button. If you already have shipping rules you will need to click the "New Rule" button in the top right corner of PayWhirl.
You can create shipping rules based on a specific price range on PayWhirl. In this example we will add a $10 shipping fee to order totals under $50. Start by giving your shipping rule a name and price. Then you will select the type as "Based on Price Range":
This rule will add a $10 shipping charge to all order totals between $0 and $50.
NOTE: If two or more shipping rules apply to a purchase the one with the more specific rule will be applied.
A price range is the most specific shipping rule and will always apply as long as the customer's "shipping total" is in the specified range.
You have the ability to be as specific as you want with shipping rules. Lets say you have a shipping rule set to apply on all purchases. If you create a rule for a price range, that rule will take apply to purchases in the price range instead of the more general 'on ever purchase' rule. Same goes with a location based rule. If you have a two rules, one for every purchase and one for customers in the United States, the more specific location rule will be applied.
Related Articles:
PayWhirl widgets & embed codes
How to create payment plans
How to charge sales tax for specific locations
If you have any questions about shipping rules please let us know!
Team PayWhirl | https://docs.paywhirl.com/PayWhirl/getting-started-with-paywhirl/configuring-paywhirl-settings-for-your-business/how-to-add-shipping-charges-based-on-the-total-price-of-the-order | 2019-04-18T14:37:17 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://downloads.intercomcdn.com/i/o/59102005/702c7eec658c64bab59df24c/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/77119440/96a44e5c09316d7ee53890af/Screen_Shot_2016-03-10_at_9.28.53_AM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/77119441/a88b38f95ebcd6ca334a00a3/Screen_Shot_2016-03-10_at_9.49.54_AM.png',
None], dtype=object) ] | docs.paywhirl.com |
Delete Port
You can delete a port that is no longer required for use. You can delete multiple ports at a time.
You must be a self-service user or an administrator to perform this operation.
To delete one or more ports, follow the steps given below.
- Log in to Clarity.
- Click Networks in the left panel.
- Click the Ports tab.
- Select the checkbox for the ports to edit from the list of ports.
- Click Delete Ports on the toolbar seen above the list of ports.
- Click Delete Ports.
The selected ports are deleted and are no longer available for use. | https://docs.platform9.com/user-guide/networks/delete-port/ | 2019-04-18T14:41:52 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.platform9.com |
As a developer, you can quickly view the status of your orders and existing VMs developer has to contribute towards optimization of cloud costs, and hence this graph is an excellent tool that indicates the monthly usage cost of the cloud resources.
Section 4: Total Orders
This section shows the total number of orders received in a month-on-month basis.
Section 5: Order Status Distribution
This pie chart offers a quick view of the number of status terminated, rejected, etc. | https://docs.corestack.io/my-dashboard/ | 2019-04-18T15:02:14 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://docs.corestack.io/wp-content/uploads/2018/10/img_37_developer_dashboard_screenshot.jpg',
None], dtype=object) ] | docs.corestack.io |
Problem: SSO Server Redirects to Original URL, Not to Vanity Databricks URL
Problem
When you log into Databricks using a vanity URL (such as
mycompany.cloud.databricks.com), you are redirected to a single sign-on (SSO) server for authentication. When that server redirects you back to the Databricks website, the URL changes from the vanity URL to the original deployment URL (such as
dbc-XXXX.cloud.databricks.com). This can happen even if a CNAME record exists that points to the vanity URL.
Cause
This issue happens if the SSO administrator used the original deployment URL when they configured the Databricks application in the Identity Provider (IdP). | https://docs.databricks.com/user-guide/faq/cname-migration.html | 2019-04-18T15:31:16 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.databricks.com |
Installing OpenStack CLI Tools on Windows
Prerequisites
- You must have admin rights on your Windows computer to run OpenStack CLI commands.
- You must have a Python 2.7 environment installed to run OpenStack CLI commands. Download the latest Python 2 release from the Python site.
- Ensure that you have installed Git Bash by installing git for Windows.
- Ensure you have installed the Python package management system,
pip. To install it, launch Git Bash. Use the
easy_installcommand from the setuptools package:
C:\>easy_install pip
- Alternatively, install pip with the unofficial binary installer provided by Christoph Gohlke.
- Also, it’s best to install the CLI in a virtual environment, so install
virtualenvwith
pip install virtualenvwithin Git Bash.
These instructions are tested on Windows 10 as a user with administrator privileges.
Using a virtual environment for the installation
- Start Git Bash.
After installing virtualenv with
pip install virtualenv, create a virtual environment with this command where
{name}is the name of the virtual environment:
PS C:\Python27> virtualenv {name}
For example:
PS C:\Python27> virtualenv openstack-cli
To activate the virtual environment, run the activate script:
PS C:\Python27> .\openstack-cli\Scripts\activate
Now that the virtual environment is active, your prompt changes to indicate the virtualenv you are currently using:
(openstack-cli) PS C:\Python27>
Installing the OpenStack client
With the virtual environment active, install the OpenStack client, which in turn installs python-novaclient and the other dependent clients:
(openstack-cli) PS C:\> pip install python-openstackclient
Now that the client is installed, you must provide credentials in the environment in order to run commands. Refer to Providing Metacloud Credentials to CLI Tools to continue. | http://docs.metacloud.com/4.6/user-guide/installing-openstack-cli-tools-windows/ | 2019-04-18T14:18:35 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.metacloud.com |
Calls and callbacks follow this flow:
- Invoke a method to perform a specific action.
The invoked method invokes the synchronization helper method, specifying a parameter to determine which callback should be returned.
- The synchronization helper method waits until one of the following occurs:
- The system returns a callback
- A timeout of 20 seconds
- An exception.
- if the system returns a callback, the main flow resumes, and the the application handles and processes the information returned in the callback.
Callback Parameter Order
The order of parameters is defined in the C library defines the order of parameters but can not enforce it.
The
createTender callback returns a tender object. Use this to execute subsequent actions on a tender. Evaluate other arguments using the provided helper methods.
Error Messages can be used directly in logging or feedback without helper methods.
Actionable vs. Informational Callbacks
Several callbacks occur during the processing of a tender. There are two types of callback: Actionable and Informational.
- Actionable callbacks require action to complete the transaction.
- Informational callbacks can be logged but are not required to complete a transaction.
Actionable callbacks:
Informational callbacks: | https://docs.adyen.com/developers/point-of-sale/build-your-integration/classic-library-integrations/java-native-interface-integration/calls-and-callbacks-jni | 2019-04-18T14:28:50 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.adyen.com |
Pricing explained
How teams pay for Mailshake
Mailshake is paid for by your team. Individuals can belong to as many teams as they like, but belonging to a team costs the team $29 per person.
Each user of a team costs $29 per month (or $259 annually). With each user, your team is allowed to connect an additional mail account. So when you upgrade the number of users on your team to 5, you're allowed to have up to 5 teammates (including yourself) and up to 5 connected mail accounts.
Example: 3 teammates and only 1 mail account for sending
3 users X $29 = $87 / month
(you would be able to connect 2 more mail accounts if you like)
Example: 2 teammates, but with 4 mail accounts for sending
4 users X $29 = $116 / month
(you need a "4 user" plan to get that many mail accounts, so you could add 2 more teammates if you like)
When are you billed?
The date that your subscription started is the date of the month you will be billed continually. So if you signed up on June 8th, you'll be billed each month on the 8th (starting immediately, on June 8th). If you switch between a yearly and a monthly plan, your recurring date will be updated to the day you made that change.
Does Mailshake charge ahead or behind?
If you sign up on April 1st, your credit card will be charged on April 1st for your use of Mailshake through April 30th.
Prorating upgrades
When you upgrade your plan (either by moving to a more expensive plan, increasing users, etc.), you will be charged a prorated amount.
For example, let's say your billing starts on April 1st, and on April 15th you change from 1 user to 3 users. On April 1st you were charged $29 which pays for the full month of April. April 15th is 50% of the way through the month and that's when you upgraded to 3 users. A few things happen here:
- You're credited $14.50 since the "1 user" plan is no longer in use for the remaining 50% of the month
- You're charged $43.50 to pay for your 3 user plan from April 15th to April 30th (normally $87 for the full month)
You will get to preview these charges before completing your upgrade and when you do, these one-time charges will be collected immediately.
Prorating downgrades
When you downgrade your subscription, you may end up with a credit on your account. Say you paid for 5 users and then downgraded to 1 partway through the month. For our example we'll say this prorating event gives you a credit of $45 on your account. You are not refunded this amount, but your Mailshake charges in the coming months will take from this balance before charging your account again.
Cancelations
Here are the steps to cancel your team's subscription. When you cancel a subscription, you tell Mailshake to stop billing you. You've already paid for the full billing cycle, so you'll be able to continue using Mailshake until the day your subscription ends. So if your billing cycle starts on June 8th, you have until July 7th to cancel without being charged for July 8th's billing cycle.
All of your campaign data is deleted 30 days after your subscription expires. If you'd like to hang on to your data, consider "hibernating" instead.
Hibernating
Here are the steps to hibernate your team's subscription. Hibernating stops your billing just like cancelling does, but we will automatically resume your subscription after the time period you select and we do not delete your campaign data.
Refunds
Mailshake only provides refunds for our 30-day money-back guarantee. These are refunds that are requested within 30 days of your team's signup date and only apply to the initial charge. | https://docs.mailshake.com/article/56-pricing-explained | 2019-04-18T15:17:51 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/56f5e15f9033601eb6736648/images/5b7438ae2c7d3a03f89db9af/file-yk78GzDHsi.png',
None], dtype=object) ] | docs.mailshake.com |
sys.sysprocesses (Transact-SQL)
SQL Server (starting with 2008)
Azure SQL Database
Azure SQL Data Warehouse
Parallel Data Warehouse
Contains information about processes that are running on an instance of SQL Server. These processes can be client processes or system processes. To access sysprocesses, you must be in the master database context, or you must use the master.dbo.sysprocesses three-part name..
Remarks
If a user has VIEW SERVER STATE permission on the server, the user will see all executing sessions in the instance of SQL Server; otherwise, the user will see only the current session.
See Also
Execution Related Dynamic Management Views and Functions (Transact-SQL)
Mapping System Tables to System Views (Transact-SQL)
Compatibility Views (Transact-SQL)
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/sql/relational-databases/system-compatibility-views/sys-sysprocesses-transact-sql?view=sql-server-2017 | 2019-04-18T15:13:08 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.microsoft.com |
Specify the name, description, and session details for the new, click Add.
- Enter a Name and Description for the port mirroring session.
- (Optional) Select Allow normal IO on destination ports to allow normal IO traffic on destination ports.
If you do not select this option, mirrored traffic will be allowed out on destination ports, but no traffic will be allowed in.
- (Optional) Select Encapsulation VLAN to create a VLAN ID that encapsulates all frames at the destination ports.
If the original frames have a VLAN and Preserve original VLAN is not selected, the encapsulation VLAN replaces the original VLAN.
- (Optional) Select Preserve original VLAN to keep the original VLAN in an inner tag so mirrored frames are double encapsulated.
This option is available only if you select Encapsulation VLAN.
- (Optional) Select Mirrored packet length to put a limit on the size of mirrored frames.
If this option is selected, all mirrored frames are truncated to the specified length.
- Click Next. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-42C0D7D4-E0DB-462D-BC49-4E8464B6E2F5.html | 2019-04-18T14:17:58 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.vmware.com |
Installation Overview
What is “installation”?
Appcues works by adding a bit of JavaScript to your application that allows Appcues to show content you build to users on your existing pages. This script loads with the page and uses user properties and events you send us to determine which users should see what content. Once you install Appcues, you can publish content live to your users!
Installation Options
If you use Segment, a third-party analytics tool, you can install Appcues instantly by using the Segment option on the installation dashboard. If you don't use Segment, read on for an overview of getting installed with our step-by-step guide!
Installation: step-by-step
Installation plan template
To make it easy to figure out what events and!
How does Appcues work once installed?
Appcues will show your published flows to targeted audiences based on conditions that you set. Conditions can include any combination of URL, domain, device type, browser language, user properties, events completed, and more.
Appcues will capture user properties and events that you choose to send to us. This information is what you'll use for audience targeting so that your flows will show to the right person at the right time.
Having issues with your installation?
Check out this troubleshooting doc, or send us an email at [email protected]. We'll help get you installed just right! | https://docs.appcues.com/article/48-install-overview | 2019-04-18T15:17:29 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.appcues.com |
Updating Danube Cloud¶
Table of Contents
Updating Danube Cloud Software¶
The Danube Cloud software can be updated via the system maintenance GUI view or the API. This section describes updating of Danube Cloud by using es command line tool, which is used to perform API calls.
Starting with Danube Cloud 3.0 the update functionality was completely reimplemented and a maintenance GUI view was added. Before version 3.0, the update feature was considered experimental and updating was usually performed manually.
-.
See also
Some features may require a new version of the Platform Image.
In the examples below the parameters have following meaning:
<version>- version of the Danube Cloud software to which system should be updated. This can be either a git tag or SHA1 of a git commit. A version tag is prefixed with a
vcharacter (e.g,
v3.0.1-rc1). All available version tags are visible here:
update.key- X509 private key file in PEM format used for authentication against EE git server.
update.crt- X509 private cert file in PEM format used for authentication against EE git server.
Updating Danube Cloud Software on Management Server¶
When updating Danube Cloud, the management server must be updated first.
Note
file:: prefix must be used when passing files to
-key/
-cert parameters, otherwise the es command will not parse them correctly.
Note
You can set API_KEY variable in the es tool, so you don’t have to use login command. For more info see:
user@laptop:~ $ es login -username admin -password $PW user@laptop:~ $ es get /system/version user@laptop:~ $ es set /system/update -version <version> -key file::/full/path/to/update.key -cert file::/full/path/to/update.crt user@laptop:~ $ es get /system/version
Updating Danube Cloud Software on Compute Nodes¶
For a compute node one additional parameter needs to be provided:
<hostname>- name or UUID of the compute node which you are updating.
user@laptop:~ $ es login -username admin -password $PW user@laptop:~ $ es get /system/node/<hostname>/version user@laptop:~ $ es set /node/(hostname)/define -status 1 # First set the node to maintenance state user@laptop:~ $ es set /system/node/<hostname>/update -version (version) -key file::/full/path/to/update.crt -cert file::/full/path/to/update.crt user@laptop:~ $ es set /node/<hostname>/define -status 2 # Set the node back to online state user@laptop:~ $ es get /system/node/<hostname>/version
Updating Danube Cloud Software Manually¶
In case something goes wrong with the software update it is always possible to manually update Danube Cloud on the mgmt01 server or compute nodes.
The update procedure is essentially the same as performed from the GUI or API. In both cases, the
esdc-git-update [1] script is run on the mgmt01 virtual server or compute node and if successful, the Danube Cloud services should be restarted. It requires one parameter -
<version>, which is the version of the Danube Cloud software. This can be either a git tag or SHA1 of a git commit. A version tag is prefixed with a
v character (e.g,
v3.0.1-rc1). All available version tags are visible here:
Note
When updating Danube Cloud, the software on the management server must be updated first and then the procedure should be repeated on all compute nodes.
Note
Please, always read the release notes before performing an update:
Note
Please make sure that users have only read access to Danube Cloud during manual update.
First, log in as root to the mgmt01 server (should be update first) or compute node:
user@laptop:~ $ ssh root@node01 [root@node01 ~] ssh root@<ip-of-mgmt01> # available from the first compute node
Examine the current Danube Cloud version:
[root@mgmt-or-node ~] cd /opt/erigones [root@mgmt-or-node erigones] cat core/version.py __version__ = '3.0.0' __edition__ = 'ce' [root@mgmt-or-node erigones] git status # HEAD detached at v3.0.0 nothing to commit, working directory clean
Run the
esdc-git-update[1] upgrade script:
[root@mgmt-or-node erigones] bin/esdc-git-update <version> ... You should now restart all Danube Cloud system services (bin/esdc-service-control restart)
If everything goes well, restart the Danube Cloud system services:
[root@mgmt-or-node erigones] bin/esdc-service-control restart
Updating Platform Image on Compute Nodes¶
A Platform Image contains a modified version of the SmartOS hypervisor. Each version of Danube Cloud is tested and released with a specific version of the Platform Image. The Platform Image is usually upgraded with each major release of Danube Cloud or when there is some security issue in the kernel.
Note
Please, always read the release notes before performing an update:
The platform update should be carried out manually by running the
esdc-platform-upgrade script on a compute node. It requires one parameter - the Danube Cloud
<version>, which is the same as the git tag version identifier for the Danube Cloud software.
Depending on the node installation type, the script does one of the following:
- USB-booted compute node: downloads a compute node USB image and overwrites the contents of the existing USB image with it.
- HDD-booted compute node: finds out the target platform version according to the provided Danube Cloud version; downloads a platform image; creates and activates a new boot environment.
A successful platform update should be followed by a reboot of the compute node.
user@laptop:~ $ ssh root@node01 [root@node01 ~] /opt/erigones/bin/esdc-platform-upgrade v3.0.0 ... *** Upgrade completed successfully *** [root@node01 ~] init 6 # reboot | https://docs.danubecloud.org/user-guide/maintenance/update.html | 2019-04-18T14:33:06 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.danubecloud.org |
This document is related with Business Builder Wordpress Plugin.
In this part of the documentation, installing and activation will be summarized.
Once you install plugin, you need to activate first.
Business Builder requires Redux for management. Once you activate the plugin, please install Redux. Redux is a free, open source library.
You don't need to memorize shortcode arguments. Business Builder comes with editor panel to install shortcodes.
Example for the shortcode editor panel: | http://docs.qualstudio.com/business-builder-documentation/documentation.html | 2019-04-18T14:24:07 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.qualstudio.com |
Using
minit
The MontageJS initializer, minit, is a multipurpose command line utility that provides a convenient way to kickstart and serve your MontageJS projects locally. With
minit you can quickly generate blank application projects, directories, and add components to an existing project.
Basic Examples of Using
minit
Run the following commands from within your project directory:
- To create a new project:
$ minit create:app -n app-name
This generates a new directory `app-name`, which contains the default MontageJS application directories with production dependencies.
- To add a component to a project:
$ minit create:component -n component-name
This generates a new UI component `component-name.reel` in the `ui` directory of the current application directory. It contains default HTML, CSS, and JS files for your component.
- To spin up a local server for previewing the current project in the browser:
$ minit serve &
The ampersand `&` flag ensures that you don't have to open a second Terminal window while working on your project. To close the server, run `minit` again then hit `Ctrl C`.
- To update to the latest version of
minit:
$ npm install -g minit@latest
- For a complete list of
minitoptions:
$ minit --help
See also the minit repo on Github. | http://docs.montagestudio.com/montagejs/tools-minit.html | 2017-02-19T18:38:00 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.montagestudio.com |
Caching that's multi-process safe
Contents
1 Problem
Develop a general procedure (or software) for implementing a cache that can be shared by several processes. There seems to be no readily available library that implements this kind of caching. It certainly needs to work for Unix; Windows support would be good, but is not required.
2 Information
- see open(2) using O_CREAT | O_RDWR | O_EXCL (atomically creates a file or returns an error if it exists)
3 Proposed solution
3.1 Assumptions
- The consistency of objects in the cache is most important
- We do not want to lock the entire cache
- The cache must work with multiple processes
- We can tolerate some errors when computing the size of the cache (the objects in it and their sizes)
- The BES cache for compressed data will be one directory
- It should use some sort of LRU or similar scheme
3, have a shared lock too. Once the shared (i.e., read) lock is in place, the 'exclusive lock' can be removed, thus ensuring that at no time in the sequence is the file left in the unprotected state.
3.3 Both the BES and the HTTP caching code in libdap have support for releasing resources, so it's possible to use flock(2), etc., for this, which also means we can use the open(2) and creat(2) calls for lock management.
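As a sketch of that protocol (illustrative Python, not BES code), the atomic create uses O_CREAT | O_RDWR | O_EXCL and the lock transitions use flock. Note that flock's exclusive-to-shared conversion is a single call here, although POSIX does not guarantee the conversion itself is atomic on every platform:

```python
import fcntl
import os

def create_and_lock(path):
    """Atomically create the cache file (raises FileExistsError if it
    already exists) and take an exclusive lock for writing the data."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_EXCL, 0o644)
    fcntl.flock(fd, fcntl.LOCK_EX)   # exclusive lock while decompressing
    return fd

def downgrade_to_shared(fd):
    """Convert the exclusive lock to a shared (read) lock once the data
    has been written, so other readers can proceed."""
    fcntl.flock(fd, fcntl.LOCK_SH)

def release(fd):
    """Closing the descriptor also releases any flock lock held on it."""
    os.close(fd)
```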
3.4 Operations the cache must provide
- Create and lock a file so that data can be decompressed
- Determine the cache size
- If it's too big, purge to 90% of max size
- Get a shared lock on decompressed data
3.5 Cache design
In addition to the files themselves, maintain information about the time each file was last used and their size. The cache should also probably have information about its total size. Store this in the cache in files, so that it's accessible for the processes accessing the cache.
The cache will be a directory.
Each file to be cached will have a name that is a function of its pathname, so it is guaranteed to be unique. The BES decomp cache will use the 'take the pathname and replace the slashes with hashes' technique to make the names unique. Example:
/usr/local/data/modis.hdf --> #usr#local#data#modis#hdf
The leading hash can be used or not, but it's probably easier to include it. This is a pretty easy transform; simply replace the characters [/.] with #
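For illustration, here is the transform as a small Python sketch (not the BES implementation):

```python
def cache_name(pathname):
    """Map a data file's pathname to its unique cache-file name by
    replacing every '/' and '.' with '#' (leading '#' included)."""
    return pathname.replace("/", "#").replace(".", "#")

print(cache_name("/usr/local/data/modis.hdf"))  # -> #usr#local#data#modis#hdf
```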
It would be best if the size and last-use time for each file in the cache were not recorded in a single file.
How to purge:
- Look at every data file
- Store the basename (i.e., data file name), size and last used (access) time
- Find the oldest files and remove until the total size of the cache falls below 90% of the max size.
- if a file is locked when the cache tries to delete it, just skip it, since its access has clearly just become recent!
- with every delete, update the 'total size' file
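The purge procedure above can be sketched as follows (an illustrative Python sketch, not BES code; `is_locked` is a hypothetical predicate standing in for a failed non-blocking lock attempt):

```python
import os

def purge(cache_dir, max_size, is_locked):
    """Remove least-recently-used cache files until the total size of
    the cache falls below 90% of max_size.  Locked files are skipped,
    since their access has clearly just become recent.  A real
    implementation would also update the 'total size' file per delete."""
    entries = []
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        st = os.stat(path)
        entries.append((st.st_atime, st.st_size, path))  # last used, size, file
    total = sum(size for _, size, _ in entries)
    for _, size, path in sorted(entries):                # oldest first
        if total <= 0.9 * max_size:
            break
        if is_locked(path):
            continue
        os.remove(path)
        total -= size
    return total
```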
How to track the total size of the cache's data:
- maintain a single 'global' cache control file
- store the total size there.
- for every file created, update that size
- for every file deleted, update that size
- use an exclusive rd/wr lock for those updates
- Note: This is a compromise given our original goal of never locking the whole cache, because it technically locks the entire cache for write operations, but does not lock it for read operations. Also, the time the file is locked can be much smaller than the time required to actually write one of the data files.
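The bookkeeping described above can be sketched as follows (an illustrative Python sketch, not BES code; the control file is assumed to hold a single integer byte count):

```python
import fcntl
import os

def adjust_total_size(control_path, delta):
    """Add delta (positive for a create, negative for a delete) to the
    byte count stored in the cache's control file, under an exclusive
    lock so concurrent processes never see a torn update."""
    fd = os.open(control_path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)    # brief exclusive rd/wr lock
        text = os.read(fd, 64).decode()
        total = int(text) if text.strip() else 0
        total += delta
        os.lseek(fd, 0, os.SEEK_SET)
        os.ftruncate(fd, 0)
        os.write(fd, str(total).encode())
        return total
    finally:
        os.close(fd)                      # closing releases the lock
```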
3.5.1 Software we have
In the BES software, the class BESContainer provides a partially abstract base class that holds two methods: access() and release() that are used to access and release the 'container' that holds the data. The class BESFileContainer has concrete implementations for these methods that uncompress files if need be. The class BESUncompressManager performs the decompression operation and uses the BESCache class to store the result. BESCache handles the locking operations.
To avoid making a new BESCache object on every call to BESContainer::access(), maybe move BESCache into BESUncompressManager. | http://docs.opendap.org/index.php/Caching_that%27s_multi-process_safe | 2017-02-19T19:04:28 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.opendap.org |
public void setSystemPropertiesModeName(java.lang.String constantName) throws java.lang.IllegalArgumentException
constantName- name of the constant
java.lang.IllegalArgumentException- if an invalid constant was specified
setSystemPropertiesMode(int)
public void setSystemPropertiesMode(int systemPropertiesMode)
The default is "fallback": if a placeholder cannot be resolved with the specified properties, a system property will be tried. "override" will check for a system property first, before trying the specified properties. "never" will not check system properties at all.
SYSTEM_PROPERTIES_MODE_NEVER,
SYSTEM_PROPERTIES_MODE_FALLBACK,
SYSTEM_PROPERTIES_MODE_OVERRIDE,
setSystemPropertiesModeName(java.lang.String)
public void setSearchSystemEnvironment(boolean searchSystemEnvironment)
Default is "true". Switch this setting off to never resolve placeholders against system environment variables. Note that it is generally recommended to pass external values in as JVM system properties: This can easily be achieved in a startup script, even for existing environment variables.
NOTE: Access to environment variables does not work on the Sun VM 1.4, where the corresponding System.getenv(java.lang.String) support was disabled - before it eventually got re-enabled for the Sun VM 1.5. Please upgrade to 1.5 (or higher) if you intend to rely on the environment variable support.
setSystemPropertiesMode(int),
System.getProperty(String),
System.getenv(String)
protected java.lang.String resolvePlaceholder(java.lang.String placeholder, java.util.Properties props, int systemPropertiesMode)
The default implementation delegates to resolvePlaceholder(placeholder, props) before/after the system properties check. Subclasses can override this for custom resolution strategies, including customized points for the system properties check.
placeholder- the placeholder to resolve
props- the merged properties of this configurer
systemPropertiesMode- the system properties mode, according to the constants in this class
setSystemPropertiesMode(int),
System.getProperty(java.lang.String),
resolvePlaceholder(String, java.util.Properties)
protected java.lang.String resolvePlaceholder(java.lang.String placeholder, java.util.Properties props)
Subclasses can override this for customized placeholder-to-key mappings or custom resolution strategies, possibly just using the given properties as fallback.
Note that system properties will still be checked before respectively after this method is invoked, according to the system properties mode.
placeholder- the placeholder to resolve
props- the merged properties of this configurer
null if none
setSystemPropertiesMode(int)
protected java.lang.String resolveSystemProperty(java.lang.String key)
key- the placeholder to resolve as system property key
null if not found
setSearchSystemEnvironment(boolean),
System.getProperty(String),
System.getenv(String)
protected void processProperties(ConfigurableListableBeanFactory beanFactoryToProcess, java.util.Properties props) throws BeansException
processProperties in class PropertyResourceConfigurer
beanFactoryToProcess- the BeanFactory used by the application context
props- the Properties to apply
BeansException- in case of errors
@Deprecated protected java.lang.String parseStringValue(java.lang.String strVal, java.util.Properties props, java.util.Set<?> visitedPlaceholders)
In favor of resolvePlaceholder(java.lang.String, java.util.Properties, int) with PropertyPlaceholderHelper. Only retained for compatibility with Spring 2.5 extensions.
strVal- the String value to parse
props- the Properties to resolve placeholders against
visitedPlaceholders- the placeholders that have already been visited during the current resolution attempt (ignored in this version of the code) | http://docs.spring.io/spring-framework/docs/3.2.0.RC2/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html | 2017-02-19T18:44:58 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.spring.io |
Documentation Updates for the Period Ending November 7, 2015
New Docs
The following documents are new to the help desk support series as part of our regular documentation update efforts:
- Loop Detection
Recently Edited & Reviewed Docs
The following documents were edited by request, as part of the normal doc review cycle, or as a result of updates to how the Fastly app operates:
- Adding or modifying headers on HTTP requests and responses
- Implementing API cache control
- Integrations
- Ordering a paid TLS option
- Setting up remote log streaming
- User roles and how to change them
- VCL regular expression cheat sheet
- Working with services
Our documentation archive contains PDF snapshots of docs.fastly.com site content as of the above date. Previous updates can be found in the archive as well. | https://docs.fastly.com/changes/2015/11/07/changes | 2020-05-25T15:23:24 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.fastly.com |
If you have followed our quick start guide you probably already have a new Frontity project. If not, run npx frontity create <project-name> and you'll get a project with the same structure as the one explained in this guide.
So, the important pieces that you get, once a project is created, are:
A package.json file where the dependencies needed for your app to work are declared.
A frontity.settings.js file where the basic setup for your app is already populated.
A packages folder with mars-theme installed inside.
These are the basic dependencies we'll need for our app to work and to develop a Frontity app:
@frontity/wp-source : this package is the one that connects to the WordPress REST API of your site and fetches all the data needed on your Frontity theme.
@frontity/tiny-router : this is a small package that handles window.history and helps us with the routing on mars-theme.
@frontity/mars-theme : this is our starter theme, where we build our site with React.
As you can see, our mars-theme dependency has no version but a path. This is how we need to add our custom packages (those we are developing inside the app) to our package.json so they will be treated as if they were living in node_modules.
In this file we define our project settings. We also define the extensions needed to successfully run a Frontity app. You can learn more about this file in the Settings reference.
This folder is where we create all the custom extensions we want to develop for our site. Usually it will be a custom theme. In this case, the one installed by default is our
mars-theme. Any changes done in these extensions during development will refresh our site automatically.
When starting frontity, all the packages defined in frontity.settings.js are imported by @frontity/file-settings, and the settings and exports from each package are merged by @frontity/core into a single store, where you can access the state and actions of the different packages during development using @frontity/connect (our state manager).
Still have questions? Ask the community! We are here to help 😊 | https://docs.frontity.org/guides/understanding-mars-theme | 2020-05-25T14:23:16 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.frontity.org |
Fetch report results
This endpoint retrieves the report columns and rows and, depending on the selected format, also the report metadata. You can only fetch a report when its status is READY.
Note:
When running audience reports:
You might find the following segment values in your segmentation breakdowns:
Example
Request header:
GET /v2/reports/e683514b-b4c9-4988-9b2f-9d6a6a300c08/result HTTP/1.1 Host: api.videoplaza.com Accept: text/plain x-o-api-key="<your key>"
Request body: -
Success response:
HTTP status: 200 (OK) Header: Content-Disposition: attachment; filename="name-of-report.csv" Body: "category_0", "category_0_name", "category", "category_name", "impression" "11", "Sports","","",6598 "","","111","Football",8456 "","","112","Tennis",1574 "","","113","Basketball",567 | https://docs.videoplaza.com/oadtech/ad_serving/dg/rest_custom_reporting_fetch_report_results.html | 2020-05-25T15:31:57 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.videoplaza.com |
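For illustration, a minimal Python client sketch based on the example above (the request is only built here, not sent; substitute your own report ID and API key):

```python
import csv
import io
import urllib.request

def build_result_request(report_id, api_key):
    """Build (but do not send) the GET request for a report's results.
    The report must have status READY before its results can be fetched."""
    url = "https://api.videoplaza.com/v2/reports/%s/result" % report_id
    return urllib.request.Request(
        url, headers={"Accept": "text/plain", "x-o-api-key": api_key})

def parse_result(csv_body):
    """Parse the CSV success body into a list of rows (first row = columns)."""
    return list(csv.reader(io.StringIO(csv_body), skipinitialspace=True))
```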
EmuCast supports a 'Log In using SSO (Single Sign-On)' approach for customer authentication, allowing one username and password to be used for easy access to your EmuCast account each time.
In general terms, SSO is better for users because:
They get to use and store their username and password for many websites with a single, secure and trusted identity provider.
It takes less time for them to log into a new website. The identity provider's username and password are all they need, instead of having to sign up with their name and contact details from scratch and verify their email.
They know what information is being shared.
They only have to remember one username and password to access many websites.
We've built our SSO login model around allowing customers to utilise their existing IDs and passwords from popular SaaS tools such as Google, Twitter, Facebook, and GitHub to authenticate when logging into EmuCast.
Once authenticated you're always logged in unless you choose to log out, change your provider details, or wish to authenticate via a different SaaS tool account.
Whether you log in via the EmuCast desktop client app or the web app, the login process is the same.
Step 1. Select 'Login' on the web app or click on your 'EmuCast desktop client app' icon.
Note: This will launch the EmuCast authentication process.
Step 2. Choose your preferred SaaS tool mode of authentication and click on the appropriate 'Continue with (Google, Twitter, Facebook, Github)' account button to commence authentication.
Step 3. If you haven't logged into EmuCast before, select your SaaS tool provider account ID and enter your password. If you have previously logged in, you'll be seamlessly authenticated and passed through to your EmuCast account without needing to enter ID credentials. You're now always logged in.
Note that the following rules apply.
Seamless authentication is achieved
If you're logging in using the same provider SaaS tool account selected at your last EmuCast login, you'll be seamlessly authenticated and logged into your account without entering any password.
Required to Authenticate again at EmuCast login
If logging in using a different SaaS tool account from your last login, you are required to enter your SaaS tool provider password to authenticate.
If you have multiple SaaS tool accounts, i.e. multiple Google accounts, then each time you log in you'll need to select your chosen Google account.
If the same Google account was selected at your last login, you'll be seamlessly authenticated and logged in without a password.
If you choose a different Google account than at your last login, you are required to re-authenticate: select the account and enter your password.
If you change your ID email name with your SaaS tool provider
If your SaaS tool accounts use different passwords
Yes: if you intend to log in and authenticate to EmuCast each time by switching between provider SaaS tool accounts, your email IDs need to be the same. If, for example, your Google or Twitter accounts use different email IDs, EmuCast will assume you are a different customer or account and fail your login attempt.
If you change any provider SaaS tool account ID (email) and/or password details, your next EmuCast login will ask you to re-validate your ID and password credentials. Logins after that will again be seamless when using the same SaaS tool account.
If you keep your passwords the same across all your SaaS tool provider (Google, Twitter, Facebook, GitHub) accounts, then you have the freedom to use any SaaS account to log in and authenticate to EmuCast.
If, however, your passwords differ across accounts, you'll need to enter your password during the EmuCast login authentication process whenever you switch accounts.
If you have more than one account with your SaaS tool provider, when you attempt to authenticate during the EmuCast login process you are prompted to choose your preferred SaaS tool account each time. You are then seamlessly authenticated and logged into your EmuCast account.
For example: if you have multiple Google accounts, when you attempt to authenticate during the EmuCast login process, after selecting the 'Continue with Google' button you'll be prompted to choose which account to log in with.
If you used a 'different' account from your last login, you'll be prompted to re-enter your password for that Google account.
If you used the 'same' account as at your last login, you'll be seamlessly authenticated and passed through to your EmuCast account without entering a password.
You're free to change your external SaaS tool passwords as required with no impact to your EmuCast account. The only impact is that at your next EmuCast login attempt you'll be prompted to re-confirm your ID and password, after which you'll again be seamlessly authenticated at future logins.
You are free to log out of an active EmuCast session like any other application. Once logged out, the next time you log back into EmuCast you will be prompted to re-authenticate. To re-authenticate:
Step 1. Select 'Login' on the web app or click your 'EmuCast desktop client app' icon to launch the authentication process, where you'll be prompted to re-authenticate.
Step 2. Choose your preferred authentication method (Google, Twitter, Facebook, GitHub).
If you have previously logged in using one of these SaaS tools, e.g. Google, upon clicking you'll be seamlessly logged into EmuCast and ready to video conference.
If you select to log in via a SaaS tool not previously used for authentication, you'll be asked to enter your ID and password once.
We are making login authentication to EmuCast as effortless as possible and will continue to build on our current login process. If you have specific requirements, let's talk.
Let's talk, more information on this will be coming soon. | https://docs.emucast.com/how-to/log-in | 2020-05-25T13:19:30 | CC-MAIN-2020-24 | 1590347388758.12 | [] | docs.emucast.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.