12.2.8.2.8. PRIMARY_IDNUM
Integer.
Which ID number type should be considered the “internal” (primary) ID number? If specified, only tasks with this ID number present will be exported.
Must be specified for HL7 v2 and FHIR messages.
May be blank for file and e-mail transmission.
For (e.g.) file/e-mail transmission, this does not control the behaviour of anonymous tasks, which are instead controlled by INCLUDE_ANONYMOUS (see below).
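For illustration, a recipient definition in the server configuration file might set this option as follows (the section name here is made up, and option names other than PRIMARY_IDNUM may differ between CamCOPS versions):

[recipient:my_hl7_feed]
TRANSMISSION_METHOD = hl7
# Export only tasks that carry ID number type 1
PRIMARY_IDNUM = 1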
Viewing Logs
Often you'll want to access logs from the services that Local Server provides, for example PHP error logs, Nginx access logs, or MySQL logs. To do so, run the composer server logs <service> command. <service> can be any of php, nginx, db, elasticsearch, s3, xray, cavalcade or redis. This command will tail the logs (live update). To exit the log view, press Ctrl+C.
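For example, to follow the PHP error logs and then the Nginx access logs (service names as listed above):

composer server logs php
composer server logs nginx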
Local Server provides these commands as aliases to the underlying
docker logs command, so you may alternatively list out running containers with
docker ps and then follow the logs for any individual running container. This lets you monitor logs from any working directory without relying on the
composer server alias. To monitor the logs of the Traefik proxy which routes requests between multiple Local Server instances, for example, you would run
docker logs docker_proxy_1 --follow.
Installation (UNIX or OS X)
On Mac OS X and UNIX-like operating systems, such as Linux, you have a choice of using an automatic installer script (if you’re not using JRuby for anything else), or manual installation.
Automatic installation
Check Java is installed
To check Java is installed, in a Terminal window, type
java -version
You must use Java 8, build 212 or later. The version number will be shown as ‘1.8’. (Later versions of Java will be supported when the JRuby runtime is compatible.)
If you get an error, or need to update to a recent version of Java 8, download Java and install it in the default folder. You can get a free Java JVM from Adoptium. Repeat the
java -version command to check it installed correctly.
If you are using a Linux system, your package manager should be able to install Java 8 for you.
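For example, on a Debian- or Ubuntu-based system the install is typically a single command (the exact package name varies by distribution and version):

sudo apt-get install openjdk-8-jdk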
Run the automatic installation script
Open a new Terminal window and download the
haplo_plugin_install.sh installation script:
curl -O
Before executing the script, review the contents in your text editor of choice. The script downloads the binary release of JRuby into ~/haplo-dev-support/haplo-plugin, installs the haplo ruby gem, and appends the new jruby/bin directory to your PATH by adding a line to the ~/.profile of your current user.
To install, run the script with:
sh haplo_plugin_install.sh
After installation, open a new Terminal window or run source ~/.profile, then type:

haplo-plugin --help

to ensure installation was successful.
If you receive a command not found error, check the output of echo $PATH. It should contain the jruby/bin directory from the haplo-dev-support/haplo-plugin directory. If it does not, you may need to move the installer-added line from ~/.profile to ~/.bash_profile or ~/.bash_login.
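For example, assuming the installer's default location (adjust the path if yours differs):

# Move the PATH line into the file your login shell actually reads
echo 'export PATH="$HOME/haplo-dev-support/haplo-plugin/jruby/bin:$PATH"' >> ~/.bash_profile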
Manual installation
Check Java is installed
In the Terminal window, type
java -version
You must have Java 8 or later. The version number shown will be prefixed with ‘1.’, so Java 8’s version will be shown as ‘1.8’.
If you get an error, or need to update to Java 8, download Java and install it in the default folder. Repeat the
java -version command to check it installed correctly.
If you are using a Linux system, your package manager should be able to install Java 8 for you.
Download JRuby
Download the current release of JRuby, which must be version 9.2.17.0 or later. Choose the “binary .tar.gz” version.
Decompress the downloaded file, then rename the extracted folder to
jruby (without the version number):
tar zxf jruby-bin-9.2.17.0.tar.gz
mv jruby-9.2.17.0 jruby
Install the Plugin Tool
The Plugin Tool is distributed as a Ruby Gem.
Return to the Terminal window you opened. Type
export PATH=`pwd`/jruby/bin:$PATH
jgem install haplo
(This assumes you’re running these commands with the current working directory set to the directory containing
jruby.)
Create a project folder
Create a folder inside your working folder, for example /Users/developer/haplo-development/example-project, and cd to it. In the Terminal window, type

mkdir example-project
cd example-project
Check the installation works
Type
haplo-plugin --help to check the plugin tool is installed correctly.
Persist PATH across sessions
If you installed JRuby manually, you will either need to set your PATH every time you open a new Terminal window, or configure your system to automatically append the jruby/bin directory to the PATH.
To do so, add:
export PATH=/Users/developer/haplo-development/jruby/bin:$PATH
to your shell configuration file.
Javascript
The Javascript version of Money Button is very similar to the HTML version, but with a few key differences. First, values can be typed instead of necessarily being strings. For instance, some properties are boolean, some are objects, and some are functions. Second, properties are camelCased instead of being prefixed with 'data-'. Third, the button can be updated dynamically by rendering it again.
(The Javascript version of the button is actually what the HTML version is using under the hood.)
To use the Javascript Money Button, first add this script somewhere inside your HTML:
<script src=""></script>
The moneyButton object

The script defines a global object called moneyButton. It provides only one function: render.
render
<div id='my-money-button'></div>
<script>
  const div = document.getElementById('my-money-button')
  moneyButton.render(div, {
    to: "[email protected]",
    amount: "1",
    currency: "USD",
    label: "Wait...",
    clientIdentifier: "some public client identifier",
    buttonId: "234325",
    buttonData: "{}",
    type: "tip",
    onPayment: function (arg) { console.log('onPayment', arg) },
    onError: function (arg) { console.log('onError', arg) }
  })
</script>
moneyButton.render takes two parameters. The first one is a DOM node. The button is going to be placed inside that DOM node. The second one is an object with options.
The available options are:
All the options are matched with the attributes of the HTML API, and have the exact same behavior. The HTML version uses the Javascript version under the hood.
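For comparison, the same button expressed through the HTML API would use data- prefixed attributes on a div (a sketch based on the naming convention described above):

<div class="money-button"
     data-to="[email protected]"
     data-amount="1"
     data-currency="USD">
</div>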
to

This attribute works together with amount and currency. If one of them is present the other two have to be present too.

If this attribute is present then the outputs attribute cannot be present.
amount and currency

amount is a decimal number expressed as a string. These attributes work together with to. If any of the three is present, the other two have to be present too.
label

The label of the button.
successMessage

The message displayed after a successful payment. Defaults to "It's yours!".
outputs

This attribute is used to specify a list of outputs on the BSV transaction. This is what you want to use if you want to send to multiple different people at the same time, or "multiple recipients." It can't be used at the same time as to, amount or currency.
outputs is an array of output objects. An example of a button that pays to three addresses looks like this:
moneyButton.render(div, {
  outputs: [
    { address: "[email protected]", amount: "0.085", currency: "USD" },
    { address: "[email protected]", amount: "0.075", currency: "USD" },
    { address: "[email protected]", amount: "0.065", currency: "USD" }
  ]
})
An example of a button that pays to two users looks like this:
moneyButton.render(div, {
  outputs: [
    { userId: "6", amount: "0.085", currency: "USD" },
    { userId: "1040", amount: "0.015", currency: "USD" }
  ]
})
clientIdentifier

You can create an app, which includes a client and the associated clientIdentifier, on the apps page in your user settings. Please see the apps documentation (api-applications.md).
buttonId

An identifier associated with this particular button; it is included with the resulting payment so you can tell which button a payment came from.
buttonData

This attribute can be any string, but is meant to be a valid JSON string. The user can set arbitrary data here that is associated with the payment, sent on the webhooks, and retrieved with the API.
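For example, a button might attach an order reference to the payment like this (the JSON fields here are arbitrary illustrations):

moneyButton.render(div, {
  to: "[email protected]",
  amount: "0.01",
  currency: "USD",
  // any JSON string; it comes back on webhooks and via the API
  buttonData: JSON.stringify({ orderId: "12345", sku: "sticker-pack" })
})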
onPayment
A function that is called when the user makes a successful payment.
Example:
function myCustomCallback (payment) {
  console.log('A payment has occurred!', payment)
}

moneyButton.render(div, {
  to: "[email protected]",
  amount: "0.085",
  currency: "USD",
  onPayment: myCustomCallback
})
The payment attribute is a javascript object with the following attributes:
The function is always called in the context of the window object.
You can make a simple pay wall using the onPayment callback. An example is as follows:
<div id="my-button"></div> <div id="my-hidden-content"></div> <script> function displayHiddenContent (payment) { // be sure to validate the transaction - does it have the right outputs? document.getElementById('my-hidden-content').innerHTML = 'Hidden content.' } moneyButton.render(document.getElementById('my-button'), { to: "1040", amount: "0.01", currency: "USD", onPayment: displayHiddenContent }) </script>
Note that this simple example is not very secure. If you want to make a secure pay wall, you should use the api-client and/or webhooks.
onError

A function that is called when an error occurs during the payment. It is not called if there is a problem with the parameters of the button or a compatibility problem.
<div id="my-button"></div> <script> function myCustomCallback (error) { console.log(`An error has occurred: ${error}`) } moneyButton.render(document.getElementById('my-button'), { to: "1040", amount: "0.01", currency: "USD", onError: myCustomCallback }) </script>
The parameter received by the function is the description of the error.
The function is always called in the context of the window object.
onLoad
A function that is called when the button has finished loading.
<div id="my-button"></div> <script> function myCustomCallback () { console.log(`The button has loaded.`) } moneyButton.render(document.getElementById('my-button'), { to: "1040", amount: "0.01", currency: "USD", onLoad: myCustomCallback }) </script>
editable
When this attribute is true the button is displayed in an editable mode, allowing the user to set the amount of the transaction before paying. When this attribute is set to true the values of to, amount, currency and outputs are ignored.
disabled
When this attribute is
true the button is displayed in a disabled mode where it
cannot be swiped.
devMode
This attribute is
false by default. If it is set to
true the button becomes
a dummy component. It doesn't execute any callback and doesn't interact with the
backend at all. Instead it always succeeds.
preserveOrder
The preserveOrder property is available to fix the order of inputs and
outputs.
If
preserveOrder is set to
true, the transaction will have outputs in the
same order as they appear in the outputs attribute.
New features
- gRPC
- gRPC error reporting is now configurable
- Response codes, component type, and method type are now recorded as attributes.
- The agent now reports the gRPC status code rather than "translating" to HTTP status codes.
- Vert.x
The Java agent now provides visibility into your applications built using Vert.x 3.8.
- XML custom instrumentation
The custom XML instrumentation XSD has been enhanced to include support for specifying leaf tracers.
- Class Histogram
- A new Class Histogram extension is now available to report heap memory details as events.
- Jedis
- Added support for Jedis 3.0.0 and higher. You can now see your Jedis calls in breakdowns in the overview chart, entries in the Databases tab, and segments in transaction traces.
- Lettuce
- Instrumentation modules for Lettuce 4 and 5 are now available via the Java agent incubator.
New OSS SDK
We now have an open source Telemetry SDK for Java for sending telemetry data to New Relic. The current SDK supports sending dimensional metrics to the Metric API and spans to the Trace API.
Fixes
The Cameo Collaborator for Teamwork Cloud documentation consists of the following guides:
- Installation and upgrade guide
Guides you through the installation process of Cameo Collaborator for TWC and explains how to install the Cameo Collaborator Publisher Plugin.
- User Guide
Introduces the main features of Cameo Collaborator for TWC and provides guidelines on how to use it.
- Administrator Guide
Provides instructions on how to apply a new license key and set up the product, and describes other administrator-related tasks.
action=Deny | rare src_port
A more complex example of the top command
Say you're indexing an alert log from a monitoring system, and you have two fields:
msg is the message, such as CPU at 100%.

mc_host is the host that generates the message, such as log01.
How do you get a report that displays the top msg and the values of mc_host that sent them, as a table of each host's top message and its count?
To do this, set up a search that finds the top message per mc_host (using limit=1 to only return one) and then sort by the message count in descending order:
sourcetype=alert_log | top 1 msg by mc_host | sort -count
SQLAlchemy 1.0 Documentation
Schema Definition Language
This section references SQLAlchemy schema metadata, a comprehensive system of describing and inspecting database schemas; a short usage sketch follows the contents list below.
- Describing Databases with MetaData
- Reflecting Database Objects
- Column Insert/Update Defaults
- Defining Constraints and Indexes
- Customizing DDL | http://docs.sqlalchemy.org/en/rel_1_0/core/schema.html?highlight=onupdate | 2016-07-23T11:06:21 | CC-MAIN-2016-30 | 1469257822172.7 | [] | docs.sqlalchemy.org |
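For example, a minimal sketch of describing a table with MetaData (the table and column names here are illustrative):

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()

# Describe a table; this populates the MetaData registry and can later be
# used to emit CREATE TABLE statements or to build SQL expressions.
user = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50), nullable=False),
)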
rev
The rev attribute specifies a reverse link relationship: how the linked resource relates back to the current document (the inverse of rel). It is obsolete in HTML5, where rel should be used instead.
PhotoEditor is a set of Web controls that provides rich functionality for online editing of photos. Its main goal is to give you the possibility to easily add a convenient photo editing tool to your ASP.NET-based Web application.
By default, PhotoEditor is best fit for online photofinishing websites. However it can easily be used or configured for any other kind of photo-oriented Web applications in any other area - photo content stores, photo sharing, online galleries, online auctions, insurance, real estate, and whatever else.
Features
The most important features delivered by PhotoEditor are:
- Modular architecture of PhotoEditor allows breaking it into separate controls and rearranging them in your Web page the way you like.
- Uses advantages of ASP.NET AJAX - a cutting edge technology which allows to build highly interactive user interface in Web applications.
- The appearance of PhotoEditor can be easily changed using CSS skins. That simplifies integration of this product with your Web site.
- The API of PhotoEditor allows creating your own effects, so if there is something important to you that is missing in the standard effect set, you can easily add it.
- PhotoEditor is shipped with lots of ready-to-use effects and actions that the user can take to edit photos. This action set also contains such effects as:
- Crop
- Red-eye removal
- Rotate and flip
- Brightness/contrast enhancement (both automatic and manual)
- Add borders (simple lines, effect-based, custom images)
- Create calendars
- Color tinting
- Various artistic effects
Unnecessary effects can be hidden by adjusting a single configuration file. If some effects are missing, new effects can be easily implemented and added to PhotoEditor!
Fig. 1. Photo Editor screenshot
How This Manual Is Organized
This manual contains four large sections. They are:
- Getting Started. This section contains the essential information about PhotoEditor and its usage. It is highly recommended to read this section before using our product, as it covers the most important topics.
- Using PhotoEditor. This section contains a more advanced explanation of PhotoEditor usage. Although topics in this section contain code snippets, they are more concentrated on general principles rather than practical implementations.
- How to.... This section contains sample solutions for the most frequent tasks for which PhotoEditor can be used. The topics of this section contain a lot of code snippets.
- Reference. This section contains reference material for client-side and server-side components and for CSS rules used in appearance customization.
About Plug-ins
Plug-Ins allow you to extend the functionality of Quicksilver by making it aware of other apps, such as iTunes, or other resources, such as Safari's bookmarks. They can also add functionality, as in the case of mouse triggers. Below is a list of the available plug-ins and links to some basic information and documentation. Some documentation is also available via the Plug-ins preference pane in Quicksilver's Preferences window. This list may be out of date, so always check the Plug-ins preference pane for the most recent available plug-ins. (For users with b36 or earlier, please check the old plug-ins page.) For discussion, check out the Add-on forums.
Plug-in List
See Also:
Recently Updated
- An error occured while fetching this feed: | http://docs.blacktree.com/quicksilver/plug-ins/plug-ins | 2009-07-04T01:47:21 | crawl-002 | crawl-002-007 | [array(['http://blacktree.com/style/images/sectionicons/docs-icon.png',
None], dtype=object) ] | docs.blacktree.com |
How to reset the WordPress administrator’s password?
Step 1: Login to WordPress, as an administrator, using any WRONG password
What script from the WordPress folders checks if you are using the right password for the user “admin”? Let’s find out. Let’s try to login with the wrong password, for instance with the password “ak45mjg385v3543knj6y23″. The login page will display the error message “Incorrect password.”
Now, what file did generate that error message? We search all files from our WordPress folder for the exact phrase “Incorrect password.”. Only one file contains this phrase - the filename called
“YourWordpressFolder\wp-includes\pluggable.php”.
We search this file for the “Incorrect password.” string.
What do we have on line 450? The test.
if ( !wp_check_password($password, $user->user_pass, $user->ID) ) {
In plain English, it says : If the password is not good, then display error message, else log the user in.
We just change the above sentence to:
If the password is good, then display error message, else log the user in.
In other words, any WRONG password will get you logged.
We just have to remove the “not” word, which in PHP is the “!” character.
So, line 450 will become:
if ( wp_check_password($password, $user->user_pass, $user->ID) ) {
We upload the modified file and we login as admin, using any WRONG password.
Step 2: Change the password
Dashboard -> Users -> Your profile -> New password
Nobody will ask you for the old one
Step 3 : Change back the pluggable.php file and upload it.
Note : if you are very concerned with security, do step 3 before step 2. After you logged, nobody will ask you again for your password.
Please feel free to comment / ask questions. | http://wp-docs.com/wordpress/level-5-modify-the-code/how-to-reset-the-wordpress-administrators-password/ | 2009-07-04T01:48:22 | crawl-002 | crawl-002-007 | [array(['http://wp-docs.com/wp-includes/images/smilies/icon_smile.gif',
':)'], dtype=object) ] | wp-docs.com |
Download the Xcode Template and put it in
(~)/Library/Application Support/Apple/Developer Tools/Project Templates/
Create the folder if needed.
In Xcode, you need to create a Source Tree that points to your Quicksilver Frameworks directory. Go to the Preferences > Source Trees section, add a new entry with the setting name as QSFrameworks. Set the path for this item to /Applications/Quicksilver.app/Contents/Frameworks/ or the equivalent.
Plugins are loaded from any of the standard locations as well as from the current directory, so as long as you launch from Xcode it should work. | http://docs.blacktree.com/quicksilver/development/qsplugin_template | 2009-07-04T01:48:29 | crawl-002 | crawl-002-007 | [] | docs.blacktree.com |
How to display a nice “[... read more]” message
If you use the standard WorPress distribution, then after the excerpt you see something like this: [...]
There is no link inside the … and there is no “more” word.
I want to change [...] into a clickable [... read more], which will lead to the actual post.
In the /wp-includes/formatting.php file, search for [...] (should be around [... read more] | http://wp-docs.com/ | 2009-07-04T01:49:20 | crawl-002 | crawl-002-007 | [] | wp-docs.com |
Modular 2 2
The instrumentation level defines the requirements for implementing JMX manageable resources. A JMX manageable resource can be virtually anything, including applications, service components, devices, and so on. The manageable resource exposes a Java object or wrapper that describes its manageable features, which makes the resource instrumented so that it can be managed by JMX-compliant applications.
The user provides the instrumentation of a given resource using one or more managed beans, or MBeans. There are four varieties of MBean implementations: standard, dynamic, model, and open. The differences between the various MBean types is discussed in Managed Beans or MBeans.
The instrumentation level also specifies a notification mechanism. The purpose of the notification mechanism is to allow MBeans to communicate changes with their environment. This is similar to the JavaBean property change notification mechanism, and can be used for attribute change notifications, state change notifications, and so on.
The agent level defines the requirements for implementing agents. Agents are responsible for controlling and exposing the managed resources that are registered with the agent. By default, management agents are located on the same hosts as their resources. This collocation is not a requirement.
The agent requirements make use of the instrumentation level to define a standard MBeanServer management agent, supporting services, and a communications connector. JBoss provides both an html adaptor as well as an RMI adaptor.
The JMX agent can be located in the hardware that hosts the JMX manageable resources when a Java Virtual Machine (JVM) is available. This is how the JBoss server uses the MBeanServer. A JMX agent does not need to know which resources it will serve. JMX manageable resources may use any JMX agent that offers the services it requires.
Managers interact with an agent's MBeans through a protocol adaptor or connector, as described in the Section 2.1.3, “Distributed Services Level” in the next section. The agent does not need to know anything about the connectors or management applications that interact with the agent and its MBeans.
The JMX specification notes that a complete definition of the distributed services level is beyond the scope of the initial version of the JMX specification. This was indicated by the component boxes with the horizontal lines in Figure 2.2, “The Relationship between the components of the JMX architecture”. The general purpose of this level is to define the interfaces required for implementing JMX management applications or managers. The following points highlight the intended functionality of the distributed services level as discussed in the current JMX specification.
Provide an interface for management applications to interact transparently with an agent and its JMX manageable resources through a connector
Exposes a management view of a JMX agent and its MBeans by mapping their semantic meaning into the constructs of a data-rich protocol (for example HTML or SNMP)
Distributes management information from high-level management platforms to numerous JMX agents
Consolidates management information coming from numerous JMX agents into logical views that are relevant to the end user's business operations
Provides security
It is intended that the distributed services level components will allow for cooperative management of networks of agents and their resources. These components can be expanded to provide a complete management application.
This section offers an overview of the instrumentation and agent level components. The instrumentation level components include the following:
MBeans (standard, dynamic, open, and model MBeans)
Notification model elements
MBean metadata classes
The agent level components include:
MBean server
Agent services
An MBean is a Java object that implements one of the standard MBean interfaces and follows the associated design patterns. The MBean for a resource exposes all necessary information and operations that a management application needs to control the resource.
The scope of the management interface of an MBean includes the following:
Attribute values that may be accessed by name
Operations or functions that may be invoked
Notifications or events that may be emitted
The constructors for the MBean's Java class
JMX defines four types of MBeans to support different instrumentation needs:
Standard MBeans: These use a simple JavaBean style naming convention and a statically defined management interface. This is the most common type of MBean used by JBoss.
Dynamic MBeans: These must implement the javax.management.DynamicMBean interface, and they expose their management interface at runtime when the component is instantiated for the greatest flexibility. JBoss makes use of Dynamic MBeans in circumstances where the components to be managed are not known until runtime.
Open MBeans: These are an extension of dynamic MBeans. Open MBeans rely on basic, self-describing, user-friendly data types for universal manageability.
Model MBeans: These are also an extension of dynamic MBeans. Model MBeans must implement the javax.management.modelmbean.ModelMBean interface. Model MBeans simplify the instrumentation of resources by providing default behavior. JBoss XMBeans are an implementation of Model MBeans.
We will present an example of a Standard and a Model MBean in the section that discusses extending JBoss with your own custom services.
JMX Notifications are an extension of the Java event model. Both the MBean server and MBeans can send notifications to provide information. The JMX specification defines the javax.management package Notification event object, NotificationBroadcaster event sender, and NotificationListener event receiver interfaces. The specification also defines the operations on the MBean server that allow for the registration of notification listeners.
There is a collection of metadata classes that describe the management interface of an MBean. Users can obtain a common metadata view of any of the four MBean types by querying the MBean server with which the MBeans are registered. The metadata classes cover an MBean's attributes, operations, notifications, and constructors. For each of these, the metadata includes a name, a description, and its particular characteristics. For example, one characteristic of an attribute is whether it is readable, writable, or both. The metadata for an operation contains the signature of its parameter and return types.
The different types of MBeans extend the metadata classes to be able to provide additional information as required. This common inheritance makes the standard information available regardless of the type of MBean. A management application that knows how to access the extended information of a particular type of MBean is able to do so.
A key component of the agent level is the managed bean server. Its functionality is exposed through an instance of the javax.management.MBeanServer. An MBean server is a registry for MBeans that makes the MBean management interface available for use by management applications. The MBean never directly exposes the MBean object itself; rather, its management interface is exposed through metadata and operations available in the MBean server interface. This provides a loose coupling between management applications and the MBeans they manage.
MBeans can be instantiated and registered with the MBeanServer by the following:
Another MBean
The agent itself
A remote management application (through the distributed services)
When you register an MBean, you must assign it a unique object name. The object name then becomes the unique handle by which management applications identify the object on which to perform management operations. The operations available on MBeans through the MBean server include the following:
Discovering the management interface of MBeans
Reading and writing attribute values
Invoking operations defined by MBeans
Registering for notifications events
Querying MBeans based on their object name or their attribute values
Protocol adaptors and connectors are required to access the MBeanServer from outside the agent's JVM. Each adaptor provides a view via its protocol of all MBeans registered in the MBean server the adaptor connects to. An example adaptor is an HTML adaptor that allows for the inspection and editing of MBeans using a Web browser. As was indicated in Figure 2.2, “The Relationship between the components of the JMX architecture”, there are no protocol adaptors defined by the current JMX specification. Later versions of the specification will address the need for remote access protocols in standard ways.
A connector is an interface used by management applications to provide a common API for accessing the MBean server in a manner that is independent of the underlying communication protocol. Each connector type provides the same remote interface over a different protocol. This allows a remote management application to connect to an agent transparently through the network, regardless of the protocol. The specification of the remote management interface will be addressed in a future version of the JMX specification.
Adaptors and connectors make all MBean server operations available to a remote management application. For an agent to be manageable from outside of its JVM, it must include at least one protocol adaptor or connector. JBoss currently includes a custom HTML adaptor implementation and a custom JBoss RMI adaptor.
The JMX agent services are objects that support standard operations on the MBeans registered in the MBean server. The inclusion of supporting management services helps you build more powerful management solutions. Agent services are often themselves MBeans, which allow the agent and their functionality to be controlled through the MBean server. The JMX specification defines the following agent services:
A dynamic class loading MLet (management applet) service: This allows for the retrieval and instantiation of new classes and native libraries from an arbitrary network location.
Monitor services: These observe an MBean attribute's numerical or string value, and can notify other objects of several types of changes in the target.
Timer services: These provide a scheduling mechanism based on a one-time alarm-clock notification or on a repeated, periodic notification.
The relation service: This service defines associations between MBeans and enforces consistency on the relationships.
Any JMX-compliant implementation will provide all of these agent services. However, JBoss does not rely on any of these standard agent services.
JBoss employs a class loading architecture that facilitates sharing of classes across deployment units and hot deployment of services and applications. Before discussing the JBoss specific class loading model, we need to understand the nature of Java's type system and how class loaders fit in.
Class loading is a fundamental part of all server architectures. Arbitrary services and their supporting classes must be loaded into the server framework. This can be problematic due to the strongly typed nature of Java. Most developers know that the type of a class in Java is a function of the fully qualified name of the class. However the type is also a function of the java.lang.ClassLoader that is used to define that class. This additional qualification of type is necessary to ensure that environments in which classes may be loaded from arbitrary locations would be type-safe.
However, in a dynamic environment like an application server, and especially in JBoss with its support for hot deployment, class cast exceptions, linkage errors and illegal access errors can show up in ways not seen in more static class loading contexts. Let's take a look at the meaning of each of these exceptions and how they can happen.
A java.lang.ClassCastException results whenever an attempt is made to cast an instance to an incompatible type. A simple example is trying to obtain a String from a List into which a URL was placed:
ArrayList array = new ArrayList();
array.add(new URL("file:/tmp"));
String url = (String) array.get(0);

java.lang.ClassCastException: java.net.URL
        at org.jboss.chap2.ex0.ExCCEa.main(Ex1CCE.java:16)
The ClassCastException tells you that the attempt to cast the array element to a String failed because the actual type was URL. This trivial case is not what we are interested in however. Consider the case of a JAR being loaded by different class loaders. Although the classes loaded through each class loader are identical in terms of the bytecode, they are completely different types as viewed by the Java type system. An example of this is illustrated by the code shown in Example 2.1, “The ExCCEc class used to demonstrate ClassCastException due to duplicate class loaders”.
Example 2.1. The ExCCEc class used to demonstrate ClassCastException due to duplicate class loaders

/*
 * ... a ClassCastException that
 * results from classes loaded through
 * different class loaders.
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class ExCCEc
{
    public static void main(String[] args) throws Exception
    {
        ChapterExRepository.init(ExCCEc.class);
        // (the lines defining ucl0 and ucl0Log from j0.jar are missing from this excerpt)

        Class objClass = ucl0.loadClass("org.jboss.chap2.ex0.ExObj");
        StringBuffer buffer = new StringBuffer("ExObj Info");
        Debug.displayClassInfo(objClass, buffer, false);
        ucl0Log.info(buffer.toString());
        Object value = objClass.newInstance();

        File jar1 = new File(chapDir + "/j0.jar");
        Logger ucl1Log = Logger.getLogger("UCL1");
        ucl1Log.info("jar1 path: " + jar1.toString());
        URL[] cp1 = {jar1.toURL()};
        URLClassLoader ucl1 = new URLClassLoader(cp1);
        Thread.currentThread().setContextClassLoader(ucl1);

        Class ctxClass2 = ucl1.loadClass("org.jboss.chap2.ex0.ExCtx");
        buffer.setLength(0);
        buffer.append("ExCtx Info");
        Debug.displayClassInfo(ctxClass2, buffer, false);
        ucl1Log.info(buffer.toString());
        Object ctx2 = ctxClass2.newInstance();

        try {
            Class[] types = {Object.class};
            Method useValue = ctxClass2.getMethod("useValue", types);
            Object[] margs = {value};
            useValue.invoke(ctx2, margs);
        } catch (Exception e) {
            ucl1Log.error("Failed to invoke ExCtx.useValue", e);
            throw e;
        }
    }
}
Example 2.2. The ExCtx, ExObj, and ExObj2 classes used by the examples
package org.jboss.chap2.ex0;

import java.io.IOException;
import org.apache.log4j.Logger;
import org.jboss.util.Debug;

/**
 * A class used to demonstrate various class
 * loading issues
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class ExCtx
{
    ExObj value;

    // (constructor elided here; it is quoted in full later in this section)

    public Object getValue()
    {
        return value;
    }

    public void useValue(Object obj) throws Exception
    {
        Logger log = Logger.getLogger(ExCtx.class);
        StringBuffer buffer = new StringBuffer("useValue2.arg class");
        Debug.displayClassInfo(obj.getClass(), buffer, false);
        log.info(buffer.toString());
        buffer.setLength(0);
        buffer.append("useValue2.ExObj class");
        Debug.displayClassInfo(ExObj.class, buffer, false);
        log.info(buffer.toString());
        ExObj ex = (ExObj) obj;
    }

    void pkgUseValue(Object obj) throws Exception
    {
        Logger log = Logger.getLogger(ExCtx.class);
        log.info("In pkgUseValue");
    }
}
package org.jboss.chap2.ex0;

import java.io.Serializable;

/**
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class ExObj implements Serializable
{
    public ExObj2 ivar = new ExObj2();
}

package org.jboss.chap2.ex0;

import java.io.Serializable;

/**
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class ExObj2 implements Serializable
{
}
The ExCCEc.main method uses reflection to isolate the classes that are being loaded by the class loaders ucl0 and ucl1 from the application class loader. Both are set up to load classes from the output/chap2/j0.jar, the contents of which are:
[examples]$ jar -tf output/chap2/j0.jar
...
org/jboss/chap2/ex0/ExCtx.class
org/jboss/chap2/ex0/ExObj.class
org/jboss/chap2/ex0/ExObj2.class
We will run an example that demonstrates how a class cast exception can occur and then look at the specific issue with the example. See Appendix B, Book Example Installation for instructions on installing the examples accompanying the book, and then run the example from within the examples directory using the following command:
[examples]$ ant -Dchap=chap2 -Dex=0c run-example
...
[java] [ERROR,UCL1] Failed to invoke ExCtx.useValue
[java] java.lang.reflect.InvocationTargetException
...
[java]         at org.jboss.chap2.ex0.ExCCEc.main(ExCCEc.java:58)
[java] Caused by: java.lang.ClassCastException
[java]         at org.jboss.chap2.ex0.ExCtx.useValue(ExCtx.java:44)
[java]         ... 5 more
Only the exception is shown here. The full output can be found in the logs/chap2-ex0c.log file. At line 55 of ExCCEc.java we are invoking ExCtx.useValue(Object) on the instance loaded and created in lines 37-48 using ucl1. The ExObj passed in is the one loaded and created in lines 25-35 via ucl0. The exception results when the ExCtx.useValue code attempts to cast the argument passed in to an ExObj. To understand why this fails consider the debugging output from the chap2-ex0c.log file shown in Example 2.3, “The chap2-ex0c.log debugging output for the ExObj classes seen”.
Example 2.3. The chap2-ex0c.log debugging output for the ExObj classes seen
[INFO,UCL0] ExObj Info
org.jboss.chap2.ex0.ExObj(113fe2).ClassLoader=java.net.URLClassLoader@6e3914
..java.net.URLClassLoader@6e3914
...
[INFO,ExCtx] useValue2.ExObj class
org.jboss.chap2.ex0.ExObj(415de6).ClassLoader=java.net.URLClassLoader@30e280
..java.net.URLClassLoader@30e280
...
The first output prefixed with [INFO,UCL0] shows that the ExObj class loaded at line ExCCEc.java:31 has a hash code of 113fe2 and an associated URLClassLoader instance with a hash code of 6e3914, which corresponds to ucl0. This is the class used to create the instance passed to the ExCtx.useValue method. The second output prefixed with [INFO,ExCtx] shows that the ExObj class as seen in the context of the ExCtx.useValue method has a hash code of 415de6 and a URLClassLoader instance with an associated hash code of 30e280, which corresponds to ucl1. So even though the ExObj classes are the same in terms of actual bytecode since it comes from the same j0.jar, the classes are different as seen by both the ExObj class hash codes, and the associated URLClassLoader instances. Hence, attempting to cast an instance of ExObj from one scope to the other results in the ClassCastException.
This type of error is common when redeploying an application to which other applications are holding references to classes from the redeployed application. For example, a standalone WAR accessing an EJB. If you are redeploying an application, all dependent applications must flush their class references. Typically this requires that the dependent applications themselves be redeployed.
An alternate means of allowing independent deployments to interact in the presence of redeployment would be to isolate the deployments by configuring the EJB layer to use the standard call-by-value semantics rather than the call-by-reference JBoss will default to for components collocated in the same VM. An example of how to enable call-by-value semantics is presented in Chapter 5, EJBs on JBoss
A java.lang.IllegalAccessException is thrown when one attempts to access a method or member that visibility qualifiers do not allow. Typical examples are attempting to access private or protected methods or instance variables. Another common example is accessing package protected methods or members from a class that appears to be in the correct package, but is really not due to caller and callee classes being loaded by different class loaders. An example of this is illustrated by the code shown in Example 2.4, “The ExIAEd class used to demonstrate IllegalAccessException due to classes loaded by two class loaders”.
Example 2.4. The ExIAEd class used to demonstrate IllegalAccessException due to classes loaded by two class loaders

/*
 * ... IllegalAccessExceptions due to
 * classes loaded by two class loaders.
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class ExIAEd
{
    public static void main(String[] args) throws Exception
    {
        ChapterExRepository.init(ExIAEd.class);
        // (the ucl0 setup lines are missing from this excerpt)

        StringBuffer buffer = new StringBuffer("ExIAEd Info");
        Debug.displayClassInfo(ExIAEd.class, buffer, false);
        ucl0Log.info(buffer.toString());

        Class ctxClass1 = ucl0.loadClass("org.jboss.chap2.ex0.ExCtx");
        buffer.setLength(0);
        buffer.append("ExCtx Info");
        Debug.displayClassInfo(ctxClass1, buffer, false);
        ucl0Log.info(buffer.toString());

        Object ctx0 = ctxClass1.newInstance();
        try {
            Class[] types = {Object.class};
            Method useValue = ctxClass1.getDeclaredMethod("pkgUseValue", types);
            Object[] margs = {null};
            useValue.invoke(ctx0, margs);
        } catch (Exception e) {
            ucl0Log.error("Failed to invoke ExCtx.pkgUseValue", e);
        }
    }
}
The ExIAEd.main method uses reflection to load the ExCtx class via the ucl0 class loader while the ExIEAd class was loaded by the application class loader. We will run this example to demonstrate how the IllegalAccessException can occur and then look at the specific issue with the example. Run the example using the following command:
[examples]$ ant -Dchap=chap2 -Dex=0d run-example
Buildfile: build.xml
...
[java] [ERROR,UCL0] Failed to invoke ExCtx.pkgUseValue
[java] java.lang.IllegalAccessException: Class org.jboss.chap2.ex0.ExIAEd can
not access a member of class org.jboss.chap2.ex0.ExCtx with modifiers ""
[java]         at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:57)
[java]         at java.lang.reflect.Method.invoke(Method.java:317)
[java]         at org.jboss.chap2.ex0.ExIAEd.main(ExIAEd.java:48)
The truncated output shown here illustrates the IllegalAccessException. The full output can be found in the logs/chap2-ex0d.log file. At line 48 of ExIAEd.java the ExCtx.pkgUseValue(Object) method is invoked via reflection. The pkgUseValue method has package protected access and even though both the invoking class ExIAEd and the ExCtx class whose method is being invoked reside in the org.jboss.chap2.ex0 package, the invocation is seen to be invalid due to the fact that the two classes are loaded by different class loaders. This can be seen by looking at the debugging output from the chap2-ex0d.log file.
[INFO,UCL0] ExIAEd Info
org.jboss.chap2.ex0.ExIAEd(65855a).ClassLoader=sun.misc.Launcher$AppClassLoader@3f52a5
..sun.misc.Launcher$AppClassLoader@3f52a5
...
[INFO,UCL0] ExCtx Info
org.jboss.chap2.ex0.ExCtx(70eed6).ClassLoader=java.net.URLClassLoader@113fe2
..java.net.URLClassLoader@113fe2
...
The ExIAEd class is seen to have been loaded via the default application class loader instance sun.misc.Launcher$AppClassLoader@3f52a5, while the ExCtx class was loaded by the java.net.URLClassLoader@113fe2 instance. Because the classes are loaded by different class loaders, access to the package protected method is seen to be a security violation. So, not only is type a function of both the fully qualified class name and class loader, the package scope is as well.
An example of how this can happen in practice is to include the same classes in two different SAR deployments. If classes in the deployment have a package protected relationship, users of the SAR service may end up loading one class from SAR class loading at one point, and then load another class from the second SAR at a later time. If the two classes in question have a protected access relationship an IllegalAccessError will result. The solution is to either include the classes in a separate jar that is referenced by the SARs, or to combine the SARs into a single deployment. This can either be a single SAR, or an EAR that includes both SARs.
Loading constraints validate type expectations in the context of class loader scopes to ensure that a class X is consistently the same class when multiple class loaders are involved. This is important because Java allows for user defined class loaders. Linkage errors are essentially an extension of the class cast exception that is enforced by the VM when classes are loaded and used.
To understand what loading constraints are and how they ensure type-safety we will first introduce the nomenclature of the Liang and Bracha paper along with an example from this paper. There are two types of class loaders, initiating and defining. An initiating class loader is one that a ClassLoader.loadClass method has been invoked on to initiate the loading of the named class. A defining class loader is the loader that calls one of the ClassLoader.defineClass methods to convert the class byte code into a Class instance. The most complete expression of a class is given by <C,Ld>^Li, where C is the fully qualified class name, Ld is the defining class loader, and Li is the initiating class loader. In a context where the initiating class loader is not important the type may be represented by <C,Ld>, while when the defining class loader is not important, the type may be represented by C^Li. In the latter case, there is still a defining class loader, it's just not important what the identity of the defining class loader is. Also, a type is completely defined by <C,Ld>. The only time the initiating loader is relevant is when a loading constraint is being validated. Now consider the classes shown in Example 2.5, “Classes demonstrating the need for loading constraints”.
Example 2.5. Classes demonstrating the need for loading constraints
class <C,L1> {
    void f() {
        <Spoofed,L1>^L1 x = <Delegated,L2>^L1.g();
        x.secret_value = 1; // Should not be allowed
    }
}

class <Delegated,L2> {
    static <Spoofed,L2>^L2 g() {...}
}

class <Spoofed,L1> {
    public int secret_value;
}

class <Spoofed,L2> {
    private int secret_value;
}
The class C is defined by L1 and so L1 is used to initiate loading of the classes Spoofed and Delegated referenced in the C.f() method. The Spoofed class is defined by L1, but Delegated is defined by L2 because L1 delegates to L2. Since Delegated is defined by L2, L2 will be used to initiate loading of Spoofed in the context of the Delegated.g() method. In this example both L1 and L2 define different versions of Spoofed as indicated by the two versions shown at the end of Example 2.5, “Classes demonstrating the need for loading constraints”. Since C.f() believes x is an instance of <Spoofed,L1> it is able to access the private field secret_value of <Spoofed,L2> returned by Delegated.g() due to the 1.1 and earlier Java VM's failure to take into account that a class type is determined by both the fully qualified name of the class and the defining class loader.
Java addresses this problem by generating loader constraints to validate type consistency when the types being used are coming from different defining class loaders. For the Example 2.5, “Classes demonstrating the need for loading constraints” example, the VM generates a constraint Spoofed^L1 = Spoofed^L2 when the first line of method C.f() is verified to indicate that the type Spoofed must be the same regardless of whether the load of Spoofed is initiated by L1 or L2. It does not matter if L1 or L2, or even some other class loader defines Spoofed. All that matters is that there is only one Spoofed class defined regardless of whether L1 or L2 was used to initiate the loading. If L1 or L2 have already defined separate versions of Spoofed when this check is made a LinkageError will be generated immediately. Otherwise, the constraint will be recorded and when Delegated.g() is executed, any attempt to load a duplicate version of Spoofed will result in a LinkageError.
Now let's take a look at how a LinkageError can occur with a concrete example. Example 2.6, “A concrete example of a LinkageError” gives the example main class along with the custom class loader used.
Example 2.6. A concrete example of a LinkageError
package org.jboss.chap2.ex0;

// (the ExLE imports and javadoc header were lost in extraction)
public class ExLE
{
    public static void main(String[] args) throws Exception
    {
        ChapterExRepository.init(ExLE.class);
        // (the lines setting up the cp0 URL class path are missing from this excerpt)
        Ex0URLClassLoader ucl0 = new Ex0URLClassLoader(cp0);
        Thread.currentThread().setContextClassLoader(ucl0);
        Class ctxClass1 = ucl0.loadClass("org.jboss.chap2.ex0.ExCtx");
        Class obj2Class1 = ucl0.loadClass("org.jboss.chap2.ex0.ExObj2");
        // ... (the remainder of main, which loads ExObj2 through ucl1, calls
        // ucl0.setDelegate(ucl1) and instantiates ExCtx, is missing from this
        // excerpt; the surrounding text describes its lines 30-54)
    }
}

package org.jboss.chap2.ex0;

import java.net.URLClassLoader;
import java.net.URL;
import org.apache.log4j.Logger;

/**
 * A custom class loader that overrides the standard parent delegation
 * model
 *
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class Ex0URLClassLoader extends URLClassLoader
{
    private static Logger log = Logger.getLogger(Ex0URLClassLoader.class);
    private Ex0URLClassLoader delegate;

    public Ex0URLClassLoader(URL[] urls)
    {
        super(urls);
    }

    void setDelegate(Ex0URLClassLoader delegate)
    {
        this.delegate = delegate;
    }

    protected synchronized Class loadClass(String name, boolean resolve)
        throws ClassNotFoundException
    {
        Class clazz = null;
        if (delegate != null) {
            log.debug(Integer.toHexString(hashCode()) + "; Asking delegate to loadClass: " + name);
            clazz = delegate.loadClass(name, resolve);
            log.debug(Integer.toHexString(hashCode()) + "; Delegate returned: " + clazz);
        } else {
            log.debug(Integer.toHexString(hashCode()) + "; Asking super to loadClass: " + name);
            clazz = super.loadClass(name, resolve);
            log.debug(Integer.toHexString(hashCode()) + "; Super returned: " + clazz);
        }
        return clazz;
    }

    protected Class findClass(String name)
        throws ClassNotFoundException
    {
        Class clazz = null;
        log.debug(Integer.toHexString(hashCode()) + "; Asking super to findClass: " + name);
        clazz = super.findClass(name);
        log.debug(Integer.toHexString(hashCode()) + "; Super returned: " + clazz);
        return clazz;
    }
}
The key component in this example is the URLClassLoader subclass Ex0URLClassLoader. This class loader implementation overrides the default parent delegation model to allow the ucl0 and ucl1 instances to both load the ExObj2 class and then set up a delegation relationship from ucl0 to ucl1. At lines 30 and 31, the ucl0 Ex0URLClassLoader is used to load the ExCtx and ExObj2 classes. At line 45 of ExLE.main the ucl1 Ex0URLClassLoader is used to load the ExObj2 class again. At this point both the ucl0 and ucl1 class loaders have defined the ExObj2 class. A delegation relationship from ucl0 to ucl1 is then set up at line 51 via the ucl0.setDelegate(ucl1) method call. Finally, at line 54 of ExLE.main an instance of ExCtx is created using the class loaded via ucl0. The ExCtx class is the same as presented in Example 2.2, “The ExCtx, ExObj, and ExObj2 classes used by the examples”, and the constructor was:

public ExCtx() throws IOException
{
    value = new ExObj();
    Logger log = Logger.getLogger(ExCtx.class);
    StringBuffer buffer = new StringBuffer("ExCtx.ctor.ExObj");
    Debug.displayClassInfo(value.getClass(), buffer, false);
    log.info(buffer.toString());
}
Now, since the ExCtx class was defined by the ucl0 class loader, and at the time the ExCtx constructor is executed, ucl0 delegates to ucl1, line 24 of the ExCtx constructor involves the following expression which has been rewritten in terms of the complete type expressions:
<ExObj2,ucl0>^ucl0 obj2 = <ExObj,ucl1>^ucl0 value.ivar
This generates a loading constraint of ExObj2^ucl0 = ExObj2^ucl1 since the ExObj2 type must be consistent across the ucl0 and ucl1 class loader instances. Because we have loaded ExObj2 using both ucl0 and ucl1 prior to setting up the delegation relationship, the constraint will be violated and should generate a LinkageError when run. Run the example using the following command:
[examples]$ ant -Dchap=chap2 -Dex=0e run-example
Buildfile: build.xml
...
[java] java.lang.LinkageError: loader constraints violated when linking org/jboss/chap2/ex0/ExObj2 class
[java]         at org.jboss.chap2.ex0.ExCtx.<init>(ExCtx.java:24)
...
[java]         at java.lang.reflect.Constructor.newInstance(Constructor.java:274)
[java]         at java.lang.Class.newInstance0(Class.java:308)
[java]         at java.lang.Class.newInstance(Class.java:261)
[java]         at org.jboss.chap2.ex0.ExLE.main(ExLE.java:53)
As expected, a LinkageError is thrown while validating the loader constraints required by line 24 of the ExCtx constructor.
Debugging class loading issues comes down to finding out where a class was loaded from. A useful tool for this is the code snippet shown in Example 2.7, “Obtaining debugging information for a Class” taken from the org.jboss.util.Debug class of the book examples.
Example 2.7. Obtaining debugging information for a Class
Class clazz = ...;
StringBuffer results = new StringBuffer();
ClassLoader cl = clazz.getClassLoader();
results.append("\n" + clazz.getName() + "(" + Integer.toHexString(clazz.hashCode()) + ").ClassLoader=" + cl);
ClassLoader parent = cl;
while (parent != null) {
    results.append("\n.." + parent);
    URL[] urls = getClassLoaderURLs(parent);
    int length = urls != null ? urls.length : 0;
    for (int u = 0; u < length; u++) {
        results.append("\n...." + urls[u]);
    }
    if (showParentClassLoaders == false) {
        break;
    }
    if (parent != null) {
        parent = parent.getParent();
    }
}
CodeSource clazzCS = clazz.getProtectionDomain().getCodeSource();
if (clazzCS != null) {
    results.append("\n++++CodeSource: " + clazzCS);
} else {
    results.append("\n++++Null CodeSource");
}
The key items are shown in bold. The first is that every Class object knows its defining ClassLoader and this is available via the getClassLoader() method. The defines the scope in which the Class type is known as we have just seen in the previous sections on class cast exceptions, illegal access exceptions and linkage errors. From the ClassLoader you can view the hierarchy of class loaders that make up the parent delegation chain. If the class loader is a URLClassLoader you can also see the URLs used for class and resource loading.
The defining ClassLoader of a Class cannot tell you from what location that Class was loaded. To determine this you must obtain the java.security.ProtectionDomain and then the java.security.CodeSource. It is the CodeSource that has the URL location from which the class originated. Note that not every Class has a CodeSource. If a class is loaded by the bootstrap class loader then its CodeSource will be null. This will be the case for all classes in the java.* and javax.* packages, for example.
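For instance, the following fragment uses the standard java.security APIs to retrieve the origin of a class at runtime (SomeClass is a placeholder for whatever class you are investigating):

import java.net.URL;
import java.security.CodeSource;

// Where was this class loaded from?
CodeSource cs = SomeClass.class.getProtectionDomain().getCodeSource();
// getLocation() is null for bootstrap classes such as java.lang.String
URL location = (cs != null) ? cs.getLocation() : null;
System.out.println("SomeClass came from: " + location);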
Beyond that it may be useful to view the details of classes being loaded into the JBoss server. You can enable verbose logging of the JBoss class loading layer using a Log4j configuration fragment like that shown in Example 2.8, “An example log4j.xml configuration fragment for enabling verbose class loading logging”.
Example 2.8. An example log4j.xml configuration fragment for enabling verbose class loading logging
<appender name="UCL" class="org.apache.log4j.FileAppender"> <param name="File" value="${jboss.server.home.dir}/log/ucl.log"/> <param name="Append" value="false"/> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[%r,%c{1},%t] %m%n"/> </layout> </appender> <category name="org.jboss.mx.loading" additivity="false"> <priority value="TRACE" class="org.jboss.logging.XLevel"/> <appender-ref </category>
This places the output from the classes in the org.jboss.mx.loading package into the ucl.log file of the server configuration's log directory. Although it may not be meaningful if you have not looked at the class loading code, it is vital information needed for submitting bug reports or questions regarding class loading problems.
Now that we have the role of class loaders in the Java type system defined, let's take a look at the JBoss class loading architecture, illustrated in Figure 2.3, "The core JBoss class loading components".
The central component is the org.jboss.mx.loading.UnifiedClassLoader3 (UCL) class loader. This is an extension of the standard java.net.URLClassLoader that overrides the standard parent delegation model to use a shared repository of classes and resources. This shared repository is the org.jboss.mx.loading.UnifiedLoaderRepository3. Every UCL is associated with a single UnifiedLoaderRepository3, and a UnifiedLoaderRepository3 typically has many UCLs. A UCL may have multiple URLs associated with it for class and resource loading. Deployers use the top-level deployment's UCL as a shared class loader and all deployment archives are assigned to this class loader. We will talk about the JBoss deployers and their interaction with the class loading system in more detail later in Section 2.4.2, "JBoss MBean Services".
When a UCL is asked to load a class, it first looks to the repository cache it is associated with to see if the class has already been loaded. Only if the class does not exist in the repository will it be loaded into the repository by the UCL. By default, there is a single UnifiedLoaderRepository3 shared across all UCL instances. This means the UCLs form a single flat class loader namespace. The complete sequence of steps that occurs when the UnifiedClassLoader3.loadClass(String, boolean) method is called is as follows (a simplified sketch in Java is given after the list):
Check the UnifiedLoaderRepository3 classes cache associated with the UnifiedClassLoader3. If the class is found in the cache it is returned.
Else, ask the UnifiedClassLoader3 if it can load the class. This is essentially a call to the superclass URLClassLoader.loadClass(String, boolean) method to see if the class is among the URLs associated with the class loader, or visible to the parent class loader. If the class is found it is placed into the repository classes cache and returned.
Else, the repository is queried for all UCLs that are capable of providing the class based on the repository package name to UCL map. When a UCL is added to a repository an association between the package names available in the URLs associated with the UCL is made, and a mapping from package names to the UCLs with classes in the package is updated. This allows for a quick determination of which UCLs are capable of loading the class. The UCLs are then queried for the requested class in the order in which the UCLs were added to the repository. If a UCL is found that can load the class it is returned, else a java.lang.ClassNotFoundException is thrown.
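The following toy model illustrates the three steps. The class names and data structures here are simplified stand-ins chosen for this sketch, not the actual JBoss implementation:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// A toy repository: a class cache plus a package-name -> loaders index.
// In JBoss the index is maintained as loaders are added to the repository.
class ToyRepository
{
    private final Map cache = new HashMap();        // class name -> Class
    private final Map packageIndex = new HashMap(); // package prefix -> List of ToyLoader

    Class getCached(String name) { return (Class) cache.get(name); }
    void cacheClass(String name, Class c) { cache.put(name, c); }

    List loadersFor(String pkg)
    {
        List l = (List) packageIndex.get(pkg);
        return l != null ? l : Collections.EMPTY_LIST;
    }

    void register(String pkg, ToyLoader loader)
    {
        List l = (List) packageIndex.get(pkg);
        if (l == null) packageIndex.put(pkg, l = new ArrayList());
        l.add(loader); // preserves the order in which loaders were added
    }
}

// A toy loader following the repository-first lookup.
class ToyLoader extends ClassLoader
{
    private final ToyRepository repository;

    ToyLoader(ToyRepository repository, ClassLoader parent)
    {
        super(parent);
        this.repository = repository;
    }

    // Consult only this loader's own URLs and parent chain.
    Class loadLocally(String name) throws ClassNotFoundException
    {
        return super.loadClass(name);
    }

    public Class loadClass(String name) throws ClassNotFoundException
    {
        // Step 1: check the shared repository cache.
        Class c = repository.getCached(name);
        if (c != null) return c;

        // Step 2: try this loader (own URLs, then parent delegation).
        try {
            c = loadLocally(name);
            repository.cacheClass(name, c);
            return c;
        } catch (ClassNotFoundException ignored) {
        }

        // Step 3: ask the other loaders indexed under the class's package,
        // in the order they were added to the repository.
        int dot = name.lastIndexOf('.');
        String pkg = dot > 0 ? name.substring(0, dot + 1) : "";
        for (Iterator it = repository.loadersFor(pkg).iterator(); it.hasNext(); ) {
            ToyLoader other = (ToyLoader) it.next();
            if (other == this) continue;
            try {
                c = other.loadLocally(name);
                repository.cacheClass(name, c);
                return c;
            } catch (ClassNotFoundException ignored) {
            }
        }
        throw new ClassNotFoundException(name);
    }
}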
Another useful source of information on classes is the UnifiedLoaderRepository itself. This is an MBean that contains operations to display class and package information. The default repository is located under a standard JMX name of JMImplementation:name=Default,service=LoaderRepository, and its MBean can be accessed via the JMX console by following its link from the front page. The JMX console view of this MBean is shown in Figure 2.4, “The default class LoaderRepository MBean view in the JMX console”.
Two useful operations you will find here are getPackageClassLoaders(String) and displayClassInfo(String). The getPackageClassLoaders operation returns a set of class loaders that have been indexed to contain classes or resources for the given package name. The package name must have a trailing period. If you type in the package name org.jboss.ejb., the following information is displayed:
[org.jboss.mx.loading.UnifiedClassLoader3@e26ae7{
    url=file:/private/tmp/jboss-4.0.1/server/default/tmp/deploy/tmp11895jboss-service.xml,
    addedOrder=2}]
This is the string representation of the set. It shows one UnifiedClassLoader3 instance with a primary URL pointing to the jboss-service.xml descriptor. This is the second class loader added to the repository (shown by addedOrder=2). It is the class loader that owns all of the JARs in the lib directory of the server configuration (e.g., server/default/lib).
To view the information for a given class, use the displayClassInfo operation, passing in the fully qualified name of the class to view. For example, if we use org.jboss.jmx.adaptor.html.HtmlAdaptorServlet which is from the package we just looked at, the following description is displayed:
The information is a dump of the information for the Class instance in the loader repository if one has been loaded, followed by the class loaders that are seen to have the class file available. If a class is seen to have more than one class loader associated with it, then there is the potential for class loading related errors.
If you need to deploy multiple versions of an application you need to use deployment based scoping. With deployment based scoping, each deployment creates its own class loader repository in the form of a HeirarchicalLoaderRepository3 that looks first to the UnifiedClassLoader3 instances of the deployment units included in the EAR before delegating to the default UnifiedLoaderRepository3. To enable an EAR specific loader repository, you need to create a META-INF/jboss-app.xml descriptor as shown in Example 2.9, "An example jboss-app.xml descriptor for enabling scoped class loading at the EAR level".
Example 2.9. An example jboss-app.xml descriptor for enabling scoped class loading at the EAR level.
<jboss-app>
    <loader-repository>some.dot.com:loader=webtest.ear</loader-repository>
</jboss-app>
The value of the loader-repository element is the JMX object name to assign to the repository created for the EAR. This must be a unique and valid JMX ObjectName, but the actual name is not important.
The previous discussion of the core class loading components introduced the custom UnifiedClassLoader3 and UnifiedLoaderRepository3 classes that form a shared class loading space. The complete class loading picture must also include the parent class loader used by UnifiedClassLoader3s as well as class loaders introduced for scoping and other specialty class loading purposes. Figure 2.5, “A complete class loader view” shows an outline of the class hierarchy that would exist for an EAR deployment containing EJBs and WARs.
The following points apply to this figure:
System ClassLoaders: The System ClassLoaders node refers to either the thread context class loader (TCL) of the VM main thread or of the thread of the application that is loading the JBoss server if it is embedded.
ServerLoader: The ServerLoader node refers to a URLClassLoader that delegates to the System ClassLoaders and contains the following boot URLs:
All URLs referenced via the jboss.boot.library.list system property. These are path specifications relative to the libraryURL defined by the jboss.lib.url property. If there is no jboss.lib.url property specified, it defaults to jboss.home.url + /lib/. If there is no jboss.boot.library.list property specified, it defaults to jaxp.jar, log4j-boot.jar, jboss-common.jar, and jboss-system.jar.
The JAXP JAR which is either crimson.jar or xerces.jar depending on the -j option to the Main entry point. The default is crimson.jar.
The JBoss JMX jar and GNU regex jar, jboss-jmx.jar and gnu-regexp.jar.
Oswego concurrency classes JAR, concurrent.jar
Any JARs specified as libraries via -L command line options
Any other JARs or directories specified via -C command line options
Server: The Server node represents a collection of UCLs created by the org.jboss.system.server.Server interface implementation. The default implementation creates UCLs for the patchDir entries as well as the server conf directory. The last UCL created is set as the JBoss main thread context class loader. This will be combined into a single UCL now that multiple URLs per UCL are supported.
All UnifiedClassLoader3s: The All UnifiedClassLoader3s node represents the UCLs created by deployers for the deployment units (EARs, JARs, WARs, SARs and so on). These UCLs all share the single UnifiedLoaderRepository3 and so form a flat class loader namespace. There is a mechanism for scoping visibility based on EAR deployment units that we discussed in Section 2.2.2.4.2, "Scoping Classes". Use this mechanism if you need to deploy multiple versions of a class in a given JBoss server.
EJB DynClassLoader: The EJB DynClassLoader node is a subclass of URLClassLoader that is used to provide RMI dynamic class loading via the simple HTTP WebService. It specifies an empty URL[] and delegates to the TCL as its parent class loader. If the WebService is configured to allow system level classes to be loaded, all classes in the UnifiedLoaderRepository3 as well as the system classpath are available via HTTP.
EJB ENCLoader: The EJB ENCLoader node is a URLClassLoader that exists only to provide a unique context for an EJB deployment's java:comp JNDI context. It specifies an empty URL[] and delegates to the EJB DynClassLoader as its parent class loader.
Web ENCLoader: The Web ENCLoader node is a URLClassLoader that exists only to provide a unique context for a web deployment's java:comp JNDI context. It specifies an empty URL[] and delegates to the TCL as its parent class loader.
WAR Loader: The WAR Loader is a servlet container specific classloader that delegates to the Web ENCLoader as its parent class loader. The default behavior is to load from its parent class loader and then the WAR WEB-INF classes and lib directories. If the servlet 2.3 class loading model is enabled it will first load from its WEB-INF directories and then the parent class loader.
In its current form there are some advantages and disadvantages to the JBoss class loading architecture. Advantages include:
Classes do not need to be replicated across deployment units in order to have access to them.
Many future possibilities including novel partitioning of the repositories into domains, dependency and conflict detection, etc.
Disadvantages include:
Existing deployments may need to be repackaged to avoid duplicate classes. Duplication of classes in a loader repository can lead to class cast exceptions and linkage errors depending on how the classes are loaded.
Deployments that depend on different versions of a given class need to be isolated in separate EARs and a unique HeirarchicalLoaderRepository3 defined using a jboss-app.xml descriptor.
XMBeans are the JBoss JMX implementation version of the JMX model MBean. XMBeans have the richness of the dynamic MBean metadata without the tedious programming required by a direct implementation of the DynamicMBean interface. The JBoss model MBean implementation allows one to specify the management interface of a component through an XML descriptor, hence the X in XMBean. In addition to providing a simple mechanism for describing the metadata required for a dynamic MBean, XMBeans also allow for the specification of attribute persistence, caching behavior, and even advanced customizations like the MBean implementation interceptors. The high level elements of the jboss_xmbean_1_2.dtd for the XMBean descriptor are given in Figure 2.6, "The JBoss 1.0 XMBean DTD Overview (jboss_xmbean_1_2.dtd)".
The mbean element is the root element of the document containing the required elements for describing the management interface of one MBean (constructors, attributes, operations and notifications). It also includes an optional description element, which can be used to describe the purpose of the MBean, as well as an optional descriptors element which allows for persistence policy specification, attribute caching, etc.
The descriptors element contains all the descriptors for a containing element, as subelements. The descriptors suggested in the JMX specification as well as those used by JBoss have predefined elements and attributes, whereas custom descriptors have a generic descriptor element with name and value attributes as shown in Figure 2.7, "The descriptors element content model".
The key descriptors child elements include:
interceptors: The interceptors element specifies a customized stack of interceptors that will be used in place of the default stack. Currently this is only used when specified at the MBean level, but it could define a custom attribute or operation level interceptor stack in the future. The content of the interceptors element specifies a custom interceptor stack. If no interceptors element is specified the standard ModelMBean interceptors will be used. The standard interceptors are:
org.jboss.mx.interceptor.PersistenceInterceptor
org.jboss.mx.interceptor.MBeanAttributeInterceptor
org.jboss.mx.interceptor.ObjectReferenceInterceptor
When specifying a custom interceptor stack you would typically include the standard interceptors along with your own unless you are replacing the corresponding standard interceptor.
Each interceptor element content value specifies the fully qualified class name of the interceptor implementation. The class must implement the org.jboss.mx.interceptor.Interceptor interface. The interceptor class must also have either a no-arg constructor, or a constructor that accepts a javax.management.MBeanInfo.
The interceptor elements may have any number of attributes that correspond to JavaBean style properties on the interceptor class implementation. For each interceptor element attribute specified, the interceptor class is queried for a matching setter method. The attribute value is converted to the true type of the interceptor class property using the java.beans.PropertyEditor associated with the type. It is an error to specify an attribute for which there is no setter or PropertyEditor.
persistence: The persistence element allows the specification of the persistPolicy, persistPeriod, persistLocation, and persistName persistence attributes suggested by the JMX specification. The persistence element attributes are:
persistPolicy: The persistPolicy attribute defines when attributes should be persisted and its value must be one of
Never: attribute values are transient values that are never persisted
OnUpdate: attribute values are persisted whenever they are updated
OnTimer: attribute values are persisted based on the time given by the persistPeriod.
NoMoreOftenThan: attribute values are persisted when updated but no more often than the persistPeriod.
persistPeriod: The persistPeriod attribute gives the update frequency in milliseconds if the persistPolicy attribute is NoMoreOftenThan or OnTimer.
persistLocation: The persistLocation attribute specifies the location of the persistence store. Its form depends on the JMX persistence implementation. Currently this should refer to a directory into which the attributes will be serialized if using the default JBoss persistence manager.
persistName: The persistName attribute can be used in conjunction with the persistLocation attribute to further qualify the persistent store location. For a directory persistLocation the persistName specifies the file to which the attributes are stored within the directory.
currencyTimeLimit: The currencyTimeLimit element specifies the time in seconds that a cached value of an attribute remains valid. Its value attribute gives the time in seconds. A value of 0 indicates that an attribute value should always be retrieved from the MBean and never cached. A value of -1 indicates that a cache value is always valid.
display-name: The display-name element specifies the human friendly name of an item.
default: The default element specifies a default value to use when a field has not been set. Note that this value is not written to the MBean on startup as is the case with the jboss-service.xml attribute element content value. Rather, the default value is used only if there is no attribute accessor defined, and there is no value element defined.
value: The value element specifies a management attribute's current value. Unlike the default element, the value element is written through to the MBean on startup provided there is a setter method available.
persistence-manager: The persistence-manager element gives the name of a class to use as the persistence manager. The value attribute specifies the class name that supplies the org.jboss.mx.persistence.PersistenceManager interface implementation. The only implementation currently supplied by JBoss is the org.jboss.mx.persistence.ObjectStreamPersistenceManager which serializes the ModelMBeanInfo content to a file using Java serialization.
descriptor: The descriptor element specifies an arbitrary descriptor not known to JBoss. Its name attribute specifies the type of the descriptor and its value attribute specifies the descriptor value. The descriptor element allows for the attachment of arbitrary management metadata.
injection: The injection element describes an injection point for receiving information from the microkernel. Each injection point specifies the type and the set method to use to inject the information into the resource. The injection element supports the following attributes:
id: The id attribute specifies the injection point type. The current injection point types are:
MBeanServerType: An MBeanServerType injection point receives a reference to the MBeanServer that the XMBean is registered with.
MBeanInfoType: An MBeanInfoType injection point receives a reference to the XMBean ModelMBeanInfo metadata.
ObjectNameType: The ObjectName injection point receives the ObjectName that the XMBean is registered under.
setMethod: The setMethod attribute gives the name of the method used to set the injection value on the resource. The set method should accept values of the type corresponding to the injection point type.
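On the resource class side, an injection point resolves to an ordinary JavaBean-style setter. The following sketch shows a resource able to receive all three injection types; the method names are arbitrary choices that the setMethod attributes would have to match, and the parameter types are inferred from the descriptions above:

import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.modelmbean.ModelMBeanInfo;

public class InjectedResource
{
    private MBeanServer server;
    private ModelMBeanInfo info;
    private ObjectName name;

    // Matched by an injection element with id="MBeanServerType"
    public void setMBeanServer(MBeanServer server) { this.server = server; }

    // Matched by an injection element with id="MBeanInfoType"
    public void setMBeanInfo(ModelMBeanInfo info) { this.info = info; }

    // Matched by an injection element with id="ObjectNameType"
    public void setObjectName(ObjectName name) { this.name = name; }
}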
Note that any of the constructor, attribute, operation or notification elements may have a descriptors element to specify the specification defined descriptors as well as arbitrary extension descriptor settings.
The class element specifies the fully qualified name of the managed object whose management interface is described by the XMBean descriptor.
The constructor element(s) specifies the constructors available for creating an instance of the managed object. The constructor element and its content model are shown in Figure 2.8, “The XMBean constructor element and its content model”.
The key child elements are:
description: A description of the constructor.
name: The name of the constructor, which must be the same as the fully qualified name of the implementation class.
parameter: The parameter element describes a constructor parameter. The parameter element has the following attributes:
description: An optional description of the parameter.
name: The required variable name of the parameter.
type: The required fully qualified class name of the parameter type.
descriptors: Any descriptors to associate with the constructor metadata.
The attribute element(s) specifies the management attributes exposed by the MBean. The attribute element and its content model are shown in Figure 2.9, “The XMBean attribute element and its content model”.
The attribute element supported attributes include:
access: The optional access attribute defines the read/write access modes of an attribute. It must be one of:
read-only: The attribute may only be read.
write-only: The attribute may only be written.
read-write: The attribute is both readable and writable. This is the implied default.
getMethod: The getMethod attribute defines the name of the method which reads the named attribute. This must be specified if the managed attribute should be obtained from the MBean instance.
setMethod: The setMethod attribute defines the name of the method which writes the named attribute. This must be specified if the managed attribute value should be written through to the MBean instance.
The key child elements of the attribute element include:
description: A description of the attribute.
name: The name of the attribute as would be used in the MBeanServer.getAttribute() operation.
type: The fully qualified class name of the attribute type.
descriptors: Any additional descriptors that affect the attribute persistence, caching, default value, etc.
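Putting the pieces together, an attribute element declaring name JndiName, type java.lang.String, getMethod="getJndiName" and setMethod="setJndiName" simply resolves to a conventional getter/setter pair on the resource class, as in this minimal sketch:

public class MyService
{
    private String jndiName;

    // Called when a client reads the JndiName managed attribute
    public String getJndiName() { return jndiName; }

    // Called when a client writes the JndiName managed attribute
    public void setJndiName(String jndiName) { this.jndiName = jndiName; }
}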
The management operations exposed by the XMBean are specified via one or more operation elements. The operation element and its content model are shown in Figure 2.10, “The XMBean operation element and its content model”.
The impact attribute defines the impact of executing the operation and must be one of:
ACTION: The operation changes the state of the MBean component (write operation)
INFO: The operation should not alter the state of the MBean component (read operation).
ACTION_INFO: The operation behaves like a read/write operation.
The child elements are:
description: This element specifies a human readable description of the operation.
name: This element contains the operation's name.
parameter: This element describes the operation's signature.
return-type: This element contains a fully qualified class name of the return type from this operation. If not specified, it defaults to void.
descriptors: Any descriptors to associate with the operation metadata.
The notification element(s) describes the management notifications that may be emitted by the XMBean. The notification element and its content model is shown in Figure 2.11, “The XMBean notification element and content model”.
The child elements are:
description: This element gives a human readable description of the notification.
name: This element contains the fully qualified name of the notification class.
notification-type: This element contains the dot-separated notification type string.
descriptors: Any descriptors to associate with the notification metadata.
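On the resource side, one common way to emit a declared notification is the standard JMX broadcaster support class, as in the following sketch; the type string shown is an arbitrary placeholder that would have to match the notification-type element declared in the XMBean descriptor:

import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class NotifyingService extends NotificationBroadcasterSupport
{
    private long sequence;

    public void somethingHappened()
    {
        // The type string should match the descriptor's notification-type
        Notification n = new Notification("org.example.state.changed", this,
                                          ++sequence, "state changed");
        sendNotification(n);
    }
}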
JBoss includes adaptors that allow access to the JMX MBeanServer from outside of the JBoss server VM. The current adaptors include HTML, an RMI interface, and an EJB.
JBoss comes with its own implementation of a JMX HTML adaptor that allows one to view the server's MBeans using a standard web browser. The default URL for the console web application is http://localhost:8080/jmx-console/. If you browse this location you will see something similar to that presented in Figure 2.12, "The JBoss JMX console web application agent view".
The top view is called the agent view and it provides a listing of all MBeans registered with the MBeanServer sorted by the domain portion of the MBean's ObjectName. Under each domain are the MBeans under that domain. When you select one of the MBeans you will be taken to the MBean view. This allows one to view and edit an MBean's attributes as well as invoke operations. As an example, Figure 2.13, “The MBean view for the "jboss.system:type=Server" MBean” shows the MBean view for the jboss.system:type=Server MBean.
The source code for the JMX console web application is located in the varia module under the src/main/org/jboss/jmx directory. Its web pages are located under varia/src/resources/jmx. The application is a simple MVC servlet with JSP views that utilize the MBeanServer.
Since the JMX console web application is just a standard servlet, it may be secured using standard J2EE role based security. The jmx-console.war is deployed as an unpacked WAR and includes template settings for quickly enabling simple username and password based access restrictions. If you look at the jmx-console.war in the server/default/deploy directory you will find the web.xml and jboss-web.xml descriptors in the WEB-INF directory. The jmx-console-roles.properties and jmx-console-users.properties files are located in the server/default/conf/props directory.
By uncommenting the security sections of the web.xml and jboss-web.xml descriptors as shown in Example 2.10, “The jmx-console.war web.xml descriptors with the security elements uncommented.”, you enable HTTP basic authentication that restricts access to the JMX Console application to the user admin with password admin. The username and password are determined by the admin=admin line in the jmx-console-users.properties file.
Example 2.10. The jmx-console.war web.xml descriptors with the security elements uncommented.
<?xml version="1.0"?>
<!DOCTYPE web-app PUBLIC
   "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
   "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
   <!-- ... -->
   <!-- A security constraint that restricts access to the HTML JMX console
        to users with the role JBossAdmin. -->
   <security-constraint>
      <web-resource-collection>
         <web-resource-name>HtmlAdaptor</web-resource-name>
         <description>An example security config that only allows users with
            the role JBossAdmin to access the HTML JMX console web
            application</description>
         <url-pattern>/*</url-pattern>
         <http-method>GET</http-method>
         <http-method>POST</http-method>
      </web-resource-collection>
      <auth-constraint>
         <role-name>JBossAdmin</role-name>
      </auth-constraint>
   </security-constraint>
   <login-config>
      <auth-method>BASIC</auth-method>
      <realm-name>JBoss JMX Console</realm-name>
   </login-config>
   <security-role>
      <role-name>JBossAdmin</role-name>
   </security-role>
</web-app>
Example 2.11. The jmx-console.war jboss-web.xml descriptors with the security elements uncommented.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE jboss-web PUBLIC
   "-//JBoss//DTD Web Application 2.3//EN"
   "">
<jboss-web>
   <!-- Uncomment the security-domain to enable security. You will
        need to edit the htmladaptor login configuration to setup the
        login modules used to authenticate users. -->
   <security-domain>java:/jaas/jmx-console</security-domain>
</jboss-web>
Make these changes, and then when you try to access the JMX Console URL you will see a dialog similar to that shown in Figure 2.14, "The JMX Console basic HTTP login dialog".
You probably do not want to use the properties files for securing access to the JMX console application. To see how to properly configure the security settings of web applications see Chapter 8, Security on JBoss.
JBoss supplies an RMI interface for connecting to the JMX MBeanServer. This interface is org.jboss.jmx.adaptor.rmi.RMIAdaptor. The RMIAdaptor interface is bound into JNDI in the default location of jmx/invoker/RMIAdaptor as well as jmx/rmi/RMIAdaptor for backwards compatibility with older clients.
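For a client running outside the server VM, obtaining the adaptor is a plain JNDI lookup against the JBoss naming service. The following sketch spells out the naming environment explicitly; in practice these settings would normally come from a jndi.properties file, the host and port shown are the usual defaults, and getMBeanCount() is assumed to be among the MBeanServer methods the adaptor mirrors:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

public class AdaptorLookup
{
    public static void main(String[] args) throws Exception
    {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://localhost:1099"); // your server here
        InitialContext ctx = new InitialContext(env);

        RMIAdaptor adaptor = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");
        System.out.println("MBean count: " + adaptor.getMBeanCount());
    }
}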
Example 2.12, “ A JMX client that uses the RMIAdaptor” shows a client that makes use of the RMIAdaptor interface to query the MBeanInfo for the JNDIView MBean. It also invokes the MBean's list(boolean) method and displays the result.
Example 2.12. A JMX client that uses the RMIAdaptor
import javax.management.MBeanInfo;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanParameterInfo;
import javax.management.ObjectName;
import javax.naming.InitialContext;

import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

public class JMXBrowser
{
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws Exception
    {
        InitialContext ic = new InitialContext();
        RMIAdaptor server = (RMIAdaptor) ic.lookup("jmx/invoker/RMIAdaptor");

        // Get the MBeanInfo for the JNDIView MBean
        ObjectName name = new ObjectName("jboss:service=JNDIView");
        MBeanInfo info = server.getMBeanInfo(name);
        System.out.println("JNDIView Class: " + info.getClassName());

        MBeanOperationInfo[] opInfo = info.getOperations();
        System.out.println("JNDIView Operations: ");
        for (int o = 0; o < opInfo.length; o++) {
            MBeanOperationInfo op = opInfo[o];
            String returnType = op.getReturnType();
            String opName = op.getName();
            System.out.print(" + " + returnType + " " + opName + "(");

            MBeanParameterInfo[] params = op.getSignature();
            for (int p = 0; p < params.length; p++) {
                MBeanParameterInfo paramInfo = params[p];
                String pname = paramInfo.getName();
                String type = paramInfo.getType();
                if (pname.equals(type)) {
                    System.out.print(type);
                } else {
                    System.out.print(type + " " + pname);
                }
                if (p < params.length - 1) {
                    System.out.print(',');
                }
            }
            System.out.println(")");
        }

        // Invoke the list(boolean) op
        String[] sig = {"boolean"};
        Object[] opArgs = {Boolean.TRUE};
        Object result = server.invoke(name, "list", opArgs, sig);
        System.out.println("JNDIView.list(true) output:\n" + result);
    }
}
To test the client access using the RMIAdaptor, run the following:
[examples]$ ant -Dchap=chap2 -Dex=4 run-example
...
run-example4:
     [java] JNDIView Class: org.jboss.mx.modelmbean.XMBean
     [java] JNDIView Operations: 
     [java]  + java.lang.String list(boolean jboss:service=JNDIView)
     [java]  + java.lang.String listXML()
     [java]  + void create()
     [java]  + void start()
     [java]  + void stop()
     [java]  + void destroy()
     [java]  + void jbossInternalLifecycle(java.lang.String jboss:service=JNDIView)
     [java]  + java.lang.String getName()
     [java]  + int getState()
     [java]  + java.lang.String getStateString()
     [java] JNDIView.list(true) output:
     [java] <h1>java: Namespace</h1>
     [java] <pre>
     [java]   +- XAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
     [java]   +- DefaultDS (class: javax.sql.DataSource)
     [java]   +- SecurityProxyFactory (class: org.jboss.security.SubjectSecurityProxyFactory)
     [java]   +- DefaultJMSProvider (class: org.jboss.jms.jndi.JNDIProviderAdapter)
     [java]   +- comp (class: javax.naming.Context)
     [java]   +- JmsXA (class: org.jboss.resource.adapter.jms.JmsConnectionFactoryImpl)
     [java]   +- ConnectionFactory (class: org.jboss.mq.SpyConnectionFactory)
     [java]   +- jaas (class: javax.naming.Context)
     [java]   |   +- JmsXARealm (class: org.jboss.security.plugins.SecurityDomainContext)
     [java]   |   +- jbossmq (class: org.jboss.security.plugins.SecurityDomainContext)
     [java]   |   +- HsqlDbRealm (class: org.jboss.security.plugins.SecurityDomainContext)
     [java]   +- timedCacheFactory (class: javax.naming.Context)
     [java] Failed to lookup: timedCacheFactory, errmsg=null
     [java]   +- TransactionPropagationContextExporter (class: org.jboss.tm.TransactionPropagationContextFactory)
     [java]   +- TransactionManager (class: org.jboss.tm.TxManager)
     [java] </pre>
     [java] <h1>Global JNDI Namespace</h1>
     [java] <pre>
     [java]   +- XAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
     [java]   +- UIL2ConnectionFactory[link -> ConnectionFactory] (class: javax.naming.LinkRef)
     [java]   +- UIL2XAConnectionFactory[link -> XAConnectionFactory] (class: javax.naming.LinkRef)
     [java]   +- UUIDKeyGeneratorFactory (class: org.jboss.ejb.plugins.keygenerator.uuid.UUIDKeyGeneratorFactory)
     [java]   +- HTTPXAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
     [java]   +- topic (class: org.jnp.interfaces.NamingContext)
     [java]   |   +- testDurableTopic (class: org.jboss.mq.SpyTopic)
     [java]   |   +- testTopic (class: org.jboss.mq.SpyTopic)
     [java]   |   +- securedTopic (class: org.jboss.mq.SpyTopic)
     [java]   +- queue (class: org.jnp.interfaces.NamingContext)
     [java]   |   +- A (class: org.jboss.mq.SpyQueue)
     [java]   |   +- testQueue (class: org.jboss.mq.SpyQueue)
     [java]   |   +- ex (class: org.jboss.mq.SpyQueue)
     [java]   |   +- DLQ (class: org.jboss.mq.SpyQueue)
     [java]   |   +- D (class: org.jboss.mq.SpyQueue)
     [java]   |   +- C (class: org.jboss.mq.SpyQueue)
     [java]   |   +- B (class: org.jboss.mq.SpyQueue)
     [java]   +- ConnectionFactory (class: org.jboss.mq.SpyConnectionFactory)
     [java]   +- UserTransaction (class: org.jboss.tm.usertx.client.ClientUserTransaction)
     [java]   +- jmx (class: org.jnp.interfaces.NamingContext)
     [java]   |   +- invoker (class: org.jnp.interfaces.NamingContext)
     [java]   |   |   +- RMIAdaptor (proxy: $Proxy35 implements interface org.jboss.jmx.adaptor.rmi.RMIAdaptor,interface org.jboss.jmx.adaptor.rmi.RMIAdaptorExt)
     [java]   |   +- rmi (class: org.jnp.interfaces.NamingContext)
     [java]   |   |   +- RMIAdaptor[link -> jmx/invoker/RMIAdaptor] (class: javax.naming.LinkRef)
     [java] </pre>
JBoss also provides a command line tool called twiddle (for twiddling bits via JMX) that allows for interaction with a remote JMX server instance; it is located in the bin directory of the distribution. By default the twiddle command will connect to the localhost at port 1099 to lookup the default jmx/rmi/RMIAdaptor binding of the RMIAdaptor service as the connector for communicating with the JMX server. To connect to a different server/port combination you can use the -s (--server) option:
[bin]$ ./twiddle.sh -s toki serverinfo -d jboss
[bin]$ ./twiddle.sh -s toki:1099 serverinfo -d jboss
To connect using a different RMIAdaptor binding use the -a (--adapter) option:
[bin]$ ./twiddle.sh -s toki -a jmx/rmi/RMIAdaptor serverinfo -d jboss
To access basic information about a server, use the serverinfo command. This currently supports:
[bin]$ ./twiddle.sh -H serverinfo
Get information about the MBean server
usage: serverinfo [options]
options:
    -d, --domain    Get the default domain
    -c, --count     Get the MBean count
    -l, --list      List the MBeans
    --              Stop processing options

[bin]$ ./twiddle.sh --server=toki serverinfo --count
460
[bin]$ ./twiddle.sh --server=toki serverinfo --domain
jboss
To query the server for the name of MBeans matching a pattern, use the query command. This currently supports:
[bin]$ ./twiddle.sh -H query
Query the server for a list of matching MBeans
usage: query [options] <query>
options:
    -c, --count    Display the matching MBean count
    --             Stop processing options
Examples:
 query all mbeans: query '*:*'
 query all mbeans in the jboss.j2ee domain: query 'jboss.j2ee:*'

[bin]$ ./twiddle.sh -s toki query 'jboss:service=invoker,*'
jboss:readonly=true,service=invoker,target=Naming,type=http
jboss:service=invoker,type=jrmp
jboss:service=invoker,type=local
jboss:service=invoker,type=pooled
jboss:service=invoker,type=http
jboss:service=invoker,target=Naming,type=http
To get the attributes of an MBean, use the get command:
[bin]$ ./twiddle.sh -H get
Get the values of one or more MBean attributes
usage: get [options] <name> [<attr>+]
  If no attribute names are given all readable attributes are retrieved
options:
    --noprefix    Do not display attribute name prefixes
    --            Stop processing options

[bin]$ ./twiddle.sh get jboss:service=invoker,type=jrmp RMIObjectPort StateString
RMIObjectPort=4444
StateString=Started
[bin]$ ./twiddle.sh get jboss:service=invoker,type=jrmp
ServerAddress=0.0.0.0
RMIClientSocketFactoryBean=null
StateString=Started
State=3
RMIServerSocketFactoryBean=org.jboss.net.sockets.DefaultSocketFactory@ad093076
EnableClassCaching=false
SecurityDomain=null
RMIServerSocketFactory=null
Backlog=200
RMIObjectPort=4444
Name=JRMPInvoker
RMIClientSocketFactory=null
To query the MBeanInfo for an MBean, use the info command:
[bin]$ ./twiddle.sh -H info
Get the metadata for an MBean
usage: info <mbean-name>
  Use '*' to query for all attributes

[bin]$ ./twiddle.sh info jboss:service=invoker,type=jrmp
Description: Management Bean.
+++ Attributes:
 Name: ServerAddress
 Type: java.lang.String
 Access: rw
 Name: RMIClientSocketFactoryBean
 Type: java.rmi.server.RMIClientSocketFactory
 Access: rw
 Name: StateString
 Type: java.lang.String
 Access: r-
 Name: State
 Type: int
 Access: r-
 Name: RMIServerSocketFactoryBean
 Type: java.rmi.server.RMIServerSocketFactory
 Access: rw
 Name: EnableClassCaching
 Type: boolean
 Access: rw
 Name: SecurityDomain
 Type: java.lang.String
 Access: rw
 Name: RMIServerSocketFactory
 Type: java.lang.String
 Access: rw
 Name: Backlog
 Type: int
 Access: rw
 Name: RMIObjectPort
 Type: int
 Access: rw
 Name: Name
 Type: java.lang.String
 Access: r-
 Name: RMIClientSocketFactory
 Type: java.lang.String
 Access: rw
+++ Operations:
 void start()
 void jbossInternalLifecycle(java.lang.String java.lang.String)
 void create()
 void stop()
 void destroy()
To invoke an operation on an MBean, use the invoke command:
[bin]$ ./twiddle.sh -H invoke
Invoke an operation on an MBean
usage: invoke [options] <query> <operation> (<arg>)*
options:
    -q, --query-type[=<type>]    Treat object name as a query
    --                           Stop processing options
query type:
    f[irst]    Only invoke on the first matching name [default]
    a[ll]      Invoke on all matching names

[bin]$ ./twiddle.sh invoke jboss:service=JNDIView list true
<h1>java: Namespace</h1>
<pre>
  +- XAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
  +- DefaultDS (class: javax.sql.DataSource)
  +- SecurityProxyFactory (class: org.jboss.security.SubjectSecurityProxyFactory)
  +- DefaultJMSProvider (class: org.jboss.jms.jndi.JNDIProviderAdapter)
  +- comp (class: javax.naming.Context)
  +- JmsXA (class: org.jboss.resource.adapter.jms.JmsConnectionFactoryImpl)
  +- ConnectionFactory (class: org.jboss.mq.SpyConnectionFactory)
  +- jaas (class: javax.naming.Context)
  |   +- JmsXARealm (class: org.jboss.security.plugins.SecurityDomainContext)
  |   +- jbossmq (class: org.jboss.security.plugins.SecurityDomainContext)
  |   +- HsqlDbRealm (class: org.jboss.security.plugins.SecurityDomainContext)
  +- timedCacheFactory (class: javax.naming.Context)
Failed to lookup: timedCacheFactory, errmsg=null
  +- TransactionPropagationContextExporter (class: org.jboss.tm.TransactionPropagationContextFactory)
  +- StdJMSPool (class: org.jboss.jms.asf.StdServerSessionPoolFactory)
  +- Mail (class: javax.mail.Session)
  +- TransactionPropagationContextImporter (class: org.jboss.tm.TransactionPropagationContextImporter)
  +- TransactionManager (class: org.jboss.tm.TxManager)
</pre>
<h1>Global JNDI Namespace</h1>
<pre>
  +- XAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
  +- UIL2ConnectionFactory[link -> ConnectionFactory] (class: javax.naming.LinkRef)
  +- UserTransactionSessionFactory (proxy: $Proxy11 implements interface org.jboss.tm.usertx.interfaces.UserTransactionSessionFactory)
  +- HTTPConnectionFactory (class: org.jboss.mq.SpyConnectionFactory)
  +- console (class: org.jnp.interfaces.NamingContext)
  |   +- PluginManager (proxy: $Proxy36 implements interface org.jboss.console.manager.PluginManagerMBean)
  +- UIL2XAConnectionFactory[link -> XAConnectionFactory] (class: javax.naming.LinkRef)
  +- UUIDKeyGeneratorFactory (class: org.jboss.ejb.plugins.keygenerator.uuid.UUIDKeyGeneratorFactory)
  +- HTTPXAConnectionFactory (class: org.jboss.mq.SpyXAConnectionFactory)
  +- topic (class: org.jnp.interfaces.NamingContext)
  |   +- testDurableTopic (class: org.jboss.mq.SpyTopic)
  |   +- testTopic (class: org.jboss.mq.SpyTopic)
  |   +- securedTopic (class: org.jboss.mq.SpyTopic)
  +- queue (class: org.jnp.interfaces.NamingContext)
  |   +- A (class: org.jboss.mq.SpyQueue)
  |   +- testQueue (class: org.jboss.mq.SpyQueue)
  |   +- ex (class: org.jboss.mq.SpyQueue)
  |   +- DLQ (class: org.jboss.mq.SpyQueue)
  |   +- D (class: org.jboss.mq.SpyQueue)
  |   +- C (class: org.jboss.mq.SpyQueue)
  |   +- B (class: org.jboss.mq.SpyQueue)
  +- ConnectionFactory (class: org.jboss.mq.SpyConnectionFactory)
  +- UserTransaction (class: org.jboss.tm.usertx.client.ClientUserTransaction)
  +- jmx (class: org.jnp.interfaces.NamingContext)
  |   +- invoker (class: org.jnp.interfaces.NamingContext)
  |   |   +- RMIAdaptor (proxy: $Proxy35 implements interface org.jboss.jmx.adaptor.rmi.RMIAdaptor,interface org.jboss.jmx.adaptor.rmi.RMIAdaptorExt)
  |   +- rmi (class: org.jnp.interfaces.NamingContext)
  |   |   +- RMIAdaptor[link -> jmx/invoker/RMIAdaptor] (class: javax.naming.LinkRef)
  +- HiLoKeyGeneratorFactory (class: org.jboss.ejb.plugins.keygenerator.hilo.HiLoKeyGeneratorFactory)
  +- UILXAConnectionFactory[link -> XAConnectionFactory] (class: javax.naming.LinkRef)
  +- UILConnectionFactory[link -> ConnectionFactory] (class: javax.naming.LinkRef)
</pre>
With the detached invokers and a somewhat generalized proxy factory capability, you can talk to the JMX server using the InvokerAdaptorService and a proxy factory service to expose an RMIAdaptor or similar interface over your protocol of choice. We will introduce the detached invoker notion along with proxy factories in Section 2.6, "Remote Access to Services, Detached Invokers". See Section 2.6.1, "A Detached Invoker Example, the MBeanServer Invoker Adaptor Service" for an example of an invoker service that allows one to access the MBean server using the RMIAdaptor interface over any protocol for which a proxy factory service exists.
When JBoss starts up, one of the first steps performed is to create an MBean server instance (javax.management.MBeanServer). The JMX MBean server in the JBoss architecture plays the role of a microkernel. All other manageable MBean components are plugged into JBoss by registering with the MBean server. The kernel in that sense is only a framework, and not a source of actual functionality. The functionality is provided by MBeans, and in fact all major JBoss components are manageable MBeans interconnected through the MBean server.
As we have seen, JBoss relies on JMX to load in the MBean services that make up a given server instance's personality. All of the bundled functionality provided with the standard JBoss distribution is based on MBeans. The best way to add services to the JBoss server is to write your own JMX MBeans.
There are two classes of MBeans: those that are independent of JBoss services, and those that are dependent on JBoss services. MBeans that are independent of JBoss services are the trivial case. They can be written per the JMX specification and added to a JBoss server by adding an mbean tag to the deploy/user-service.xml file. Writing an MBean that relies on a JBoss service such as naming requires you to follow the JBoss service pattern. The JBoss MBean service pattern consists of a set of life cycle operations that provide state change notifications. The notifications inform an MBean service when it can create, start, stop, and destroy itself. The management of the MBean service life cycle is the responsibility of three JBoss MBeans: SARDeployer, ServiceConfigurator and ServiceController.
JBoss manages the deployment of its MBean services via a custom MBean that loads an XML variation of the standard JMX MLet configuration file. This custom MBean is implemented in the org.jboss.deployment.SARDeployer class. The SARDeployer MBean is loaded when JBoss starts up as part of the bootstrap process. The SAR acronym stands for service archive.
The SARDeployer handles services archives. A service archive can be either a jar that ends with a .sar suffix and contains a META-INF/jboss-service.xml descriptor, or a standalone XML descriptor with a naming pattern that matches *-service.xml. The DTD for the service descriptor is jboss-service_4.0.dtd and is shown in Figure 2.15, “The DTD for the MBean service descriptor parsed by the SARDeployer”.
The elements of the DTD are:
loader-repository: This element specifies the name of the UnifiedLoaderRepository MBean to use for the SAR to provide SAR level scoping of classes deployed in the SAR. It is a unique JMX ObjectName string. It may also specify an arbitrary configuration by including a loader-repository-config element. The optional loaderRepositoryClass attribute specifies the fully qualified name of the loader repository implementation class. It defaults to org.jboss.mx.loading.HeirarchicalLoaderRepository3.

loader-repository-config: This optional element specifies an arbitrary configuration that may be used to configure the loaderRepositoryClass. The optional configParserClass attribute gives the fully qualified name of the org.jboss.mx.loading.LoaderRepositoryFactory.LoaderRepositoryConfigParser implementation to use to parse the loader-repository-config content.
local-directory: This element specifies a path within the deployment archive that should be copied to the server/<config>/db directory for use by the MBean. The path attribute is the name of an entry within the deployment archive.
classpath: This element specifies one or more external JARs that should be deployed with the MBean(s). The optional archives attribute specifies a comma separated list of the JAR names to load, or the * wild card to signify that all JARs should be loaded. The wild card only works with file URLs, and http URLs if the web server supports the WEBDAV protocol. The codebase attribute specifies the URL from which the JARs specified in the archives attribute should be loaded. If the codebase is a path rather than a URL string, the full URL is built by treating the codebase value as a path relative to the JBoss distribution server/<config> directory. The order of JARs specified in the archives attribute as well as the ordering across multiple classpath elements is used as the classpath ordering of the JARs. Therefore, if you have patches or inconsistent versions of classes that require a certain ordering, use this feature to ensure the correct ordering.
mbean: This element specifies an MBean service. The required code attribute gives the fully qualified name of the MBean implementation class. The required name attribute gives the JMX ObjectName of the MBean. The optional xmbean-dd attribute specifies the path to the XMBean resource if this MBean service uses the JBoss XMBean descriptor to define a Model MBean management interface.
constructor: The constructor element defines a non-default constructor to use when instantiating the MBean. The arg elements specify the constructor arguments in the order of the constructor signature. Each arg has a type and value attribute.
attribute: Each attribute element specifies a name/value pair of the attribute of the MBean. The name of the attribute is given by the name attribute, and the attribute element body gives the value. The body may be a text representation of the value, or an arbitrary element and child elements if the type of the MBean attribute is org.w3c.dom.Element. For text values, the text is converted to the attribute type using the JavaBean java.beans.PropertyEditor mechanism.
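The conversion machinery referred to here is the standard java.beans PropertyEditor facility. The following self-contained snippet mirrors what happens to an attribute element's text content for an int-typed MBean attribute; the JDK registers editors for the primitive and common types, and additional editors can be registered via PropertyEditorManager for custom attribute types:

import java.beans.PropertyEditor;
import java.beans.PropertyEditorManager;

public class EditorDemo
{
    public static void main(String[] args)
    {
        // Locate the editor registered for the attribute's declared type
        PropertyEditor editor = PropertyEditorManager.findEditor(int.class);
        // Feed it the text content of the attribute element
        editor.setAsText("4444");
        // The typed value is what gets applied to the MBean attribute
        Object value = editor.getValue();
        System.out.println(value.getClass().getName() + ": " + value);
    }
}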
server/mbean/depends and server/mbean/depends-list: These elements specify a dependency from the MBean using the element to the MBean(s) named by the depends or depends-list elements. Figure 2.16, "A sequence diagram highlighting the main activities performed by the SARDeployer to start a JBoss MBean service" is a sequence diagram that shows the init through start phases of a service.
Figure 2.16. A sequence diagram highlighting the main activities performed by the SARDeployer to start a JBoss MBean service
In Figure 2.16, “A sequence diagram highlighting the main activities performed by the SARDeployer to start a JBoss MBean service” the following is illustrated:
Methods prefixed with 1.1 correspond to the load and parse of the XML service descriptor.
Methods prefixed with 1.2 correspond to processing each classpath element in the service descriptor to create an independent deployment that makes the jar or directory available through a UnifiedClassLoader registered with the unified loader repository.
Methods prefixed with 1.3 correspond to processing each local-directory element in the service descriptor. This does a copy of the SAR elements specified in the path attribute to the server/<config>/db directory.
Method 1.4. For each deployable unit nested in the service archive, a child deployment is created and added to the service deployment info subdeployment list.
Method 2.1. The UnifiedClassLoader of the SAR deployment unit is registered with the MBean Server so that it can be used for loading of the SAR MBeans.
Method 2.2. For each MBean element in the descriptor, create an instance and initialize its attributes with the values given in the service descriptor. This is done by calling the ServiceController.install method.
Method 2.4.1. For each MBean instance created, obtain its JMX ObjectName and ask the ServiceController to handle the create step of the service life cycle. The ServiceController handles the dependencies of the MBean service. Only if the service's dependencies are satisfied is the service create method invoked.
Methods prefixed with 3.1 correspond to the start of each MBean service defined in the service descriptor. For each MBean instance created, obtain its JMX ObjectName and ask the ServiceController to handle the start step of the service life cycle. The ServiceController handles the dependencies of the MBean service. Only if the service's dependencies are satisfied is the service start method invoked.
The JMX specification does not define any type of life cycle or dependency management for MBeans. The JBoss ServiceController MBean introduces this notion. A JBoss MBean is an extension of the JMX MBean in that an MBean is expected to decouple creation from the life cycle of its service duties. This is necessary to implement any type of dependency management. For example, if you are writing an MBean that needs a JNDI naming service to be able to function, your MBean needs to be told when its dependencies are satisfied. This ranges from difficult to impossible to do if the only life cycle event is the MBean constructor. Therefore, JBoss introduces a service life cycle interface that describes the events a service can use to manage its behavior. The following listing shows the org.jboss.system.Service interface:
package org.jboss.system;

public interface Service
{
    public void create() throws Exception;
    public void start() throws Exception;
    public void stop();
    public void destroy();
}
The ServiceController MBean invokes the methods of the Service interface at the appropriate times of the service life cycle. We'll discuss the methods in more detail in the ServiceController section.
JBoss manages dependencies between MBeans via the org.jboss.system.ServiceController custom MBean. The SARDeployer delegates to the ServiceController when initializing, creating, starting, stopping and destroying MBean services. Figure 2.17, “The interaction between the SARDeployer and ServiceController to start a service” shows a sequence diagram that highlights interaction between the SARDeployer and ServiceController.
The ServiceController MBean has four key methods for the management of the service life cycle: create, start, stop and destroy.
The create(ObjectName) method is called whenever an event occurs that affects the named service's state. This could be triggered by an explicit invocation by the SARDeployer, a notification of a new class, or another service reaching its created state.
When a service's create method is called, all services on which the service depends have also had their create method invoked. This gives an MBean an opportunity to check that required MBeans or resources exist. A service cannot utilize other MBean services at this point, as most JBoss MBean services do not become fully functional until they have been started via their start method. Because of this, service implementations often do not implement create in favor of just the start method because that is the first point at which the service can be fully functional.
The start(ObjectName) method is called whenever an event occurs that affects the named service's state. This could be triggered by an explicit invocation by the SARDeployer, a notification of a new class, or another service reaching its started state.
When a service's start method is called, all services on which the service depends have also had their start method invoked. Receipt of a start method invocation signals a service to become fully operational since all services upon which the service depends have been created and started.
The stop(ObjectName) method is called whenever an event occurs that affects the named service's state. This could be triggered by an explicit invocation by the SARDeployer, notification of a class removal, or a service on which other services depend reaching its stopped state.

The destroy(ObjectName) method is called whenever an event occurs that affects the named service's state. This could be triggered by an explicit invocation by the SARDeployer, notification of a class removal, or a service on which other services depend reaching its destroyed state.
Service implementations often do not implement destroy in favor of simply implementing the stop method, or neither stop nor destroy if the service has no state or resources that need cleanup.
To specify that an MBean service depends on other MBean services you need to declare the dependencies in the mbean element of the service descriptor. This is done using the depends and depends-list elements. One difference between the two elements relates to the optional-attribute-name attribute usage. If you track the ObjectNames of dependencies using single valued attributes you should use the depends element. If you track the ObjectNames of dependencies using java.util.List compatible attributes you would use the depends-list element. If you only want to specify a dependency and don't care to have the associated service ObjectName bound to an attribute of your MBean then use whatever element is easiest. The following listing shows example service descriptor fragments that illustrate the usage of the dependency related elements.
<mbean code="org.jboss.mq.server.jmx.Topic"
       name="jms.topic:service=Topic,name=testTopic">
    <!-- Declare a dependency on the "jboss.mq:service=DestinationManager" and
         bind this name to the DestinationManager attribute -->
    <depends optional-attribute-name="DestinationManager">
        jboss.mq:service=DestinationManager
    </depends>
    <!-- Declare a dependency on the "jboss.mq:service=SecurityManager" and
         bind this name to the SecurityManager attribute -->
    <depends optional-attribute-name="SecurityManager">
        jboss.mq:service=SecurityManager
    </depends>
    <!-- ... -->
    <!-- Declare a dependency on the "jboss.mq:service=CacheManager" without
         any binding of the name to an attribute -->
    <depends>jboss.mq:service=CacheManager</depends>
</mbean>

<mbean code="org.jboss.mq.server.jmx.TopicMgr"
       name="jboss.mq.destination:service=TopicMgr">
    <!-- Declare a dependency on the given topic destination mbeans and
         bind these names to the Topics attribute -->
    <depends-list optional-attribute-name="Topics">
        <depends-list-element>jms.topic:service=Topic,name=A</depends-list-element>
        <depends-list-element>jms.topic:service=Topic,name=B</depends-list-element>
        <depends-list-element>jms.topic:service=Topic,name=C</depends-list-element>
    </depends-list>
</mbean>
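On the Java side, each optional-attribute-name binding simply requires a matching ObjectName-typed (or List-typed, for depends-list) attribute on the MBean. A sketch of the attribute signatures implied by the descriptor above (illustrative only, not the actual JBossMQ interfaces):

import java.util.List;
import javax.management.ObjectName;

public interface TopicExampleMBean
{
    // Bound by <depends optional-attribute-name="DestinationManager">
    ObjectName getDestinationManager();
    void setDestinationManager(ObjectName destinationManager);

    // Bound by <depends optional-attribute-name="SecurityManager">
    ObjectName getSecurityManager();
    void setSecurityManager(ObjectName securityManager);
}

interface TopicMgrExampleMBean
{
    // Bound by <depends-list optional-attribute-name="Topics">;
    // receives the ObjectNames of the depends-list-element entries
    List getTopics();
    void setTopics(List topics);
}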
Another difference between the depends and depends-list elements is that the value of the depends element may be a complete MBean service configuration rather than just the ObjectName of the service. Example 2.13, “An example of using the depends element to specify the complete configuration of a depended on service.” shows an example from the hsqldb-service.xml descriptor. In this listing the org.jboss.resource.connectionmanager.RARDeployment service configuration is defined using a nested mbean element as the depends element value. This indicates that the org.jboss.resource.connectionmanager.LocalTxConnectionManager MBean depends on this service. The jboss.jca:service=LocalTxDS,name=hsqldbDS ObjectName will be bound to the ManagedConnectionFactoryName attribute of the LocalTxConnectionManager class.
Example 2.13. An example of using the depends element to specify the complete configuration of a depended on service.
<mbean code="org.jboss.resource.connectionmanager.LocalTxConnectionManager"
       name="jboss.jca:service=LocalTxCM,name=hsqldbDS">
    <depends optional-attribute-name="ManagedConnectionFactoryName">
        <!--embedded mbean-->
        <mbean code="org.jboss.resource.connectionmanager.RARDeployment"
               name="jboss.jca:service=LocalTxDS,name=hsqldbDS">
            <attribute name="JndiName">DefaultDS</attribute>
            <attribute name="ManagedConnectionFactoryProperties">
                <properties>
                    <config-property name="ConnectionURL" type="java.lang.String">
                        jdbc:hsqldb:hsql://localhost:1476
                    </config-property>
                    <config-property name="DriverClass" type="java.lang.String">
                        org.hsqldb.jdbcDriver
                    </config-property>
                    <config-property name="UserName" type="java.lang.String">
                        sa
                    </config-property>
                    <config-property name="Password" type="java.lang.String"/>
                </properties>
            </attribute>
            <!-- ... -->
        </mbean>
    </depends>
    <!-- ... -->
</mbean>
The ServiceController MBean supports two operations that can help determine which MBeans are not running due to unsatisfied dependencies. The first operation is listIncompletelyDeployed. This returns a java.util.List of org.jboss.system.ServiceContext objects for the MBean services that are not in the RUNNING state.
The second operation is listWaitingMBeans. This operation returns a java.util.List of the JMX ObjectNames of MBean services that cannot be deployed because the class specified by the code attribute is not available.
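These operations can be invoked through any of the adaptors discussed earlier. A sketch using the RMIAdaptor, assuming the controller is registered under its conventional jboss.system:service=ServiceController name:

import java.util.List;
import javax.management.ObjectName;
import javax.naming.InitialContext;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

public class WaitingMBeans
{
    public static void main(String[] args) throws Exception
    {
        InitialContext ctx = new InitialContext();
        RMIAdaptor server = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");
        ObjectName controller = new ObjectName("jboss.system:service=ServiceController");

        // listWaitingMBeans takes no arguments
        List waiting = (List) server.invoke(controller, "listWaitingMBeans",
                                            new Object[0], new String[0]);
        System.out.println("Waiting MBeans: " + waiting);
    }
}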
Writing a custom MBean service that integrates into the JBoss server requires the use of the org.jboss.system.Service interface pattern if the custom service is dependent on other services. When a custom MBean depends on other MBean services you cannot perform any service dependent initialization in any of the javax.management.MBeanRegistration interface methods since JMX has no dependency notion. Instead, you must manage dependency state using the Service interface create and/or start methods. You can do this using any one of the following approaches:
Add any of the Service methods that you want called on your MBean to your MBean interface. This allows your MBean implementation to avoid dependencies on JBoss specific interfaces.
Have your MBean interface extend the org.jboss.system.Service interface.
Have your MBean interface extend the org.jboss.system.ServiceMBean interface. This is a subinterface of org.jboss.system.Service that adds getName(), getState(), getStateString() methods.
Which approach you choose depends on whether or not you want your code to be coupled to JBoss specific code. If you don't, then you would use the first approach. If you don't care about dependencies on JBoss classes, the simplest approach is to have your MBean interface extend from org.jboss.system.ServiceMBean and your MBean implementation class extend from the abstract org.jboss.system.ServiceMBeanSupport class. This class implements the org.jboss.system.ServiceMBean interface. ServiceMBeanSupport provides implementations of the create, start, stop, and destroy methods that integrate logging and JBoss service state management tracking. Each method delegates any subclass specific work to the createService, startService, stopService, and destroyService methods respectively. When subclassing ServiceMBeanSupport, you would override one or more of the createService, startService, stopService, and destroyService methods as required.
This section develops a simple MBean that binds a HashMap into the JBoss JNDI namespace at a location determined by its JndiName attribute to demonstrate what is required to create a custom MBean. Because the MBean uses JNDI, it depends on the JBoss naming service MBean and must use the JBoss MBean service pattern to be notified when the naming service is available.
Version one of the classes, shown in Example 2.14, “JNDIMapMBean interface and implementation based on the service interface method pattern”, is based on the service interface method pattern. This version of the interface declares the start and stop methods needed to start up correctly without using any JBoss-specific classes.
Example 2.14. JNDIMapMBean interface and implementation based on the service interface method pattern
package org.jboss.chap2.ex1;

// The JNDIMap MBean interface
import javax.naming.NamingException;

public interface JNDIMapMBean
{
    public String getJndiName();
    public void setJndiName(String jndiName) throws NamingException;
    public void start() throws Exception;
    public void stop() throws Exception;
}
package org.jboss.chap2.ex1;

// The JNDIMap MBean implementation
import java.util.HashMap;
import javax.naming.InitialContext;
import javax.naming.Name;
import javax.naming.NamingException;
import org.jboss.naming.NonSerializableFactory;

public class JNDIMap implements JNDIMapMBean
{
    private String jndiName;
    private HashMap contextMap = new HashMap();
    private boolean started;

    public String getJndiName()
    {
        return jndiName;
    }

    public void setJndiName(String jndiName) throws NamingException
    {
        String oldName = this.jndiName;
        this.jndiName = jndiName;
        if (started) {
            unbind(oldName);
            try {
                rebind();
            } catch(Exception e) {
                NamingException ne = new NamingException("Failed to update jndiName");
                ne.setRootCause(e);
                throw ne;
            }
        }
    }

    public void start() throws Exception
    {
        started = true;
        rebind();
    }

    public void stop()
    {
        started = false;
        unbind(jndiName);
    }

    private void rebind() throws NamingException
    {
        InitialContext rootCtx = new InitialContext();
        Name fullName = rootCtx.getNameParser("").parse(jndiName);
        System.out.println("fullName="+fullName);
        NonSerializableFactory.rebind(fullName, contextMap, true);
    }

    private void unbind(String jndiName)
    {
        try {
            InitialContext rootCtx = new InitialContext();
            rootCtx.unbind(jndiName);
            NonSerializableFactory.unbind(jndiName);
        } catch(NamingException e) {
            e.printStackTrace();
        }
    }
}
Version two of the classes, shown in Example 2.15, “JNDIMap MBean interface and implementation based on the ServiceMBean interface and ServiceMBeanSupport class”, uses the JBoss ServiceMBean interface and ServiceMBeanSupport class. In this version, the implementation class extends the ServiceMBeanSupport class and overrides the startService and stopService methods. JNDIMap also implements the abstract getName method to return a descriptive name for the MBean. The JNDIMapMBean interface extends the org.jboss.system.ServiceMBean interface and only declares the setter and getter methods for the JndiName attribute because it inherits the service life cycle methods from ServiceMBean. This is the third approach mentioned at the start of Section 2.4.2, “JBoss MBean Services”.
Example 2.15. JNDIMap MBean interface and implementation based on the ServiceMBean interface and ServiceMBeanSupport class
package org.jboss.chap2.ex2;

// The JNDIMap MBean interface
import javax.naming.NamingException;

public interface JNDIMapMBean extends org.jboss.system.ServiceMBean
{
    public String getJndiName();
    public void setJndiName(String jndiName) throws NamingException;
}
package org.jboss.chap2.ex2;

// The JNDIMap MBean implementation
import java.util.HashMap;
import javax.naming.InitialContext;
import javax.naming.Name;
import javax.naming.NamingException;
import org.jboss.naming.NonSerializableFactory;

public class JNDIMap extends org.jboss.system.ServiceMBeanSupport
    implements JNDIMapMBean
{
    private String jndiName;
    private HashMap contextMap = new HashMap();

    public String getJndiName()
    {
        return jndiName;
    }

    public void setJndiName(String jndiName) throws NamingException
    {
        String oldName = this.jndiName;
        this.jndiName = jndiName;
        if (super.getState() == STARTED) {
            unbind(oldName);
            try {
                rebind();
            } catch(Exception e) {
                NamingException ne = new NamingException("Failed to update jndiName");
                ne.setRootCause(e);
                throw ne;
            }
        }
    }

    public void startService() throws Exception
    {
        rebind();
    }

    public void stopService()
    {
        unbind(jndiName);
    }

    private void rebind() throws NamingException
    {
        InitialContext rootCtx = new InitialContext();
        Name fullName = rootCtx.getNameParser("").parse(jndiName);
        log.info("fullName="+fullName);
        NonSerializableFactory.rebind(fullName, contextMap, true);
    }

    private void unbind(String jndiName)
    {
        try {
            InitialContext rootCtx = new InitialContext();
            rootCtx.unbind(jndiName);
            NonSerializableFactory.unbind(jndiName);
        } catch(NamingException e) {
            log.error("Failed to unbind map", e);
        }
    }
}
The source code for these MBeans along with the service descriptors is located in the examples/src/main/org/jboss/chap2/{ex1,ex2} directories.
The jboss-service.xml descriptor for the first version is shown below.
<!-- The SAR META-INF/jboss-service.xml descriptor -->
<server>
    <mbean code="org.jboss.chap2.ex1.JNDIMap"
           name="chap2.ex1:service=JNDIMap">
        <attribute name="JndiName">inmemory/maps/MapTest</attribute>
        <depends>jboss:service=Naming</depends>
    </mbean>
</server>
The JNDIMap MBean binds a HashMap object under the inmemory/maps/MapTest JNDI name and the client code fragment demonstrates retrieving the HashMap object from the inmemory/maps/MapTest location. The corresponding client code is shown below.
// Sample lookup code
InitialContext ctx = new InitialContext();
HashMap map = (HashMap) ctx.lookup("inmemory/maps/MapTest");
In this section we will develop a variation of the JNDIMap MBean introduced in the preceding section that exposes its management metadata using the JBoss XMBean framework. Our core managed component will be exactly the same core code from the JNDIMap class, but it will not implement any specific management related interface. We will illustrate the following capabilities not possible with a standard MBean:
The ability to add rich descriptions to attributes and operations
The ability to expose notification information
The ability to add persistence of attributes
The ability to add custom interceptors for security and remote access through a typed interface
Let's start with a simple XMBean variation of the standard MBean version of the JNDIMap that adds the descriptive information about the attributes and operations and their arguments. The following listing shows the jboss-service.xml descriptor and the jndimap-xmbean1.xml XMBean descriptor. The source can be found in the src/main/org/jboss/chap2/xmbean directory of the book examples.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE server PUBLIC
   "-//JBoss//DTD MBean Service 3.2//EN"
   "">
<server>
    <mbean code="org.jboss.chap2.xmbean.JNDIMap"
           name="chap2.xmbean:service=JNDIMap"
           xmbean-
        <attribute name="JndiName">inmemory/maps/MapTest</attribute>
        <depends>jboss:service=Naming</depends>
    </mbean>
</server>
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE mbean PUBLIC "-//JBoss//DTD JBOSS XMBEAN 1.0//EN" ""> <mbean> <description>The JNDIMap XMBean Example Version 1</description> <descriptors> <persistence persistPolicy="Never" persistPeriod="10" persistLocation="data/JNDIMap.data" persistName="JNDIMap"/> <currencyTimeLimit value="10"/> <state-action-on-update <. The "[Ljava.lang.String;" type signature is the VM representation of the java.lang.String[] type. < map<>
You can build, deploy and test the XMBean as follows:
[examples]$ ant -Dchap=chap2 -Dex=xmbean1 run-example
...
run-examplexmbean1:
     [copy] Copying 1 file to /tmp/jboss-4.0.2/server/default/deploy
     [java] key=key0, value=value0
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=3,timeStamp=1098631527823,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=4,timeStamp=1098631527940,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=5,timeStamp=1098631527985,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=6,timeStamp=1098631527999,message=null,userData=null]
The functionality is largely the same as the Standard MBean with the notable exception of the JMX notifications. A Standard MBean has no way of declaring that it will emit notifications. An XMBean may declare the notifications it emits using notification elements as is shown in the version 1 descriptor. We see the notifications from the get and put operations on the test client console output. Note that there is also a jmx.attribute.change notification emitted when the InitialValues attribute was changed. This is because the ModelMBean interface extends the ModelMBeanNotificationBroadcaster which supports AttributeChangeNotificationListeners.
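As a sketch of how a client might observe these notifications, the listener below simply prints each event; how the listener is registered (for example, through a connector's addNotificationListener operation) depends on the transport used:

import javax.management.Notification;
import javax.management.NotificationListener;

public class PrintingListener implements NotificationListener
{
    public void handleNotification(Notification event, Object handback)
    {
        // Produces output like the handleNotification lines shown above
        System.out.println("handleNotification, event: " + event);
    }
}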
The other major difference between the Standard and XMBean versions of JNDIMap is the descriptive metadata. Look at the chap2.xmbean:service=JNDIMap in the JMX Console, and you will see the attributes section as shown in Figure 2.18, “The Version 1 JNDIMapXMBean jmx-console view”.
Notice that the JMX Console now displays the full attribute description as specified in the XMBean descriptor rather than MBean Attribute text seen in standard MBean implementations. Scroll down to the operations and you will also see that these now also have nice descriptions of their function and parameters.
In version 2 of the XMBean we add support for persistence of the XMBean attributes. The updated XMBean deployment descriptor is given below.
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE mbean PUBLIC "-//JBoss//DTD JBOSS XMBEAN 1.0//EN" ""> <mbean> <description>The JNDIMap XMBean Example Version 2</description> <descriptors> <persistence persistPolicy="OnUpdate" persistPeriod="10" persistLocation="${jboss.server.data.dir}" persistName="JNDIMap.ser"/> <currencyTimeLimit value="10"/> <state-action-on-update <persistence-manager < nap<>
Build, deploy and test the version 2 XMBean as follows:
[examples]$ ant -Dchap=chap2 -Dex=xmbean2 -Djboss.deploy.conf=rmi-adaptor run-example
...
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=7,timeStamp=1098632693716,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=8,timeStamp=1098632693857,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=9,timeStamp=1098632693896,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=10,timeStamp=1098632693925,message=null,userData=null]
There is nothing manifestly different about this version of the XMBean at this point because we have done nothing to test that changes to attribute values are actually persisted. Perform this test by running example xmbean2a several times:
[examples]$ ant -Dchap=chap2 -Dex=xmbean2a run-example ... [java] InitialValues.length=2 [java] key=key10, value=value10

[examples]$ ant -Dchap=chap2 -Dex=xmbean2a run-example ... [java] InitialValues.length=4 [java] key=key10, value=value10 [java] key=key2, value=value2

[examples]$ ant -Dchap=chap2 -Dex=xmbean2a run-example ... [java] InitialValues.length=6 [java] key=key10, value=value10 [java] key=key2, value=value2 [java] key=key3, value=value3
The org.jboss.chap2.xmbean.TestXMBeanRestart used in this example obtains the current InitialValues attribute setting, and then adds another key/value pair to it. The client code is shown below.
package org.jboss.chap2.xmbean; import javax.management.Attribute; import javax.management.ObjectName; import javax.naming.InitialContext; import org.jboss.jmx.adaptor.rmi.RMIAdaptor; /** * A client that demonstrates the persistence of the xmbean * attributes. Every time it run it looks up the InitialValues * attribute, prints it out and then adds a new key/value to the * list. * * @author [email protected] * @version $Revision: 1.13 $ */ public class TestXMBeanRestart { /** * @param args the command line arguments */ public static void main(String[] args) throws Exception { InitialContext ic = new InitialContext(); RMIAdaptor server = (RMIAdaptor) ic.lookup("jmx/rmi/RMIAdaptor"); // Get the InitialValues attribute ObjectName name = new ObjectName("chap2.xmbean:service=JNDIMap"); String[] initialValues = (String[]) server.getAttribute(name, "InitialValues"); System.out.println("InitialValues.length="+initialValues.length); int length = initialValues.length; for (int n = 0; n < length; n += 2) { String key = initialValues[n]; String value = initialValues[n+1]; System.out.println("key="+key+", value="+value); } // Add a new key/value pair String[] newInitialValues = new String[length+2]; System.arraycopy(initialValues, 0, newInitialValues, 0, length); newInitialValues[length] = "key"+(length/2+1); newInitialValues[length+1] = "value"+(length/2+1); Attribute ivalues = new Attribute("InitialValues", newInitialValues); server.setAttribute(name, ivalues); } }
At this point you may even shutdown the JBoss server, restart it and then rerun the initial example to see if the changes are persisted across server restarts:
[examples]$ ant -Dchap=chap2 -Dex=xmbean2 run-example
...
     [java] key=key2, value=value2
     [java] key=key3, value=value3
     [java] key=key4, value=value4
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=3,timeStamp=1098633664712,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=4,timeStamp=1098633664821,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=5,timeStamp=1098633664860,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=6,timeStamp=1098633664877,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=7,timeStamp=1098633664895,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=8,timeStamp=1098633664899,message=null,userData=null]
     [java] handleNotification, event: javax.management.Notification[source=chap2.xmbean:service=JNDIMap,type=org.jboss.chap2.xmbean.JNDIMap.put,sequenceNumber=9,timeStamp=1098633665614,message=null,userData=null]
You see that the last InitialValues attribute setting is in fact visible.
We have seen how to manage dependencies using the service descriptor depends and depends-list tags. The deployment ordering supported by the deployment scanners provides a coarse-grained dependency management in that there is an order to deployments. If dependencies are consistent with the deployment packages then this is a simpler mechanism than having to enumerate the explicit MBean-MBean dependencies. By writing your own filters you can change the coarse-grained ordering performed by the deployment scanner.
When a component archive is deployed, its nested deployment units are processed in a depth first ordering. Structuring of components into an archive hierarchy is yet another way to manage deployment ordering. You will need to explicitly state your MBean dependencies if your packaging structure does not happen to resolve the dependencies. Let's consider an example component deployment that consists of an MBean that uses an EJB. Here is the structure of the example EAR.
output/chap2/chap2-ex3.ear +- META-INF/MANIFEST.MF +- META-INF/jboss-app.xml +- chap2-ex3.jar (archive) [EJB jar] | +- META-INF/MANIFEST.MF | +- META-INF/ejb-jar.xml | +- org/jboss/chap2/ex3/EchoBean.class | +- org/jboss/chap2/ex3/EchoLocal.class | +- org/jboss/chap2/ex3/EchoLocalHome.class +- chap2-ex3.sar (archive) [MBean sar] | +- META-INF/MANIFEST.MF | +- META-INF/jboss-service.xml | +- org/jboss/chap2/ex3/EjbMBeanAdaptor.class +- META-INF/application.xml
The EAR contains a chap2-ex3.jar and chap2-ex3.sar. The chap2-ex3.jar is the EJB archive and the chap2-ex3.sar is the MBean service archive. We have implemented the service as a Dynamic MBean to provide an illustration of their use.
package org.jboss.chap2.ex3;

import java.lang.reflect.Method;
import javax.ejb.CreateException;
import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.AttributeNotFoundException;
import javax.management.DynamicMBean;
import javax.management.InvalidAttributeValueException;
import javax.management.JMRuntimeException;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanConstructorInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanNotificationInfo;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanException;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.ReflectionException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.jboss.system.ServiceMBeanSupport;

/**
 * An example of a DynamicMBean that exposes select attributes and
 * operations of an EJB as an MBean.
 * @author [email protected]
 * @version $Revision: 1.13 $
 */
public class EjbMBeanAdaptor extends ServiceMBeanSupport
    implements DynamicMBean
{
    private String helloPrefix;
    private String ejbJndiName;
    private EchoLocalHome home;

    /** These are the mbean attributes we expose */
    private MBeanAttributeInfo[] attributes = {
        new MBeanAttributeInfo("HelloPrefix", "java.lang.String",
                "The prefix message to append to the session echo reply",
                true,   // isReadable
                true,   // isWritable
                false), // isIs
        new MBeanAttributeInfo("EjbJndiName", "java.lang.String",
                "The JNDI name of the session bean local home",
                true,   // isReadable
                true,   // isWritable
                false)  // isIs
    };

    /** These are the mbean operations we expose */
    private MBeanOperationInfo[] operations;

    /**
     * We override this method to setup our echo operation info. It
     * could also be done in a ctor.
     */
    public ObjectName preRegister(MBeanServer server, ObjectName name)
        throws Exception
    {
        log.info("preRegister notification seen");

        operations = new MBeanOperationInfo[5];
        Class thisClass = getClass();
        Class[] parameterTypes = {String.class};
        Method echoMethod = thisClass.getMethod("echo", parameterTypes);
        String desc = "The echo op invokes the session bean echo method and"
            + " returns its value prefixed with the helloPrefix attribute value";
        operations[0] = new MBeanOperationInfo(desc, echoMethod);

        // Add the Service interface operations from our super class
        parameterTypes = new Class[0];
        Method createMethod = thisClass.getMethod("create", parameterTypes);
        operations[1] = new MBeanOperationInfo("The JBoss Service.create", createMethod);
        Method startMethod = thisClass.getMethod("start", parameterTypes);
        operations[2] = new MBeanOperationInfo("The JBoss Service.start", startMethod);
        Method stopMethod = thisClass.getMethod("stop", parameterTypes);
        operations[3] = new MBeanOperationInfo("The JBoss Service.stop", stopMethod);
        Method destroyMethod = thisClass.getMethod("destroy", parameterTypes);
        operations[4] = new MBeanOperationInfo("The JBoss Service.destroy", destroyMethod);

        return name;
    }

    // --- Begin ServiceMBeanSupport overrides
    protected void createService() throws Exception
    {
        log.info("Notified of create state");
    }

    protected void startService() throws Exception
    {
        log.info("Notified of start state");
        InitialContext ctx = new InitialContext();
        home = (EchoLocalHome) ctx.lookup(ejbJndiName);
    }

    protected void stopService()
    {
        log.info("Notified of stop state");
    }
    // --- End ServiceMBeanSupport overrides

    public String getHelloPrefix()
    {
        return helloPrefix;
    }
    public void setHelloPrefix(String helloPrefix)
    {
        this.helloPrefix = helloPrefix;
    }

    public String getEjbJndiName()
    {
        return ejbJndiName;
    }
    public void setEjbJndiName(String ejbJndiName)
    {
        this.ejbJndiName = ejbJndiName;
    }

    public String echo(String arg) throws CreateException, NamingException
    {
        log.debug("Lookup EchoLocalHome@"+ejbJndiName);
        EchoLocal bean = home.create();
        String echo = helloPrefix + bean.echo(arg);
        return echo;
    }

    // --- Begin DynamicMBean interface methods

    /**
     * Returns the management interface that describes this dynamic
     * resource. It is the responsibility of the implementation to
     * make sure the description is accurate.
     *
     * @return the management interface descriptor.
     */
    public MBeanInfo getMBeanInfo()
    {
        String classname = getClass().getName();
        String description = "This is an MBean that uses a session bean in the"
            + " implementation of its echo operation.";
        MBeanConstructorInfo[] constructors = null;
        MBeanNotificationInfo[] notifications = null;
        MBeanInfo mbeanInfo = new MBeanInfo(classname, description,
                attributes, constructors, operations, notifications);
        // Log when this is called so we know when in the lifecycle this is used
        Throwable trace = new Throwable("getMBeanInfo trace");
        log.info("Don't panic, just a stack trace", trace);
        return mbeanInfo;
    }

    /**
     * Returns the value of the attribute with the name matching the
     * passed string.
     *
     * @param attribute the name of the attribute.
     * @return the value of the attribute.
     * @exception AttributeNotFoundException when there is no such attribute.
     * @exception MBeanException wraps any error thrown by the resource when
     * getting the attribute.
     * @exception ReflectionException wraps any error invoking the resource.
     */
    public Object getAttribute(String attribute)
        throws AttributeNotFoundException, MBeanException, ReflectionException
    {
        Object value = null;
        if (attribute.equals("HelloPrefix")) {
            value = getHelloPrefix();
        } else if(attribute.equals("EjbJndiName")) {
            value = getEjbJndiName();
        } else {
            throw new AttributeNotFoundException("Unknown attribute("
                    + attribute + ") requested");
        }
        return value;
    }

    /**
     * Returns the values of the attributes with names matching the
     * passed string array.
     *
     * @param attributes the names of the attributes.
     * @return an {@link AttributeList AttributeList} of name and value pairs.
     */
    public AttributeList getAttributes(String[] attributes)
    {
        AttributeList values = new AttributeList();
        for (int a = 0; a < attributes.length; a++) {
            String name = attributes[a];
            try {
                Object value = getAttribute(name);
                Attribute attr = new Attribute(name, value);
                values.add(attr);
            } catch(Exception e) {
                log.error("Failed to find attribute: "+name, e);
            }
        }
        return values;
    }

    /**
     * Sets the value of an attribute. The attribute and new value
     * are passed in the name value pair {@link Attribute Attribute}.
     *
     * @see javax.management.Attribute
     *
     * @param attribute the name and new value of the attribute.
     * @exception AttributeNotFoundException when there is no such attribute.
     * @exception InvalidAttributeValueException when the new value
     * cannot be converted to the type of the attribute.
     * @exception MBeanException wraps any error thrown by the
     * resource when setting the new value.
     * @exception ReflectionException wraps any error invoking the resource.
     */
    public void setAttribute(Attribute attribute)
        throws AttributeNotFoundException, InvalidAttributeValueException,
               MBeanException, ReflectionException
    {
        String name = attribute.getName();
        if (name.equals("HelloPrefix")) {
            String value = attribute.getValue().toString();
            setHelloPrefix(value);
        } else if(name.equals("EjbJndiName")) {
            String value = attribute.getValue().toString();
            setEjbJndiName(value);
        } else {
            throw new AttributeNotFoundException("Unknown attribute("
                    + name + ") requested");
        }
    }

    /**
     * Sets the values of the attributes passed as an
     * {@link AttributeList AttributeList} of name and new value pairs.
     *
     * @param attributes the name and new value pairs.
     * @return an {@link AttributeList AttributeList} of name and
     * value pairs that were actually set.
     */
    public AttributeList setAttributes(AttributeList attributes)
    {
        AttributeList setAttributes = new AttributeList();
        for(int a = 0; a < attributes.size(); a++) {
            Attribute attr = (Attribute) attributes.get(a);
            try {
                setAttribute(attr);
                setAttributes.add(attr);
            } catch(Exception ignore) {
            }
        }
        return setAttributes;
    }

    /**
     * Invokes a resource operation.
     *
     * @param actionName the name of the operation to perform.
     * @param params the parameters to pass to the operation.
     * @param signature the signatures of the parameters.
     * @return the result of the operation.
     * @exception MBeanException wraps any error thrown by the
     * resource when performing the operation.
     * @exception ReflectionException wraps any error invoking the resource.
     */
    public Object invoke(String actionName, Object[] params, String[] signature)
        throws MBeanException, ReflectionException
    {
        Object rtnValue = null;
        log.debug("Begin invoke, actionName="+actionName);
        try {
            if (actionName.equals("echo")) {
                String arg = (String) params[0];
                rtnValue = echo(arg);
                log.debug("Result: "+rtnValue);
            } else if (actionName.equals("create")) {
                super.create();
            } else if (actionName.equals("start")) {
                super.start();
            } else if (actionName.equals("stop")) {
                super.stop();
            } else if (actionName.equals("destroy")) {
                super.destroy();
            } else {
                throw new JMRuntimeException("Invalid state, don't know about op="
                        + actionName);
            }
        } catch(Exception e) {
            throw new ReflectionException(e, "echo failed");
        }
        log.debug("End invoke, actionName="+actionName);
        return rtnValue;
    }
    // --- End DynamicMBean interface methods
}
Believe it or not, this is a very trivial MBean. The vast majority of the code is there to provide the MBean metadata and handle the callbacks from the MBean Server. This is required because a Dynamic MBean is free to expose whatever management interface it wants. A Dynamic MBean can in fact change its management interface at runtime simply by returning different metadata from the getMBeanInfo method. Of course, some clients may not be happy with such a dynamic object, but the MBean Server will do nothing to prevent a Dynamic MBean from changing its interface.
There are two points to this example. First, demonstrate how an MBean can depend on an EJB for some of its functionality and second, how to create MBeans with dynamic management interfaces. If we were to write a standard MBean with a static interface for this example it would look like the following.
public interface EjbMBeanAdaptorMBean { public String getHelloPrefix(); public void setHelloPrefix(String prefix); public String getEjbJndiName(); public void setEjbJndiName(String jndiName); public String echo(String arg) throws CreateException, NamingException; public void create() throws Exception; public void start() throws Exception; public void stop(); public void destroy(); }
Let's go through the code and discuss where this interface implementation exists and how the MBean uses the EJB. Beginning with lines 40-51, the two MBeanAttributeInfo instances created define the attributes of the MBean. These attributes correspond to the getHelloPrefix/setHelloPrefix and getEjbJndiName/setEjbJndiName of the static interface. One thing to note in terms of why one might want to use a Dynamic MBean is that you have the ability to associate descriptive text with the attribute metadata. This is not something you can do with a static interface. Moving on to lines 67-83, this is where the MBean operation metadata is constructed. The echo(String), create(), start(), stop() and destroy() operations are defined by obtaining the corresponding java.lang.reflect.Method object and adding a description.
Lines 88-103 correspond to the JBoss service life cycle callbacks. Since we are subclassing the ServiceMBeanSupport utility class, we override the createService, startService, and stopService template callbacks rather than the create, start, and stop methods of the service interface. Note that we cannot attempt to lookup the EchoLocalHome interface of the EJB we make use of until the startService method. Any attempt to access the home interface in an earlier life cycle method would result in the name not being found in JNDI because the EJB container had not gotten to the point of binding the home interfaces. Because of this dependency we will need to specify that the MBean service depends on the EchoLocal EJB container to ensure that the service is not started before the EJB container is started. We will see this dependency specification when we look at the service descriptor.
Lines 105-121 are the HelloPrefix and EjbJndiName attribute accessors implementations. These are invoked in response to getAttribute/setAttribute invocations made through the MBean Server.
Lines 123-130 correspond to the echo(String) operation implementation. This method invokes the EchoLocal.echo(String) EJB method. The local bean interface is created using the EchoLocalHome that was obtained in the startService method.
The remainder of the class makes up the Dynamic MBean interface implementation. Lines 133-152 correspond to the MBean metadata accessor callback. This method returns a description of the MBean management interface in the form of the javax.management.MBeanInfo object. This is made up of a description, the MBeanAttributeInfo and MBeanOperationInfo metadata created earlier, as well as constructor and notification information. This MBean does not need any special constructors or notifications so this information is null.
Lines 154-258 handle the attribute access requests. This is rather tedious and error prone code so a toolkit or infrastructure that helps generate these methods should be used. A Model MBean framework based on XML called XBeans is currently being investigated in JBoss. Other than this, no other Dynamic MBean frameworks currently exist.
Lines 260-310 correspond to the operation invocation dispatch entry point. Here the request operation action name is checked against those the MBean handles and the appropriate method is invoked.
The jboss-service.xml descriptor for the MBean is given below. The dependency on the EJB container MBean is highlighted in bold. The format of the EJB container MBean ObjectName is: "jboss.j2ee:service=EJB,jndiName=" + <home-jndi-name> where the <home-jndi-name> is the EJB home interface JNDI name.
<server> <mbean code="org.jboss.chap2.ex3.EjbMBeanAdaptor" name="jboss.book:service=EjbMBeanAdaptor"> <attribute name="HelloPrefix">AdaptorPrefix</attribute> <attribute name="EjbJndiName">local/chap2.EchoBean</attribute> <depends>jboss.j2ee:service=EJB,jndiName=local/chap2.EchoBean</depends> </mbean> </server>
Deploy the example ear by running:
[examples]$ ant -Dchap=chap2 -Dex=3 run-example
On the server console there will be messages similar to the following:
14:57:12,906 INFO  [EARDeployer] Init J2EE application: file:/private/tmp/jboss-4.0.1/server/default/deploy/chap2-ex3.ear
14:57:13,088 INFO  [EjbMBeanAdaptor] preRegister notification seen
14:57:13,093 INFO  [EjbMBeanAdaptor] Don't panic, just a stack trace
java.lang.Throwable: getMBeanInfo trace
        at org.jboss.chap2.ex3.EjbMBeanAdaptor.getMBeanInfo(EjbMBeanAdaptor.java:150)
        ...
14:57:13,140 WARN  [EjbMBeanAdaptor] Unexcepted error accessing MBeanInfo for null
java.lang.NullPointerException
        at org.jboss.system.ServiceMBeanSupport.postRegister(ServiceMBeanSupport.java:418)
        at org.jboss.mx.server.RawDynamicInvoker.postRegister(RawDynamicInvoker.java:226)
        at org.jboss.mx.server.registry.BasicMBeanRegistry.registerMBean(BasicMBeanRegistry.java:312)
        ...
14:57:13,420 INFO  [EjbModule] Deploying Chap2EchoInfoBean
14:57:13,443 INFO  [EjbModule] Deploying chap2.EchoBean
14:57:13,558 INFO  [EjbMBeanAdaptor] Begin invoke, actionName=create
14:57:13,560 INFO  [EjbMBeanAdaptor] Notified of create state
14:57:13,562 INFO  [EjbMBeanAdaptor] End invoke, actionName=create
14:57:13,641 INFO  [EjbMBeanAdaptor] Begin invoke, actionName=getState
14:57:13,942 INFO  [EjbMBeanAdaptor] Begin invoke, actionName=start
14:57:13,944 INFO  [EjbMBeanAdaptor] Notified of start state
14:57:13,951 INFO  [EjbMBeanAdaptor] Testing Echo
14:57:13,983 INFO  [EchoBean] echo, info=echo info, arg=, arg=startService
14:57:13,986 INFO  [EjbMBeanAdaptor] echo(startService) = startService
14:57:13,988 INFO  [EjbMBeanAdaptor] End invoke, actionName=start
14:57:13,991 INFO  [EJBDeployer] Deployed: file:/private/tmp/jboss-4.0.1/server/default/tmp/deploy/tmp1418chap2-ex3.ear-contents/chap2-ex3.jar
14:57:14,075 INFO  [EARDeployer] Started J2EE application: file:/private/tmp/jboss-4.0.1/server/default/deploy/chap2-ex3.ear
The stack traces are not exceptions. They are traces coming from line 150 of the EjbMBeanAdaptor code to demonstrate that clients ask for the MBean interface when they want to discover the MBean's capabilities. Notice that the EJB container (lines with [EjbModule]) is started before the example MBean (lines with [EjbMBeanAdaptor]).
Now, let's invoke the echo method using the JMX console web application. Go to the JMX Console and find the service=EjbMBeanAdaptor in the jboss.book domain. Click on the link and scroll down to the echo operation section. The view should be like that shown in Figure 2.19, “The EjbMBeanAdaptor MBean operations JMX console view”.
As shown, we have already entered an argument string of -echo-arg into the ParamValue text field. Press the Invoke button and a result string of AdaptorPrefix-echo-arg is displayed on the results page. The server console will show several stack traces from the various metadata queries issued by the JMX console and the MBean invoke method debugging lines:
10:51:48,671 INFO [EjbMBeanAdaptor] Begin invoke, actionName=echo 10:51:48,671 INFO [EjbMBeanAdaptor] Lookup EchoLocalHome@local/chap2.EchoBean 10:51:48,687 INFO [EchoBean] echo, info=echo info, arg=, arg=-echo-arg 10:51:48,687 INFO [EjbMBeanAdaptor] Result: AdaptorPrefix-echo-arg 10:51:48,687 INFO [EjbMBeanAdaptor] End invoke, actionName=echo
JBoss has an extensible deployment architecture that allows one to incorporate components into the bare JBoss JMX microkernel. The MainDeployer is the deployment entry point. Requests to deploy a component are sent to the MainDeployer and it determines if there is a subdeployer capable of handling the deployment, and if there is, it delegates the deployment to the subdeployer. We saw an example of this when we looked at how the MainDeployer used the SARDeployer to deploy MBean services. Among the deployers provided with JBoss are:
AbstractWebDeployer: This subdeployer handles web application archives (WARs). It accepts deployment archives and directories whose name ends with a war suffix. WARs must have a WEB-INF/web.xml descriptor and may have a WEB-INF/jboss-web.xml descriptor.
EARDeployer: This subdeployer handles enterprise application archives (EARs). It accepts deployment archives and directories whose name ends with an ear suffix. EARs must have a META-INF/application.xml descriptor and may have a META-INF/jboss-app.xml descriptor.
EJBDeployer: This subdeployer handles enterprise bean jars. It accepts deployment archives and directories whose name ends with a jar suffix. EJB jars must have a META-INF/ejb-jar.xml descriptor and may have a META-INF/jboss.xml descriptor.
JARDeployer: This subdeployer handles library JAR archives. The only restriction it places on an archive is that it cannot contain a WEB-INF directory.
RARDeployer: This subdeployer handles JCA resource archives (RARs). It accepts deployment archives and directories whose name ends with a rar suffix. RARs must have a META-INF/ra.xml descriptor.
SARDeployer: This subdeployer handles JBoss MBean service archives (SARs). It accepts deployment archives and directories whose name ends with a sar suffix, as well as standalone XML files that end with service.xml. SARs that are jars must have a META-INF/jboss-service.xml descriptor.
XSLSubDeployer: This subdeployer deploys arbitrary XML files. JBoss uses the XSLSubDeployer to deploy ds.xml files and transform them into service.xml files for the SARDeployer. However, it is not limited to just this task.
HARDeployer: This subdeployer deploys hibernate archives (HARs). It accepts deployment archives and directories whose name ends with a har suffix. HARs must have a META-INF/hibernate-service.xml descriptor.
AspectDeployer: This subdeployer deploys AOP archives. It accepts deployment archives and directories whose name ends with an aop suffix as well as aop.xml files. AOP archives must have a META-INF/jboss-aop.xml descriptor.
ClientDeployer: This subdeployer deploys J2EE application clients. It accepts deployment archives and directories whose name ends with a jar suffix. J2EE clients must have a META-INF/application-client.xml descriptor and may have a META-INF/jboss-client.xml descriptor.
BeanShellSubDeployer: This subdeployer deploys bean shell scripts as MBeans. It accepts files whose name ends with a bsh suffix.
The MainDeployer, JARDeployer and SARDeployer are hard coded deployers in the JBoss server core. All other deployers are MBean services that register themselves as deployers with the MainDeployer using the addDeployer(SubDeployer) operation.
The MainDeployer communicates information about the component to be deployed to the SubDeployer using a DeploymentInfo object. The DeploymentInfo object is a data structure that encapsulates the complete state of a deployable component.
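Putting these pieces together, a custom subdeployer sketch might look like the following; it implements the accepts(DeploymentInfo) selection method described just below. The FooDeployer name, its MBean interface, and the .foo suffix are hypothetical, and the base class is the JBoss 4 org.jboss.deployment.SubDeployerSupport:

import org.jboss.deployment.DeploymentException;
import org.jboss.deployment.DeploymentInfo;
import org.jboss.deployment.SubDeployerSupport;

public class FooDeployer extends SubDeployerSupport
    implements FooDeployerMBean
{
    /** Accept archives or directories whose name ends in .foo */
    public boolean accepts(DeploymentInfo di)
    {
        return di.shortName.endsWith(".foo");
    }

    public void start(DeploymentInfo di) throws DeploymentException
    {
        log.info("Starting deployment: " + di.url);
        super.start(di);
    }
}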
When the MainDeployer receives a deployment request, it iterates through its registered subdeployers and invokes the accepts(DeploymentInfo) method on the subdeployer. The first subdeployer to return true is chosen. The MainDeployer will delegate the init, create, start, stop and destroy deployment life cycle operations to the subdeployer. In addition to the MBean services notion that allows for the integration of arbitrary functionality, JBoss also has a detached invoker concept that allows MBean services to expose functional interfaces via arbitrary protocols for remote access by clients; the protocol by which a service is accessed is an aspect independent of the service itself. Thus, one can make a naming service available for use via RMI/JRMP, RMI/HTTP, RMI/SOAP, or any arbitrary custom transport.
Let's begin our discussion of the detached invoker architecture with an overview of the components involved. The main components in the detached invoker architecture are shown in Figure 2.21, “The main components in the detached invoker architecture”.
On the client side, there exists a client proxy which exposes the interface(s) of the MBean service. This is the same smart, compile-less dynamic proxy that we use for EJB home and remote interfaces; the client-side interceptors and the invoker stub it contains are represented by the elements shown inside the client proxy in Figure 2.21.
In the section on connecting to the JMX server we mentioned that there was a service that allows one to access the javax.management.MBeanServer via any protocol using an invoker service. In this section we present the org.jboss.jmx.connector.invoker.InvokerAdaptorService and its configuration for access via RMI/JRMP as an example of the steps required to provide remote access to an MBean service.
The InvokerAdaptorService is a simple MBean service that only exists to fulfill the target MBean role in the detached invoker pattern.
Example 2.16. The InvokerAdaptorService MBean
Let's go through the key details of this service. The InvokerAdaptorServiceMBean Standard MBean interface of the InvokerAdaptorService has a single ExportedInterface attribute and a single invoke(Invocation) operation. The ExportedInterface attribute allows customization of the type of interface the service exposes to clients. This has to be compatible with the MBeanServer class in terms of method name and signature. The invoke(Invocation) operation is the required entry point that target MBean services must expose to participate in the detached invoker pattern. This operation is invoked by the detached invoker services that have been configured to provide access to the InvokerAdaptorService.
Lines 54-64 of the InvokerAdaptorService build the hash code to Method mapping for the methods of the ExportedInterface using the MarshalledInvocation.calculateHash(Method) utility. Since java.lang.reflect.Method instances are not serializable, a MarshalledInvocation carries only the method hash, which must be mapped back to the Method on the server side.
Line 64 creates a mapping between the InvokerAdaptorService service name and its hash code representation. This is used by detached invokers to determine what the target MBean ObjectName of an Invocation is. When the target MBean name is stored in the Invocation, it is stored as its hashCode because ObjectNames are relatively expensive objects to create. The org.jboss.system.Registry is a global map-like construct that invokers use to store the hash code to ObjectName mappings in.
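A minimal sketch of this registration as it might appear in a target service's life cycle methods (serviceName is the field provided by ServiceMBeanSupport):

import org.jboss.system.Registry;

protected void startService() throws Exception
{
    // Map the ObjectName hash code to the ObjectName itself so that
    // detached invokers can recover the target MBean name
    Registry.bind(new Integer(serviceName.hashCode()), serviceName);
}

protected void stopService() throws Exception
{
    Registry.unbind(new Integer(serviceName.hashCode()));
}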
Lines 77-93 obtain the name of the MBean on which the MBeanServer operation is being performed and look up the class loader associated with the MBean's deployment, so that the invocation runs with the correct thread context class loader.
Lines 101-105 install the ExposedInterface class method hash to method mapping if the invocation argument is of type MarshalledInvocation. The method mapping calculated previously at lines 54-62 is used here.
Lines 107-114 perform a second mapping from the ExposedInterface Method to the matching method of the MBeanServer class. The InvokerAdaptorService decouples the ExposedInterface from the MBeanServer class in that it allows an arbitrary interface. This is needed on one hand because the standard java.lang.reflect.Proxy class can only proxy interfaces. It also allows one to only expose a subset of the MBeanServer methods and to add transport specific exceptions like java.rmi.RemoteException to the ExposedInterface method signatures.
Line 115 dispatches the MBeanServer method invocation to the MBeanServer instance to which the InvokerAdaptorService was deployed. The server instance variable is inherited from the ServiceMBeanSupport superclass.
Lines 117-124 handle any exceptions coming from the reflective invocation including the unwrapping of any declared exception thrown by the invocation.
Line 126 is the return of the successful MBeanServer method invocation result.
Note that the InvokerAdaptorService MBean does not deal directly with any transport specific details. There is the calculation of the method hash to Method mapping, but this is a transport independent detail.
Now we will start by presenting the proxy factory and InvokerAdaptorService configurations found in the default setup in the jmx-invoker-adaptor-service.sar deployment. Example 2.17, “The default jmx-invoker-adaptor-server.sar jboss-service.xml deployment descriptor” shows the jboss-service.xml descriptor for this deployment.
Example 2.17. The default jmx-invoker-adaptor-server.sar jboss-service.xml deployment descriptor
The org.jboss.invocation.jrmp.server.JRMPInvoker class is an MBean service that provides the RMI/JRMP implementation of the Invoker interface. The JRMPInvoker exports itself as an RMI server so that when it is used as the Invoker in a remote client, the JRMPInvoker stub is sent to the client instead and invocations use the RMI/JRMP protocol.
The JRMPInvoker MBean supports a number of attributes to configure the RMI/JRMP transport layer. Its configurable attributes are:
RMIObjectPort: sets the RMI server socket listening port number. This is the port RMI clients will connect to when communicating through the proxy interface. The default setting in the jboss-service.xml descriptor is 4444, and if not specified, the attribute defaults to 0 to indicate an anonymous port should be used.
RMIClientSocketFactory: specifies a fully qualified class name for the java.rmi.server.RMIClientSocketFactory interface to use during export of the proxy interface.
RMIServerSocketFactory: specifies a fully qualified class name for the java.rmi.server.RMIServerSocketFactory interface to use during export of the proxy interface.
ServerAddress: specifies the interface address that will be used for the RMI server socket listening port. This can be either a DNS hostname or a dot-decimal Internet address. Since the RMIServerSocketFactory does not support a method that accepts an InetAddress object, this value is passed to the RMIServerSocketFactory implementation class using reflection. A check for the existence of a public void setBindAddress(java.net.InetAddress addr) method is made, and if one exists the RMIServerSocketAddr value is passed to the RMIServerSocketFactory implementation. If the RMIServerSocketFactory implementation does not support such a method, the ServerAddress value will be ignored.
SecurityDomain: specifies the JNDI name of an org.jboss.security.SecurityDomain interface implementation to associate with the RMIServerSocketFactory implementation. The value will be passed to the RMIServerSocketFactory using reflection to locate a method with a signature of public void setSecurityDomain(org.jboss.security.SecurityDomain d). If no such method exists the SecurityDomain value will be ignored.
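Put together, a JRMPInvoker configuration in jboss-service.xml might look like the following sketch; the service name matches the default setup, while the attribute values are illustrative:

<mbean code="org.jboss.invocation.jrmp.server.JRMPInvoker"
       name="jboss:service=invoker,type=jrmp">
    <attribute name="RMIObjectPort">4444</attribute>
    <attribute name="ServerAddress">${jboss.bind.address}</attribute>
</mbean>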
The org.jboss.invocation.pooled.server.PooledInvoker is an MBean service that provides RMI over a custom socket transport implementation of the Invoker interface. The PooledInvoker exports itself as an RMI server so that when it is used as the Invoker in a remote client, the PooledInvoker stub is sent to the client instead and invocations use the custom socket protocol.
The PooledInvoker MBean supports a number of attributes to configure the socket transport layer. Its configurable attributes are:
NumAcceptThreads: The number of threads that exist for accepting client connections. The default is 1.
MaxPoolSize: The number of server threads for processing client requests. The default is 300.
SocketTimeout: The socket timeout value passed to the Socket.setSoTimeout() method. The default is 60000.
ServerBindPort: The port used for the server socket. A value of 0 indicates that an anonymous port should be chosen.
ClientConnectAddress: The address that the client passes to the Socket(addr, port) constructor. This defaults to the server InetAddress.getLocalHost() value.
ClientConnectPort: The port that the client passes to the Socket(addr, port) constructor. The default is the port of the server listening socket.
ClientMaxPoolSize: The client side maximum number of threads. The default is 300.
Backlog: The backlog associated with the server accept socket. The default is 200.
EnableTcpNoDelay: A boolean flag indicating if client sockets will enable the TcpNoDelay flag on the socket. The default is false.
ServerBindAddress: The address on which the server binds its listening socket. The default is an empty value which indicates the server should be bound on all interfaces.
TransactionManagerService: The JMX ObjectName of the JTA transaction manager service.
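A configuration sketch with a few of these attributes set explicitly; the service name follows the usual convention and the values are illustrative:

<mbean code="org.jboss.invocation.pooled.server.PooledInvoker"
       name="jboss:service=invoker,type=pooled">
    <attribute name="NumAcceptThreads">1</attribute>
    <attribute name="MaxPoolSize">300</attribute>
    <attribute name="SocketTimeout">60000</attribute>
    <attribute name="ServerBindPort">4445</attribute>
    <attribute name="EnableTcpNoDelay">false</attribute>
</mbean>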
The org.jboss.invocation.iiop.IIOPInvoker class is an MBean service that provides the RMI/IIOP implementation of the Invoker interface. The IIOPInvoker routes IIOP requests to CORBA servants. This is used by the org.jboss.proxy.ejb.IORFactory proxy factory to create RMI/IIOP proxies. However, rather than creating Java proxies (as the JRMP proxy factory does), this factory creates CORBA IORs. An IORFactory is associated with a given enterprise bean. It registers with the IIOP invoker two CORBA servants: an EjbHomeCorbaServant for the bean's EJBHome and an EjbObjectCorbaServant for the bean's EJBObjects.
The IIOPInvoker MBean has no configurable properties, since all properties are configured from the conf/jacorb.properties property file used by the JacORB CORBA service.
The org.jboss.invocation.jrmp.server.JRMPProxyFactory MBean service is a proxy factory that can expose any interface with RMI compatible semantics for access to remote clients using JRMP as the transport.
The JRMPProxyFactory supports the following attributes:
InvokerName: The server side JRMPInvoker MBean service JMX ObjectName string that will handle the RMI/JRMP transport.
TargetName: The server side MBean that exposes the invoke(Invocation) JMX operation for the exported interface. This is used as the destination service for any invocations done through the proxy.
JndiName: The JNDI name under which the proxy will be bound.
ExportedInterface: The fully qualified class name of the interface that the proxy implements. This is the typed view of the proxy that the client uses for invocations.
ClientInterceptors: An XML fragment of interceptors/interceptor elements with each interceptor element body specifying the fully qualified class name of an org.jboss.proxy.Interceptor implementation to include in the proxy interceptor stack. The ordering of the interceptors/interceptor elements defines the order of the interceptors.
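For example, a hypothetical service interface could be exposed over RMI/JRMP with a configuration along these lines; the target service, interface, and JNDI names are made up for illustration, while the two interceptor classes are the standard JBoss proxy interceptors:

<mbean code="org.jboss.invocation.jrmp.server.JRMPProxyFactory"
       name="jboss.book:service=MyServiceProxyFactory">
    <attribute name="InvokerName">jboss:service=invoker,type=jrmp</attribute>
    <attribute name="TargetName">jboss.book:service=MyService</attribute>
    <attribute name="JndiName">MyServiceProxy</attribute>
    <attribute name="ExportedInterface">org.jboss.book.MyServiceInterface</attribute>
    <attribute name="ClientInterceptors">
        <interceptors>
            <interceptor>org.jboss.proxy.ClientMethodInterceptor</interceptor>
            <interceptor>org.jboss.invocation.InvokerInterceptor</interceptor>
        </interceptors>
    </attribute>
</mbean>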
The org.jboss.invocation.http.server.HttpInvoker MBean service provides support for making invocations into the JMX bus over HTTP. Unlike the JRMPInvoker, the HttpInvoker is not an implementation of Invoker, but it does implement the Invoker.invoke method. The HttpInvoker is accessed indirectly by issuing an HTTP POST against the org.jboss.invocation.http.servlet.InvokerServlet. The HttpInvoker exports a client side proxy in the form of the org.jboss.invocation.http.interfaces.HttpInvokerProxy class, which is an implementation of Invoker, and is serializable. The HttpInvoker is a drop-in replacement for the JRMPInvoker as the target of the bean-invoker and home-invoker EJB configuration elements. The HttpInvoker and InvokerServlet are deployed in the http-invoker.sar discussed in the JNDI chapter in the section entitled Accessing JNDI over HTTP.
The HttpInvoker supports the following attributes:

InvokerURL: The full URL of the InvokerServlet against which clients will POST invocations. Either this attribute or the InvokerURLPrefix/InvokerURLSuffix pair may be specified.

InvokerURLPrefix and InvokerURLSuffix: The prefix and suffix from which the invoker URL is built as invokerURLPrefix + host + invokerURLSuffix when InvokerURL is not given.

UseHostName: A boolean flag indicating if the InetAddress.getHostName() or getHostAddress() method should be used as the host component of invokerURLPrefix + host + invokerURLSuffix. If true getHostName() is used, otherwise getHostAddress() is used.
The org.jboss.proxy.generic.ProxyFactoryHA service is an extension of the JRMPProxyFactory that is a cluster aware factory. The ProxyFactoryHA fully supports all of the attributes of the JRMPProxyFactory. This means that customized bindings of the port, interface and socket transport are available to clustered RMI/JRMP as well. In addition, the following cluster specific attributes are supported:
PartitionObjectName: The JMX ObjectName of the cluster service with which the proxy is to be associated.
LoadBalancePolicy: The class name of the org.jboss.ha.framework.interfaces.LoadBalancePolicy interface implementation to associate with the proxy.
The RMI/HTTP layer allows for software load balancing of the invocations in a clustered environment; an HA-capable extension of the HTTP invoker provides this.
The org.jboss.invocation.http.server.HttpProxyFactory MBean service is a proxy factory that can expose any interface with RMI compatible semantics for access to remote clients using HTTP as the transport.
The HttpProxyFactory supports the following attributes:
InvokerName: The server side MBean that exposes the invoke operation for the exported interface. The name is embedded into the HttpInvokerProxy context as the target to which the invocation should be forwarded by the HttpInvoker.
JndiName: The JNDI name under which the HttpInvokerProxy will be bound. This is the name clients lookup to obtain the dynamic proxy that exposes the service interfaces and marshalls invocations over HTTP. This may be specified as an empty value to indicate that the proxy should not be bound into JNDI.

InvokerURL, InvokerURLPrefix, InvokerURLSuffix: The URL of the InvokerServlet that the proxy posts invocations to, either given in full or built from its prefix and suffix components plus the host, as for the HttpInvoker.

UseHostName: A boolean flag indicating if the InetAddress.getHostName() or getHostAddress() method should be used as the host component of invokerURLPrefix + host + invokerURLSuffix. If true getHostName() is used, otherwise getHostAddress() is used.
ExportedInterface: The name of the RMI compatible interface that the HttpInvokerProxy implements.
Using the HttpProxyFactory MBean and JMX, you can expose any interface for access using HTTP as the transport. The interface to expose does not have to be an RMI interface, but it does have to be compatible with RMI in that all method parameters and return values are serializable. There is also no support for converting RMI interfaces used as method parameters or return values into their stubs.
The three steps to making your object invocable via HTTP are:
Create a mapping of longs to the RMI interface methods using the MarshalledInvocation.calculateHash method. Here, for example, is the procedure for an RMI SRPRemoteServerInterface interface:
import java.lang.reflect.Method;
import java.util.HashMap;
import org.jboss.invocation.MarshalledInvocation;

HashMap marshalledInvocationMapping = new HashMap();
// Build the Naming interface method map
Method[] methods = SRPRemoteServerInterface.class.getMethods();
for(int m = 0; m < methods.length; m ++) {
    Method method = methods[m];
    Long hash = new Long(MarshalledInvocation.calculateHash(method));
    marshalledInvocationMapping.put(hash, method);
}
Either create or extend an existing MBean to support an invoke operation. Its signature is Object invoke(Invocation invocation) throws Exception, and the steps it performs are as shown here for the SRPRemoteServerInterface interface. Note that this uses the marshalledInvocationMapping from step 1 to map from the Long method hashes in the MarshalledInvocation to the Method for the interface.
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.UndeclaredThrowableException;
import org.jboss.invocation.Invocation;
import org.jboss.invocation.MarshalledInvocation;

public Object invoke(Invocation invocation) throws Exception
{
    SRPRemoteServerInterface theServer = <the_actual_rmi_server_object>;
    // Set the method hash to Method mapping
    if (invocation instanceof MarshalledInvocation) {
        MarshalledInvocation mi = (MarshalledInvocation) invocation;
        mi.setMethodMap(marshalledInvocationMapping);
    }
    // Invoke the Naming method via reflection
    Method method = invocation.getMethod();
    Object[] args = invocation.getArguments();
    Object value = null;
    try {
        value = method.invoke(theServer, args);
    } catch(InvocationTargetException e) {
        Throwable t = e.getTargetException();
        if (t instanceof Exception) {
            throw (Exception) t;
        } else {
            throw new UndeclaredThrowableException(t, method.toString());
        }
    }
    return value;
}
Create a configuration of the HttpProxyFactory MBean to make the RMI/HTTP proxy available through JNDI. For example:
<!-- Expose the SRP service interface via HTTP --> <mbean code="org.jboss.invocation.http.server.HttpProxyFactory" name="jboss.security.tests:service=SRP/HTTP"> <attribute name="InvokerURL"></attribute> <attribute name="InvokerName">jboss.security.tests:service=SRPService</attribute> <attribute name="ExportedInterface"> org.jboss.security.srp.SRPRemoteServerInterface </attribute> <attribute name="JndiName">srp-test-http/SRPServerInterface</attribute> </mbean>
Any client may now lookup the RMI interface from JNDI using the name specified in the HttpProxyFactory (e.g., srp-test-http/SRPServerInterface) and use the obtained proxy in exactly the same manner as the RMI/JRMP version.
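A client lookup for the example above might look like this sketch:

import javax.naming.InitialContext;
import org.jboss.security.srp.SRPRemoteServerInterface;

InitialContext ctx = new InitialContext();
SRPRemoteServerInterface server = (SRPRemoteServerInterface)
    ctx.lookup("srp-test-http/SRPServerInterface");
// Invocations on server are now marshalled over HTTP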
Hibernate Data Source
The Hibernate Data Source enables a user to specify a Hibernate Configuration or JNDI URL.
Hibernate Data Set
When the Hibernate Data Source is set up, the user can create a Hibernate Data Set using an HQL query, in the same way that the JDBC driver creates a SQL query.
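For example, a data set query over a hypothetical Person entity could be as simple as:

select p.id, p.firstName, p.lastName from Person p order by p.lastName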
Integration with Seam
The JBoss BIRT Integration feature contains the "birt" tag that allows the user to add a BIRT report to an .xhtml file.
Deployment
A basic understanding of BIRT is required to use the integration; see the BIRT homepage for details.
Below we describe the configuration currently needed; this might change before GA.
In any case, when configured correctly you will be able to view/render the designed reports in your Seam (or any other web) application.
A Seam project that includes the BIRT facet can be deployed as any project.
If you define the Hibernate ODA driver, the JBoss BIRT engine will use a JNDI URL
that has to be bound to either a Hibernate Session Factory or a Hibernate Entity Manager Factory.
Any Seam project with the BIRT facet that
uses the Hibernate ODA driver has to bind a Hibernate session factory
or a Hibernate entity manager factory. It doesn't matter which of
these two factories the user binds because the Hibernate ODA driver
will recognize the type of the object.
When creating a Seam EAR project, Hibernate Entity Manager Factory is
bound to java:/{projectName}EntityManagerFactory. All the user needs
to do is use the Hibernate Configuration created automatically. The
user can use default values for the Hibernate Configuration and JNDI
URL within the BIRT Hibernate Data Source.
When using a Seam WAR project, neither HSF nor HEMF is bound to JNDI by default.
The user has to do this manually.
For instance,
HSF can be bound to JNDI by adding the following property to the persistence.xml file:
<property name="hibernate.session_factory_name" value="java:/projectname"/>
Then the user can use 'java:/projectname' as the JNDI URL property when creating a BIRT Hibernate Data Source.
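In context, the property goes into the properties section of the persistence unit; the unit, provider and data source names below are a hedged sketch rather than generated configuration:

<persistence-unit name="projectname">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>java:/DefaultDS</jta-data-source>
    <properties>
        <property name="hibernate.session_factory_name" value="java:/projectname"/>
    </properties>
</persistence-unit>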
For more details, see JBIDE-2220. | http://docs.jboss.org/tools/whatsnew/birt/birt-news-1.0.0.Alpha1.html | 2009-07-04T13:00:15 | crawl-002 | crawl-002-007 | [] | docs.jboss.org |
Hibernate Tools is a toolset for Hibernate 3 and related projects. The tools provide Ant tasks and Eclipse plugins for performing reverse engineering, code generation, visualization and interaction with Hibernate.
First, let's look through the list of key features that you can benefit from if you start using Hibernate Tools.
Hibernate Tools can be used "standalone" via Ant 1.6.x or fully integrated into Eclipse (see Section 5.4.7, “Generic Hibernate metamodel exporter (<hbmtemplate>)” for the Ant task details).
Please note that these tools do not try to hide any functionality of Hibernate. The tools make working with Hibernate easier, but you are still encouraged/required to read the Hibernate documentation to fully utilize Hibernate Tools, and especially Hibernate itself.
Hibernate mapping files are used to specify how your objects are related to database tables.
To create a skeleton mapping file, i.e. any .hbm.xml, Hibernate Tools provides a basic wizard which you can bring up by navigating to New > Hibernate XML mapping file.
At first you'll be asked to specify the location and the name for a new mapping file. On the next dialog you should type or browse the class to map.
Pressing Finish creates the file and opens it in the structured hbm.xml editor.
If you start the wizard from the selected class, all values will be detected there automatically.
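For reference, a completed mapping for a hypothetical Person class might look like this:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="org.example.Person" table="PERSON">
        <id name="id" column="PERSON_ID">
            <generator class="native"/>
        </id>
        <property name="name" column="NAME"/>
    </class>
</hibernate-mapping>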
Start the wizard by clicking New > Other (Ctrl+N), then Hibernate > Hibernate Configuration File (cfg.xml). After finishing the wizard, the hibernate.cfg.xml will be automatically opened in an editor. The last option, Create Console Configuration, is enabled by default; when enabled, it will automatically use the hibernate.cfg.xml as the basis of a Console configuration.
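The generated file is an ordinary Hibernate configuration file; a minimal sketch with illustrative connection settings might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property>
        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="hibernate.connection.url">jdbc:hsqldb:hsql://localhost</property>
        <property name="hibernate.connection.username">sa</property>
        <mapping resource="org/example/Person.hbm.xml"/>
    </session-factory>
</hibernate-configuration>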
A Console configuration describes how the Hibernate plugin should configure Hibernate and which configuration files and classpath entries are needed to load the POJOs, JDBC drivers, etc. It is required to make use of query prototyping, reverse engineering and code generation. You can have multiple named console configurations. Normally you would just need one per project, but more is definitely possible if your project requires this.
The following table specifies the parameters of the Classpath tab of the wizard.
Parameters of the Mappings tab in the Hibernate Console Configuration wizard are explained below. Code generation is launched via the Open Hibernate Code Generation Dialog... menu item, where each exporter exposes its predefined properties. The Hibernate XML editors additionally provide Hibernate-specific content assist, such as Table/Column completion.
The structured editor represents the file in tree form. It also allows you to modify the structure of the file and its elements with the help of tables provided on the right-hand area.
To open any mapping file in the editor, choose Open With > Hibernate 3.0 XML Editor option from the context menu of the file. The editor should look as follows:
For the configuration file you should choose Open With > Hibernate Configuration 3.0 XML Editor option.
A reveng.xml file is used to customize and control how reverse engineering is performed by the tools. The plugins provide an editor to ease the editing of this file, and hence to configure the reverse engineering process.
The editor is intended to allow easy definition of type mappings, table include/exclude filters, and specific override settings for columns. For more details, please see Section 6.2, "hibernate.reveng.xml file".
The editor is activated as soon as a .reveng.xml file is opened. To get an initial reveng.xml file, the Reverse Engineering File Wizard can be started via Ctrl+N and then Hibernate > Hibernate Reverse Engineering File (reveng.xml).
Or you can get it via the Code Generation Launcher by checking the proper section in the Main tab of the Hibernate Code Generation Wizard.
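As a small example of what the editor produces, a reveng.xml with one type mapping and one table filter might look like this (the match pattern is illustrative):
<hibernate-reverse-engineering>
   <type-mapping>
      <!-- Map single-digit NUMERIC columns to boolean -->
      <sql-type jdbc-type="NUMERIC" precision="1" hibernate-type="boolean"/>
   </type-mapping>
   <!-- Exclude Oracle recycle-bin tables from reverse engineering -->
   <table-filter match-name="BIN$.*" exclude="true"/>
</hibernate-reverse-engineering>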
The Type Mappings page allows you to map JDBC types to any Hibernate type (including usertypes) if the default rules are not applicable. Here again, to see the database tables, press the Refresh button underneath. More about type mappings can be found further on, in the Type Mappings section.
The Table and Columns page allows you to explicitly set, e.g., which Hibernate type and property name should be used in the reverse engineered model. For more details on how to configure the tables while reverse engineering, read the Specific table configuration section.
Now that you have configured all necessary parts, you can learn how to work with the Hibernate Console perspective. To browse the model, switch to the Hibernate Configurations View. Expanding the tree allows you to browse the class/entity structure and see the relationships.
The Console Configuration does not dynamically adjust to changes done in mappings and java code. To reload the configuration select the configuration and click the Reload button in the view toolbar or in the context menu.
Besides, it's possible to open source and mapping files for objects shown in the Hibernate Configurations View. Just bring up the context menu for the object in question and select Open Source File to see the corresponding Java class, or Open Mapping File to open the proper .hbm.xml.
In order to get a visual feel on how entities are related as well as view their structures, a Mapping Diagram is provided. It is available by right clicking on the entity you want a mapping diagram for and then choosing Open Mapping Diagram.
For better navigating on the Diagram use Outline view which is available in the structural and graphical modes.
To switch over between the modes use the buttons in the top-right corner of the Outline view.
As in Hibernate Configurations View in Mapping Diagram it's also possible to open source/mapping file for a chosen object by selecting appropriate option in the context menu.
If you ask to open a source/mapping file by right clicking on any entity element, this element will be highlighted in the opened file.
Queries can be prototyped by entering them in the HQL or Criteria Editor. The query editors are opened by right-clicking the Console Configuration and selecting either HQL Editor or Hibernate Criteria Editor. The editors automatically detect the chosen configuration.
If the menu item is disabled, then you first need to create a Session Factory. That is done by simply expanding the Session Factory node.
Executing the query is done by clicking the green run button in the toolbar or pressing Ctrl+Enter .
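For example, a simple query you might prototype in the HQL editor (the Product entity is illustrative):
from Product p where p.price > :minPrice order by p.name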
Errors during creation of the Session Factory, or errors while running the HQL (such as a parse exception), will be shown embedded in the view.
As you can see on the figure, Properties view shows the number of query results as well as the time of executing.
It also displays the structure of any persistent object selected in the Hibernate Query Results View. Editing is not yet supported.
Find more on how to configure logging via a log4j property file in the Log4j documentation.
Viewing, Graphing, and Publishing Metrics
Metrics are data about the performance of your systems. By default, a set of free metrics is provided for Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic Block Store (Amazon EBS) volumes, Amazon Relational Database Service (Amazon RDS) DB instances, and Elastic Load Balancing. You can also choose to enable detailed monitoring for your Amazon EC2 instances, or add your own application metrics. Metric data is kept for a period of two weeks, enabling you to view up-to-the-minute data as well as historical data. Amazon CloudWatch can load all the metrics in your account for search, graphing, and alarms with the AWS Management Console. This includes both AWS resource metrics and application metrics that you provide.
You can use the following procedures to graph metrics in CloudWatch. After you have completed these procedures, you can then create alarms for a metric. For more information, see Creating Amazon CloudWatch Alarms. | http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html | 2016-10-21T09:00:40 | CC-MAIN-2016-44 | 1476988718034.77 | [] | docs.aws.amazon.com |
Earlier versions relied on a tool that is derived from PHP_CodeSniffer but incompatible with the original tool. Sonar PHP Plugin 0.3 and later versions rely directly on PHP_CodeSniffer.
Public key authentication
SSH, SCP, and SFTP adapters can use public-key-based authentication when servicing adapter requests. This feature is an alternative to password-based authentication. To use public key authentication, define the file location of the SSH key file and an associated pass phrase. The following table describes optional elements that you can use for the adapter configuration and dynamic targets in the adapter requests.
The default authentication method is password-based; if a <password> element is present in an adapter configuration or the dynamic target node of an adapter request, password authentication is used, regardless of the presence of <private-key-file> and <pass-phrase> elements. If the <password> element is omitted, the <private-key-file> and <pass-phrase> elements are used.
Optional elements for public key authentication
The following figure shows an XML sample using the optional elements for public key authentication.
XML sample of public key authentication optional elements
...
<target>
   <host>test.target1.com</host>
   <port>22</port>
   <user-name>user1</user-name>
   <private-key-file>/usr/home/user1/.ssh/id_dsa</private-key-file>
   <pass-phrase>cGFzcyBwaHJhc2U=</pass-phrase>
   <prompt>user1$</prompt>
   <known-hosts-config>/path/to/known_hosts</known-hosts-config>
   <allow-unknown-hosts>false</allow-unknown-hosts>
   <preferred-pk-algorithm>ssh-dss</preferred-pk-algorithm>
</target>
...
Hadoop Authentication with FreeIPA for ML Workspaces
CDP uses FreeIPA to provide centralised identity management. FreeIPA combines four identity management capabilities: an LDAP user directory, a Kerberos KDC, a DNS server for shared services, and a shared Certificate Authority. This method of identity management, where your users/groups are maintained in FreeIPA and passwords are authenticated via SSO to Active Directory, provides the infrastructure needed for CDP services, without requiring you to expose your AD over the network.
This procedure is required if you want to run Spark workloads in an ML workspace.
- Gather your FreeIPA credentials from the CDP Management Console.
- Log in to the CDP web interface.
- From the bottom-left corner of the navigation bar, click on your username and go to your user profile.
- Click Set FreeIPA Password and set a password.
- Make a note of the following credentials. These will be required later.
- The workload cluster username that is available in the Workload User Name field on your profile page.
- The FreeIPA password configured in the previous step.
- Click ML Workspaces and navigate to your ML workspace.
- Go to the top-right dropdown menu and click Account settings > Hadoop Authentication.
- Enter the FreeIPA credentials from step 1d and click Authenticate.
- Kerberos Principal:
<workload_username>
- Password: FreeIPA password configured in the previous step. | https://docs.cloudera.com/machine-learning/1.0/security/topics/ml-kerberos.html | 2020-10-20T07:02:10 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.cloudera.com |
OSGi
The driver is available as an OSGi bundle. More specifically, the following maven artifacts are valid OSGi bundles:
java-driver-core
java-driver-query-builder
java-driver-core-shaded
By default, the driver uses Thread.currentThread().getContextClassLoader() if available; otherwise it uses its own ClassLoader. This is typically adequate except in environments like application containers or OSGi frameworks, where class loading logic is much more deliberate and libraries are isolated from each other.
If the chosen ClassLoader is not able to ascertain whether a loaded class is the same instance as its expected parent type, class cast or service loading errors may occur.
You may also encounter ClassNotFoundException if the ClassLoader does not have access to the class being loaded.
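If you need to override the driver's choice, the session builder lets you supply a ClassLoader explicitly. A minimal sketch (MyActivator stands in for any class loaded by your bundle):
import com.datastax.oss.driver.api.core.CqlSession;

CqlSession session = CqlSession.builder()
    // Force the bundle's own class loader instead of the thread context one
    .withClassLoader(MyActivator.class.getClassLoader())
    .build();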
What does the “Error loading libc” DEBUG message mean? | https://docs.datastax.com/en/developer/java-driver/4.0/manual/osgi/ | 2020-10-20T06:02:07 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.datastax.com |
To access content on this site, you need to sign in with an approved partner domain account. Your contact at Google creates a partner domain for you, but you need to accept and configure your account. See Partner Domain Accounts.
Once you have access to your partner domain account, sign in with that account to access the content on this site.
Troubleshooting
If you're signed in but don't have content access or receive a 404 error when trying to access the documentation, confirm that you're signed in with your partner domain account and not another Google account. If you continue having trouble accessing content, send a message to your contact at Google. | https://docs.jibemobile.com/access | 2020-10-20T06:32:06 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.jibemobile.com |
Welcome to the Azure CLI! This article introduces the CLI and helps you complete common tasks.
Install or run in Azure Cloud Shell.
Note
If you're using the Azure classic deployment model, install the Azure classic CLI.
Before using any CLI commands with a local install, you need to sign in with az login.
Run the login command.
az login
If the CLI can open your default browser, it will do so and load an Azure sign-in page.
Otherwise, open a browser page at https://aka.ms/devicelogin and enter the authorization code displayed in your terminal.
After logging in, you see a list of subscriptions associated with your Azure account. The subscription information with isDefault: true is the currently activated subscription after logging in. To select another subscription, use the az account set command with the subscription ID to switch to. For more information about subscription selection, see Use multiple Azure subscriptions.
There are ways to sign in non-interactively, which are covered in detail in Sign in with Azure CLI.
Common commands
This table lists some common commands used in the CLI and links to their reference documentation.
Finding commands
Commands in the CLI are organized as commands of groups. Each group represents an Azure service, and commands operate on that service.
To search for commands, use az find. For example, to search for command names containing secret, use the following command:
az find secret
Use the --help argument to get a complete list of commands and subgroups of a group. For example, to find the CLI commands for working with Network Security Groups (NSGs):
az network nsg --help
The CLI has full tab completion for commands under the bash shell.
Globally available arguments
There are some arguments that are available for every command.
--help prints CLI reference information about commands and their arguments and lists available subgroups and commands.
--output changes the output format. The available output formats are json, jsonc (colorized JSON), tsv (Tab-Separated Values), table (human-readable ASCII tables), and yaml. By default the CLI outputs json. To learn more about the available output formats, see Output formats for Azure CLI.
--query uses the JMESPath query language to filter the output returned from Azure services. To learn more about queries, see Query command results with Azure CLI and the JMESPath tutorial; an example follows this list.
--verbose prints information about resources created in Azure during an operation, and other useful information.
--debug prints even more information about CLI operations, used for debugging purposes. If you find a bug, provide output generated with the --debug flag on when submitting a bug report.
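For example, combining --query and --output to list only selected fields (the filter values are hypothetical):
az vm list --query "[?location=='eastus'].{Name:name, Group:resourceGroup}" --output table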
Interactive mode
The CLI offers an interactive mode that automatically displays help information and makes it easier to select subcommands. You enter interactive mode with the az interactive command.
az interactive
For more information on interactive mode, see Azure CLI Interactive Mode.
There's also a Visual Studio Code plugin that offers an interactive experience, including autocomplete and mouse-over documentation.
Learn CLI basics with quickstarts and tutorials
To get you started with the Azure CLI, try an in-depth tutorial for setting up virtual machines and using the power of the CLI to query Azure resources.
There are also quickstarts for other popular services.
- Create a storage account using the Azure CLI
- Transfer objects to/from Azure Blob storage using the CLI
- Create a single Azure SQL database using the Azure CLI
- Create an Azure Database for MySQL server using the Azure CLI
- Create an Azure Database for PostgreSQL using the Azure CLI
- Create a Python web app in Azure
- Run a custom Docker Hub image in Azure Web Apps for Containers
Give feedback
We welcome your feedback for the CLI to help us make improvements and resolve bugs. You can file an issue on GitHub or use the built-in features of the CLI to leave general feedback with the az feedback command.
az feedback | https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli?WT.mc_id=online-jhipster-judubois | 2020-10-20T06:58:11 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.microsoft.com |
Ubuntu.Components.Icon
The Icon component displays an icon from the icon theme. More...
Properties
- asynchronous : bool
- color : color
- keyColor : color
- name : string
- source : url
Detailed Description
The icon theme contains a set of standard icons referred to by their name. Using icons whenever possible enhances consistency across applications. Each icon has a name and can have different visual representations depending on the size requested.
Icons can also be colorized. Setting the color property will make all pixels with the keyColor (by default #808080) colored.
Example:
Icon { width: 64 height: 64 name: "search" }
Example of colorization:
Icon { width: 64 height: 64 name: "search" color: UbuntuColors.warmGrey }
Icon themes are created following the Freedesktop Icon Theme Specification.
Property Documentation
The property drives the image loading of the icon. Defaults to false.
The color that all pixels that originally are of color keyColor should take.
The color of the pixels that should be colorized. By default it is set to #808080.
The name of the icon to display.
If both name and source are set, name will be ignored.
Note: The complete list of icons available in Ubuntu is not published yet. For now please refer to the folders where the icon themes are installed:
- Ubuntu Touch: /usr/share/icons/suru
- Ubuntu Desktop: /usr/share/icons/ubuntu-mono-dark
These 2 separate icon themes will be merged soon.
The source url of the icon to display. It has precedence over name.
If both name and source are set, name will be ignored.
This QML property was introduced in Ubuntu.Components 1.1. | https://phone.docs.ubuntu.com/en/apps/api-qml-current/Ubuntu.Components.Icon | 2020-10-20T05:28:16 | CC-MAIN-2020-45 | 1603107869933.16 | [] | phone.docs.ubuntu.com |
Activate Responsive Dashboards [com.glideapp.dashboards] to easily create, modify and share dashboards using responsive and dynamic widget layouts.
For efficiency reasons, the Atomikos transaction manager commits transactions in a separate thread from the thread making the cache operations, and until 4.2.1.CR1, Infinispan had problems with this type of scenario, which resulted in distributed caches not sending data to other nodes (see ISPN-927 for more details). Please note that replicated, invalidated or local caches would work fine. It's only distributed caches that would suffer this problem.
There're two ways to get around this issue, either:
- Upgrade to Infinispan 4.2.1.CR2 or higher where the issue has been fixed.
- If using Infinispan 4.2.1.CR1 or earlier, configure Atomikos so that com.atomikos.icatch.threaded_2pc is set to false. This results in commits happening in the same thread that made the cache operations (see the snippet after this list).
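For the second option, the flag is set in the Atomikos properties file (conventionally jta.properties on the classpath):
com.atomikos.icatch.threaded_2pc=false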
Before you can use the Mobile SSO for iOS authentication method, you must initialize the Key Distribution Center (KDC) in the VMware Identity Manager appliance.
About this task.
Prerequisites
VMware Identity Manager is installed and configured.
Realm name identified. See Using the Built-in KDC.
Procedure
- SSH into the VMware Identity Manager appliance as the root user.
- Initialize the KDC. Enter /etc/init.d/vmware-kdc init --realm {REALM.COM} --subdomain {sva-name.subdomain}.
For example, /etc/init.d/vmware-kdc init --realm MY-IDM.EXAMPLE.COM --subdomain my-idm.example.com
If you are using a load balancer with multiple identity manager appliances, use the name of the load balancer in both cases.
- Restart the VMWare Identity Manager service. Enter service horizon-workspace restart.
- Start the KDC service. Enter service vmware-kdc restart.
What to do next
Create public DNS entries. DNS records must be provisioned to allow the clients to find the KDC. See Creating Public DNS Entries for KDC with Built-In Kerberos. | https://docs.vmware.com/en/VMware-AirWatch/9.2/vidm-install/GUID-58EF2B63-C733-45DD-94CD-E4E4CA671FBB.html | 2018-01-16T09:59:52 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.vmware.com |
Delta Propagation
How Delta Propagation Works
Geode propagates object deltas using methods that you program on the client side. The methods are in the delta interface, which you implement in your cached objects’ classes.
Delta propagation uses configuration properties and a simple API to send and receive deltas.
With cloning enabled, Geode does a deep copy of the object, using serialization. You can improve performance by implementing the appropriate clone method for your API, making a deep copy of anything to which a delta may be applied.
Implementing Delta Propagation
By default, delta propagation is enabled in your distributed system and is used for objects that implement the delta interface. You program the client-side methods to extract delta information for your entries and to apply received delta information.
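A minimal C++ sketch of such an implementation follows; the class, field, and serialization details are illustrative assumptions, but the hasDelta/toDelta/fromDelta shape matches the Delta interface:
class TradeOrder : public Cacheable, public Delta {
  double m_price;
  bool m_priceChanged;

 public:
  // Only send a delta when the price actually changed.
  bool hasDelta() { return m_priceChanged; }

  // Extract just the changed state into the output stream.
  void toDelta(DataOutput& out) {
    out.writeDouble(m_price);
    m_priceChanged = false;
  }

  // Apply a received delta to the local copy.
  void fromDelta(DataInput& in) {
    in.readDouble(&m_price);
  }

  void setPrice(double price) {
    m_price = price;
    m_priceChanged = true;
  }
};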
Exceptions and Limitations
Examples of Delta Propagation
Examples describe delta propagation operations and provide implementation code for C# .NET and C++. | http://gemfire-native-90.docs.pivotal.io/native/delta-propagation/delta-propagation.html | 2018-01-16T09:50:54 | CC-MAIN-2018-05 | 1516084886397.2 | [] | gemfire-native-90.docs.pivotal.io |
Texoma Yacht Docs LLC is a marine detail service provider designed specifically for the boater with high expectations, values their time, and has a keen sense of pride in ownership. These boat owners are searching for a professional company to meet any and all needs on both the Yacht and the Dock with turnkey marine detailing and quality dock construction services including but not limited to Boat Washing, Spider Net and Shade Curtain Installation, waxing, compounding, polishing, and boathouse construction.
In operation since 1997, Yacht Docs is dedicated to both private owners and brokers at numerous marinas across North Texas and Southern Oklahoma, applying great emphasis on boat detailing, scheduled boat washing, and boathouse build projects where quality and design are key.
With countless marine service provisions, we can create custom packages designed to meet your specific boat cleaning needs and preferences. As a boat owner you made a purchase with one goal in mind… Relaxation, and Yacht Docs goal is to make sure you look good doing it.
From cleaning to construction, let us bring your boat back to life. Contact us.
Service Bus FAQ
This article discusses some frequently asked questions about Microsoft Azure Service Bus. You can also visit the Azure Support FAQs for general Azure pricing and support information.
General questions about Azure Service Bus
What is Azure Service Bus?
Azure Service Bus is an asynchronous messaging cloud platform that enables you to send data between decoupled systems. Microsoft offers this feature as a service, which means that you do not need to host any of your own hardware in order to use it.
What is a Service Bus namespace?
A namespace provides a scoping container for addressing Service Bus resources within your application. Creating a namespace is necessary to use Service Bus and is one of the first steps in getting started.
What is an Azure Service Bus queue?
A Service Bus queue is an entity in which messages are stored. Queues are useful when you have multiple applications, or multiple parts of a distributed application that need to communicate with each other. The queue is similar to a distribution center in that multiple products (messages) are received and then sent from that location.
What are Azure Service Bus topics and subscriptions?
A topic can be visualized as a queue and when using multiple subscriptions, it becomes a richer messaging model; essentially a one-to-many communication tool. This publish/subscribe model (or pub/sub) enables an application that sends a message to a topic with multiple subscriptions to have that message received by multiple applications.
What is a partitioned entity?
Note that ordering is not ensured when using partitioned entities. In the event that a partition is unavailable, you can still send and receive messages from the other partitions.
Best practices
What are some Azure Service Bus best practices?
See Best practices for performance improvements using Service Bus – this article describes how to optimize performance when exchanging messages.
What should I know before creating entities?
The following properties of a queue and topic are immutable. Consider this limitation when you provision your entities, as these properties cannot be modified without creating a new replacement entity.
- Size
- Partitioning
- Sessions
- Duplicate detection
- Express entity
This section answers some frequently asked questions about the Service Bus pricing structure.
The Service Bus pricing and billing article explains the billing meters in Service Bus. For specific information about Service Bus pricing options, see Service Bus pricing details.
You can also visit the Azure Support FAQs for general Azure pricing information.
How do you charge for Service Bus?
For complete information about Service Bus pricing, see Service Bus pricing details. In addition to the prices noted, you are charged for associated data transfers for egress outside of the data center in which your application is provisioned.
What usage of Service Bus is subject to data transfer? What is not?
Any data transfer within a given Azure region is provided at no charge, as well as any inbound data transfer. Data transfer outside a region is subject to egress charges, which can be found here.
Does Service Bus charge for storage?
No, Service Bus does not charge for storage. However, there is a quota limiting the maximum amount of data that can be persisted per queue/topic. See the next FAQ.
Quotas
For a list of Service Bus limits and quotas, see the Service Bus quotas overview.
Does Service Bus have any usage quotas?
By default, for any cloud service Microsoft sets an aggregate monthly usage quota that is calculated across all of a customer's subscriptions. Because we understand that you may need more than these limits, you can contact customer service at any time so that we can understand your needs and adjust these limits appropriately. For Service Bus, the aggregate usage quota is 5 billion messages per month.
While we do reserve the right to disable a customer account that has exceeded its usage quotas in a given month, we provide e-mail notification and make multiple attempts to contact a customer before taking any action. Customers exceeding these quotas are still responsible for charges that exceed the quotas.
As with other services on Azure, Service Bus enforces a set of specific quotas to ensure that there is fair usage of resources. You can find more details about these quotas in the Service Bus quotas overview.
Troubleshooting
What are some of the exceptions generated by Azure Service Bus APIs and their suggested actions?
For a list of possible Service Bus exceptions, see Exceptions overview.
What is a Shared Access Signature and which languages support generating a signature?
Shared Access Signatures are an authentication mechanism based on SHA – 256 secure hashes or URIs. For information about how to generate your own signatures in Node, PHP, Java, and C#, see the Shared Access Signatures article.
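For illustration, here is a minimal C# sketch of generating such a signature; the resource URI, key name, and key are placeholders:
using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static string CreateSasToken(string resourceUri, string keyName, string key)
{
    // Expiry as seconds since the Unix epoch (one hour from now).
    string expiry = DateTimeOffset.UtcNow.AddHours(1)
        .ToUnixTimeSeconds().ToString(CultureInfo.InvariantCulture);

    // Sign the URL-encoded resource URI plus the expiry with HMAC-SHA256.
    string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        string signature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        return string.Format(CultureInfo.InvariantCulture,
            "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature),
            expiry, keyName);
    }
}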
Subscription and namespace management
How do I migrate a namespace to another Azure subscription?
You can move a namespace from one Azure subscription to another, using either the Azure portal or PowerShell commands. In order to execute the operation, the namespace must already be active. The user executing the commands must be an administrator on both the source and target subscriptions.
Portal
To use the Azure portal to migrate Service Bus namespaces to another subscription, follow the directions here.
PowerShell
The following sequence of PowerShell commands moves a namespace from one Azure subscription to another. To execute this operation, the namespace must already be active, and the user running the PowerShell commands must be an administrator on both the source and target subscriptions.
# Create a new resource group in target subscription
Select-AzureRmSubscription -SubscriptionId 'ffffffff-ffff-ffff-ffff-ffffffffffff'
New-AzureRmResourceGroup -Name 'targetRG' -Location 'East US'

# Move namespace from source subscription to target subscription
Select-AzureRmSubscription -SubscriptionId 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
$res = Find-AzureRmResource -ResourceNameContains mynamespace -ResourceType 'Microsoft.ServiceBus/namespaces'
Move-AzureRmResource -DestinationResourceGroupName 'targetRG' -DestinationSubscriptionId 'ffffffff-ffff-ffff-ffff-ffffffffffff' -ResourceId $res.ResourceId
Next steps
To learn more about Service Bus, see the following articles: | https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-faq | 2018-01-16T09:32:52 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.microsoft.com |
These are some of the most commonly encountered problems or frequently asked questions which we receive from users. They aren’t intended as a substitute for reading the rest of the documentation, especially the usage docs, so please make sure you check those out if your question is not answered here.
Init-style start/stop/restart scripts (e.g. /etc/init.d/apache2 start) sometimes don’t like Fabric’s allocation of a pseudo-tty, which is active by default. In almost all cases, explicitly calling the command in question with pty=False works correctly:
sudo("/etc/init.d/apache2 restart", pty=False)
If you have no need for interactive behavior and run into this problem frequently, you may want to deactivate pty allocation globally by setting env.always_use_pty to False.
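For example, in your fabfile:
from fabric.api import env

env.always_use_pty = False  # never allocate a pty unless a task asks for one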
While Fabric can be used for many shell-script-like tasks, there’s a slightly unintuitive catch: each run or sudo call has its own distinct shell session. This is required in order for Fabric to reliably figure out, after your command has run, what its standard out/error and return codes were.
Unfortunately, it means that code like the following doesn’t behave as you might assume:
def deploy(): run("cd /path/to/application") run("./update.sh")
If that were a shell script, the second run call would have executed with a current working directory of /path/to/application/ – but because both commands are run in their own distinct session over SSH, it actually tries to execute $HOME/update.sh instead (since your remote home directory is the default working directory).
A simple workaround is to make use of shell logic operations such as &&, which link multiple expressions together (provided the left hand side executed without error) like so:
def deploy(): run("cd /path/to/application && ./update.sh")
Fabric provides a convenient shortcut for this specific use case, in fact: cd.
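Using cd, the earlier example becomes:
from fabric.api import cd, run

def deploy():
    with cd("/path/to/application"):
        run("./update.sh")  # executed with the cd prefix applied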
Note
You might also get away with an absolute path and skip directory changing altogether:
def deploy(): run("/path/to/application/update.sh")
However, this requires that the command in question makes no assumptions about your current working directory!
This message is typically generated by programs such as biff or mesg lurking within your remote user’s .profile or .bashrc files (or any other such files, including system-wide ones.) Fabric’s default mode of operation involves executing the Bash shell in “login mode”, which causes these files to be executed.
Because Fabric also doesn’t bother asking the remote end for a tty by default (as it’s not usually necessary) programs fired within your startup files, which expect a tty to be present, will complain – and thus, stderr output about “stdin is not a tty” or similar.
There are multiple ways to deal with this problem:
Because Fabric executes a shell on the remote end for each invocation of run or sudo (see the previous question), backgrounding a process via the shell will not work as expected. Backgrounded processes may still prevent the calling shell from exiting until they stop running, and this in turn prevents Fabric from continuing on with its own execution.
The key to fixing this is to ensure that your process’ standard pipes are all disassociated from the calling shell, which may be done in a number of ways:
Use a pre-existing daemonization technique if one exists for the program at hand – for example, calling an init script instead of directly invoking a server binary.
Run the program under nohup and redirect stdin, stdout and stderr to /dev/null (or to your file of choice, if you need the output later):
run("nohup yes >& /dev/null < /dev/null &")
(yes is simply an example of a program that may run for a long time or forever; >&, < and & are Bash syntax for pipe redirection and backgrounding, respectively – see your shell’s man page for details.)
Use tmux, screen or dtach to fully detach the process from the running shell; these tools have the benefit of allowing you to reattach to the process later on if needed (among many other such benefits).
While Fabric is written with bash in mind, it’s not an absolute requirement. Simply change env.shell to call your desired shell, and include an argument similar to bash‘s -c argument, which allows us to build shell commands of the form:
/bin/bash -l -c "<command string here>"
where /bin/bash -l -c is the default value of env.shell.
Note
The -l argument specifies a login shell and is not absolutely required, merely convenient in many situations. Some shells lack the option entirely and it may be safely omitted in such cases.
A relatively safe baseline is to call /bin/sh, which may call the original sh binary, or (on some systems) csh, and give it the -c argument, like so:
from fabric.api import env env.shell = "/bin/sh -c"
This has been shown to work on FreeBSD and may work on other systems as well.
Due to a bug of sorts in our SSH layer, it’s not currently possible for Fabric to always accurately detect the type of authentication needed. We have to try and guess whether we’re being asked for a private key passphrase or a remote server password, and in some cases our guess ends up being wrong.
The most common such situation is where you, the local user, appear to have an SSH keychain agent running, but the remote server is not able to honor your SSH key, e.g. you haven’t yet transferred the public key over or are using an incorrect username. In this situation, Fabric will prompt you with “Please enter passphrase for private key”, but the text you enter is actually being sent to the remote end’s password authentication.
We hope to address this in future releases by modifying a fork of the aforementioned SSH library.
Currently, no, it’s not – the present version of Fabric relies heavily on shared state in order to keep the codebase simple. However, there are definite plans to update its internals so that Fabric may be either threaded or otherwise parallelized so your tasks can run on multiple servers concurrently. | http://docs.fabfile.org/en/1.4.3/faq.html | 2018-01-16T09:41:17 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.fabfile.org |
Use the following command format to launch HBase:
su <user> /usr/hdp/current/slider-client/bin/./slider create <hb_cluster_name> --template appConfig.json --resources resources.json
Where
<user> is the user who installed the HBase application
package.
For example:
su <user> /usr/hdp/current/slider-client/bin/./slider create hb1 --template /usr/work/app-packages/hbase/appConfig.json --resources /usr/work/app-packages/hbase/resources.json
You can use the Slider CLI
status command to verify the application
launch:
/usr/hdp/current/slider-client/bin/./slider status <application_name>
You can also verify the successful launch of the HBase application in the YARN ResourceManager web UI and on the Slider HBase application master page.
Figure: HBase on Slider RM page
Figure: Slider HBase App Master
See Common Parameters for details.
hideStepIf
A callable or bool to determine whether this step should be shown in the waterfall and build details pages. See Common Parameters for details.
describe(done=False)
Build steps have statistics, a simple key/value store of data which can later be aggregated over all steps in a build. Note that statistics are not preserved after a build is complete.
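A brief sketch of a new-style step recording a statistic (the statistic name and count are illustrative):
from buildbot.process.buildstep import BuildStep
from buildbot.process.results import SUCCESS

class CountWarnings(BuildStep):
    def run(self):
        # Record a per-step statistic; it can be aggregated across the build.
        self.setStatistic('warnings', 10)
        # Read it back, with a default if it was never set.
        count = self.getStatistic('warnings', 0)
        return SUCCESS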
Exceptions
exception buildbot.process.buildstep.BuildStepFailed
This exception indicates that the buildstep has failed. It is useful as a way to skip all subsequent processing when a step goes wrong. It is handled by
BuildStep.failed. | http://docs.buildbot.net/0.9.15/developer/cls-buildsteps.html | 2018-01-16T09:53:30 | CC-MAIN-2018-05 | 1516084886397.2 | [] | docs.buildbot.net |
Ticket #1528 (closed defect: fixed)
Assassin in update screen will jump out "Fatal message" very often
Description
Summary:
Assassin in update screen will jump out "Fatal message" very often
Fatal message: Fatal message: system time out! Restart Assassin is recommended.
Kernel : 20080621-asu.stable-uImage.bin
Root file system :20080701-asu.stable-rootfs.jffs2
note: this is a known ticket, thank you.
Change History
comment:1 Changed 10 years ago by tick
- Status changed from new to accepted
- Owner changed from tick@… to tick
The packagekit actually timed out when searching the repository. The reason is unknown yet. I will look into this.
comment:2 Changed 10 years ago by tick
- Status changed from accepted to in_testing
Shall not happen after assassin svn 192.
comment:3 Changed 10 years ago by wendy_hung
- Status changed from in_testing to closed
- Resolution set to fixed. :)
Note: See TracTickets for help on using tickets.
About.
The Geode native client uses a set of assemblies managed by the C++ Common Language Infrastructure (C++ CLI). C++ CLI includes the libraries and objects necessary for common language types, and it is the framework for .NET applications.
The .NET API for the native client adds .NET Framework CLI language binding for the Geode native client product.
Using C#, you can write callbacks and define user objects in the cache. The following figure shows an overview of how a C# application accesses the native client C++ API functionality through C++/CLI .
Figure: C# .NET Application Accessing the C++ API
Note: This chapter uses C# as the reference language, but other .NET languages work the same way.
The Geode .NET API is provided in the
GemStone::GemFire::Cache::Generic namespace. This namespace allows you to manage your cache, regions, and data using the .NET Generics APIs.
Use the Geode .NET API to programmatically create, populate, and manage a Geode distributed system.
Note: The .NET library is thread-safe except where otherwise indicated in the API documentation.
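A brief C# sketch of using the Generic namespace; the region name and key/value types are illustrative, and the exact factory calls may vary by release:
using GemStone.GemFire.Cache.Generic;

class Example
{
    static void Main()
    {
        // Create the cache and look up a typed region.
        Cache cache = CacheFactory.CreateCacheFactory().Create();
        IRegion<string, string> region =
            cache.GetRegion<string, string>("exampleRegion");

        region["key1"] = "value1";      // put
        string value = region["key1"];  // get

        cache.Close();
    }
}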
For complete information on the APIs, see the .NET API documentation at. For general information on .NET, see the Microsoft developer’s network website. | http://gemfire-native-90.docs.pivotal.io/native/dotnet-caching-api/csharp-dotnet-api.html | 2018-01-16T09:49:55 | CC-MAIN-2018-05 | 1516084886397.2 | [array(['../common/images/6-DotNet_API-1.gif', None], dtype=object)] | gemfire-native-90.docs.pivotal.io |
This document applies to cPanel & WHM version 60.
cPanel, Inc. will attempt to provide suggestions with EOL notifications.
- For more specific information, contact your technical support provider.
- If you are unsure who to contact, visit our Support page.
List Registered Networks activity
The List Registered Networks activity retrieves all the networks associated with an Infoblox server. The network activities use the REST web service activity template to manage network addresses using an Infoblox DDI Grid Server. These activities are configured to use a MID Server with REST capabilities. To access this activity in the workflow editor, select the Custom tab, and then navigate to Custom Activities > Infoblox DDI > Network.
Input variables
JHtmlAccess/assetgrouplist
From Joomla! Documentation
Displays a Select list of the available asset groups
Syntax
static assetgrouplist($name, $selected, $attribs=null, $config=array())
Returns
mixed An HTML string or null if an error occurs
Defined in
libraries/joomla/html/html/access.php
Importing
jimport( 'joomla.html.html.access' );
Source Body
public static function assetgrouplist($name, $selected, $attribs = null, $config = array())
{
	static $count;

	$options = JHtmlAccess::assetgroups();
	if (isset($config['title'])) {
		array_unshift($options, JHtml::_('select.option', '', $config['title']));
	}

	return JHtml::_(
		'select.genericlist',
		$options,
		$name,
		array(
			'id' => isset($config['id']) ? $config['id'] : 'assetgroups_' . ++$count,
			'list.attr' => (is_null($attribs) ? 'class="inputbox" size="3"' : $attribs),
			'list.select' => (int) $selected,
			'list.translate' => true
		)
	);
}
Examples
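A minimal usage sketch (the field name and selected group are illustrative):
jimport('joomla.html.html.access');

// Render a select list of asset groups named "access", preselecting group 3.
echo JHtml::_('access.assetgrouplist', 'access', 3);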
Difference between revisions of "Menus Menu Item Article Archived"
From Joomla! Documentation
Options
The Archived Articles Layout has the following Archive Options, as shown below.
- Article Order. Order of articles in this Layout. The following options are available.
- Oldest first: Articles are displayed starting with the oldest and ending with the most recent.
- Most recent first: Articles are displayed starting with the most recent and ending with the oldest
- Title Alphabetical: Articles are displayed by Title in alphabetical order (A to Z)
- Title Reverse Alphabetical: Articles are displayed by Title in reverse alphabetical order (Z to A).
- # Articles to List. The number of articles to include in the list. Select the desired number from the list box.
- Filter Field. (Hide/Title/Author/Hits) Whether to show a Filter Field for the list of articles. If Hide, no Filter Field is shown. Otherwise, the Filter Field is shown using the selected field (Title, Author, or Hits).
- Intro Text Limit. The maximum number of characters of the Intro Text to show. If the Intro Text is longer than this value, it will be truncated to this length.
Article Options
The Archived Articles layout allows you to access old or outdated articles that you don't want to remove entirely from the site.
- If you want to be able to see old articles in a category blog or list, create a category for older articles and move them to this category (instead of changing the Published state to Archived).
Related Information
- Articles are archived using Article Manager. | https://docs.joomla.org/index.php?title=Help33:Menus_Menu_Item_Article_Archived&diff=117923&oldid=78753 | 2015-04-18T07:57:15 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
Changes related to "Should I update from Joomla! 2.5 to 3.0?"
← Should I update from Joomla! 2.5 to 3.0?
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&hideminor=1&target=Should_I_update_from_Joomla!_2.5_to_3.0%3F | 2015-04-18T08:05:27 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
How to override the component mvc from the Joomla! core
From Joomla! Documentation
NOTICE
This method only works if you install and enable the 3rd party MVC plugin - or provide your own equivalent plugin. It is fine for advanced developers - just be aware that this is not part of Joomla! Core code. For example, the plugin can load the replacement controller file like this:
require_once(dirname(__FILE__) . DS . 'comcontentoverride' . DS . 'my_content_controller.php');
In the examples below we are using an Override MVC plugin.
Difference between revisions of "Robots.txt file"
From Joomla! Documentation
The robots.txt file must be placed in the root of the site and must be named "robots.txt".
Joomla in a subdomain
Because crawlers fetch robots.txt separately for each host, a Joomla site running in a subdomain needs its own robots.txt in that subdomain's document root.
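For reference, the default robots.txt shipped with Joomla disallows the non-content directories; a typical excerpt:
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /tmp/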
Syntax checking
For syntax checking you can use a validator for robots.txt files. Try one of these:
- Motoricerca Robots.txt Checker
- Frobee Robots.txt Checker
- Search Engine Promotion Help robots.txt Checker | https://docs.joomla.org/index.php?title=Robots.txt&diff=prev&oldid=80990 | 2015-04-18T07:45:39 | CC-MAIN-2015-18 | 1429246633972.52 | [] | docs.joomla.org |
1.2.1 Normative References
We conduct frequent surveys of the normative references to assure their continued availability. If you have any issue with finding a normative reference, please contact [email protected]. We will assist you in finding the relevant information.
[MS-DRMND] Microsoft Corporation, "Windows Media Digital Rights Management (WMDRM): Network Devices Protocol".
[MS-DSLR] Microsoft Corporation, "Device Services Lightweight Remoting Protocol".
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997, | https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-drmri/2f38a83c-ffa6-4991-836b-7511207009ef | 2021-02-24T21:04:53 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.microsoft.com |
Load data from files and other sources
Learn how to load data for processing and training into ML.NET using the API. The data is originally stored in files or other data sources such as databases, JSON, XML or in-memory collections.
If you're using Model Builder, see Load training data into Model Builder.
Create the data model
ML.NET enables you to define data models via classes. For example, given the following input data:
Size (Sq. ft.), HistoricalPrice1 ($), HistoricalPrice2 ($), HistoricalPrice3 ($), Current Price ($)
700, 100000, 3000000, 250000, 500000
1000, 600000, 400000, 650000, 700000
Create a data model that represents the snippet below:
public class HousingData
{
    [LoadColumn(0)]
    public float Size { get; set; }

    [LoadColumn(1, 3)]
    [VectorType(3)]
    public float[] HistoricalPrices { get; set; }

    [LoadColumn(4)]
    [ColumnName("Label")]
    public float CurrentPrice { get; set; }
}
Annotating the data model with column attributes
Attributes give ML.NET more information about the data model and the data source.
The
LoadColumn attribute specifies your properties' column indices.
Important
LoadColumn is only required when loading data from a file.
Load columns as:
- Individual columns like
Sizeand
CurrentPricesin the
HousingDataclass.
- Multiple columns at a time in the form of a vector like
HistoricalPricesin the
HousingDataclass.
If you have a vector property, apply the
VectorType attribute to the property in your data model. It's important to note that all of the elements in the vector need to be the same type. Keeping the columns separated allows for ease and flexibility of feature engineering, but for a very large number of columns, operating on the individual columns causes an impact on training speed.
ML.NET operates through column names. If you want to change the name of a column to something other than the property name, use the ColumnName attribute. When creating in-memory objects, you still create objects using the property name. However, for data processing and building machine learning models, ML.NET overrides and references the property with the value provided in the ColumnName attribute.
Load data from a single file
To load data from a file use the
LoadFromTextFile method along with the data model for the data to be loaded. Since
separatorChar parameter defaults to the tab character, change it for your data file as needed. If your file has a header, set the
hasHeader parameter to
true to ignore the first line in the file and begin to load data from the second line.
//Create MLContext
MLContext mlContext = new MLContext();

//Load Data
IDataView data = mlContext.Data.LoadFromTextFile<HousingData>("my-data-file.csv", separatorChar: ',', hasHeader: true);
Load data from multiple files
In the event that your data is stored in multiple files, as long as the data schema is the same, ML.NET allows you to load data from multiple files that are either in the same directory or multiple directories.
Load from files in a single directory
When all of your data files are in the same directory, use wildcards in the
LoadFromTextFile method.
//Create MLContext
MLContext mlContext = new MLContext();

//Load Data File
IDataView data = mlContext.Data.LoadFromTextFile<HousingData>("Data/*", separatorChar: ',', hasHeader: true);
Load from files in multiple directories
To load data from multiple directories, use the
CreateTextLoader method to create a
TextLoader. Then, use the
TextLoader.Load method and specify the individual file paths (wildcards can't be used).
//Create MLContext
MLContext mlContext = new MLContext();

// Create TextLoader
TextLoader textLoader = mlContext.Data.CreateTextLoader<HousingData>(separatorChar: ',', hasHeader: true);

// Load Data
IDataView data = textLoader.Load("DataFolder/SubFolder1/1.txt", "DataFolder/SubFolder2/1.txt");
Load data from a relational database
ML.NET supports loading data from a variety of relational databases supported by
System.Data that include SQL Server, Azure SQL Database, Oracle, SQLite, PostgreSQL, Progress, IBM DB2, and many more.
Note
To use
DatabaseLoader, reference the System.Data.SqlClient NuGet package.
Given a database with a table named
House and the following schema:
CREATE TABLE [House] (
    [HouseId] INT NOT NULL IDENTITY,
    [Size] INT NOT NULL,
    [NumBed] INT NOT NULL,
    [Price] REAL NOT NULL,
    CONSTRAINT [PK_House] PRIMARY KEY ([HouseId])
);
The data can be modeled by a class like
HouseData.
public class HouseData
{
    public float Size { get; set; }
    public float NumBed { get; set; }
    public float Price { get; set; }
}
Then, inside of your application, create a
DatabaseLoader.
MLContext mlContext = new MLContext();

DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<HouseData>();
Define your connection string as well as the SQL command to be executed on the database and create a
DatabaseSource instance. This sample uses a LocalDB SQL Server database with a file path. However, DatabaseLoader supports any other valid connection string for databases on-premises and in the cloud.
string connectionString = @"Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=<YOUR-DB-FILEPATH>;Database=<YOUR-DB-NAME>;Integrated Security=True;Connect Timeout=30"; string sqlCommand = "SELECT Size, CAST(NumBed as REAL) as NumBed, Price FROM House"; DatabaseSource dbSource = new DatabaseSource(SqlClientFactory.Instance, connectionString, sqlCommand);
Numerical data that is not of type
Real has to be converted to
Real. The
Real type is represented as a single-precision floating-point value or
Single, the input type expected by ML.NET algorithms. In this sample, the
NumBed column is an integer in the database. Using the
CAST built-in function, it's converted to
Real. Because the
Price property is already of type
Real it is loaded as is.
Use the
Load method to load the data into an
IDataView.
IDataView data = loader.Load(dbSource);
Load data from other sources
In addition to loading data stored in files, ML.NET supports loading data from sources that include but are not limited to:
- In-memory collections
- JSON/XML
Note that when working with streaming sources, ML.NET expects input to be in the form of an in-memory collection. Therefore, when working with sources like JSON/XML, make sure to format the data into an in-memory collection.
Given the following in-memory collection:
HousingData[] inMemoryCollection = new HousingData[]
{
    new HousingData
    {
        Size = 700f,
        HistoricalPrices = new float[] { 100000f, 3000000f, 250000f },
        CurrentPrice = 500000f
    },
    new HousingData
    {
        Size = 1000f,
        HistoricalPrices = new float[] { 600000f, 400000f, 650000f },
        CurrentPrice = 700000f
    }
};
Load the in-memory collection into an
IDataView with the
LoadFromEnumerable method:
Important
LoadFromEnumerable assumes that the
IEnumerable it loads from is thread-safe.
// Create MLContext
MLContext mlContext = new MLContext();

//Load Data
IDataView data = mlContext.Data.LoadFromEnumerable<HousingData>(inMemoryCollection);
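As a sketch of the JSON case mentioned above, deserialize the file into an in-memory collection first and then load it the same way; the file name is illustrative:
using System.IO;
using System.Text.Json;

// Deserialize a JSON array of HousingData records into an in-memory collection.
string json = File.ReadAllText("housing.json");
HousingData[] houses = JsonSerializer.Deserialize<HousingData[]>(json);

// Load the collection into an IDataView as before.
IDataView jsonData = mlContext.Data.LoadFromEnumerable<HousingData>(houses);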
Next steps
- To clean or otherwise process data, see Prepare data for building a model.
- When you're ready to build a model, see Train and evaluate a model. | https://docs.microsoft.com/sk-sk/dotnet/machine-learning/how-to-guides/load-data-ml-net | 2021-02-24T21:45:42 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.microsoft.com |
If your environment requires a proxy server, configure the following proxy settings:
{ "proxy_ip_or_hostname": "[IP or name]", "use_proxy": [true/false], "proxy_username": "[username]", "proxy_password": "[password]", "proxy_port": [port value], "proxy_ssh_port": [port value: default is 443] } | https://docs.netapp.com/sfe-113/topic/com.netapp.doc.sfe-mg-mn/GUID-9CE4E2F7-F956-4AA9-8633-726D1E93A659.html?lang=en | 2021-02-24T20:52:13 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.netapp.com |
Deployment: Deployment enables declarative updates for Pods and ReplicaSets.
ReplicaSet: ReplicaSet ensures that a specified number of pod replicas are running at any given time.
StatefulSet: StatefulSet represents a set of pods with consistent identities. Identities are defined as:
- Network: A single stable DNS and hostname.
- Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity.
1.1.2
SOTI Insight 1.1.2 (October 26, 2020)
General Improvements
- Performance is significantly improved to achieve faster application load time and data retrieval.
Device Support
Android 6.0+ devices are supported by SOTI Insight. The SOTI Insight Android agent must be installed on the devices you wish to analyse.
System Requirements
- A multi-tenant, cloud-only solution hosted on AWS
- 13" monitor @ 1024x768 resolution (1600x1080 resolution or higher recommended)
- Google Chrome web browser | https://docs.soti.net/soti-insight/release-notes/v11/112/ | 2021-02-24T20:54:24 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.soti.net |
Copy the apps to the master node's configuration bundle location. Once the apps are in that location, the master node can then distribute them to the peer nodes via the configuration bundle method. See Use deployment server to distribute the apps.
dtype=object) ] | docs.splunk.com |
You can easily create a template to use with Tutor LMS using Elementor's template builder. This process will save you a lot of time when designing your Tutor LMS course pages.
You won't need to add the elements manually every time you design your course page. Simply create a template from the Elementor Template section built for Tutor LMS and use it every time you want to add a single course page.
Hierarchy for The Template Design (If not Defined)
If you have not set any priority for the template design, it will be applied by following the given hierarchy.
- The first priority is for the Tutor LMS template that is pre-built for the core plugin.
- The single course template from your theme or a custom plugin will be given preference if you have overridden the pre-built core template.
- If you create global templates from the Elementor templates option then that will be used as the default design for all the single course pages.
Note: If you want to set each course page with a different design then you need to apply the design from the specific course edit screen for the specific course.
Step 1: Create an Elementor Template
To start creating a template navigate to Templates → Add New
There, set the type of template you want to work on. For this particular tutorial, you will need to choose the page option. Then select the "Tutor LMS Single Course Template" using the checkbox. After that, set a name for your template.
Now that you have created your template, it's time to design it. Once you have decided how you want your template layout to look, hit publish to save it.
Step 2: Add Template to Post/Page/Course
After you have created your Elementor template for the Tutor LMS single course page, it's time to add it to your Tutor LMS page, post, or course.
Go to the editing page of your post and click on the "Edit with Elementor" button. After that, click on "Add Template" to find your preset template. Then click on the insert button. Congratulations! Your predefined template has been successfully imported to your page.
The Motor panel defines the conveyors the motor powers.
The following properties are on the Motor panel:
2.2.1.2.87 DHCP_CLASS_INFO_ARRAY_V6
The DHCP_CLASS_INFO_ARRAY_V6 structure contains a list of information regarding a user class or a vendor class.
typedef struct _DHCP_CLASS_INFO_ARRAY_V6 {
    DWORD NumElements;
    [size_is(NumElements)] LPDHCP_CLASS_INFO_V6 Classes;
} DHCP_CLASS_INFO_ARRAY_V6, *LPDHCP_CLASS_INFO_ARRAY_V6;
NumElements: This is of type DWORD, specifying the number of classes whose information is contained in the array specified by the Classes member.
Classes: A pointer to an array of DHCP_CLASS_INFO_V6 (section 2.2.1.2.70) structures that contains information regarding the various user classes and vendor classes.
Skins
To make customizing the appearance of RadDropDownTree as easy as possible, the control uses skins. A skin is a set of images and a CSS stylesheet that can be applied to RadDropDown elements (items, images, etc.) and defines their look and feel.
RadDropDownTree is installed with a number of built-in skins.
The Material skin is available for the Lightweight RenderMode only. If you experience visual issues with it, make sure your controls are not using the default Classic mode.
Probably one of the first settings that you need to adjust is the currency. You may want to set it to the currency of your country as well as set the position of the currency symbol and the type of thousands separator that you would like to use.
You can make these changes on the WPCasa settings page on WP-Admin > WPCasa > Settings.
Currency
From a dropdown select, you can choose the default currency you would like to display the listing prices in. This option defaults to $ (US Dollar).
The following options are currently available:
AED => United Arab Emirates Dirham
ANG => Netherlands Antillean Guilder
BWP => Botswanan Pula
DZD => Algerian Dinar
EEK => Estonian Kroon
EGP => Egyptian Pound
EUR => Euro
FJD => Fijian Dollar
GBP => British Pound
HKD => Hong Kong Dollar
HNL => Honduran Lempira
HRK => Croatian Kuna
HUF => Hungarian Forint
IDR => Indonesian Rupiah
ILS => Israeli New Sheqel
INR => Indian Rupee
JMD => Jamaican Dollar
JOD => Jordanian Dinar
JPY => Japanese Yen
KES => Kenyan Shilling
KRW => South Korean Won
KWD => Kuwaiti Dinar
KYD => Cayman Islands Dollar
KZT => Kazakhstani Tenge
LBP => Lebanese Pound
LKR => Sri Lankan Rupee
LTL => Lithuanian Litas
LVL => Latvian Lats
MAD => Moroccan Dirham
MDL => Moldovan Leu
NIO => Nicaraguan Cordoba
NOK => Norwegian Krone
NPR => Nepalese Rupee
NZD => New Zealand Dollar
OMR => Omani Rial
SAR => Saudi Riyal
SCR => Seychellois Rupee
SEK => Swedish Krona
SGD => Singapore Dollar
SKK => Slovak Koruna
SLL => Sierra Leonean Leone
SVC => Salvadoran Colon
THB => Thai Baht
TND => Tunisian Dinar
TRY => Turkish Lira
TTD => Trinidad and Tobago Dollar
TWD => New Taiwan Dollar
TZS => Tanzanian Shilling
UAH => Ukrainian Hryvnia
UGX => Ugandan Shilling
USD => US Dollar
UYU => Uruguayan Peso
UZS => Uzbekistan Som
VEF => Venezuelan Bolivar
VND => Vietnamese Dong
XOF => CFA Franc BCEAO
YER => Yemeni Rial
ZAR => South African Rand
ZMK => Zambian Kwacha
If your currency is not among these options, you can select Other and enter the currency abbreviation and/or currency sumbol yourself.
Currency Symbol
With this option you can select the position of the currency symbol that is common in your country. It is possible to display it before or after the price value.
Thousands Separator
Finally you'll need to decide whether to display your price like $1,000,000 or $1.000.000 (1 million dollars). The thousands separator can be a comma or a period.
Release notes -- vEL3450 (since v2.208.756)
Highlights
EB-55: JSON Web Token - grant type for OAuth in Element Builder
- Added JWT OAUTH as authentication type in Element Builder
- Added ability to provision using JWT OAUTH for these elements:
- Salesforce
- Box
- Boxv2
EL-3039: adding optional payload to bulk callback url
- adding optional payload to bulk callback url
EL-3035: Type casting while converting from csv to jsonl format for SFDC element
- Fixes SFDC Bulk JSON converts boolean to string
EL-3162: Eloqua - Bulkdata is not consistent if object contains null properties
- Fix - Bulk data is not giving results for CSV files
EL-1376: Zendesk Renaming Subdomain to siteAddress as a standard
- Renamed Zendesk element subdomain field to siteAddress as configuration standardization change
EL-3175: Google Drive invalidate all other existing instances by deleting one instance
- FIX - GoogleDrive - Revoke access token for all existing instances
EL-2872: SFDC Service Cloud Element Builder migration
- Customer can see the resources under the element resources section
EL-3296: adding delete payments by id resource
- Added DELETE /payments/{id} resource to SageOne
EL-3314: Make client id mandatory for ConnectWiseCrmRest
- Correctly reflect on the UI that client-id is required for the ConnectWise CRM REST element during instance creation.
-238: null check to avoid failures with instance creation
EL-3331 | Sage Accounting: Getting 500 error with "Internal failure while handling request" response when search with id in where clause
- Fixed NPE for Sage Accounting when get call is used with where clause
EL-3294: Added webinar fields endpoint and header for post user to webinar (GoToWebinar)
- Adds the Accept:application/vnd.citrix.g2wapi-v1.1+json header, which is required while creating a User for a webinar
- Adds /meetings/{id}/fields to get the fields of a specific webinar, which helps in creating a user
EL-3154: Intacct - Discrepancy with the primary key defined in the model for sales-order endpoint
- updating the primary key to sotransactionid for sage intacct for create sales order resource
EL-2815: Box ObjectId is missing in box events
- Fix - Box v1 webhooks missing objectId in events response
EL-3329: Zendesk Element Added additional required to article Resource schema
- Zendesk resources/sections/{id}/articles POST call require two additional fields in its request payload i.e. usersegmentid and permissiongroupid
EL-3258:Bamboo HR Changes from XML to JSON Type
- Modified swagger for /categories resource with correct payload
- Changes to support XML to JSON payload structure changes from Bamboo HR
TS-7180
Contents
- 1 Overview
- 2 Getting Started
- 3 U-Boot
- 4 Debian
- 5 Backup / Restore
- 6 Compile the Kernel
- 7 Features
- 7.1 ADC
- 7.2 Bluetooth
- 7.3 CAN
- 7.4 COM Ports
- 7.5 CPU
- 7.6 eMMC
- 7.7 Ethernet Ports
- 7.8 FPGA
- 7.9 FRAM
- 7.10 GPIO
- 7.11 GPS
- 7.12 IMU
- 7.13 Interrupts
- 7.14 Jumpers
- 7.15 LEDs
- 7.16 MicroSD Card Interface
- 7.17 PWM
- 7.18 Quadrature & Edge-Counters
- 7.19 RTC
- 7.20 SPI
- 7.21 SuperCaps
- 7.22 TWI
- 7.23 USB
- 7.24 Watchdog
- 7.25 WIFI
- 8 Physical Interfaces
- 9 Specifications
- 10 Revisions and Changes
- 11 Product Notes
1 Overview
The TS-7180 is an SBC designed for low-power systems and is ideal for remote deployment and fleet tracking.

2 Getting Started

Power is applied through the power connector, CN7, which accepts an 8-28 VDC input.
Once power is applied, the device will output information via the console. The first output is from U-Boot:
U-Boot 2016.03-14506-gfee2f5b (Jan 13 2017 - 12:29:29 -0700)

CPU:   Freescale i.MX6UL rev1.1 at 396 MHz
Reset cause: POR
Board: Technologic Systems TS-7180
FPGA:  Rev 0
I2C:   ready
DRAM:  512 MiB
MMC:   FSL_SDHC: 0, FSL_SDHC: 1
*** Warning - bad CRC, using default environment
Net:   FEC0 [PRIME]

If the U-Boot jumper is not installed, then the "SD Boot" jumper will be examined: when installed, it will cause U-Boot to boot to SD; otherwise, U-Boot will boot to eMMC.
Console from Windows
Putty is a small simple client available for download here. Open up Device Manager to determine your console port. See the putty configuration image for more details.
3 U-Boot
3.1 U-Boot Environment
The U-Boot environment on the TS-7180 is stored in the on-board eMMC flash.
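A typical session for inspecting and updating the environment uses the standard U-Boot commands shown below (the variable is only an example):

# Print the entire environment
printenv
# Set a variable, then write the environment back to flash
setenv bootdelay 1
saveenv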
3.2 U-Boot Commands
# The most important command is help
# This can also be used to see more information on a specific command
help i2c

# These commands are used for scripting:
false # do nothing, unsuccessfully
true # do nothing, successfully

# This command lets you set fuses in the processor
# Setting fuses can brick your board, will void your warranty,
# and should not be done in most cases
fuse

# You can view the fdt from u-boot with:
load mmc 0:1 ${fdtaddr} /boot/imx6ul-ts7180.dtb
fdt addr ${fdtaddr}
fdt print

# You can blindly jump into any memory
# This is similar to bootm, but it does not use the
# u-boot header
load mmc 0:1 ${loadaddr} /boot/custombinary
go ${loadaddr}

# Browse fat,ext2,ext3,or ext4 filesystems:
ls
bdinfo

# Print U-boot version/build information
version
4 Debian
Debian is a community run Linux distribution. Debian provides tens of thousands of precompiled applications and services. This distribution is known for stability and large community providing support and documentation.
4.1 Getting Started with Debian
Once installed, the default user is "root" with no password.
To prepare an SD card, use partitioning tools such as 'fdisk', 'cfdisk', or 'gparted' in Linux to create a single Linux partition on the SD card. See the guide here for more information. Note the partition table must be "MBR" or "msdos"; the "GPT" partition table format is NOT supported by U-Boot. Once it is formatted, extract the above tarball with:
# Assuming your SD card is /dev/sdc with one partition
mkfs.ext3 /dev/sdc1
mkdir /mnt/sd/
sudo mount /dev/sdc1 /mnt/sd/
sudo tar xjf debian-armhf-jessie-latest.tar.bz2 -C /mnt/sd
sudo umount /mnt/sd
sync
To rewrite the eMMC, the unit must be booted from SD or any other media that is not eMMC. Once booted, run the following commands:
mkfs.ext3 /dev/mmcblk2p1
mkdir /mnt/emmc
mount /dev/mmcblk2p1 /mnt/emmc
wget -qO- | tar xj -C /mnt/emmc/
umount /mnt/emmc
sync
The same commands can be used to write a SATA drive by substituting /dev/mmcblk2p1 with /dev/sda1.
4.2 Debian Networking
5 Backup / Restore

5.1 MicroSD Card
These instructions assume you have an SD card with one partition. Most SD cards ship this way by default, but if you have modified the partitions you may need to use a utility such as gparted or fdisk to remove partitions and recreate it with one partition.
Plug the SD card into a USB reader and connect it to your Linux PC. These instructions assume your SD interface is /dev/sdc, but check dmesg on your PC to see what device name it was assigned.
Running these commands will reflash the SD card to our default latest image.
# Verify nothing else has this mounted
sudo umount /dev/sdc1
sudo mkfs.ext4 /dev/sdc1
sudo mkdir /mnt/sd
sudo mount /dev/sdc1 /mnt/sd/
wget
sudo tar -xjf debian-armhf-jessie-latest.tar.bz2 -C /mnt/sd
sudo umount /mnt/sd
sync
After it is written you can verify the data was written correctly. Reinsert the disk to verify any block cache is gone, then run these:
mount /dev/sdc1 /mnt/sd
cd /mnt/sd/
sudo md5sum -c md5sums.txt
umount /mnt/sd
sync
The md5sums command will report what differences there are, if any, and return if it passed or failed.
5.2 eMMC
The simplest way to backup/restore the eMMC is through u-boot. If you boot up and stop in u-boot you can run this command:
ums 0 mmc 1
This will make the board act as a USB mass storage device with direct access to the emmc disk. On a linux workstation, to backup the image:
dmesg | tail -n 30
# Look for the last /dev/sd* device connected. This should also match the eMMC
# size of around 3.78GiB. On my system, this is /dev/sdd.
sudo mkdir /mnt/emmc/
sudo mount /dev/sdd1 /mnt/emmc/
cd /mnt/emmc/
tar -cjf /path/to/ts7180-backup-image.tar.bz2 .
cd ../
umount /mnt/emmc/
sync
To write a new filesystem to the TS-7180:
dmesg | tail -n 30
# Look for the last /dev/sd* device connected. This should also match the eMMC
# size of around 3.78GiB. On my system, this is /dev/sdd.
sudo mkdir /mnt/emmc/
sudo mkfs.ext4 /dev/sdd1
# If the above command fails, use fdisk or gparted to repartition the emmc
# to have one large partition.
sudo mount /dev/sdd1 /mnt/emmc/
tar -xjf /path/to/ts7180-new-image.tar.bz2 -C /mnt/emmc
umount /mnt/emmc/
sync
6 Compile the Kernel
This board has several kernels released and available in our git depending on the branch name. Compiling the kernel requires an armhf toolchain. We recommend development under Debian which includes an armhf compiler in the repositories.
This also requires several tools from your distribution. For Debian:
su root
apt-get install curl git build-essential lzop u-boot-tools libncursesw5-dev
echo "deb jessie main" > /etc/apt/sources.list.d/emdebian.list
curl | apt-key add -
dpkg --add-architecture armhf
apt-get update
apt-get install crossbuild-essential-armhf
For Ubuntu:
sudo apt-get update
sudo apt-get install crossbuild-essential-armhf git build-essential lzop u-boot-tools libncursesw5-dev
Once those are installed:
git clone
cd linux-tsimx
git checkout ts-imx_4.1.15_2.0.0_ga

## If you are using the 64-bit toolchain:
export CROSS_COMPILE=arm-linux-gnueabihf-
export ARCH=arm
export LOADADDR=0x80800000

make tsimx6ul_defconfig

## Make any changes in "make menuconfig" or driver modifications, then compile
make && make uImage
To install this to a board you would use a USB SD reader and plug in the card. Assuming your Linux rootfs is all on "sdc1":
export DEV=/dev/sdc1
sudo mount "$DEV" /mnt/sd
sudo rm /mnt/sd/boot/uImage
sudo cp arch/arm/boot/uImage /mnt/sd/boot/uImage
sudo cp arch/arm/boot/dts/imx6*ts*.dtb /mnt/sd/boot/
INSTALL_MOD_PATH="/mnt/sd" sudo -E make modules_install
INSTALL_HDR_PATH="/mnt/sd" sudo -E make headers_install
sudo umount /mnt/sd/
sync
Note: If you experience problems compiling the kernel with the compiler in your distribution, please try the one below:
In the case of either toolchain you would run these commands to install them:
chmod a+x poky-*.sh
sudo ./poky-*.sh
7 Features
7.1 ADC
The TS-7180 has four channels of ADC, and those inputs are available on the P3 connector, as AN_IN_1 through AN_IN_4. Each input may be configured to measure voltage in two ranges (0-2.5V and 0-10.9V), or a 20mA current-loop.
The standard linux kernel offers a simple interface to the ADC that does not require any sort of programming to use. For example, to see the "raw" reading from the AN_IN_1 input, type the following command: option hciattach /dev/ttymxc2 any 115200 noflow hciconfig hci0 up hcitool cmd 0x3F 0x0053 00 10 0E 00 01 stty -F /dev/ttymxc2 921600 crtscts
Now you may scan for available devices with:
hcitool scan
This will return a list of devices such as:
14:74:11:AB:12:34 SAMSUNG-SM-G900A
You may request more information from a detected device like so:
hcitool info 14:74:11:AB:12:34
This will produce lots of details about the device, for example:
Requesting information ...
        BD Address:  14:74:11:AB:12:34
        OUI Company: Samsung Electronics Co.,Ltd (4C-A5-6D)
        Device Name: SAMSUNG-SM-G900A
        LMP Version: 4.1 (0x7)
        LMP Subversion: 0x610c
        Manufacturer: Broadcom Corporation (15)
        Features page 0: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
        . . .
Bluez has support for many different profiles for HID, A2DP, and many more. Refer to the Bluez documentation for more information.
7.3 CAN
The i.MX6UL includes 2 CAN controllers which support the SocketCAN interface, and these are presented on the P3 & P5 connectors (custom populations may differ).
Before proceeding with the examples, see the Kernel's CAN documentation here.
This board comes preinstalled with can-utils which can be used to communicate over a CAN network without writing any code. The candump utility can be used to dump all data on the network
## First, set the baud rate and bring up the device:
ip link set can0 type can bitrate 250000
ip link set can0 up

## Dump data & errors:
candump can0 &

## Send the packet with:
#can_id = 0x7df
#data 0 = 0x3
#data 1 = 0x1
#data 2 = 0x0c
cansend can0 -i 0x7DF 0x3 0x1 0x0C

## Some versions of cansend use a different syntax. If the above
## command gives an error, try this instead:
#cansend can0 7DF

7.4 COM Ports
The TS-7180 provides three standard RS-232 COM ports, and one RS-485 COM port. The latter has auto-transmit-enable. All of these ports are presented on the P5 connector. The RS-485 port has an on-board terminator that may be enabled by installing the "485" jumper.
Custom populations may provide a fourth RS-232 uart on P5, at the cost of one of the CAN controllers. Pin-outs are shown in the table below.
The daughter-card interface (HD1 header) contains TTL-level TX/RX pins that may be used to connect to a CPU UART, with the caveat that to do so, one of the assigned UARTs must be reassigned to the header. The reassignment is done by writing to the register at address 308 in the FPGA. The table in the FPGA Registers section shows which UARTS may be used. By default, the HD1 TX/RX pins are not connected to any UART.
7.5 CPU
The TS-7180 board uses the i.MX6ul 528MHz or 696MHz CPU which is very similar to the i.MX6 Solo used on the TS-4900 using many of the same CPU peripherals IP cores, but using a Cortex-A7 instead to target lower power consumption.
Refer to NXP's documentation for more detailed information on the CPU core.
7.6 eMMC
This board includes a Micron eMMC module. Our off-the-shelf builds are 4GiB, but up to 64GiB are available for larger builds. The eMMC flash appears to Linux as an SD card at /dev/mmcblk1. Our default programming will include one partition programmed with our Debian image. The eMMC can optionally be set to a more reliable (pseudo-SLC) mode; this will halve the size of the eMMC module to 1.759GiB by default, and write speed will be slightly slower.
The mmc-utils package is used to enable these modes.
mmc write_reliability set -n 0 /dev/mmcblk1
mmc enh_area set -y 0 1847296 /dev/mmcblk1
After this is run, reboot the board. On all future boots the eMMC will be detected at the smaller size.
7.7 Ethernet Ports
The TS-7180 includes two 10/100 ethernet ports using the dual onboard CPU controllers and external Microchip/Micrel KSZ8081 PHYs. The MAC address is assigned from the Technologic Systems pool using 00:d0:69:00:00:00, and the two MAC addresses will always be sequential. The ports are presented on RJ45 jacks on the board.
7.8 FPGA
7.8.1 FPGA Registers
The TS-7180 FPGA registers are accessed over I2C. The supplied 'tshwctl' utility may be used to access these registers; run 'tshwctl -h' to see how to use it.
The FPGA is available at I2C addresses 0x28-0x2f. First write the address which is 16-bits, followed by the data which is 8-bits.
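The same exchange can be performed from user space through the Linux i2c-dev interface. The sketch below only illustrates the write protocol described above; the bus number, error handling, and use of the base address 0x28 are assumptions, and most users should simply use tshwctl:

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Write one 8-bit value to a 16-bit FPGA register address. */
static int fpga_write(int bus_fd, uint16_t addr, uint8_t data)
{
        uint8_t buf[3] = { addr >> 8, addr & 0xff, data };

        if (ioctl(bus_fd, I2C_SLAVE, 0x28) < 0) /* FPGA base address */
                return -1;
        return write(bus_fd, buf, 3) == 3 ? 0 : -1;
}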
The tables below list all the registers and their functions.
The GPIO uses referenced in the register table are listed in a separate table.
7.9 FRAM
The TS-7180 has 16Kbit ferroelectric random access memory (FRAM) organized as 2KB. It is accessible via the SPI bus.
The FRAM may be accessed from userspace. It will appear as an EEPROM at: /sys/class/spi_master/spi2/spi2.2/eeprom
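Because it appears as a standard EEPROM attribute, ordinary file tools can read and write it. For example (offset and size chosen arbitrarily):

# Dump the first 32 bytes of the FRAM
dd if=/sys/class/spi_master/spi2/spi2.2/eeprom bs=1 count=32 | hexdump -C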
The datasheet is available from Cypress Semiconductors' website.
7.10 GPIO
The TS-7180 provides seven IO ports that can sink up to 500mA, or withstand up to 30V at the input. These are available on the P3 connector. DIO_1 through DIO_7 appear as GPIO #37 through #43 (when used as inputs), and as GPIO #22 through #28 (when used as outputs). For example, to read the state of DIO_1, enter the following command:
tshwctl -a 37 -r
Bit #2 will reflect the state of the pin.
To drive DIO_1 low, enter the following command:
tshwctl -a 22 -w 3
Additionally, there are four input-only pins, DIG_IN_1 through DIG_IN_4, available on P3 connector. To read from these pins, run:
tshwctl -a N -r
...where N is 32 through 35.
7.11 GPS
The TS-7180 has an optional on-board Telit SL869 GPS receiver. An SMA female connector is provided for the connection of an antenna.
Before it can be used, power to the GPS receiver must be enabled by setting GPIO #19 low, as shown below:
tshwctl -a 19 -r -w 1
The GPS receiver may then be accessed via the /dev/ttymxc7 uart.
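Once powered, the receiver streams standard NMEA sentences that can be observed directly. The baud rate below is an assumption; consult the receiver's datasheet for the actual default:

stty -F /dev/ttymxc7 9600
cat /dev/ttymxc7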
7.12 IMU
7.13 Interrupts
7.14 Jumpers
The TS-7180 has a set of jumpers located near the SuperCaps on the edge of the SBC.
These jumpers control a number of aspects of the TS-7180's behavior. The jumpers are labeled on the silkscreen rather than numbered:
7.15 LEDs
There are four LEDS on the TS-7180 that may be controlled by the user, through the sysfs interface. These are colored yellow, green, red, and blue.
To turn an LED on, write a 1 to 'brightness'. To turn it off again, write a 0.
# Example: Turn on the Blue LED...
echo 1 > /sys/class/leds/blue-led/brightness
# Turn it off again...
echo 0 > /sys/class/leds/blue-led/brightness
7.16 MicroSD Card Interface
The i.MX6ul SDHCI driver supports MicroSD (0-2GB), MicroSDHC (4-32GB), and MicroSDXC(64GB-2TB). The cards available on our website on average support up to 16MB/s read, and 22MB/s write using this interface. Sandisk Extreme cards with UHS support have shown 58MB/s Read and 59MB/s write. The linux driver provides access to this socket at /dev/mmcblk0 as a standard Linux block device.
See the IMX6ul reference manual for more information on this controller.
We have performed compatibility testing on the Sandisk MicroSD cards we provide, and we do not suggest switching brands/models without your own qualification testing. Though SD cards in theory will all follow the standard and just work, in practice cards vary significantly and can fail in subtle ways. We do not recommend ATP or Transcend MicroSD cards specifically due to known corruption issues that can occur after many GB of written data.
Our testing has shown that on average microSD cards will last between 6-12TB of written data before showing a single bit of corruption. This is enough for most applications to write for years and not see any issues, but for more reliability consider the eMMC, which is expected to last over 100TB of writes. Higher end SD cards can also extend this, but industrial grade SD cards typically carry a cost much higher than the eMMC.
MicroSD cards should not be powered down during a write/erase cycle or you will eventually experience disk corruption. It is not always possible for fsck to recover from the types of failures that will be seen with SD power loss. The system should be designed to avoid power loss to SD cards, or the eMMC module should be used for storage instead which can be configured to be resilient to power loss.
7.17 PWM
The TS-7180 provides a single PWM channel, available on DIO_1 (pin #1 of P3-A). Because DIO_1 is a general-purpose IO, to use it as a PWM output it is first necessary to enable such usage by writing to address 309 in the FPGA, as follows:
tshwctl -a 309 -w 1
PWM devices are available through the sysfs filesystem; they will appear at "/sys/class/pwm/pwmchipX/" where X is the PWM channel number. Due to the layout of the PWM controller, each PWM channel is on a separate chip. Normally a single PWM chip can support multiple PWM devices through Linux; however, in this case each chip only has a single device, pwm0. This device is not enabled by default and must be turned on manually:
# Each PWM controller has "1" PWM device which will be PWM channel 0
echo 0 > /sys/class/pwm/pwmchipX/export
This will create a pwm0/ directory under each pwmchipX/ directory which will contain the relevant files that can be used to control the PWM:
As an example, this will set a 50khz signal with 50 percent duty cycle on PWM channel 4:
# 20us is the period for 50khz
echo 20000 > /sys/class/pwm/pwmchip4/pwm0/period
echo 10000 > /sys/class/pwm/pwmchip4/pwm0/duty_cycle
echo 1 > /sys/class/pwm/pwmchip4/pwm0/enable
7.18 Quadrature & Edge-Counters
Quadrature Counters
The TS-7180 provides three independent quadrature counters. The associated inputs are shown in the table below.
Each of the quadrature counters (which are in the FPGA) is 16-bits wide, and are accessed via i2c. The addresses are shown below.
For example, to read the MSB for Quad1:
tshwctl -r -a 99
The MSB aliases are used to detect 16-bit rollover. If the first reading of the MSB is not equal to the second, overflow/underflow was detected during the read.
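In code, the alias makes a consistent 16-bit read straightforward. The sketch below is illustrative only; reg_read() stands in for an I2C register read, and the three addresses come from the counter address table:

#include <stdint.h>

extern uint8_t reg_read(uint16_t addr); /* placeholder I2C register read */

/* Read a 16-bit counter whose MSB is mirrored at an alias address.
 * If the two MSB samples differ, the LSB rolled over mid-read,
 * so the read is retried. */
static uint16_t read_counter16(uint16_t msb, uint16_t lsb, uint16_t msb_alias)
{
        uint8_t hi, lo, hi2;

        do {
                hi  = reg_read(msb);
                lo  = reg_read(lsb);
                hi2 = reg_read(msb_alias);
        } while (hi != hi2);

        return ((uint16_t)hi << 8) | lo;
}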
Edge-Counters, Period-counters
For each input pin, there is an edge-counter, and a period-counter. The former counts the positive edges on an input pin, while the latter may be used to measure the elapsed time between N positive-edges.
Edge-counters are 16-bits wide, and their addresses are shown in the table below.
Period counters are 32-bits wide, and their addresses are shown in the table below.
To use the period counters, it is first necessary to write N (for the number of edges to count) to address 155. This may be done like so:
tshwctl -a 155 -w N
As soon as address 155 is written, counting begins, clocked at 63MHz. After N edges have been detected, the period registers may be read. The frequency of the input may be calculated from the period, as shown here:
frequency = (N * 63000000) / period
Technologic Systems has provided a simple test program for accessing and displaying the values from the quadrature and edge-counters. Download the source tarball here: File:Test-edges.tar.gz
7.19 RTC
The TS-7180 includes an ST M41T00S low-power serial real-time clock (RTC) with battery-backup. This is /dev/rtc0 in our images, and is accessed using the standard hwclock command.
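The standard hwclock commands read and set it; for example:

# Read the RTC
hwclock -r
# Copy the current system time to the RTC
hwclock -w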
7.20 SPI
The i.MX6UL CPU has a native SPI peripheral that is used in a number of places on the TS-7180. Additionally, kernel spidev support is added to allow SPI access from userspace. User SPI can be used for custom off-board devices.
7.21 SuperCaps
The TS-7180 has an option to add two 2.7 V SuperCaps. A jumper option is available to disable the charging and use of the SuperCaps. This mode is very useful for development to allow for proper power-off conditions without having to wait for the SuperCaps to discharge. The supervisory microcontroller will also not allow the TS-7180 to boot if power input is invalid. If the system shuts down safely due to a power failure, it will remain in a powered-off state until external power is re-applied, or the SuperCaps discharge below the sustainable threshold. See the U-Boot section for information on setting environment variables.
A recommended value is 100%. Any changes to the TS-7180 hardware or software, such as connecting powered devices like USB or adding additional applications, may cause the recommended value to not sustain the TS-7180 until a safe shutdown is completed. The time it takes to reach 100% charge will vary depending on the current charge of the SuperCaps. On average, it will take about 20 seconds to charge the SuperCaps to 100%; this is assuming the SuperCaps have very recently fallen below the threshold voltage to sustain the TS-7180.
7.22 TWI
The i.MX6 supports standard I2C at 100khz, or using fast mode for 400khz operation. The CPU has 2 I2C buses used on the TS-7180.
I2C 1 is internal to the TS-7180 and connects to the onboard Silabs supervisory microcontroller at 100khz; and to the onboard ST M41T00S real-time clock (RTC).
The second I2C bus is connected to the onboard FPGA. This bus also runs at 400khz by default.
In addition to the CPU i2c buses, a bit-banged i2c interface is available on the daughter-card interface, using gpio. The following command will instantiate (create a device node for) a new ssd1306 display at I2C address 0x3C:
echo ssd1306 0x3c > /sys/bus/i2c/devices/i2c-4/new_device
Once this is done, i2c-tools can manipulate the I2C device, or the downstream developer can write their own client. Technologic Systems has provided a simple client program for writing to an SSD1306 OLED display connected to the HD1 connector. The photo below shows output on the display.
Download the source-code tarball here: File:Ssd1306-demo.tar.gz
The kernel makes the I2C available at /dev/i2c-#. You can use the i2c-tools (i2cdetect, i2cget, i2cset), or you can write your own client.
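For example (the bus number, chip address, and register below are placeholders, not specific TS-7180 devices):

# Scan a bus for responding devices
i2cdetect -y 0
# Read one byte from register 0x00 of the device at 0x4a
i2cget -y 0 0x4a 0x00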
7.23 USB
The TS-7180 has both a Host connector and a Device connector.
7.23.1 USB Host
The TS-7180 provides a standard USB 2.0 host supporting 480Mb/s. Typically this is interfaced with by using standard Linux drivers, but low level USB communication is possible using libusb.
7.23.2 USB DEVICE
The USB type B device port is connected to the onboard Silabs for USB to serial console.
7.24 Watchdog
The kernel provides an interface to the watchdog driver at /dev/watchdog. Refer to the kernel documentation for more information.
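A minimal user-space sketch of servicing the watchdog through the standard Linux watchdog API (the interval is arbitrary; the board's actual timeout is not assumed here):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/watchdog", O_WRONLY);

        if (fd < 0)
                return 1;
        for (;;) {
                write(fd, "\0", 1); /* feed the watchdog */
                sleep(5);
        }
}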
7.25 WIFI
The TS-7180 uses an Atmel ATWILC3000-MR110CA IEEE 802.11 b/g/n Link Controller Module With Integrated Bluetooth® 4.0. Linux provides support for this module using the wilc3000 driver.
Summary features:
- IEEE 802.11 b/g/n RF/PHY/MAC SOC
- IEEE 802.11 b/g/n (1x1) for up to 72 Mbps PHY rate
- Single spatial stream in 2.4GHz ISM band
- Integrated PA and T/R Switch Integrated Chip Antenna
- Superior Sensitivity and Range via advanced PHY signal processing
- Advanced Equalization and Channel Estimation
- Advanced Carrier and Timing Synchronization
- Wi-Fi Direct and Soft-AP support
- Supports IEEE 802.11 WEP, WPA, and WPA2 Security
- Supports China WAPI security
- Operating temperature range of -40°C to +85°C
8 Physical Interfaces
8.1 Internal Interfaces
8.2 External Interfaces
8.2.1 Power Connector
The power connector, CN7, is shown in the photograph below. This accepts an 8-28 VDC input.
8.2.2 Terminal Blocks
The TS-7180 includes four removable terminal blocks (OSTTJ0811030) for UARTs, CAN, ADC, and other general purpose IO.
8.2.3 Modem Socket
The TS-7180 has provision for the mounting of a Multitech or NimbeLink modem.
To be completed.
The power to the modem (if installed) must be enabled by writing a 1 to the EN_CELL_MODEM_PWR bit in the FPGA GPIO registers.
8.2.4 Daughter Card Interface
The TS-7180 Daughter Card Interface (HD1) may be used to connect a variety of off-board peripherals. The interface includes pins for the following:
- I2C
- SPI
- USB OTG
- PoE
- UART (TTL levels)
The table below shows the pinout of the header.
9 Specifications
9.1 Power Consumption
All tests are performed at 12V, with Ethernet, USB, supercaps, and SD disconnected or disabled unless otherwise specified.
10 Revisions and Changes
10.1 TS-7180 PCB Revisions
10.2 U-Boot Revisions
10.3 FPGA Revisions
10.4 Software Images
10.4.1 Kernel Changelog
11 Product Notes
Networking

- A single framework handles both IPv4 and IPv6 protocols.
- Rules are all applied atomically instead of fetching, updating, and storing a complete ruleset.
- Support for debugging and tracing in the ruleset.

The ruleset can be observed using the nft list ruleset command. Tools that integrate with nftables add their tables, chains, and rules to the same nftables ruleset.
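For example, a table, chain, and rule can be added and then observed with standard nft commands (the names are arbitrary):

# Create a table and chain, add one rule, then show the result
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; }'
nft add rule inet filter input tcp dport 22 accept
nft list ruleset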
Software Download Directory
frevvo v7.2 contains new features that improve usability and makes designing forms easier. Here are a few of the top reasons to upgrade:
URLs no longer include the designer's user id (#19777).
On This Page:
We recommend that you read the information below before you begin.
Live Forms Online cloud hosted customers will be automatically upgraded on TBD. The automatic cloud upgrade will be seamless. Cloud customers should review these topics to prepare for the new version of frevvo:
If you have any questions, please email [email protected].
The Opaque URL feature provides URLs that no longer include the designer's user id.

Customers on frevvo versions previous to v7.2 can download the PVE connector from here or download the tomcat bundle to retrieve the v5.4 connector. Simply replace the existing pve.war file with the new one.
Java 1.7 is no longer supported. In-house customers should upgrade to Java 8 before installing frevvo v7.2. Please review Supported Platforms for a complete list.
A new version (v2.5) of the Database Connector is now included in the frevvo tomcat bundle. Version 2.5 of the database connector requires Java 8.
In-house customers must upgrade to frevvo v7.2 to use the Database Connector v2.5. There are 2 options:
Review the DB Connector Release Notes to help you decide.
#18662 - CF : Form/flow submission should be edited by users as per ACL permissions.
#16318 - 5.3.3 ACL behavior in confluence
In-house customers should review the topics below, the instructions in the Upgrade Guide and Supported Platforms before migrating. It is recommended that you perform a full installation of frevvo.
You can add a connection to flat files, both local and remote, using ThoughtSpot DataFlow.
Follow these steps:
Click Connections in the top navigation bar.
In the Connections interface, click Add connection in the top right corner.
In the Create Connection interface, enter the Connection name, and select the Connection type.
After you select the File Connection type, the rest of the connection properties appear.
Depending on your choice of authentication mechanism, you may use different properties.
- Connection name
Name your connection.
- Connection type
Choose the Files connection type.
- File location
Specify the base location of the file on the server.
- Files on remote location
Specify If the files on remote server.
- Protocol
Select the required remote server connection
Mandatory field. For remote location files only.
- Authentication type
Specify the authentication type for SFTP Protocol
Mandatory field. For SFTP protocol only
- Host
Specify the Hostname or the IP address of the remote server
Mandatory field. For remote location files only.
- Port
Specify the Port to connect the remote server
Mandatory field. For remote location files only.
- User
Specify the user to connect to remote server. This user must have data access privileges. For remote location files only.
- Password
Specify the password. For remote location files only, when using password authentication.
- Key file
Specify the key file and its fully qualified path. For remote location files only, when using key authentication.
- Passphrase for key file
Specify the passphrase for the key file. For remote location files only, when using key authentication.
See Connection properties for details, defaults, and examples.
Click Create connection.
Configuring Role and Resource-Based Access Control¶
- date
2020-07-30
Tungsten Fabric Role and Resource-Based Access (RBAC) Overview¶
Tungsten Fabric supports role and resource-based access control (RBAC) with API operation-level access control.
The RBAC implementation relies on user credentials obtained from Keystone from a token present in an API request. Credentials include user, role, tenant, and domain information.
API-level access is controlled by a list of rules. The attachment points
for the rules include
global-system-config, domain, and project.
Resource-level access is controlled by permissions embedded in the
object.
API-Level Access Control¶
If the RBAC feature is enabled, the API server requires a valid token to
be present in the
X-Auth-Token of any incoming request. The API
server trades the token for user credentials (role, domain, project, and
so on) from Keystone.
If a token is missing or is invalid, an HTTP error 401 is returned.
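In practice this means every REST request must carry a token. A sketch (host, port, and resource are illustrative):

curl -H "X-Auth-Token: $TOKEN" http://<api-server>:8082/virtual-networks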
The
api-access-list object holds access rules of the following form:
<object, field> => list of <role:CRUD>
Where:
object—An API resource such as network or subnet.
field—Any property or reference within the resource. The
fieldoption can be multilevel, for example,
network.ipam.host-routescan be used to identify multiple levels. The
fieldis optional, so in its absence, the create, read, update, and delete (CRUD) operation refers to the entire resource.
role—The Keystone role name.
Each rule also specifies the list of roles and their corresponding permissions as a subset of the CRUD operations.
Example: ACL RBAC Object¶
The following is an example access control list (ACL) object for a
project in which the admin and any users with the
Development role
can perform CRUD operations on the network in a project. However, only
the
admin role can perform CRUD operations for policy and IP address
management (IPAM) inside a network.
<virtual-network, network-policy> => admin:CRUD <virtual-network, network-ipam> => admin:CRUD <virtual-network, *> => admin:CRUD, Development:CRUD
Rule Sets and ACL Objects¶
The following are the features of rule sets for access control objects in TF.
The rule set for validation is the union of rules from the ACL attached to:
User project
User domain
Default domain
It is possible for the project or domain access object to be empty.
Access is only granted if a rule in the combined rule set allows access.
There is no explicit deny rule.
An ACL object can be shared within a domain. Therefore, multiple projects can point to the same ACL object. You can make an ACL object the default.
Object Level Access Control¶
The
perms2 permission property of an object allows fine-grained
access control per resource.
The
perms2 property has the following fields:
owner— This field is populated at the time of creation with the
tenant UUID value extracted from the token.
share list— The share list gets built when the object is selected for sharing with other users. It is a list of tuples with which the object is shared.
The
permission field has the following options:
R—Read object
W—Create or update object
X—Link (refer to) object
Access is allowed as follows:
If the user is the owner and permissions allow (rwx)
Or if the user tenant is in a shared list and permissions allow
Or if world access is allowed
Configuration¶
This section describes the parameters used in TF RBAC.
Parameter: aaa-mode¶
RBAC is controlled by a parameter named
aaa-mode. This parameter is
used in place of the multi-tenancy parameter of previous releases.
The
aaa-mode can be set to the following values:
no-auth—No authentication is performed and full access is granted to all.
cloud-admin—Authentication is performed and only the admin role has access.
rbac—Authentication is performed and access is granted based on role.
If you are using TF Ansible Deployer to provision Tungsten Fabric, set the value for AAA_MODE to rbac to enable RBAC by default.
contrail_configuration: . . . AAA_MODE: rbac
After enabling RBAC, you must restart the neutron server by running the service neutron-server restart command for the changes to take effect.
Note
The
multi_tenancy parameter is deprecated, starting with Tungsten Fabric
3.0. The parameter should be removed from the configuration. Instead,
use the
aaa_mode parameter for RBAC to take effect.
If the
multi_tenancy parameter is not removed, the
aaa-mode
setting is ignored.
Parameter: cloud_admin_role¶
A user who is assigned the
cloud_admin_role has full access to
everything.
This role name is configured with the
cloud_admin_role parameter in
the API server. The default setting for the parameter is
admin. This
role must be configured in Keystone to change the default value.
If a user has the
cloud_admin_role in one tenant, and the user has a
role in other tenants, then the
cloud_admin_role role must be
included in the other tenants. A user with the
cloud_admin_role
doesn’t need to have a role in all tenants, however, if that user has
any role in another tenant, that tenant must include the
cloud_admin_role.
Configuration Files with Cloud Admin Credentials¶
The following configuration files contain
cloud_admin_role
credentials:
/etc/contrail/contrail-keystone-auth.conf
/etc/neutron/plugins/opencontrail/ContrailPlugin.ini
/etc/contrail/contrail-webui-userauth.js
Changing Cloud Admin Configuration Files¶
Modify the cloud admin credential files if the
cloud_admin_role role
is changed.
Change the configuration files with the new information.
Restart the following:
API server
service supervisor-config restart
Neutron server
service neutron-server restart
WebUI
service supervisor-webui restart
Global Read-Only Role¶
You can configure a global read-only role (
global_read_only_role).
A
global_read_only_role allows read-only access to all TF
resources. The
global_read_only_role must be configured in Keystone.
The default
global_read_only_role is not set to any value.
A
global_read_only_role user can use the Tungsten Fabric WebUI to view the
global configuration of TF default settings.
Parameter Changes in /etc/neutron/api-paste.ini¶
TF RBAC operation is based upon a user token received in the
X-Auth-Token header in API requests. The following change must be
made in
/etc/neutron/api-paste.ini to force Neutron to pass the user
token in requests to the Tungsten Fabric API server:
keystone = user_token request_id catch_errors ....
...
...
[filter:user_token]
paste.filter_factory = neutron_plugin_contrail.plugins.opencontrail.neutron_middleware:token_factory
Upgrading from Previous Releases¶
The
multi_tenancy parameter is deprecated. The parameter should be
removed from the configuration. Instead, use the
aaa_mode parameter
for RBAC to take effect.
If the
multi_tenancy parameter is not removed, the
aaa-mode
setting is ignored.
Configuring RBAC Using the Tungsten Fabric WebUI¶
To use the TF WebUI with RBAC:
Set the aaa_mode to no_auth.
/etc/contrail/contrail-analytics-api.conf
aaa_mode = no-auth
Restart the
analytics-apiservice.
service contrail-analytics-api restart
Restart services by restarting the container.
You can use the TF WebUI to configure RBAC at both the API level and the object level. API level access control can be configured at the global, domain, and project levels. Object level access is available from most of the create or edit screens in the TF WebUI.
Configuring RBAC Details¶
Configuring RBAC is similar at all of the levels. To add or edit an API access list, navigate to the global, domain, or project page, then click the plus (+) icon to add a list, or click the gear icon to select from Edit, Insert After, or Delete.
Creating or Editing API Level Access¶
Clicking create, edit, or insert after activates the Edit API Access popup window, where you enter the details for the API Access Rules. Enter the user type in the Role field, and use the + icon in the Access filed to enter the types of access allowed for the role, including, Create, Read, Update, Delete, and so on.
Creating or Editing Object Level Access¶
You can configure fine-grained access control by resource. A Permissions tab is available on all create or edit popups for resources. Use the Permissions popup to configure owner permissions and global share permissions. You can also share the resource to other tenants by configuring it in the Share List.
RBAC Resources¶
Refer to the OpenStack Administrator Guide for additional information about RBAC.
i4SCADA Alarm States
Check out this article and learn about the i4SCADA possible alarm states and the available timestamp combinations.
The four possible alarm states in a i4SCADA system (Active, Acknowledged, Inactive and Acknowledged and Gone) are defined by logical combinations of alarm time stamps (Date ON, Date ACKNOWLEDGED and Date OFF). The following table describes the time stamps combinations and the resulting alarm states:
Summarising the information provided by the table above, we can conclude the following:
an alarm is in the Acknowledged and Gone state when an occurrence exists and it has been acknowledged and closed
an alarm is in the Inactive state when an occurrence exists and it has been closed but not acknowledged
an alarm is in the Acknowledged state when an occurrence exists and it has been acknowledged but not closed
an alarm is in the Active state when an occurrence exists but it has not been acknowledged and closed
The alarm states are set based on a specific order which, as represented in the table above, is always top to bottom. Explicitly, the system first checks whether an alarm occurrence has all three time stamps and goes down from there until at least the Date ON time stamp is found. Thus, an alarm occurrence's states are checked in the following order (a code sketch follows the list):
Acknowledged and Gone
Inactive
Acknowledged
Active
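The same top-to-bottom evaluation can be expressed directly in code. The sketch below is illustrative only; the type and member names are assumptions, not the i4SCADA API:

using System;

enum AlarmState { Active, Acknowledged, Inactive, AcknowledgedAndGone }

static class AlarmStates
{
    // Evaluate the state of one alarm occurrence from its three time stamps.
    public static AlarmState GetState(DateTime? dateOn, DateTime? dateAck, DateTime? dateOff)
    {
        if (dateOn == null)
            throw new ArgumentException("An occurrence must at least have Date ON.");
        if (dateAck != null && dateOff != null)
            return AlarmState.AcknowledgedAndGone; // acknowledged and closed
        if (dateOff != null)
            return AlarmState.Inactive;            // closed but not acknowledged
        if (dateAck != null)
            return AlarmState.Acknowledged;        // acknowledged but not closed
        return AlarmState.Active;                  // neither acknowledged nor closed
    }
}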
Notes on Numba Runtime¶
The Numba Runtime (NRT) provides the language runtime to the nopython mode Python subset. NRT is a standalone C library with a Python binding. This allows NPM runtime feature to be used without the GIL. Currently, the only language feature implemented in NRT is memory management.
Memory Management¶
NRT implements memory management for NPM code. It uses atomic reference count
for threadsafe, deterministic memory management. NRT maintains a separate
MemInfo structure for storing information about each allocation.
Cooperating with CPython¶
For NRT to cooperate with CPython, the NRT python binding provides adaptors for
converting python objects that export a memory region. When such an
object is used as an argument to a NPM function, a new
MemInfo is created
and it acquires a reference to the Python object. When a NPM value is returned
to the Python interpreter, the associated
MemInfo (if any) is checked. If
the
MemInfo references a Python object, the underlying Python object is
released and returned instead. Otherwise, the
MemInfo is wrapped in a
Python object and returned. Additional processing may be required depending on
the type.
The current implementation supports Numpy array and any buffer-exporting types.
Compiler-side Cooperation¶
NRT reference counting requires the compiler to emit incref/decref operations according to the usage. When the reference count drops to zero, the compiler must call the destructor routine in NRT.
Optimizations¶
The compiler is allowed to emit incref/decref operations naively. It relies on an optimization pass to remove redundant reference count operations.
A new optimization pass is implemented in version 0.52.0 to remove reference
count operations that fall into the following four categories of control-flow
structure—per basic-block, diamond, fanout, fanout+raise. See the documentation
for
NUMBA_LLVM_REFPRUNE_FLAGS for their descriptions.
The old optimization pass runs at block level to avoid control flow analysis.
It depends on LLVM function optimization pass to simplify the control flow,
stack-to-register, and simplify instructions. It works by matching and
removing incref and decref pairs within each block. The old pass can be
enabled by setting
NUMBA_LLVM_REFPRUNE_PASS to 0.
Important assumptions¶
Both the old (pre-0.52.0) and the new (post-0.52.0) optimization passes assume
that the only function that can consume a reference is
NRT_decref.
It is important that there are no other functions that will consume references.
Since the passes operate on LLVM IR, the “functions” here are referring to any
callee in a LLVM call instruction.
To summarize, all functions exposed to the refcount optimization pass
must not consume counted references unless done so via
NRT_decref.
Quirks of the old optimization pass¶
Since the pre-0.52.0 refcount optimization pass requires the LLVM function optimization pass, the pass works on the LLVM IR as text. The optimized IR is then materialized again as a new LLVM in-memory bitcode object.
Recursion Support¶
During the compilation of a pair of mutually recursive functions, one of the
functions will contain unresolved symbol references since the compiler handles
one function at a time. The memory for the unresolved symbols is allocated and
initialized to the address of the unresolved symbol abort function
(
nrt_unresolved_abort) just before the machine code is
generated by LLVM. These symbols are tracked and resolved as new functions are
compiled. If a bug prevents the resolution of these symbols,
the abort function will be called, raising a
RuntimeError exception.
The unresolved symbol abort function is defined in the NRT with a zero-argument signature. The caller is safe to call it with an arbitrary number of arguments. Therefore, it is safe to be used in place of the intended callee.
Future Plan¶
The plan for NRT is to make a standalone shared library that can be linked to Numba compiled code, including use within the Python interpreter and without the Python interpreter. To make that work, we will be doing some refactoring:
- numba NPM code references statically compiled code in "helperlib.c". Those functions should be moved to NRT.
Monitoring
As described in Overview of objects, TrueSight Middleware and Transaction Monitor (TMTM) monitors objects, and you can use any of the following methods to monitor them.
Best practice
Object discovery, in combination with the policy action to register the object for monitoring, is the preferred method for the WebSphere MQ extension and its objects.
Registering staging schema names
Users with AR Administrator permissions can register any new custom forms that will be used to automatically generate spreadsheets. This is a prerequisite task that must be completed before you can automatically generate spreadsheets.
For information about automatically generating spreadsheets, see Autogenerating spreadsheets.
To register a staging schema name
- From the Application Administration Console, click the Custom Configuration tab.
- From the Application Settings list, select Foundation> Advanced Options> System Configuration Settings - Schema Names.
- From the Schema Names form, complete the required fields. You must:
- Select the Form Lookup check box, which requires that you enter a unique form code in the Form Code field.
- Select Staging Form in the Form Type list.
- To save your changes, click Save.
Globalization Properties
The Culture property can be set using the drop down list in the Properties Window or set in code. The screenshot below shows the Culture property set to "French (France)".
Figure 1: RadCalendar with French culture.
Setting CultureInfo in code
radCalendar1.Culture = CultureInfo.GetCultureInfo("fr-FR");
RadCalendar1.Culture = CultureInfo.GetCultureInfo("fr-FR")
Additional properties that relate to globalization are:
Glossary
- bytecode
- Python bytecode
- The original form in which Python functions are executed. Python bytecode describes a stack-machine executing abstract (untyped) operations using operands from both the function stack and the execution environment (e.g. global variables).
- object mode
- The @jit decorator will automatically fall back to object mode if nopython mode cannot be used.
- Numba IR
- Numba intermediate representation
- A representation of a piece of Python code which is more amenable to analysis and transformations than the original Python bytecode.
- OptionalType
- An OptionalType is effectively a type union of a type and None. They typically occur in practice due to a variable being set to None and then in a branch the variable being set to some other value. It's often not possible at compile time to determine if the branch will execute, so to permit type inference to complete, the type of the variable becomes the union of a type (from the value) and None, i.e. OptionalType(type).
- type inference
- The process by which Numba determines the specialized types of all values within a function being compiled. Type inference can fail if arguments or globals have Python types unknown to Numba, or if functions are used that are not recognized by Numba. Successful type inference is a prerequisite for compilation in nopython mode.
- typing
- The act of running type inference on a value or operation.
- ufunc
- A NumPy universal function. Numba can create new compiled ufuncs with the @vectorize decorator.
- reflection
In numba, when a mutable container is passed as argument to a nopython function from the Python interpreter, the container object and all its contained elements are converted into nopython values. To match the semantics of Python, any mutation on the container inside the nopython function must be visible in the Python interpreter. To do so, Numba must update the container and its elements and convert them back into Python objects during the transition back into the interpreter.
Not to be confused with Python’s “reflection” in the context of binary operators (see). | https://numba.readthedocs.io/en/stable/glossary.html | 2021-02-24T20:01:09 | CC-MAIN-2021-10 | 1614178347321.0 | [] | numba.readthedocs.io |
Shifts can be imported into a new weekly schedule with the import function. First of all you need to create a weekly schedule which you then can save as template:
You can import this template anytime into new weekly schedules. This saves you a lot of time when the allocation of the shifts will be the same as the weeks before.
Import function:
To import shifts you need to create a new schedule. The second step is to click on 'more' and 'import shifts':
Now you can choose the favored schedule and departments. You can also import the assignments of the employees. Complete the process by clicking on 'import shifts':
| http://docs.staffomatic.com/staffomatic-help-center/shifts/can-i-import-shifts | 2018-05-20T14:11:23 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://uploads.intercomcdn.com/i/o/26586502/1d1e43a0a83fbc1f7873df89/Bildschirmfoto+2017-06-15+um+15.41.08.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/26586529/721b0e4c9d8534fd697a2a01/Bildschirmfoto+2017-06-15+um+15.40.37.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/26586553/e528eaa7b2b34e151b7e661d/Bildschirmfoto+2017-06-15+um+15.39.42.png',
None], dtype=object) ] | docs.staffomatic.com |
Create a transform category Create a transform category to group the transform definitions together. Navigate to Field Normalization > Administration > Transform Categories. Click New in the Transform Categories Related List. Enter the Name of this category and a description. Select an Order for this category and save the record. The order determines the display order of categories in lists and forms. Two Related lists appear: Field Types: Click Edit to select an existing field type for this category or click New to create a new field type. The normalization field types provided are: Decimal Float Integer Numeric String URL Transform Definitions: Click Edit to select the transform definitions that are included in this category. | https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/field-administration/task/t_CreateATransformCategory.html | 2018-05-20T13:53:17 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.servicenow.com |
Clear-Windows
Corrupt Mount Point: Delete all resources associated with a mounted image
PS C:\>Clear-WindowsCorruptMountPoint
This command deletes all of the resources on the computer that are associated with a mounted image that has been corrupted.
Optional.BasicDismObject
Outputs
Microsoft.Dism.Commands.BaseDismObject | https://docs.microsoft.com/en-us/powershell/module/dism/clear-windowscorruptmountpoint?view=winserver2012-ps | 2018-05-20T14:43:20 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
Discovery classification parameters Each type of Discovery classify probe (Windows, UNIX, SNMP, etc.) returns a unique set of parameters that contains classification information. Any of the supported parameters can be used as configurable criteria for classifying devices that Discovery finds. The tables in this page define the available parameters for each classification. Discovery also provides a way to classify devices it finds when no credentials are available. Discovery for IP addresses makes certain assumptions about devices and the applications running on those devices from the ports that it finds open. Classification parameters for this type of Discovery are generated differently from scans in which credentials are available. See Forming Parameters for IP Address Scanning for details. Create classification criteria for a Discovery definitionTo add criteria to a Classification record, navigate to Discovery Definition > Classification > <type> and click New in the Classification Criteria related list.UNIX parametersThe UNIX parameters define the characteristics of several types of computers, such as Linux, Solaris, and HP-UX, communicating with SSH protocol, version 2.Windows Discovery classification parametersWindows parameters identify Windows computers communicating with the WMI protocol.SNMP parametersThe SNMP parameters can define the characteristics of several types of devices, such as routers, switches, and printers.Process parametersProcess parameters identify processes such as those used by LDAP, Apache Server, and JBoss Server.Discovery classification for IP address scanningClassification is available for the IP Addresses Discovery type and returns information about CIs (Scan CI) and applications running on CIs (Scan Application).Form parameters for IP address scanningThe syntax for creating parameters is derived from the fields returned by the Shazzam probe when conducting a Discovery for IP addresses. | https://docs.servicenow.com/bundle/geneva-it-operations-management/page/product/discovery/concept/c_DiscoClassificationParam.html | 2018-05-20T14:15:11 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.servicenow.com |
WCM Editor is a portlet application where user can manage content.
There is two main roles in WCM Editor: editors and managers.
A editor user can write content, manage categories and upload files.
A manager user have additional features like manage templates and access to manager functions.
All WCM Editor users should be added to /wcm group in Group Management.
- Editor role is marked with member Membership Type.
- Manager role is marked with manager Membership Type.
Terminology
WCM Editor uses three basic content concepts: Posts, Categories and Uploads.
Post is the main content in GateIn WCM. Other similar terminologies can be "article" or "entry". WCM Editor offers simple and intuitive interface to write a Post, giving it a title, organizing it under categories and publishing. Meanwhile other CMS solutions offer a vast and sophisticated features, GateIn WCM is focused to extend these features to GateIn Portal, making easy to combine web content with portal applications in a simple way.
Categories
Categories allow to organize content. Content can have one or more categories that can be used to filter content or to give security rights.
Categories can be used like folders or simple tags.
Uploads
Uploads are files that can be uploaded into GateIn WCM repository and referenced inside posts.
Typical uploads in a WCM system can be images used in an article but also whatever file to download.
| https://docs.jboss.org/author/display/GTNWCM/2.+Creating+content+from+WCM+Editor | 2018-05-20T13:46:59 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['/author/download/attachments/75137357/wcm-2-1.png?version=1&modificationDate=1382802853000',
None], dtype=object)
array(['/author/download/attachments/75137357/wcm-2-2.png?version=1&modificationDate=1382803023000',
None], dtype=object)
array(['/author/download/attachments/75137357/wcm-2-3.png?version=1&modificationDate=1382803637000',
None], dtype=object)
array(['/author/download/attachments/75137357/wcm-2-4.png?version=1&modificationDate=1382808394000',
None], dtype=object)
array(['/author/download/attachments/75137357/wcm-2-5.png?version=1&modificationDate=1382808396000',
None], dtype=object)
array(['/author/download/attachments/75137357/wcm-2-6.png?version=1&modificationDate=1382808402000',
None], dtype=object) ] | docs.jboss.org |
- Getting Started
- Tutorials
- COPYandPAY
- Server-to-Server
- Standalone 3D Secure
- Mobile SDK
- Manage Payments
- Fraud Screening
- ReD Shield
- Reporting
- Webhooks
- Reference
- API Reference
- Result Codes
- Brands Reference
Asynchronous Payments
In an asynchronous workflow a redirection takes place to allow the account holder to complete/verify the payment. After this the account holder is redirected back to the app and the status of the payment can be queried.
This article will guide you how to support communication between apps for the asynchronous payment methods.
NOTE: This article assumes you've already followed our First Integration tutorial. | https://docs.monei.net/tutorials/mobile-sdk/custom-ui/asynchronous-payments | 2018-05-20T13:49:17 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://test.monei-api.net/static/images/8a82941746287806014628a0e31c04e0.png',
'Home'], dtype=object) ] | docs.monei.net |
Exercise2¶
These are two extensions for Jupyter, for hiding/showing solutions cells.
They use the same approach and codebase and differ only by the type of
cell widget used the show/hide the solutions. The two extensions can be used
simultaneously. They require the
rubberband extension to be installed and
enabled.
The example below demonstrates some of the features of the exercise extensions.
- First, an solution or “details” cell is created by (a) selecting two cells with the rubberband and (b) clicking on the menu-button [exercise extension]
- Second, the two next cells are selected using a keyboard shortcut, and a solution is created using the shortcut Alt-D [exercise2 extension]
- Third, the two solutions are expanded by clicking on the corresponding widgets
- Fourth, the solutions are removed by selecting them and clicking on the buttons in the toolbar.
The extensions provide¶
- a menubar button
- a cell widget – A plus/minus button in
exerciseand a sliding checkbox in
exercise2.
The menubar button is devoted to the creation or removing of the solution. The solution consists in several consecutive cells that can be selected by the usual notebook multicell selection methods (e.g. Shift-down (select next) or Shift-up (select previous) keyboard shortcuts, or using the rubberband extension.
Creating a solution¶
Several cells being selected, pressing the menubar button adds a
cell widget and hides the cells excepted the first one which serves as a heading cell. Do not forget to keep the Shift key pressed down while clicking on the menu button
(otherwise selected cells will be lost). It is also possible to use a keyboard shortcut for creating the solution from selected cells: Alt-S for exercise extension and Alt-D for exercise2.
Removing a solution¶
If a solution heading (first) cell is selected, then clicking the menu bar button removes this solution and its solutions cells are shown. Using the keyboard shortcut has the same effect.
Showing/hiding solution¶
At creation of the solution, the solution cells are hidden. Clicking the
cell widget toggles the hidden/shown state of the solution.
Persistence¶
The state of solutions, hidden or shown, is preserved and automatically restored at startup and on reload.
Internals¶
exercise and exercise2 add respectively a solution and solution2 metadata to solution cells, with for value the current state hidden/shown of the solution. For exercise, a div with the plus/minus character is prepended to the solution heading cell. For exercise2, a flex-wrap style is added to the solution heading cell and a checkbox widget, with some css styling, is appended to the cell. A solution[.2]_first metadada is also added to enable an easy detection of the first cell in an “exercise” and then allow several consecutive exercises. | https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/exercise2/readme.html | 2018-05-20T13:37:35 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['../../_images/image1.gif', None], dtype=object)] | jupyter-contrib-nbextensions.readthedocs.io |
Active Directory Application Mode
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2, Windows Server 2012, Windows Server 2012 R2
Active Directory Application Mode (ADAM) is a new mode of the Active Directory directory service that is designed to meet the specific needs of organizations that use directory-enabled applications. While Active Directory supports directory-enabled applications, as well as the server operating system, some directory-enabled applications have requirements that Active Directory does not meet. For example, some directory-enabled applications require schema changes that administrators may not want to make to Active Directory.
In addition, organizations may want to:
Support directory-enabled applications but not implement Active Directory domains and forests.
Support directory-enabled applications outside their existing domains and forests.
Use X.500 naming conventions for top-level directory partitions.
Run multiple directory service instances on a single server.
ADAM is designed to support these and other directory service scenarios. ADAM runs completely independently from Active Directory, and ADAM has its own schema. You can make changes to the ADAM schema with no impact to the Active Directory schema. ADAM does not require the presence of Active Directory domain controllers, domains, or forests. Therefore, organizations that have not implemented Active Directory can install and run ADAM. ADAM supports X.500 naming conventions for top-level directory partitions, and you can run multiple instances of ADAM on a single server.
ADAM runs on the following:
Domain controllers running operating systems in the Microsoft Windows Server 2003 family (Note that Microsoft Windows Server 2003, Web Edition cannot be a domain controller.)
Member servers running operating systems in the Windows Server 2003 family (except for Windows Server 2003, Web Edition)
Client computers running Microsoft Windows XP Professional | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc736765(v=ws.10) | 2018-05-20T14:12:59 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
View an online user How to view a list of all users who are available to chat (status of Online) in legacy chat. Right-click the Users section header or click Options on the toolbar. Figure 1. Chat window menu Select Show Online Users. Start a one-to-one chat with a user on the list by double-clicking a name. Send Message or Add To Friend List by right-clicking a name. Figure 2. Online users | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/use/using_social_it/task/t_ViewAnOnlineUser.html | 2018-05-20T14:15:37 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.servicenow.com |
Set a CI field to be mandatory Configure a CI field as mandatory so it is included in the CMDB Health tests for the required metric if enabled. Required is a metric of the CMDB Health completeness KPI. Before you beginRole required: itil_admin. Click Hierarchy to display the CI Classes list. Then select the class with the field that needs to be set as mandatory. In the class navigation bar, expand Class Info and then select Columns. In the Columns view, click Added. Locate the column that you want to set as mandatory, and then double-click its Mandatory value and set it to true. The next time the form is opened, a field status indicator appears next to the field label, indicating that a value is mandatory. Note: Mandatory fields are global. The field is marked as mandatory everywhere it appears on a form. Related TopicsCMDB Health Dashboard for Helsinki | OverviewMake a field mandatory | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/product/configuration-management/task/t_SetCIFieldMandatory.html | 2018-05-20T14:02:14 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.servicenow.com |
- Reference >
- Operators >
- Query and Projection Operators >
- Geospatial Query Operators >
- $polygon
$polygon¶
On this page
Definition¶
$polygon¶
Specifies a polygon for a geospatial
$geoWithinquery on legacy coordinate pairs. The query returns pairs that are within the bounds of the polygon. The operator does not query for GeoJSON objects.
To define the polygon, specify an array of coordinate points:
The last point is always implicitly connected to the first. You can specify as many points, i.e. sides, as you like.
Important
If you use longitude and latitude, specify longitude first.
Behavior¶
The
$polygon operator calculates distances using flat (planar)
geometry.
Applications can use
$polygon without having a geospatial index.
However, geospatial indexes support much faster queries than the
unindexed equivalents.
Only the 2d geospatial index supports the
$polygon operator.
← $box $uniqueDocs → | https://docs.mongodb.com/v3.2/reference/operator/query/polygon/ | 2018-05-20T13:38:52 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.mongodb.com |
Testdroid - Frequently Asked Questions
Testdroid Cloud & Our Devices
Do we need to share the source code of the app in the Cloud?
No you don’t, unless you choose to use a test framework that specifically requires this.
How does Testdroid manage the cleaning of devices?
Each phone is cleaned of apps that are not expected to be there.
Android browser: we do remove any browser app local data implying there will not be any navigation data on next session. We also uninstall all apps that is possible to unistall and remove files we know are created during tests.
iOS browser: we rely on iOS’s own browser history cleaning services to revert Safari to an initial state.
Each device is put through a clean up phase after each test run. Regardless of all these efforts some files or data may stay hidden somewhere on the device in the Public Cloud. For a truly private environment we recommend a Private or Enterprise Cloud installation.
What are the requirements for signing my iOS applications to use on Testdroid’s iOS devices?
The app needs to be an ad-hoc distribution developer debug build. You can find more information here.
How do you support Testdroid devices communicating with servers behind our firewall?
For Public cloud the only way is for the customer servers to whitelist our public IPs to allow connections from our devices. For Private Cloud we can implement a VPN or proxy connection. In Enterprise Cloud the customer has complete freedom on implementing the networking as the cloud is in their own premises.
Do you support turning off and on Wifi on Testdroid?
For Public cloud the only way is for the customer servers to whitelist our public IPs allowing our devices to connect. For private cloud we can implement a VPN or proxy connections. In Enterprise Cloud it is up to the customer to define as the installation is in their own premise.
Do you support in app purchases?
In Public Cloud it is complicated to manage Google account cash. If a test account can be used to log in and do the purchase completely inside the tested app, then we support it. For Private and Enterprise environments we can set accounts controlled by the customer to enable such purchases.
Do you support testing of push notifications?
If the used test framework supports testing push notifications, then we can support them too. Triggering of push notifications is up to the customer to handle.
Can we change device settings on Testdroid?
In Public Cloud it is not allowed to access device settings at all. For Private and Enterprise environments settingscan be changed through remote access to the device.
What types of device performance data does Testdroid provide for test runs?
We provide CPU and Memory usage data. We have also the ability to provide Gamebench statistics on test runs. To get more information about this please contact our [email protected].
Can we do remote manual testing with Testdroid devices?
Yes, for most Android and iOS devices. Some models are incompatible with the VNC technology in use. Manual testing on iOS devices is supported on public cloud through dedicated devices. Enterprise and Private cloud users can decide which users get access to manual testing.
Does Testdroid support parallel or concurrent test runs?
Yes. Here is more information.
Can I use my own Google accounts on Testdroid devices?
In Public Cloud setting a Google account on the device isn’t allowed. Using a Google API/Service to log in to an app with your own account can be done. For Private and Enterprise environments a customer’s own Google account can be set on the devices.
Which Google Play account is used on your devices?
In Bitbar Testing public cloud a number of Google accounts are in use. These Google accounts may appear when runing tests in Bitbar Testing public cloud.
- [email protected]
- [email protected] where x is in 1…18
- [email protected] where x is either 1 or 2
- [email protected]
- [email protected] where X > 18
Where can I see a list of your devices? How often are you updating it? And how long does it take you to have new devices?
The complete list of devices - with all details – can be found under Testdroid Cloud. You don’t need to log in to Testdroid Cloud to see all details about our devices. The full list is here. We’re constantly updating our device roster, approx.. 5-20 new devices per month, depending on release cycles by OEMs.
Can Testdroid provide dedicated private devices for my Testing?
Dedicated devices are available through Private Cloud installations or as part of the public cloud. In a private cloud installation the customer is able to freely select the number and type of devices and these are managed by Bitbar. Dedicated devices is a service enabling customers to reserve one or multiple devices to their use only in the public cloud. For more information please contact sales.
Can I change or choose the OS on the devices? If not, how do you choose what OS should be on there?.
Can the handsets under test receive an email via wifi?
Yes, but your application needs to be configured to receive email. The regular email application in device cannot be currently configured for sending/receiving emails.
Where can I find the free test trial and how can I get started with Bitbar Testing Cloud?
You can create yourself a free account at Testdroid Cloud or bitbar.com. After leaving your email address in any of these forms, you’ll get an activation email. Just click the link and you’ll be guided through the registration process. You can now log in to Testdroid Cloud using your new credentials and access our free device group.
My test run failed on most devices. Why was that?
There are several reasons why test runs fails at Testdroid Cloud. First and the most typical case is that there is something wrong with application, and instrumentation makes it crash. A good rule of thumb is that if AppCrawler run crashes with your app, then the problem is in app itself. If the execution crashes with your tests, it can be either way. Device problems is typically seen as pending test runs.
Are the devices jail broken/rooted?
No. None of our devices are jail broken or rooted.
Our app requests Device Administrator privileges from the user. After the user grants the app Device Admin privileges these privileges cannot be removed without first entering a password. It is conceivable that some tests may leave an app on the device. Does any of this cause a problem for your test environment?
No, this does not cause problems to our devices or environment. Our system automatically cleans, reboots and hard resets devices before any new test run.
The app connects to our internal server. Our IT department will only allow connections from known IP address ranges. Is it possible for you to tell us what IP ranges are used to originate traffic from your test devices?
We have two public cloud data centers with IP ranges 185.75.2.0/28 and 216.38.149.11/32. Public Cloud users can also implement a test app to find the current IP of used device/connection and communicate it to external service that can open that IP for connections. For Private and Enterprise Cloud installations most special network configurations are possible.
Where are your devices located?
Bitbar Testing devices are located in our data centers in San Jose, CA and Wroclaw, Poland.
How safe are.
How long is usually the queue? In each priority group? Do I have to wait for 2 hour or day?.
How long will our projects be online?
We periodically clean unused projects and project files from our public cloud. For unused projects (where tests have not been run) after three months the project files are removed. After this the whole project is removed after next three months.
Can I take pictures, sound recording (mic) with the phones?
Yes, you can. Our devices are not positioned for any specific photo target and recording of audio can give you arbitrary recording. However, our devices are fully functioning Android and iOS devices, and both mentioned functions are enabled on the devices.?
No, but there are plenty of good resources of information for uiautomator provided by Google:
Bitbar’s info video:
iOS Test Automation Frameworks
We support KIF, XCTest, XCUITest, Jasmine, Calabash and Appium for iOS test automation with Testdroid Cloud. Testdroid RiC?
iOS Appcrawler isn’t available at the moment in Run in Cloud -plugin
How can I start an iOS Appcrawler using API?
You can do it on iOS UI Automation project -type.
API Calls:
1) Get current project config: GET /api/v2/projects/{projectId}/config
2) Set Project Config: POST /api/v2/projects/{projectId}/config
projectId, {projectId} mode, "IOS_CRAWLER" usedDeviceGroupId, {deviceGroupId} (mandatory, or test run start fails)
3) Start new Test Run: POST /api/v2/runs
Testdroid Recorder
Testdroid Recorder is not supported anymore by us. There are newer and better tools for this purpose, eg. Google search Appium Inspector
My device doesn’t connect to Recorder. How to make it work?.
Is it possible to download the Testdroid Recorder in more than one machine with the same account?.
Do you have a recorder for iOS? If not, will you have that soon?
No. If users are OK to use UI Automation, we typically recommend Xcode Instruments. More information about this can be found here:
Can I test my recorded script on my own devices, instead of always pushing it to the cloud?
Of course. You can record any interactions with your app/game and then replay it as Android JUnit Test or Android JUnit Test From APK (under Run As menu) by highlighting the generated test project.
Testdroid Enterprise
Can I cluster my Testdroid Enterprise device set over various locations? E.g. 15 Android devices in UK, 10 iOS in India? And is that one installation?: docs.testdroid.com/testdroid-cloud-integration/api/
Java source code + client can be found at Github repository:
Do you have an API? Where can I find a documentation? Can I integrate Jenkins or my system to it?
Yes, the documentation with full API description can be found at docs.testdroid.com. You can integrate your CIs or any scripts with Testdroid using API’s JSON based calls.
Do you provide Jenkins and other CI plug-ins for Testdroid?
We provide a Jenkins plugin, which is available at JenkinsCI Github.
Support
Where is your support located/What are the business hours? Is it email support or also call?.
Do you offer devices with pre-released OS versions of Android and iOS?
Not for regular app testing. Maintaining pre-release OS is problematic as such releases can be very unstable and out of our hands to handle.
Generic Questions
Is your solution strictly for testing mobile apps, or is it possible to use it also for testing mobile sites?
Testdroid products are not limited to apps only. E.g. Appium framework can be used to test mobile websites using Testdroid device cloud.
How many people are you? And where are you located? Are you guys like 5 guys in garage or…
Currently (mid Feb 2016), we’re about 55 in headcount, located in 4 countries. We have two major R&D sites and two bigger sales offices. Totally 5 different offices in the USA, Finland and Poland.
Testing manually is very slow.
Manual remote testing is powered by a direct VNC connection. Latency can be expected as the connection is live. | http://docs.testdroid.com/testdroid-faq/ | 2017-01-16T15:09:33 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.testdroid.com |
.. _
Using A Chameleon Macro Name Within a Renderer Name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometime you'd like to render a macro inside of a Chameleon ZPT template
instead of the full Chameleon ZPT template. To render the content of a
``define-macro`` field inside a Chameleon ZPT template, given a Chameleon
template file named ``foo.pt`` and a macro named ``bar`` defined within it
(e.g. ``
Welcome to
This template doesn't use any advanced features of Mako, only the
``${}`` replacement syntax for names that are passed in as
:term:`renderer globals`. See the `the Mako documentation
${project}, an
application generated by the pyramid web application framework. | http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/_sources/narr/templates.txt | 2017-01-16T14:59:55 | CC-MAIN-2017-04 | 1484560279189.36 | [] | docs.pylonsproject.org |
Table of Contents
The. | http://docs-testing.evergreen-ils.org/docs/reorg/staffclient_sysadmin/_cancel_delay_reasons.html | 2017-08-16T19:38:31 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs-testing.evergreen-ils.org |
Tools: Gemini
This topic explains how to integrate TestRail with CounterSoft Gemini. There are currently three ways to integrate TestRail with Gemini, namely:
- Using defect URLs to link test results to Gemini
- Using the defect plugin for Gemini to push and look up Gemini issues
- Using reference URLs to link test cases to Gemini Gemini instance. Once the URLs have been configured, a new Add link appears next to the Defects field in the Add Test Result dialog. This link allows you to jump to Gemini's Create Issue form to report a new bug. Additionally, entered issue IDs are linked to your Gemini instance to make it easier to track the status of your issues.
To configure Gemini's URLs in TestRail, select Administration > Integration. You can alternatively enter separate bug tracker URLs for each project under Administration > Projects. Use the following example URLs to configure the addresses:
Gemini installation Defect View Url: Defect Add Url: Gemini on demand Defect View Url: Defect Add Url:
Defect plugins
Defect plugins can be used to implement a deeper bug tracker integration and TestRail comes with a ready-to-use Gemini defect plugin. To configure the defect plugin, select Administration > Integration and select Gemini Gemini without leaving TestRail. Once the test result was added, hovering the mouse cursor over an issue ID will open a window with useful information and status details about the issue in Gemini.
Customizations
The Gemini defect plugin was built to work with a standard Gemini configuration. TestRail allows you to customize the integration to work with your own custom fields or to map users between TestRail and Gemini. Please see the following articles for details on how to customize the integration:
Reference URLs
The reference URLs are used to link test cases to issues stored in Gemini via the References field. Once the URLs have been configured, issue IDs entered in the References field are linked to your Gemini instance to make it easier to jump to related issues, feature specifications or requirements.
To configure Gemini's URLs for the References field, select Administration > Integration. You can alternatively enter separate reference URLs for each project under Administration > Projects. Use the following example URLs to integrate TestRail with Gemini:
Gemini installation Reference View Url: Reference Add Url: Gemini on demand Reference View Url: Reference Add Url: | http://docs.gurock.com/testrail-integration/tools-gemini | 2017-08-16T19:25:36 | CC-MAIN-2017-34 | 1502886102393.60 | [array(['/_media/testrail-integration/gemini-add.png', None], dtype=object)
array(['/_media/testrail-integration/gemini-push.png', None], dtype=object)] | docs.gurock.com |
Order submitted. During checkout, the customer reviews the order, agrees to the Terms and Conditions, and clicks the Place Order button. Customers receive a confirmation of their orders, with a link to their customer account.
Order “Pending.” Before payment is processed the status of a
sales order is “Pending.” At this point, the order can still be canceled.
Payment received. Depending on the payment method, you, may be notified when the transaction is authorized and in some cases, processed. The status of the invoice is now “Processing.”.
Order “Processing.” When the customer logs into his account to check on the order, the status is still "Processing."
Order shipped. The shipment is submitted, and packing slips printed. You ship the package, and the customer is notified by email. Congratulations! You’re in business.
FEEDBACK | http://docs.magento.com/m1/ce/user_guide/order-processing/order-process-overview.html | 2017-08-16T19:32:30 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.magento.com |
For the latest, most advanced host monitoring, check out New Relic Infrastructure.
Available and used disk space appears on the New Relic Servers Disks page on the bottom Space usage chart. While the displayed value is percentage used, you can view the available and used space when you hover over the chart.
This is an example of how to use the New Relic REST API (v2) to find available and used disk space for a specific server ID and API key. The default time range has been changed to one week.
When acquiring data, the values returned may be affected by the time period you specify and the way the data is stored. For more information, see Extracting metric timeslice data.
Disk space available and used
To obtain the disk space for a specific disk for the selected time period, use the metric name
System/Disk/${DISK_NAME}/Used/bytes. For example:
curl -X GET "{APPID}/metrics/data.xml"\ -H "X-Api-Key:${API_KEY}" -i\ - d'names[]=System/Filesystem/${DISK_NAME/Used/bytes&values[]=average_response_time&values[]=average_exclusive_time&from=2014-09-17T21:00:00+00:00&to=2014-09-24T21:00:00+00:00&summarize=true'
Metric values include:
- Total disk space used, indicated by
average_response_time
- Capacity of the disk, indicated by
average_exclusive_time
Currently the returned values can be a factor of 1000 greater than expected.
Disk names
Replace the placeholder ${DISK_NAME} with the appropriate string. This depends on the type of system being used.
- For Windows, use these string values:
D:,
E:, etc.
- For Linux, use these string values:
^or
^mntor
^bootor similar, where the
^character represents the
/character found in Linux file systems. The
^character must be URL encoded as
%5efor input.
Linux example:
This example shows the placeholder
^ encoded on a Linux system.
- Linux file system name:
/dev/xvda
- Displayed in API output:
name=System/Disk/^dev^xvda
Encoded for API input:
names[]=System/Disk/%5Edev%5Exvda
To determine the disks available, use a command similar to this:
curl -X GET '{APPID}/metrics.json' \ -H 'X-Api-Key:${API_KEY}' -i \ -d 'name=System/Filesystem'
This will return the metric names for the file systems detected on your host.
Convert bytes (API) to megabytes (New Relic UI)
Data stored and returned by the API for disk space is in bytes, but it appears on the New Relic Servers Disks page in terms of gigabytes (based on 1024 being a kilobyte). To match the values obtained from your API calls with the values that appear in the UI, use these calculations:
Total disk space used = <average_response_time> / (1024)**2 Capacity of the disk = <average_exclusive_time>/ (1024)**2 Percent disk used = (<average_response_time> / <average_exclusive_time>) * 100
Currently the returned values can be a factor of 1000 greater than expected. | https://docs.newrelic.com/docs/servers/rest-api-examples-v2/server-api-examples/get-available-used-disk-space-v2 | 2017-08-16T19:14:48 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.newrelic.com |
HipChat
Get real time notifications in HipChat from TestFairy! Setting up the integration is easy.
Log into your HipChat account and create a new room
Give your room an awesome name. Hit Create room
From the options menu, select Integrations...
On the next page, simply select Build your own integration
Select your newly created room from the dropdown option
HipChat requires that you name the integration. Here, we've named it testfairy. Hit Create
On the next page, highlight the URL shown, and copy it to your clipboard
Now, head over to your TestFairy page, and select account preferences from the top menu
On the next page, select Webhooks from the menu, and select the +Add webhooks button
Give the integration an awesome name, and paste the URL from HipChat in the url field. Select the Events you're team is most interested in and select Add webhook to add and confirm the new integration
Success! You should now be recieving notifications in your HipChat channel.
Congratulations, you should now be receiving notifications from TestFairy in your HipChat channel!
Note Integrations require a paid account, click here for more information. | https://docs.testfairy.com/Integrations/HipChat.html | 2017-08-16T19:20:16 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.testfairy.com |
Tests¶
We strongly encourage developers to apply TDD. Not only as a test tool but as a design tool.
Run tests¶
Tuleap comes with a handy test environment, based on SimpleTest. File organization:
- Core tests (for things in src directory) can be found in tests/simpletest directory with same subdirectory organization (eg. src/common/frs/FRSPackage.class.php tests are in tests/simpletest/common/frs/FRSPackageTest.php).
- Plugins tests are in each plugin tests directory (eg. plugins/tracker/include/Tracker.class.php tests are in plugins/tracker/tests/TrackerTest.php).
To run tests you can either use:
- the web interface available at (given localhost is your development server)
- the CLI interface: make tests (at the root of the sources). You can run a file or a directory: php tests/bin/simpletest plugins/docman
Run tests with docker¶
We have docker images to run unit tests on all environments:
- centos6 + php 5.6: enalean/tuleap-simpletest:c6-php56
Basically, executing tests is as simple as, from root of Tuleap sources:
$> docker run --rm=true -v $PWD:/tuleap:ro enalean/tuleap-simpletest:c6-php56 \ /tuleap/tests/simpletest /tuleap/tests/integration /tuleap/plugins
If there is only one file or directory you are interested in:
$> docker run --rm=true -v $PWD:/tuleap:ro enalean/tuleap-simpletest:c6-php56 --nodb \ /tuleap/tests/simpletest/common/project/ProjectManagerTest.php
Note
Please note the –nodb switch, it allows a faster start when there is no DB involved.
REST tests¶
There is also a docker image for REST tests, just run the following command:
$> make tests_rest
It will execute all REST tests in a docker container. This container is stopped and removed once the tests are finished. If you need to run tests manually, do the following instead:
$> make tests_rest_setup $root@d4601e92ca3f> ./tests/rest/bin/run.sh setup $root@d4601e92ca3f> scl enable rh-php70 bash $root@d4601e92ca3f> ./tests/rest/vendor/bin/phpunit tests/rest/tests/ArtifactFilesTest.php
In case of failure, you may need to attach to this running container in order to parse logs for example:
$> docker exec -ti <name-of-the-container> bash $root@d4601e92ca3f> tail -f /var/log/httpd/error_log
Note
If you’re using an old version of docker, you might encounter error unknown flag: –mount
You can run your test container with:
docker run -ti –rm -v .tuleap:/usr/share/tuleap –tmpfs /tmp -w /usr/share/tuleap enalean/tuleap-test-rest:c6-php56-httpd24-mysql56 bash
Organize your tests¶
All the tests related to one class (therefore to one file) should be kept in one
test file (
src/common/foo/Bar.class.php tests should be in
tests/simpletest/common/foo/BarTest.php). However, we strongly encourage you
to split test cases in several classes to leverage on setUp.
class Bar_IsAvailableTest extends TuleapTestCase { //... Will test Bar->isAvailable() public method } class Bar_ComputeDistanceTest extends TuleapTestCase { //... Will test Bar->computeDistance() public method }
Of course, it’s by no mean mandatory and always up to the developer to judge
if it’s relevant or not to split tests in several classes. A good indicator
would be that you can factorize most of tests set up in the
setUp() method.
But if the
setUp() contains things that are only used by some tests,
it’s probably a sign that those tests (and corresponding methods) should
be in a dedicated class.
Write a test¶
What makes a good test:
- It’s simple
- It has an explicit name that fully describes what is tested
- It tests only ONE thing at a time
Differences with simpletest:
- tests methods can start with
itXxxkeyword instead of
testXxx. Example:
public function itThrowsAnExceptionWhenCalledWithNull()
On top of simpletest we added a bit of syntactic sugar to help writing readable tests. Most of those helpers are meant to help dealing with mock objects.
<?php class Bar_IsAvailableTest extends TuleapTestCase { public function itThrowsAnExceptionWhenCalledWithNull() { $this->expectException(); $bar = new Bar(); $bar->isAvailable(null); } public function itIsAvailableIfItHasMoreThan3Elements() { $foo = mock('Foo'); stub($foo)->count()->returns(4); // Syntaxic sugar for : // $foo = new MockFoo(); // $foo->setReturnValue('count', 4); $bar = new Bar(); $this->assertTrue($bar->isAvailable($foo)); } public function itIsNotAvailableIfItHasLessThan3Elements() { $foo = stub('Foo')->count()->returns(2); $bar = new Bar(); $this->assertFalse($bar->isAvailable($foo)); } }
Available syntaxic sugars:
$foo = mock('Foo'); stub($foo)->bar($arg1, $arg2)->returns(123); stub($foo)->bar($arg1, $arg2)->once(); stub($foo)->bar()->never(); stub($foo)->bar(arg1, arg2)->at(2); stub($foo)->bar()->count(4);
See details and more helpers in
plugins/tests/www/MockBuilder.php.
Helpers and database¶
Hint
A bit of vocabulary
Interactions between Tuleap and the database should be done via
DataAccessObject
(aka. dao) objects (see
src/common/dao/include/DataAccessObject.class.php)
A dao that returns rows from database wrap the result in a
DataAccessResult
(aka. dar) object (see
src/common/dao/include/DataAccessResult.class.php)
Tuleap test helpers ease interaction with database objects. If you need to interact
with a query result you can use mock’s
returnsDar(),
returnsEmptyDar()
and
returnsDarWithErrors().
public function itDemonstrateHowToUseReturnsDar() { $project_id = 15; $project = stub('Project')->getId()->returns($project_id); $dao = stub('FooBarDao')->searchByProjectId($project_id)->returnsDar( array( 'id' => 1 'name' => 'foo' ), array( 'id' => 2 'name' => 'klong' ), ); $some_factory = new Some_Factory($dao); $some_stuff = $some_factory->getByProject($project); $this->assertEqual($some_stuff[0]->getId(), 1); $this->assertEqual($some_stuff[1]->getId(), 2); }
Builders¶
Keep tests clean, small and readable is a key for maintainability (and avoid writing crappy tests). A convenient way to simplify tests is to use Builder Pattern to wrap build of complex objects.
Note: this is not an alternative to partial mocks and should be used only on “Data” objects (logic less, transport objects). It’s not a good idea to create a builder for a factory or a manager.
At time of writing, there are 2 builders in Core aUser.php and aRequest.php:
public function itDemonstrateHowToUseUserAndRequest() { $current_user = aUser()->withId(12)->withUserName('John Doe')->build(); $new_user = aUser()->withId(655957)->withUserName('Usain Bolt')->build(); $request = aRequest() ->withUser($current_user) ->withParam('func', 'add_user') ->withParam('user_id', 655957) ->build(); $some_manager = new Some_Manager($request); $some_manager->createAllNewUsers(); }
There are plenty of builders in plugins/tracker/tests/builders and you are strongly encouraged to add new one when relevant.
Integration tests for REST API of plugins¶
If your new plugin provides some new REST routes, you should implement new integration tests. These tests must be put in the tests/rest/ directory of your plugin.
If you want more details about integration tests for REST, go have a look at tuleap/tests/rest/README.md. | http://tuleap-documentation.readthedocs.io/en/latest/developer-guide/tests.html | 2018-02-18T03:22:16 | CC-MAIN-2018-09 | 1518891811352.60 | [] | tuleap-documentation.readthedocs.io |
Checking Request Execution in Product Advertising API
You can check the execution of a request first by examining the
IsValid element in each response.
If the element is set to True, the request was executed successfully and you can display the information in the response.
If the value is False, there was an error in the request syntax. You can start troubleshooting the error
in the request by viewing the errors returned in the response. The following example
error statement shows that the request did not contain a required parameter,
ItemId.
<IsValid>False</IsValid> ... <Error> <Code>AWS.MissingParameters</Code> <Message>Your request is missing required parameters. Required parameters include ItemId.</Message> </Error>
The
IsValid element, however, is not always returned when a request
fails. For example, if you mistype the name of the operation, Product Advertising
API returns the following
message, which does not include the
IsValid element :
<Error> <Code>AWS.InvalidOperationParameter</Code> <Message>The Operation parameter is invalid. Please modify the Operation parameter and retry. Valid values for the Operation parameter include ListLookup, CartGet, SellerListingLookup, ItemLookup, SimilarityLookup, SellerLookup, ItemSearch, BrowseNodeLookup, CartModify, CartClear, CartCreate, CartAdd, SellerListingSearch. </Message> </Error>
Although an
IsValid value of True specifies that the request was
valid and executed, it does not mean that a result was obtained. There may not have
been any
items that satisfied the search criteria, for example. To check for this condition,
either
search for the presence of an Error element, or evaluate the value of the
TotalItems element. If the value is zero, there are no results to
display, as shown in the following example.
<IsValid>True</IsValid> ... <Error> <Code>AWS.ECommerceService.NoExactMatches</Code> <Message>We did not find any matches for your request.</Message> </Error> ... <TotalResults>0</TotalResults> <TotalPages>0</TotalPages>
Java
Errors can occur at many levels in the XML response. The following example determines
if the response contains the element,
OperationRequest. This response element is included in every response. If it missing, the response
is null. That might happen, for example, if the Product Advertising API web service
times out the request. The second error check determines if there is an
Items response element in the response.
assertNotNull("OperationRequest is null", operationRequest ); System.out.println("Result Time = " + operationRequest.getRequestProcessingTime()); for (Items itemList : response.getItems()) { Request requestElement = itemList.getRequest(); assertNotNull("Request Element is null", requestElement);
To do a thorough job of error checking, you would have to evaluate all of the response elements returned to see if they were, in fact, returned. The preceding example provides a template for such code. Including all of that code here would complicate the example beyond the scope of this guide.
C#
The following code snippet verifies that the request was executed successfully. The code checks for a null response.
//Verify a successful request ItemSearchResponse response = service.ItemSearch(itemSearch); //Check for null response if (response == null) throw new Exception("Server Error - no response received!"); ItemSearchResult[] itemsArray = response.GetItemSearchResult; if (response.OperationRequest.Errors != null) throw new Exception(response.OperationRequest.Errors[0].Message);
Perl
The following code snippet verifies that the request was executed successfully. The code checks for the presence of "Error" in the response.
#See if "Error" is in the response. if ( $xp->find("//Error") ) { print "There was an error processing your request:\n", " Error code: ", $xp->findvalue("//Error/Code"), "\n", " ", $xp->findvalue("//Error/Message"), "\n\n"; }
PHP
The following code snippet verifies that the request was executed successfully. The code checks for an Error element in the XML response.
//Verify a successful request foreach($parsed_xml->OperationRequest->Errors->Error as $error){ echo "Error code: " . $error->Code . "\r\n"; echo $error->Message . "\r\n"; echo "\r\n"; } | https://docs.aws.amazon.com/AWSECommerceService/latest/GSG/CheckingRequestExecution.html | 2018-02-18T02:44:28 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.aws.amazon.com |
Install snapd on Ubuntu
Ubuntu 16.04 (and derivatives)
Ubuntu includes snapd by default starting with the 16.04 LTS (xenial) release. No installation steps are required and you can use snapd directly.
Ubuntu 14.04
For the older 14.04 LTS (trusty) release or any flavour (e.g. Lubuntu) which doesn’t include snapd by default, you have to install it manually from the archive:
sudo apt update sudo apt install snapd
Lubuntu
Snaps which use the pulseaudio interface to playback sounds & music also require pulseaudio to be installed. This is already installed for the majority of Ubuntu flavours, however Lubuntu does not ship pulseaudio, so it must be installed manually if audio is desired from those snaps.
sudo apt install pulseaudio
Once installed, logout and back in to ensure pulseaudio is running. | https://docs.snapcraft.io/core/install-ubuntu | 2018-02-18T03:18:31 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.snapcraft.io |
Modifying Default Styles
This article will show you two ways to modify the default style of a control:
Modifying the Default Style Using Microsoft Blend
Modifying the Default Style Using Visual Studio
Modifying the Default Style By Extracting It From the Theme XAML File
For the purposes of this article, we will modify RadGridView's FilteringDropDown element, but the steps described can be applied to every control.
Modifying the Default Style Using Microsoft Blend
This article shows how to modify styles using Blend for Visual Studio 2012, but the approach should be similar with different versions of the program.
Editing Additional Styles
The first option to create the needed style is to right-click on your instance of RadGridView and from the context menu select Edit Additional Templates -> Desired Style -> Edit a Copy. You can then procede to the Create Style Resource section.
Figure 1: Editing additional templates
Creating a Dummy Control
If you cannot find the desired style from the list of additional styles, you will first need to create a dummy control in Blend. To do so, open the UserControl that hosts your RadGridView in Expression Blend and locate the desired control in the Assets tab.
In our case, we can find the FilteringDropDown under Controls -> All -> FilteringDropDown.
Figure 2: Selecting the FilteringDropDown from the Assets tab
You can then double-click or draw to place a dummy control of the selected type on the scene.
Figure 3: The dummy FilteringDropDown
Right-click on the created dummy control and select Edit Template -> Edit a Copy.
Create Style Resource
The Create Style Resource dialog will prompt you for the name of the style and where to place it within your application.
For this example, we will choose to apply this style to all FilteringDropDown controls and place it in our App.xaml file.
If you choose to define the style in the resources of the application, it would be available for the entire application. This allows you to define a style only once and then reuse it where needed.
Figure 4: The "Create Style Resource" window
After clicking OK, the default style of the control will be created in the selected location. If you prefer, you can modify it directly from XAML by right-clicking in the scene and choosing View Source from the context menu. The other options is to modify it in Blend as we will do now.
Figure 5: The FilteringDropDown template structure
Please bear in mind that the control template may be different in the different themes. This example modifies the OfficeBlack theme.
Note that when changing a Control Template you should include all required parts. Even if your code compiles, some of the functionality may be impacted due to the omission of the required parts. The required parts are usually marked with the prefix "PART_".
Modifying the Control Template
To change the funneling icon's border, for example, let's select the Path control responsible for the border of the FilteringDropDown from the Objects and Timeline pane and set its Fill to Red.
Figure 6: Changing the fill of the path
Here is a snapshot of the final result:
Figure 7: The modified FilteringDropDown
Modifying the Default Style Using Visual Studio
You could also modify the default style of a control by using the Design view of Visual Studio similarly to using Blend.
Figure 8: Modifying default styles through Visual Studio's Design view
Modifying the Default Style by Extracting it from the Theme XAML File
If you prefer, you can manually extract the needed style from the respective XAML file in the Themes.Implicit folder of your UI for WPF installation and modify its code to suit your needs.
The process is similar to manually extracting the Control Template of a given control.
Note that when changing a Control Template you should include all required parts. Even if your code compiles, some of the functionality may be impacted due to the omission of the required parts. The required parts are usually marked with the prefix "PART_". | https://docs.telerik.com/devtools/silverlight/controls/radgridview/styles-and-templates/modifying-default-styles | 2018-02-18T03:21:02 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['images/RadGridView_Styles_and_Templates_Additional_Styles.png',
'Editing additional templates'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_1.png',
'Selecting the FilteringDropDown from the Assets tab'],
dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_2.png',
'The dummy FilteringDropDown'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_7.png',
'The "Create Style Resource" window'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_4.png',
'The FilteringDropDown template structure'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_5.png',
'Changing the fill of the path'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Styling_FilteringControl_6.png',
'The modified FilteringDropDown'], dtype=object)
array(['images/RadGridView_Styles_and_Templates_Visual_Studio_Design_View.png',
"Modifying default styles through Visual Studio's Design view"],
dtype=object) ] | docs.telerik.com |
Use( 'woocommerce_product_search' ) ) { echo woocommerce_product_search( array( 'limit' => 20 ) ); } ?>
This code can be used in any template file, for example, you could add it to your theme’s
single-product.php template.
Replacing the default Search Form ↑ Back to top
You need to edit your theme’s searchform.php file, preferably using a child theme to preserve customizations against upgrades. More here: How to set up and use a child theme
From your parent theme’s searchform.php template file, copy it from the parent theme directory to your child theme directory and edit to conditionally output the WooCommerce Product Search (if it exists). The following snippet demonstrates one way that this might be accomplished, in this case placing it at the beginning of the
*This does not apply to Storefront and child themes based on it. For these, please refer to the section below.
Replacing the search form in Storefront themes ↑ Back to top
If you are using Storefront or one of its child themes, there is no searchform.php file; you can use this code in your child theme's functions.php instead. Don't put it in Storefront's own functions.php, as it will be overwritten the next time you update the theme.
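One possible sketch, not the extension's official snippet: Storefront renders its header search through the pluggable storefront_product_search() function, so a child theme can redefine it. That the function is pluggable in your Storefront version is an assumption to verify.

// Sketch for a child theme's functions.php. Assumes Storefront's
// storefront_product_search() is pluggable and that the extension
// provides woocommerce_product_search(); verify both before use.
function storefront_product_search() {
	echo '<div class="site-search">';
	if ( function_exists( 'woocommerce_product_search' ) ) {
		echo woocommerce_product_search( array( 'limit' => 20 ) );
	} else {
		the_widget( 'WC_Widget_Product_Search', 'title=' ); // core WooCommerce search widget as fallback
	}
	echo '</div>';
}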
All content with label consistent_hash+distribution+remoting.
Related Labels:
clustering, br, client_server, jboss_cache, infinispan, murmurhash2, hinting, murmurhash, rehash, state_transfer, cache, grid, hash_function, memcached, hotrod, buddy_replication, colocation, client, rebalance,
data_grid, partitioning
Briz framework
Briz is an easy-to-use PHP framework designed for writing powerful web applications, from a simple API to a large web application.
Rapid to Develop and Quick to Run
It is faster than many micro frameworks out there. Even if you use most of its features, like route inheritance, identity, controllers and views (the "Briz loaded" configuration in the benchmark chart), it will still be faster than many popular micro frameworks, so don't worry about speed.
Features
Briz has a good set of features to help you.
Easy to Add Dependencies
Briz uses dependency injection that is not overly strict, so adding dependencies is very easy.
Introducing a new Routing System
Briz comes with a new routing system that helps you easily extend and separate your web application. Route inheritance lets you specify that one route extends another; this works just like inheritance in a programming language. A hypothetical sketch follows.
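To make that concrete, here is a hypothetical sketch. The third argument naming the parent route is an assumption made for illustration, not a confirmed Briz signature; check the Routing chapter for the actual API.

// Hypothetical sketch of route inheritance. The parent-route argument
// ("web") is assumed for illustration; see the Routing docs for the
// real signature.
$app->route("web", function($router){
    $router->get('/', function($b){
        $b->response->write('public home');
    });
});

// "api" is assumed to extend "web", inheriting its parent's routes
// while adding or overriding its own.
$app->route("api", function($router){
    $router->get('/status', function($b){
        $b->response->write('{"ok":true}');
    });
}, "web");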
Identity
Identity is about telling one route apart from another. This feature can also be used in other parts of your application through a trait.
PSR-7
Briz supports PSR-7, so you can work interoperably with any PSR-7 message implementation.
See Briz Hello world for basic hello-world examples, or Quick Start for more details.
Basic Usage
require './vendor/autoload.php';

$app = new Briz\App();

$app->route("web", function($router){

    $router->get('/', function($b){
        $b->response->write('hello world');
    });

    $router->get('/{name}', function($b){
        $data = 'hello ' . $b->args['name'];
        $b->response->write($data);
    });
});

$app->run();
Contents:
- GitHub
- Briz Hello world
- Quick Start
- Basics
- Routing
- Controllers
- View
- Identity
- Providers
- Container Reference
- Collections | http://briz.readthedocs.io/en/stable/ | 2018-02-18T03:25:10 | CC-MAIN-2018-09 | 1518891811352.60 | [] | briz.readthedocs.io |
2. Installation
2.1. Installation of the software
The installation procedure is described in the KiCad documentation.
2.2. Modifying the default configuration

3.9.6. The Preferences menu

3.9.7. The Dimensions menu

3.9.9. The Design Rules menu
Provides access to two dialogs:
Setting Design Rules (tracks and vias sizes, clearances)
Setting Layers (number, enabled layers, and layer names)
3.9.10. The Help menu
Provides access to the user manuals and to the version information menu (Pcbnew About).
3.10. Using icons on the top toolbar.
Deleting elements.

Available options.

5. Layers
5.1. Introduction
Pcbnew can work with 50 different layers:
Between 1 and 32 copper layers for routing tracks.
14 fixed-purpose technical layers:
12 paired layers (Front/Back): Adhesive, Solder Paste, Silk Screen, Solder Mask, Courtyard, Fabrication
2 standalone layers: Edge Cuts, Margin
4 auxiliary layers that you can use any way you want: Comments, E.C.O. 1, E.C.O. 2, Drawings
5.2. Setting up layers
To open the Layers Setup from the menu bar, select Design Rules → Layers Setup.
The number of copper layers, their names, and their function are configured there. Unused technical layers can be disabled.
5.3. Layer Description
5.3.1. Copper Layers

In the zone properties dialog you can set:
Net
Layer
Filling options
Pad options
Priority level
Other formats:

12. Footprint Editor - Managing Libraries
12.1. Overview of Footprint Editor
Footprint Editor enables the creation and the editing of footprints:
Adding and removing pads.
Changing pad properties (shape, layer) for individual pads or globally for all pads of a footprint.
Editing graphic elements (lines, text).
It is also possible to create new libraries.
The library extension is .mod.
12.2. Accessing Footprint Editor
The Footprint Editor can be accessed in two different ways:

Fields

KiCad Scripting Reference
'images/Library_list_menu_item.png'], dtype=object)
array(['images/Footprint_library_list.png',
'images/Footprint_library_list.png'], dtype=object)
array(['images/Library_tables_menu_item.png',
'images/Library_tables_menu_item.png'], dtype=object)
array(['images/Footprint_tables_list.png',
'images/Footprint_tables_list.png'], dtype=object)
array(['images/Right-click_legacy_menu.png',
'images/Right-click_legacy_menu.png'], dtype=object)
array(['images/Pcbnew_coordinate_status_display.png',
'images/Pcbnew_coordinate_status_display.png'], dtype=object)
array(['images/Pcbnew_legacy_block_selection_dialog.png',
'images/Pcbnew_legacy_block_selection_dialog.png'], dtype=object)
array(['images/icons/unit_inch.png', 'images/icons/unit_inch.png'],
dtype=object)
array(['images/Pcbnew_top_menu_bar.png', 'images/Pcbnew_top_menu_bar.png'],
dtype=object)
array(['images/Pcbnew_file_menu.png', 'images/Pcbnew_file_menu.png'],
dtype=object)
array(['images/Pcbnew_edit_menu.png', 'images/Pcbnew_edit_menu.png'],
dtype=object)
array(['images/Pcbnew_view_menu.png', 'images/Pcbnew_view_menu.png'],
dtype=object)
array(['images/Sample_3D_board.png', 'images/Sample_3D_board.png'],
dtype=object)
array(['images/Pcbnew_place_menu.png', 'images/Pcbnew_place_menu.png'],
dtype=object)
array(['images/Pcbnew_route_menu.png', 'images/Pcbnew_route_menu.png'],
dtype=object)
array(['images/Pcbnew_preferences_menu.png',
'images/Pcbnew_preferences_menu.png'], dtype=object)
array(['images/Pcbnew_dimensions_menu.png',
'images/Pcbnew_dimensions_menu.png'], dtype=object)
array(['images/Pcbnew_tools_menu.png', 'images/Pcbnew_tools_menu.png'],
dtype=object)
array(['images/Pcbnew_design_rules_menu.png',
'images/Pcbnew_design_rules_menu.png'], dtype=object)
array(['images/Pcbnew_top_toolbar.png', 'images/Pcbnew_top_toolbar.png'],
dtype=object)
array(['images/Pcbnew_popup_normal_mode.png',
'images/Pcbnew_popup_normal_mode.png'], dtype=object)
array(['images/Pcbnew_popup_normal_mode_track.png',
'images/Pcbnew_popup_normal_mode_track.png'], dtype=object)
array(['images/Pcbnew_popup_normal_mode_footprint.png',
'images/Pcbnew_popup_normal_mode_footprint.png'], dtype=object)
array(['images/Pcbnew_popup_footprint_mode.png',
'images/Pcbnew_popup_footprint_mode.png'], dtype=object)
array(['images/Pcbnew_popup_footprint_mode_track.png',
'images/Pcbnew_popup_footprint_mode_track.png'], dtype=object)
array(['images/Pcbnew_popup_footprint_mode_footprint.png',
'images/Pcbnew_popup_footprint_mode_footprint.png'], dtype=object)
array(['images/Pcbnew_popup_track_mode.png',
'images/Pcbnew_popup_track_mode.png'], dtype=object)
array(['images/Pcbnew_popup_track_mode_track.png',
'images/Pcbnew_popup_track_mode_track.png'], dtype=object)
array(['images/Pcbnew_popup_track_mode_footprint.png',
'images/Pcbnew_popup_track_mode_footprint.png'], dtype=object)
array(['images/icons/netlist.png', 'images/icons/netlist.png'],
dtype=object)
array(['images/en/Pcbnew_netlist_dialog.png',
'images/en/Pcbnew_netlist_dialog.png'], dtype=object)
array(['images/Pcbnew_import_spread_footprints.png',
'images/Pcbnew_import_spread_footprints.png'], dtype=object)
array(['images/Pcbnew_stacked_footprints.png',
'images/Pcbnew_stacked_footprints.png'], dtype=object)
array(['images/Pcbnew_move_all_modules.png',
'images/Pcbnew_move_all_modules.png'], dtype=object)
array(['images/Pcbnew_unstacked_footprints.png',
'images/Pcbnew_unstacked_footprints.png'], dtype=object)
array(['images/Pcbnew_layer_setup_dialog.png',
'images/Pcbnew_layer_setup_dialog.png'], dtype=object)
array(['images/Pcbnew_layer_setup_dialog_layer_properties.png',
'images/Pcbnew_layer_setup_dialog_layer_properties.png'],
dtype=object)
array(['images/Pcbnew_layer_manager_pane.png',
'images/Pcbnew_layer_manager_pane.png'], dtype=object)
array(['images/Pcbnew_layer_selection_dropdown.png',
'images/Pcbnew_layer_selection_dropdown.png'], dtype=object)
array(['images/Pcbnew_layer_selection_popup.png',
'images/Pcbnew_layer_selection_popup.png'], dtype=object)
array(['images/Pcbnew_layer_selection_dialog.png',
'images/Pcbnew_layer_selection_dialog.png'], dtype=object)
array(['images/Pcbnew_via_layer_pair_popup.png',
'images/Pcbnew_via_layer_pair_popup.png'], dtype=object)
array(['images/Pcbnew_via_layer_pair_dialog.png',
'images/Pcbnew_via_layer_pair_dialog.png'], dtype=object)
array(['images/icons/contrast_mode.png', 'images/icons/contrast_mode.png'],
dtype=object)
array(['images/Pcbnew_copper_layers_contrast_normal.png',
'images/Pcbnew_copper_layers_contrast_normal.png'], dtype=object)
array(['images/Pcbnew_copper_layers_contrast_high.png',
'images/Pcbnew_copper_layers_contrast_high.png'], dtype=object)
array(['images/Pcbnew_technical_layers_contrast_normal.png',
'images/Pcbnew_technical_layers_contrast_normal.png'], dtype=object)
array(['images/Pcbnew_technical_layers_contrast_high.png',
'images/Pcbnew_technical_layers_contrast_high.png'], dtype=object)
array(['images/Pcbnew_simple_board_outline.png',
'images/Pcbnew_simple_board_outline.png'], dtype=object)
array(['images/Pcbnew_board_outline_imported_from_a_DXF.png',
'images/Pcbnew_board_outline_imported_from_a_DXF.png'],
dtype=object)
array(['images/en/Pcbnew_netlist_dialog.png',
'images/en/Pcbnew_netlist_dialog.png'], dtype=object)
array(['images/Pcbnew_board_outline_with_dogpile.png',
'images/Pcbnew_board_outline_with_dogpile.png'], dtype=object)
array(['images/Pcbnew_board_outline_with_globally_placed_modules.png',
'images/Pcbnew_board_outline_with_globally_placed_modules.png'],
dtype=object)
array(['images/Pcbnew_bad_tracks_deletion_option.png',
'images/Pcbnew_bad_tracks_deletion_option.png'], dtype=object)
array(['images/Pcbnew_extra_footprints_deletion_option.png',
'images/Pcbnew_extra_footprints_deletion_option.png'], dtype=object)
array(['images/Pcbnew_unlock_footprint_option.png',
'images/Pcbnew_unlock_footprint_option.png'], dtype=object)
array(['images/Pcbnew_exchange_module_option.png',
'images/Pcbnew_exchange_module_option.png'], dtype=object)
array(['images/Pcbnew_module_selection_option.png',
'images/Pcbnew_module_selection_option.png'], dtype=object)
array(['images/Pcbnew_change_modules_button.png',
'images/Pcbnew_change_modules_button.png'], dtype=object)
array(['images/Pcbnew_footprint_exchange_options.png',
'images/Pcbnew_footprint_exchange_options.png'], dtype=object)
array(['images/Pcbnew_ratsnest_during_move.png',
'images/Pcbnew_ratsnest_during_move.png'], dtype=object)
array(['images/Pcbnew_circuit_after_placement.png',
'images/Pcbnew_circuit_after_placement.png'], dtype=object)
array(['images/Pcbnew_context_module_mode_module_under_cursor.png',
'images/Pcbnew_context_module_mode_module_under_cursor.png'],
dtype=object)
array(['images/Pcbnew_context_module_mode_no_module_under_cursor.png',
'images/Pcbnew_context_module_mode_no_module_under_cursor.png'],
dtype=object)
array(['images/Pcbnew_design_rules_dropdown.png',
'images/Pcbnew_design_rules_dropdown.png'], dtype=object)
array(['images/Pcbnew_design_rules_top_toolbar.png',
'images/Pcbnew_design_rules_top_toolbar.png'], dtype=object)
array(['images/Pcbnew_preferences_menu.png',
'images/Pcbnew_preferences_menu.png'], dtype=object)
array(['images/Pcbnew_general_options_dialog.png',
'images/Pcbnew_general_options_dialog.png'], dtype=object)
array(['images/Pcbnew_design_rules_editor_netclass_tab.png',
'images/Pcbnew_design_rules_editor_netclass_tab.png'], dtype=object)
array(['images/Pcbnew_design_rules_editor_global_tab.png',
'images/Pcbnew_design_rules_editor_global_tab.png'], dtype=object)
array(['images/Pcbnew_specific_size_options.png',
'images/Pcbnew_specific_size_options.png'], dtype=object)
array(['images/Pcbnew_dr_example_rustic.png',
'images/Pcbnew_dr_example_rustic.png'], dtype=object)
array(['images/Pcbnew_dr_example_standard.png',
'images/Pcbnew_dr_example_standard.png'], dtype=object)
array(['images/Pcbnew_creating_new_track.png',
'images/Pcbnew_creating_new_track.png'], dtype=object)
array(['images/Pcbnew_track_in_progres_context.png',
'images/Pcbnew_track_in_progres_context.png'], dtype=object)
array(['images/Pcbnew_track_toolbar.png',
'images/Pcbnew_track_toolbar.png'], dtype=object)
array(['images/Pcbnew_track_context_menu.png',
'images/Pcbnew_track_context_menu.png'], dtype=object)
array(['images/Pcbnew_new_track_in_progress.png',
'images/Pcbnew_new_track_in_progress.png'], dtype=object)
array(['images/Pcbnew_new_track_completed.png',
'images/Pcbnew_new_track_completed.png'], dtype=object)
array(['images/Pcbnew_track_global_edit_context_menu.png',
'images/Pcbnew_track_global_edit_context_menu.png'], dtype=object)
array(['images/Pcbnew_track_global_edit_dialog.png',
'images/Pcbnew_track_global_edit_dialog.png'], dtype=object)
array(['images/en/rules_editor.png', 'Rules editor'], dtype=object)
array(['images/en/opengl_menu.png', 'OpenGL mode'], dtype=object)
array(['images/en/router_options.png', 'Router options window screenshot'],
dtype=object)
array(['images/Pcbnew_zone_properties_dialog.png',
'images/Pcbnew_zone_properties_dialog.png'], dtype=object)
array(['images/Pcbnew_zone_limit_example.png',
'images/Pcbnew_zone_limit_example.png'], dtype=object)
array(['images/Pcbnew_zone_priority_level_setting.png',
'images/Pcbnew_zone_priority_level_setting.png'], dtype=object)
array(['images/Pcbnew_zone_priority_example.png',
'images/Pcbnew_zone_priority_example.png'], dtype=object)
array(['images/Pcbnew_zone_priority_example_after_filling.png',
'images/Pcbnew_zone_priority_example_after_filling.png'],
dtype=object)
array(['images/Pcbnew_zone_context_menu.png',
'images/Pcbnew_zone_context_menu.png'], dtype=object)
array(['images/Pcbnew_zone_filling_result.png',
'images/Pcbnew_zone_filling_result.png'], dtype=object)
array(['images/Pcbnew_zone_filled_with_cutout.png',
'images/Pcbnew_zone_filled_with_cutout.png'], dtype=object)
array(['images/Pcbnew_zone_filling_options.png',
'images/Pcbnew_zone_filling_options.png'], dtype=object)
array(['images/Pcbnew_zone_include_pads.png',
'images/Pcbnew_zone_include_pads.png'], dtype=object)
array(['images/Pcbnew_zone_exclude_pads.png',
'images/Pcbnew_zone_exclude_pads.png'], dtype=object)
array(['images/Pcbnew_zone_thermal_relief.png',
'images/Pcbnew_zone_thermal_relief.png'], dtype=object)
array(['images/Pcbnew_thermal_relief_settings.png',
'images/Pcbnew_thermal_relief_settings.png'], dtype=object)
array(['images/Pcbnew_thermal_relief_parameters.png',
'images/Pcbnew_thermal_relief_parameters.png'], dtype=object)
array(['images/Pcbnew_add_cutout_menu_item.png',
'images/Pcbnew_add_cutout_menu_item.png'], dtype=object)
array(['images/Pcbnew_zone_unfilled_cutout_outline.png',
'images/Pcbnew_zone_unfilled_cutout_outline.png'], dtype=object)
array(['images/Pcbnew_zone_modification_menu_items.png',
'images/Pcbnew_zone_modification_menu_items.png'], dtype=object)
array(['images/Pcbnew_zone_corner_move_during.png',
'images/Pcbnew_zone_corner_move_during.png'], dtype=object)
array(['images/Pcbnew_zone_corner_move_after.png',
'images/Pcbnew_zone_corner_move_after.png'], dtype=object)
array(['images/Pcbnew_zone_add_similar_during.png',
'images/Pcbnew_zone_add_similar_during.png'], dtype=object)
array(['images/Pcbnew_zone_add_similar_after.png',
'images/Pcbnew_zone_add_similar_after.png'], dtype=object)
array(['images/Pcbnew_technical_layer_zone_dialog.png',
'images/Pcbnew_technical_layer_zone_dialog.png'], dtype=object)
array(['images/icons/add_keepout_area.png',
'images/icons/add_keepout_area.png'], dtype=object)
array(['images/Pcbnew_keepout_area_properties.png',
'images/Pcbnew_keepout_area_properties.png'], dtype=object)
array(['images/Pcbnew_final_preparation_example_board.png',
'images/Pcbnew_final_preparation_example_board.png'], dtype=object)
array(['images/Pcbnew_layer_colour_key.png',
'images/Pcbnew_layer_colour_key.png'], dtype=object)
array(['images/Pcbnew_DRC_dialog.png', 'images/Pcbnew_DRC_dialog.png'],
dtype=object)
array(['images/Pcbnew_setting_pcb_origin.png',
'images/Pcbnew_setting_pcb_origin.png'], dtype=object)
array(['images/Pcbnew_plot_dialog.png', 'images/Pcbnew_plot_dialog.png'],
dtype=object)
array(['images/Pcbnew_plot_postscript_dialog.png',
'images/Pcbnew_plot_postscript_dialog.png'], dtype=object)
array(['images/Pcbnew_plot_fine_scale_setting.png',
'images/Pcbnew_plot_fine_scale_setting.png'], dtype=object)
array(['images/Pcbnew_plot_options_gerber.png',
'images/Pcbnew_plot_options_gerber.png'], dtype=object)
array(['images/Pcbnew_plot_options_other_formats.png',
'images/Pcbnew_plot_options_other_formats.png'], dtype=object)
array(['images/Pcbnew_pad_mask_clearance_menu_item.png',
'images/Pcbnew_pad_mask_clearance_menu_item.png'], dtype=object)
array(['images/Pcbnew_pad_mask_settings_dialog.png',
'images/Pcbnew_pad_mask_settings_dialog.png'], dtype=object)
array(['images/Pcbnew_drill_file_dialog.png',
'images/Pcbnew_drill_file_dialog.png'], dtype=object)
array(['images/Pcbnew_drill_origin_setting.png',
'images/Pcbnew_drill_origin_setting.png'], dtype=object)
array(['images/Pcbnew_advanced_tracing_options.png',
'images/Pcbnew_advanced_tracing_options.png'], dtype=object)
array(['images/Pcbnew_module_properties.png',
'images/Pcbnew_module_properties.png'], dtype=object)
array(['images/Modedit_main_window.png', 'images/Modedit_main_window.png'],
dtype=object)
array(['images/Modedit_top_toolbar.png', 'images/Modedit_top_toolbar.png'],
dtype=object)
array(['images/Modedit_module_properties.png',
'images/Modedit_module_properties.png'], dtype=object)
array(['images/Pcbnew_archive_footprints_menu.png',
'images/Pcbnew_archive_footprints_menu.png'], dtype=object)
array(['images/Pcbnew_example_library.png',
'images/Pcbnew_example_library.png'], dtype=object)
array(['images/Modedit_main_window.png', 'images/Modedit_main_window.png'],
dtype=object)
array(['images/Modedit_context_menu_module_parameters.png',
'images/Modedit_context_menu_module_parameters.png'], dtype=object)
array(['images/Modedit_context_menu_pads.png',
'images/Modedit_context_menu_pads.png'], dtype=object)
array(['images/Modedit_context_menu_graphics.png',
'images/Modedit_context_menu_graphics.png'], dtype=object)
array(['images/Modedit_module_properties_dialog.png',
'images/Modedit_module_properties_dialog.png'], dtype=object)
array(['images/Modedit_pad_properties_dialog.png',
'images/Modedit_pad_properties_dialog.png'], dtype=object)
array(['images/Modedit_pad_offset_example.png',
'images/Modedit_pad_offset_example.png'], dtype=object)
array(['images/Modedit_pad_delta_example.png',
'images/Modedit_pad_delta_example.png'], dtype=object)
array(['images/Modedit_footprint_level_pad_settings.png',
'images/Modedit_footprint_level_pad_settings.png'], dtype=object)
array(['images/Modedit_pad_level_pad_settings.png',
'images/Modedit_pad_level_pad_settings.png'], dtype=object)
array(['images/Modedit_footprint_text_properties.png',
'images/Modedit_footprint_text_properties.png'], dtype=object)
array(['images/Modedit_module_autoplace_settings.png',
'images/Modedit_module_autoplace_settings.png'], dtype=object)
array(['images/Modedit_module_attributes.png',
'images/Modedit_module_attributes.png'], dtype=object)
array(['images/Modedit_module_properties_documentation_fields.png',
'images/Modedit_module_properties_documentation_fields.png'],
dtype=object)
array(['images/Modedit_module_3d_options.png',
'images/Modedit_module_3d_options.png'], dtype=object)
array(['images/Modedit_footprint_3d_preview.png',
'images/Modedit_footprint_3d_preview.png'], dtype=object)
array(['images/icons/duplicate_pad.png', 'images/icons/duplicate_pad.png'],
dtype=object)
array(['images/icons/duplicate_line.png',
'images/icons/duplicate_line.png'], dtype=object)
array(['images/icons/duplicate_text.png',
'images/icons/duplicate_text.png'], dtype=object)
array(['images/icons/duplicate_module.png',
'images/icons/duplicate_module.png'], dtype=object)
array(['images/icons/duplicate_target.png',
'images/icons/duplicate_target.png'], dtype=object)
array(['images/icons/duplicate_zone.png',
'images/icons/duplicate_zone.png'], dtype=object)
array(['images/Pcbnew_move_exact_cartesian.png',
'images/Pcbnew_move_exact_cartesian.png'], dtype=object)
array(['images/Pcbnew_move_exact_polar.png',
'images/Pcbnew_move_exact_polar.png'], dtype=object)
array(['images/icons/array_pad.png', 'images/icons/array_pad.png'],
dtype=object)
array(['images/icons/array_line.png', 'images/icons/array_line.png'],
dtype=object)
array(['images/icons/array_text.png', 'images/icons/array_text.png'],
dtype=object)
array(['images/icons/array_module.png', 'images/icons/array_module.png'],
dtype=object)
array(['images/icons/array_target.png', 'images/icons/array_target.png'],
dtype=object)
array(['images/icons/array_zone.png', 'images/icons/array_zone.png'],
dtype=object)
array(['images/Pcbnew_array_dialog_grid.png',
'images/Pcbnew_array_dialog_grid.png'], dtype=object)
array(['images/Pcbnew_array_grid_offsets.png',
'images/Pcbnew_array_grid_offsets.png'], dtype=object)
array(['images/Pcbnew_array_grid_stagger_rows_2.png',
'images/Pcbnew_array_grid_stagger_rows_2.png'], dtype=object)
array(['images/Pcbnew_array_grid_stagger_cols_3.png',
'images/Pcbnew_array_grid_stagger_cols_3.png'], dtype=object)
array(['images/Pcbnew_array_dialog_circular.png',
'images/Pcbnew_array_dialog_circular.png'], dtype=object)] | docs.kicad-pcb.org |
Apigee Edge release process
This section describes the release process for Apigee Edge for Private Cloud... | https://docs.apigee.com/release/apigee-edge-release-process?hl=he-IL | 2021-07-24T07:22:03 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.apigee.com |
clear

Examples

A = rndn(1000, 1000);

/*
** Code that uses 'A' would be here
**
** Free memory holding 'A'
*/
clear A;

Remarks
If your program is running out of memory, or uses considerable system
resources, using
clear() to deallocate large matrices after they are no
longer needed may allow it to run more efficiently.
clear x;
is equivalent to
x = 0;
Matrix names are retained in the symbol table after they are cleared.
Matrices can be cleared even though they have not previously been
defined.
clear() can be used to initialize matrices to scalar 0. | https://docs.aptech.com/gauss/clear.html | 2021-07-24T07:59:05 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.aptech.com |
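For example, a minimal sketch:

// 'y' need not exist beforehand; clearing it defines it
clear y;

// 'y' is now a scalar 0
print y;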