content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
tile_set_visible(index, visible);
Returns: N/A
Like instances, tiles can be flagged as visible or invisible.
This function can change that flag, and by setting it to
true you make the tile visible, or by setting it to
false you can set it to invisible. This is not permanent
and if the player leaves the room and returns to it again, the tile
will be visible (if it was placed in the room editor and the room
is not persistent).
var tile;
tile = tile_layer_find(-1000, mouse_x, mouse_y);
if !tile_get_visible(tile)
{
tile_set_visible(tile, true);
}
The above code will first get the index of a tile at the mouse position with a depth of -1000 and then use that value to check whether the tile is visible or not, and if not set the tile to visible. | http://docs.yoyogames.com/source/dadiospice/002_reference/game%20assets/backgrounds/background%20tiles/tile_set_visible.html | 2017-11-17T19:17:52 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.yoyogames.com |
Product: Elite Human Surface Shader
Product Code: ds_ap47
Created By: DAZ 3D
Released: 2008-07-08
Required product: DAZ Studio version 2.2.2.17 or higher.
A User Guide as well as Tips and Tricks can be found on the brokered artist's wiki.
Human Surface Shader User Guide
First time users can follow these instructions to get good results without having to wade through the extensive documentation provided by Omnifreaker. Once you see what you can achieve, you'll be encouraged to read the documentation in detail.
1) First, Select the figure in the Scene Tab (VIEW –> Tabs –> Scene).
2) Select the surfaces that you wish to apply the HSS to in the Surface Tab (VIEW –> Tabs –> Surface).
On the Generation 4 figure (V4, M4, etc.), these surfaces are ideal. To multi-select, press the ctrl button.
1_Lip
1_Nostril
1_SkinFace
2_Nipple
2_SkinHead
2_SkinHip
2_SkinNeck
2_SkinTorso
3_SkinArm
3_SkinFoot
3_SkinForearm
3_SkinHand
3_SkinLeg
3) Apply the Human Surface Shader. Make sure that you do not remove the underlying texture. Here's how to do it.
a) Find the icon for the Human Surface Shader as directed by the readme here:
DAZ Studio portion of Content Tab → Shaders → omnifreaker → Human Surface → !HumanSurface Base
b) Hold down the CTRL button on the keyboard and double-click the !HumanSurface Base icon
c) You'll see an option window labeled “Shader Preset (!HumanSurface Base)” appear.
d) For Surfaces, choose “Selected” and for Map Settings, choose “Ignore.” The Ignore part is important. If you choose, Replace, the texture maps (.jpg), will be removed and the figure will look pale grey and very much wrong. Press 'Accept.'
4) The EHSS is applied, but the figure looks a bit 'shiny.' Now you need to fine-tune the settings. There's a great deal of variation possible, but try using these settings given here and then modify as you wish. Go to the Surface Tab where you should still have the above surfaces selected. In the lower half under 'Advanced.' Then modify the properties as indicated below:
a) Bump: if the base textures have a bump map, change this from Off to 'On.' Most textures do.
b) Displacement: if the base textures have a displacement map, change this from Off to 'On.' If you turn this on, then change 'Trace Displacements' to 'On' also.
c) Specular Strength: Change this from 100% to 25%. The excessive shininess will disappear on the figure.
d) Velvet: Change to 'On.' Velvet Strength: Change from 100% to 10%. This is one of the powerful properties of the EHSS that adds much realism.
e) Subsurface Scattering: Change to 'On.' Subsurface Scattering Strength: Change from 100% to 25%. This property adds wonderful realism to the figure.
5) As you can see there are many other properties and settings you can play with. However, the above settings will give you good results. Once the shader is fine-tuned, it's ready to be rendered. You may still need to add lighting and make other improvements in the scene before you do so.
6) Render Setting. To avoid getting seams in the render, change the Shading Rate.
To find this setting, go to RENDER –> Render Settings –> Advanced –> Shading Rate —> change to 0.2
This change will eliminate seams and improve render quality. It will also increase the time of the render.
Visit our site for further technical support questions or concerns: DAZ Support
Thank you and enjoy your new products!
DAZ Productions Technical Support
12637 South 265 West #300
Draper, UT 84020
Phone:(801) 495-1777
TOLL-FREE 1-800-267-5170
OM_KHPark_EnvRefl.hdr
OM_KHPark_EnvRefl.jpg
OM_KHPark_EnvRefl.tif
OM_Kitchen_EnvRefl.hdr
OM_Kitchen_EnvRefl.jpg
OM_Kitchen_EnvRefl.tif
!HumanSurface Base.ds
HDR KHPark.ds
HDR KHPark.png
HDR Kitchen.ds
HDR Kitchen.png
omHumanSurfaceAttribs.ds
omHumanSurfaceDef.ds
omHumanSurfaceSurf.ds
omHumanSurface.sdl | http://docs.daz3d.com/doku.php/artzone/azproduct/7412 | 2017-10-17T00:07:18 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.daz3d.com |
ActiveForm is a widget that builds an interactive HTML form for one or multiple data models.
For more details and usage information on ActiveForm, see the guide article on forms.
The form action URL. This parameter will be processed by yii\helpers\Url::to().
See also $method for specifying the HTTP method for this form.
public array|string $action = ''
The type of data that you're expecting back from the server.
public string $ajaxDataType = 'json'
The name of the GET parameter indicating the validation request is an AJAX request.
public string $ajaxParam = 'ajax'
The client validation options for individual attributes. Each element of the array represents the validation options for a particular attribute.
public array $attributes = []
Whether to enable AJAX-based data validation. If yii\widgets\ActiveField::$enableAjaxValidation is set, its value will take precedence for that input field.
public boolean $enableAjaxValidation = false().
public boolean $enableClientScript = true
Whether to enable client-side data validation. If yii\widgets\ActiveField::$enableClientValidation is set, its value will take precedence for that input field.
public boolean $enableClientValidation = true
Whether to perform encoding on the error summary.
public boolean $encodeErrorSummary = true
The CSS class that is added to a field container when the associated attribute has validation error.
public string $errorCssClass = 'has-error'
The default CSS class for the error summary container.
See also errorSummary().
public string $errorSummaryCssClass = 'error-summary'
The default field class name when calling field() to create a new field.
See also $fieldConfig.
public string $fieldClass = 'yii\widgets\ActiveField'.
public array|Closure $fieldConfig = []'], ]);
public string $method = 'post'
The HTML attributes (name-value pairs) for the form tag.
See also yii\helpers\Html::renderTagAttributes() for details on how attributes are being rendered.
public array $options = []
The CSS class that is added to a field container when the associated attribute is required.
public string $requiredCssClass = 'required'
Whether to scroll to the first error after validation.
public boolean $scrollToError = true
Offset in pixels that should be added when scrolling to the first error.
public integer $scrollToErrorOffset = 0
The CSS class that is added to a field container when the associated attribute is successfully validated.
public string $successCssClass = 'has-success'
Whether to perform validation when an input field loses focus. If yii\widgets\ActiveField::$validateOnBlur is set, its value will take precedence for that input field.
public boolean $validateOnBlur = true
Whether to perform validation when the value of an input field is changed. If yii\widgets\ActiveField::$validateOnChange is set, its value will take precedence for that input field.
public boolean $validateOnChange = true
Whether to perform validation when the form is submitted.
public boolean $validateOnSubmit = true
Whether to perform validation while the user is typing in an input field. If yii\widgets\ActiveField::$validateOnType is set, its value will take precedence for that input field.
See also $validationDelay.
public boolean $validateOnType = false
The CSS class that is added to a field container when the associated attribute is being validated.
public string $validatingCssClass = 'validating'
Number of milliseconds that the validation should be delayed when the user types in the field and $validateOnType is set
true. If yii\widgets\ActiveField::$validationDelay is set, its value will take precedence for that input field.
public integer $validationDelay = 500.
public array|string $validationUrl = null ...
© 2008–2017 by Yii Software LLC
Licensed under the three clause BSD license. | http://docs.w3cub.com/yii~2.0/yii-widgets-activeform/ | 2017-10-17T00:22:36 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.w3cub.com |
Contents ServiceNow Platform Previous Topic Next Topic Test the ODBC driver Add To My Docs Add selected topic Add selected topic and subtopics Subscribe to Updates Share Save as PDF Save selected topic Save selected topic and subtopics Save all topics in Contents Test the ODBC driver Test the ODBC driver After configuring the ODBC driver, test that the driver can connect to the ServiceNow ITSA Suite instance as the ODBC user and can query data from a target table. About this task To ServiceNow ITSA Suitelogin.Querying table and column namesYou can get a list of accessible tables and columns, based on the read ACLs for the querying user.Related TasksConfigure the ODBC driver Last Updated: 186 Tags: Products > Now Platform > ODBC driver; Versions > Geneva Add To My Docs Add selected topic Add selected topic and subtopics Subscribe to Updates Share Save as PDF Save selected topic Save selected topic and subtopics Save all topics in Contents Send Feedback Previous Topic Next Topic On this page | https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/integrate/odbc_driver/task/t_TestingTheODBCDriver.html | 2017-10-17T00:34:08 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.servicenow.com |
Connect to the mobile network manually while using Mobile Hotspot mode
If your
BlackBerry
smartphone is connected to a GSM network, and Mobile Hotspot mode is turned on, you might have to manually connect to the mobile network using your APN. To get the user name and password for your APN, contact your wireless service provider. Depending on your wireless service provider, you might be able to enter up to two APNs.
On the home screen, click the connections area at the top of the screen, or click the
Manage Connections
icon.
Select the
Mobile Hotspot
checkbox.
If necessary, enter the APN name and APN login information for each of your APNs.
Click
OK
.
Parent topic:
Mobile Hotspot mode basics | http://docs.blackberry.com/en/smartphone_users/deliverables/41619/alk1319566729363.html | 2014-12-18T08:44:11 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.blackberry.com |
Message-ID: <826545887.8483.1418891973476.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8482_1037695082.1418891973475" ------=_Part_8482_1037695082.1418891973475 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Rather than testing every possible combination of things, all pa= irs simplifies the exercise to testing every pair of things which reduces t= he complexity significantly, for example instead of 700,000 possible combin= ations, all pairs would be about 500 combinations.=20
While this reduces the number of tests, does it help find bugs? Al= l pairs works because when things break, they have a tendency to break beca= use of the faulty interaction of two things rather than 3 or more. &nb= sp; A long term study of medical device failures found a strong correl= ation for this. For all the failures reported, a quarter of the= m would have been found with all pairs testing. The true result is pr= obably much better than this, because a lot of the failure reports did not = have enough detail to allow proper analysis. Of the detailed reports,= 98% of the failures would have been found with all pairs testing! Th= e paper, "Failure Modes in Medical Devices", is at csrc.ncsi= .nist.gov/staff/kuhn/final-rqse.pdf=20
If you do all pairs testing, you could still use minimal pairs to = get better test effectivess. By starting your tests with all the mini= mal pairs first, this would give a good broad coverage of combinations.&nbs= p; The remaining all pairs combinations could then be used to finish t= he exercise. | http://docs.codehaus.org/exportword?pageId=60396 | 2014-12-18T08:39:33 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.codehaus.org |
Message-ID: <975944339.8497.1418892211806.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8496_2133358893.1418892211805" ------=_Part_8496_2133358893.1418892211805 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Discussion: need some answers to these before even pushing this to the l=
ist
TODO: Jesse and Greg spent a lot of time getting the async SSL wo= rking so a little description this work would be useful
TODO: archite= cture document about Mercury transport as the async HTTP/DAV client
T= ODO: example of user facing API for Mercury
TODO: architecture docume= nt and spec for mercury (largely in the wiki)
TODO: example of user f= acing API for maven-shared-model
TODO: architecture document on maven= itself, plugin manager, lifecycle executor, profile construction
TOD= O: check with kenney to see if his work survived in substituting components= or if it's his work that's actually making it work
Technical Preparation: not necessary before discussions can start but he=
lpful inte= gration tests
We must ensure that plugins and reports written against the Maven 2.0.x =
APIs remain to work in 2.1. We don't want people to have to rewrite
= their plugins. There are several plugins that are using the current artifac= t resolution code that will not be supported (please see the Mercury sectio= n below). The
ones that are in our control we can port over to use Me= rcury, and external users will have to deal with the major version change. = Most people will not be affected and
Mercury will be a far better sol= ution.
We must also ensure that POMs of version 4.0.0 are supported in 2.1.x al=
ong with the behavior currently experienced. We are relying heavily on our =
integrations tests right
now but as we move forward the work that Sha= ne is doing on the project builder with maven-shared-model will help us to = accommodate different versions of a POM, and different
formats we dec= ide.=20
comment from kenney:
The problem with this is that it's a hack. If= xpp3dom/plexus utils is updated and the plugin requires the new xpp3dom cl= ass, which has a new method for instance, this will break the plugin.
About this specific issue (MNG-3012): The best solution is to only share= java., javax., and core maven api classes, so we can no longer export anyt= hing outside the plugin api (which includes maven-model, maven-project, mav= en-settings e.a.). This would require to phase out plugin.getConfiguration(= ) and other model methods that return xpp3dom classes, and let them return = interfaces present in the maven core api. Those interfaces would have an im= plementation class that implements both that interface and extends xpp3dom,= which will be hidden for the plugin. Another solution could be to use xmlp= lexusconfiguration=20
There's one more solution to consider; using ASM to rewrite plugins as t= hey're loaded. We could add code modifiers that workaround incompatibilitie= s by detecting usage patterns, like (Xpp3Dom) plugin.getConfiguration(). An= example could be to modify the code to wrap Xpp3DomParser.parse( new Strin= gReader( String.valueOf( /plugin.getConfiguration()/ ) ) ) around the call.= This is even more of a hack though. Perhaps a mojo that scans for plugin i= ncompatibilities using ASM is more feasible (no code modification).=20
So the basic problem we're up against is that there can be core api chan= ges between major versions that pose incompatibilities for plugins written = against an older version. The simplest solution would be to let plugins spe= cify the maven versions they work against (which is partly present: <req= uires><mavenVersion>2.0.6</mavenVersion></requires>. I= f this field supports a versionrange, or we'd default the version interpret= ation above to mean [2.0.6,2.1), we can detect plugins that won't run. The = shading mentioned above is solving only one incompatibility problem, and th= ere are bound to be more. Maybe we even need 2 versions of a plugin at some= point, targeted toward different maven versions, though I'd really like to= avoid that. But we cannot just assume our 2.0 plugin api will never change= across 'major' (read: minor) releases.=20
Benjamin, Dennis, Arnaud, and Olivier have been improving the harness we=
have to for running integration tests on the plugins. So it will be easier=
to test a plugin set with a given installation
in Maven.
There are many changes that users have requested in the POM, in addition=
to wholesale formatting changes. Acommodating these requests is a little t=
ricky
because we need to support different versions simultaneously so= that if projecta A builds with 2.0.x, project B can consume the project A = POM using 2.1.x.
We just need some way to easy support multiple versi= ons and support mediation between the different versions.
Full embedding of the Maven core is a major feature of the 2.1.x line. T=
he embedder was created primarily for IDE integration and is now being cons=
umed by m2eclipse, Mevenide and IDEA,
but the embedder is also used b= y the Maven CLI to ensure parity between IDEs and the CLI as much as possib= le. To understand how the embedder work you can refer to
the Maven Embedder documentation.
As discussed in Substituting of Custom =
Components we now have two ways
to insert new components into the= system.
main is org.apache.maven.cli.MavenCli from plexus.core = = =20 = = =20 set maven.home default ${user.home}/m2 = = =20 = = =20 [plexus.core] = = =20 load ${maven.home}/tycho/*.jar = = =20 load ${maven.home}/lib/*.jar=20=20
But what we ultimately need for Tycho is a way to dynamically pull in a =
set of components based on the packaging of a project. In our case with Tyc=
ho the packaging is
maven-osgi-bundle and that should kick in the set= of components that do builds for OSGi bundles. We also have another use ca= se in Tycho where we are building OSGi bundles
without a POM and actu= ally using a manifest. In this case we need to somehow detect the manifest = and then have the custom set of components kick in. In the case of Tycho
we need a different project builder, and artifact resolver.
Mercury is a replacement for the current Maven Artifact subsystem, and a= complete replacement for the HTTP and DAV portions of the existing transpo= rt.=20
The primary reasons for replacing the code are that it is unmaintainable=
and nearly impossible to navigate, it uses completely non-standard structu=
res and libraries for
version calculations, the API is too hard for p= eople to use, and it is not given to users to consume as a single componmen==
dability and error reporting for IDE integration. This was a direct result =
of all IDE integrators
having to reimplement the current artifact res= olver to provide decent feedback to users when errors occured. The artifact= subsystem would just die and leave the IDE in
an unusable state. Mil= os was the first to implement his own artifact resolver, and Eugene soon ha= d to do the same in m2eclipse. Oleg and I were also trying to use the
= current artifact mechanism in an embedded mode for some Eclipse plugins an= d this also proved to be quite painful. After the first attempt of removing= the fail-fast
behavior, Oleg and I decided to make a break from the = old codebase and attempt to create Mercury with the following goals in mind= :
So in the end I believe it would be detrimental to use the Maven Artifac=
t code in the 2.1.x tree and the change needs to be made to use Mercury bef=
ore the first alpha ships. Oleg
and I started this work, and Oleg has= subsequently worked tirelessly on Mercury along with a great deal of help = from Greg, Jan and Jesse. I think Oleg understands the requirements
a= s he's seen Maven in action in one of the largest development environments = in the world and watched how Maven can fail spectacularly.
Java5 annotations for plugins: we have two implementations that have now=
been merged in plexus-cdc. QDOX 1.7 has now been released so we may want t=
o check the
source level gleaning again. Jason Dillon has created a = working class processing model. We need to deal with Plexus components and = Maven plugins.
Support for "java" projects in Eclipse has certain overhead an= d it is desirable to only enable for projects that actually require it. Mor= e specifically, java maven projects have JRE classpath container, maven cla= sspath container, have java-specific UI elements enabled and are offered in= various java-related searches. Also, tools like WTP and AJDT treat (eclips= e) java projects specially.=20
There is currently no direct way to tell if a (maven) project needs to b= e configured as java project in eclipse. The closest test condition I can t= hink off is=20
1 the project ArtifactHandler language=3Djava=20
and=20
2.1 ArtifactHandler addedToClasspath=3Dtrue=20
or=20
2.2 MavenProject.getCompileSourceRoots().size() > 0=20
or=20
2.3 MavenProject.getTestCompileSourceRoots().size() > 0=20
(in other words the project is java and either itself is added to classp= ath or has sources to compile).=20.=20 | http://docs.codehaus.org/exportword?pageId=97452186 | 2014-12-18T08:43:31 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.codehaus.org |
.
Thanks to the enthusiastic, borderline pestering community :), yet another tynamo module, this time tynamo-federatedaccounts 0.5.0, gets an upgrade to T5.4! Use it before the code expires, see and code at.
Release notes at
User guide at, code at.
Enjoy the ride!
Tynamo Team! | http://docs.codehaus.org/pages/viewpage.action?pageId=191987931 | 2014-12-18T08:38:31 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.codehaus.org |
To use this plugin, add an entry into the
plugins table of
build.settings. When added, the build server will integrate the plugin during the build phase.
settings = { plugins = { ["CoronaProvider.native.popup.social"] = { publisherId = "com.coronalabs" }, }, }
© 2014 Corona Labs Inc. All Rights Reserved. (Last updated: 17-Dec-2014)
Help us help you! If you notice a problem with this page, please report it. | http://docs.coronalabs.com/daily/plugin/CoronaProvider_native_popup_social/index.html | 2014-12-18T08:28:14 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.coronalabs.com |
This provides specific information to assist you with development of programs for the ADSP-BF527 EZ-KIT Lite evaluation system. These are tailored excerpts from the BF527 EZ-Kit manual v1.2 (2008) BF527 EZ-Kit manual - v1.6 (2010)
Your ADSP-BF527 EZ-KIT Lite evaluation system package contains the following items.
It also may include things not needed for Linux development including:
When removing the EZ-KIT Lite board from the package, handle the board carefully to avoid the discharge of static electricity, which can damage some components. Figure below shows the default jumper settings, switches, connector locations, and LEDs used in installation. Confirm that your board is in the default configuration before using the board.
The default EZ-Kit setup has test LDRs in the parallel NOR flash and the SPI flash. You have two choices to get U-Boot setup initially:
Both methods are documented in loading.
The default SW1 settings will have the ethernet disconnected.
If you get errors in U-Boot, like:
bfin> dhcp BOOTP broadcast 1 Ethernet: tx errorThis normally means this switch is set incorrectly.
The relevant BMODE settings are the same as the BF52x family. For the full list, see the BF52x datasheet.
Keep in mind that UART1 is the populated connector on the BF527-EZkit and UART0 is just the pin header. Also keep in mind the dot on the switch bar is not the the direction indicator, the gap far away from the dot is. for example if switch the dot to position between 2 and 3, you are actually switching to position 8 for UART1 boot.
If MIC is selected as capture input,then the gain can be adjusted through SW4.
The codec must be configured to TWI/SPI mode as well.
“x” means it has nothing to do with the mode selection,but they should be configured to “Off” in real use.
The Ethernet mode flash CS switch (SW9) sets the bootstrapping options for the LAN8700 RMII PHY chip (U14).
SW9.4 disconnects SPISEL1 from the SPI flash chip (U8). Setting SW9 position 4 OFF is useful when using SPISEL1 on the expansion interface at connector J2 pin 11. SW9 default setting is position 4 ON.
If you are having troubles with the UART, check out if you are using the correct serial cable
For USB Host Support make sure SW13 is configured following
SW17,SW20 must all be configured to “ON” to enable the SPORT0A for audio RX/TX.To enable on-board SPORT0 socket,these two switchs must all be configured to “OFF”.
The codec's registers can be configured through TWI or SPI,currently TWI is used by the audio codec by default.
^ SW19 Positions ^ Mode ^
The ADSP-BF527 EZ-KIT Lite board includes four types of external memory:
The ADSP-BF527 processor connects to a 64 MB Micron MT48LC32M16A2TG-75 chip through the external bus interface unit (EBIU). The SDRAM chip can operate at a maximum clock frequency of 133 MHz.
There is a trade-off between selecting the maximum core clock (CCLK) of the processor and the maximum system clock. Consequently, the respective control registers must be initialized appropriately to get either maximum CCLK or maximum SCLK.
The parallel flash memory interface of the ADSP-BF527 EZ-KIT Lite contains a 4 MB (2M x 16 bits) ST Micro M29W320EB chip. Flash memory connects to the 16-bit data bus and address lines 1 through 19. Chip enable is decoded by using AMS0–3 select lines through NAND and AND gates. The address range for flash memory is 0x20000000 to 0x203FFFFF.
Flash memory is pre-loaded with boot code for the blink, LCD images, and power-on-self test (POST) programs.
By default, the EZ-KIT Lite boots from the 16-bit parallel flash memory. The processor boots from flash memory if the boot mode select switch (SW2) is set to a position of 1.
The ADSP-BF527 processor is equipped with an internal NAND flash controller, which allows the 4 Gbit ST Micro’s NAND04 device to be attached gluelessly to the processor. NAND flash is attached via the processor’s specific NAND flash control and data lines. NAND flash shares pins with the Ethernet PHY, host connector, and expansion interface.
The NAND chip enable signal (NDCE#_HOSTD10) can be disconnected from NAND flash by turning OFF SW11.4 (switch 11 position 4). This ensures that the NAND will not be driving data when HOSTD10 changes state. See “Rotary NAND Enable Switch (SW11)” on page 2-16 for more information.
The Ethernet PHY (U14) must be disabled in order for NAND flash to function properly. This is accomplished by setting SW1 to OFF, OFF, ON, OFF.
For more information about the NAND04 device, refer to the ST Microelectronics Web site at:
The ADSP-BF527 processor has one serial peripheral interface (SPI) port with multiple chip select lines. The SPI port connects directly to serial flash memory, MAX1233 touchscreen and keypad controller, audio codec, and expansion interface.
Serial flash memory is a 16 Mb ST Micro M25P16 device, which is selected using the SPISEL1 line of the processor. SPI flash memory is pre-loaded with boot code for the blink and POST programs. By default, the EZ-KIT Lite boots from the 16-bit flash parallel memory. SPI flash can be selected as the boot source by setting the boot mode select switch (SW2) to position 3.
There are no Linux drivers for the MAXIM part that drives the touchscreen and the keypad on earlier versions of the EZkit. There are no plans to write such drivers at this time. Users have developed a driver for the 2008R1+ release. You can find it by searching the uClinux-dist forums.
There are Linux drivers for the AD7879 touchscreen controller which is found on current versions of the development board.
There is a Linux driver for the USB interface and should generally work OK. Any feedback on it is appreciated via our forums.
Issue:
Pull-Down on SPISCK prevent SPI SD/MMC card to work properly.
Workaround:
Remove Pull-Up R25 located bottom side of the EZKIT close to connector J3. This may cause problems with the onboard SPI Flash. | http://docs.blackfin.uclinux.org/doku.php?id=hw:boards:bf527-ezkit | 2014-12-18T08:28:57 | CC-MAIN-2014-52 | 1418802765698.11 | [] | docs.blackfin.uclinux.org |
# Install a package from an already copied file - svr4pkg: name: CSWcommon src: /tmp/cswpkgs.pkg state: present # Install a package directly from an http site - svr4pkg: name: CSWpkgutil src: '' state: present zone: current # Install a package with a response file - svr4pkg: name: CSWggrep src: /tmp/third-party.pkg response_file: /tmp/ggrep.response state: present # Ensure that a package is not installed. - svr4pkg: name: SUNWgnome-sound-recorder state: absent # Ensure that a category is not installed. - svr4pkg: name: FIREFOX state: absent category: true. | http://docs.ansible.com/ansible/latest/svr4pkg_module.html | 2017-08-16T19:38:42 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.ansible.com |
Manage collections and categories
This article will help you understand the basic organizational structure within Docs. You'll also learn how to manage collections and categories.
In this article
What are Collections and Categories?
Collections
Think of a collection as a bookshelf. Each collection covers a different topic, product, or department. Depending on how you structure your documentation, you might need one collection or a couple of collections. In our example below, we use 3 collections: For Customers, Wholesale and For the Team.
You'll see collections appear on the top navigation bar of your public Docs website. Everybody loves a neat, organized book shelf, so don't get too crazy with multiple collections.
Categories
Categories are books on your bookshelf, and each collection can have separate categories. Articles are like the chapters within each book. Articles can belong to multiple categories, so your customers can find what they need in multiple locations.
Here's one point we want to drive home: articles cannot belong to more than one collection. If you find yourself needing the same article in two collections, you might want to turn those collections in to categories instead.
Creating and managing collections
We recommend having some structure setup before you start relentlessly pecking away at your keyboard to write articles. Let's walk through creating and managing collections.
To get started, head over to Manage → Docs, then click the Docs site you'd like to create or manage collections for.
- 1
To create a new collection, just click the New Collection button at the top of the page. Name your collection and select a visibility option.
- 2
If you need to rearrange how your collections appear on the book shelf (the navigation bar), place your mouse cursor over the bar icons and drag and drop as you see fit.
- 3
Clicking on a collection lets you customize the name, visibility, and category sorting order. You can modify collection settings whenever you'd like. If you don't have any categories just yet, create one (or a couple) before moving on. You can reorder categories by dragging them up or down on the page. Don't forget to save your changes. If you delete a collection, any existing articles or categories that belong to that collection will also be deleted.
Creating categories
You can create categories from a few different locations in Docs:
- From the New dropdown menu on the sidebar.
- From the Collections page, when you create or modify a collection.
- While working on an article, from the categories sidebar.
Editing or deleting an existing category
To update or edit a category, just click on the category name from the left-hand sidebar, then click the same category name at the top of the page. If you delete a category, any existing articles that belonged to that category will be marked as uncategorized.
| http://docs.helpscout.net/article/87-collections-and-categories | 2017-08-16T19:20:37 | CC-MAIN-2017-34 | 1502886102393.60 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/577a47e4903360258a10e494/file-2xO1g7Y7jW.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57d44a309033602da7bdcf70/file-f1za8YIIcC.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57d44876c697914ce32d9bb1/file-zGmR5tIwE8.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57d44aa3c697914ce32d9bb5/file-EigBZrVA4C.png',
None], dtype=object) ] | docs.helpscout.net |
Information for "JTable" Basic information Display titleAPI17:JTable Default sort keyJTable Page length (in bytes)240 Page ID29083:15, 11 May 2013 Latest editorTom Hutchison (Talk | contribs) Date of latest edit09:15,’ | https://docs.joomla.org/index.php?title=API17:JTable&action=info | 2015-05-22T15:41:52 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.joomla.org |
What's in this manual
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
What's in this manual
Prior to Splunk 4.3, this manual was titled Splunk Developer Manual. However, with the introduction of the Splunk Developer Portal , which covers material related to development on Splunk using the Splunk SDKs and the Splunk App Framework, the Splunk Developer Manual is now more accurately titled Developing Dashboards, Views, and Apps for Splunk Web.
The content of this manual for Splunk 4.2.5 or earlier remains the same. The change in title does not affect any links or bookmarks to previous content. For Splunk 4.3, the manual contains updated content to reflect new features introduced with that release.
This manual contains information for building dashboards, forms, and advanced views. It also provides an introduction to building apps and add-ons for Splunk Web.
This manual no longer covers leveraging Splunk SDKs in your applications. Developers who want to use the Splunk SDKs or build and customize apps using the Splunk App Framework should visit the Splunk Developer Portal.
This documentation applies to the following versions of Splunk: 4.3 , 4.3.1 , 4.3.2 , 4.3.3 , 4.3.4 , 4.3.5 , 4.3.6 , 4.3.7 View the Article History for its revisions. | http://docs.splunk.com/Documentation/Splunk/4.3.1/Developer/Whatsinthismanual | 2015-05-22T14:30:26 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.splunk.com |
Article
From Joomla! Documentation
Revision as of 19:32, 11 January 2008 by Chris Davenport (Talk | contribs)
In Joomla! an Article is a piece of content usually consisting mainly of text, but may contain other resources too (for example, images). An Article is most often regarded as the third level in the hierarchy Sections -> Categories -> Articles. For example, a website might have Sections called "Animals" and "Plants". Within the "Animals" Section, the website might have Categories such as "Birds" and "Mammals". In the "Birds" Category there might be Articles called "Parrots" and "Sparrows" which describe the relevant birds in detail. However, it is possible to have Uncategorised Articles that exist without being associated with any Section or Category.
Articles are maintained using the Article Manager (help screen) which can be reached in the Administrator (Back-end) by clicking on the Content menu, then the Article Manager menu item.
See also: Section, Category | https://docs.joomla.org/index.php?title=Chunk:Article&oldid=286 | 2015-05-22T15:24:27 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.joomla.org |
Bug Squad
From Joomla! Documentation
Revision as of 20:41, 2 October 2012 by Tom Hutchison (Talk | contribs)
Pages that are useful for Bug Squad members and coordinators. See the Bug Squad Portal for more information.
Subcategories
This category has the following 5 subcategories, out of 5 total.
B
- [×] Bug Tracker (12 P)
- [×] Bug Tracker/en (1 P)
G
T
- [×] Testing/en (empty)
Pages in category ‘Bug Squad’
The following 100 pages are in this category, out of 100 total. | https://docs.joomla.org/index.php?title=Category:Bug_Squad&oldid=76140 | 2015-05-22T15:07:53 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.joomla.org |
Talk:Magic quotes and security - Revision history 2015-05-22T14:59:43Z Revision history for this page on the wiki MediaWiki 1.23.9 //docs.joomla.org/index.php?title=Talk:Magic_quotes_and_security&diff=76694&oldid=prev Phild: added discussion info about Magic Quotes 2012-10-17T17:56:54Z <p>added discussion info about Magic Quotes</p> <p><b>New page</b></p><div>Magic Quotes:<br /> <br /> : Warning: <br /> :: This feature has been DEPRECATED as of PHP 5.3.0 and REMOVED as of PHP 5.4.0.<br /> <br /> :: As of Joomla 3.0 it is recommended to have magic quotes off see technical requirements<br /> ::</div> Phild | https://docs.joomla.org/index.php?title=Talk:Magic_quotes_and_security&feed=atom&action=history | 2015-05-22T14:59:43 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.joomla.org |
Dashboards provide a way to display any kinds of data through widgets at different levels:
- Global dashboards display data at instance level
- Project dashboards display data at project level.
Overview
Concepts<<
Project Dashboards
Project dashboards are the entry point when looking at a project. They display an overview of the project data: measures, issues, etc.
Dashboard: Default Project Dashboard Shipped with SonarQube
This default dashboard gives an overview of your project (with widgets like Size, etc.) and its quality (with widgets like Rules compliance, Duplications, etc.).
From there, you will be able to hunt for seven different kinds of quality flaw:
. | http://docs.codehaus.org/pages/viewpage.action?pageId=231736568 | 2015-05-22T14:49:55 | CC-MAIN-2015-22 | 1432207925274.34 | [array(['/download/attachments/163872785/dashboard-menu.png?version=1&modificationDate=1338908797724&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/creating-new-dashboard.png?version=1&modificationDate=1338908978551&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/sonar-shared-dashboard.png?version=1&modificationDate=1338908249312&api=v2',
'sonar-shared-dashboard.png'], dtype=object)
array(['/download/attachments/163872785/changing-layout.png?version=1&modificationDate=1338909262801&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/filter-widgets.png?version=1&modificationDate=1343209350398&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/edit-widget.png?version=1&modificationDate=1341582139810&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/moving-widget.png?version=1&modificationDate=1338909701813&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/sonar-removing-widget.png?version=1&modificationDate=1338908429429&api=v2',
'sonar-removing-widget.png'], dtype=object)
array(['/download/attachments/163872785/managing-dashboad.png?version=1&modificationDate=1338909918563&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/default-dashboards.png?version=1&modificationDate=1338910265354&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/sonar_widgets.png?version=1&modificationDate=1338908147838&api=v2',
'sonar_widgets.png'], dtype=object)
array(['/download/attachments/163872785/widget-size.png?version=1&modificationDate=1339059056935&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/duplications.png?version=1&modificationDate=1374573878076&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/163872785/widget-complexity-1.png?version=1&modificationDate=1339059114123&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/design-widgets.png?version=1&modificationDate=1350318517485&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/163872785/widget-code-coverage-new-code.png?version=3&modificationDate=1339059906621&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/widget-violations.png?version=1&modificationDate=1339059384246&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/documentation-comments.png?version=1&modificationDate=1374573905836&api=v2&effects=drop-shadow',
None], dtype=object)
array(['/download/attachments/163872785/events.png?version=1&modificationDate=1339062729136&api=v2',
None], dtype=object)
array(['/download/attachments/163872785/treemap.png?version=3&modificationDate=1410796229374&api=v2',
None], dtype=object) ] | docs.codehaus.org |
This example uses the MovieFinder and MovieLister example Martin Fowler described in his article Inversion of Control Containers and the Dependency Injection pattern. We will use the code that has already been written to take advantage of Dependency Injection. The code below demonstrates how to define classes to be registered with the Container through NanoContainer.NET custom Attributes.
MovieLister simply registers itself to the container without a key. Therefor it'll be registered under it's concrete class type.
ColonMovieFinder registers itself to the container under the key
typeof(IMovieFinder). Now this isn't a requirment. PicoContainer.NET is smart enough to realize that ColonMovieFinder is an implementation of IMovieFinder. The
ConstantParameter Attribute is also used on this class. This states that when creating an instance of ColonMovieFinder set the first parameter (0) passed to the constructor the value "movies1.txt". In other words when the container initializes ColonMovieFinder the filename is set to "movies1.txt"
Okay so now lets set up the code so that these classes are registered to the container. The code below will follow the path to "MyNanoAssembly.dll" and load the dll. Those classes tagged with
RegisterWithContainer attribute will be added to the container.
So as you can see with only a few lines of code and using some meta data you can completely define and construct a ccontainer. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=31547 | 2015-05-22T14:44:36 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.codehaus.org |
Hibernate.orgCommunity Documentation
Welcome to Hibernate Search. The following chapter will guide you through the initial steps required to integrate Hibernate Search into an existing Hibernate ORM enabled application. In case you are a Hibernate new timer we recommend you start here..2.0.1.Final</version> </dependency>
Example 1.2. Optional Maven dependencies for Hibernate Search
<dependency> <!-- If using JPA, add: --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>4.3.8.Final</version> </dependency> <!-- Infinispan integration: --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-search-infinispan</artifactId> <version>5.0. Analyzer Framework you have to start with a tokenizer followed by an arbitrary number of filters.
Example 1.10. Using
@AnalyzerDef and the Analyzer”). | http://docs.jboss.org/hibernate/search/5.0/reference/en-US/html/getting-started.html | 2015-05-22T15:47:58 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.jboss.org |
Django team was created the design that was used for nearly the first ten years on the Django Project Web site, as well as the current design for Django’s acclaimed admin interface. Wilson was the designer for EveryBlock and Rdio. He now designs for Facebook.
Wilson lives in San Francisco, USA.
The current team¶
These are the folks who have a long history of contributions, a solid track record of being helpful on the mailing lists, and a proven desire to dedicate serious time to Django. In return, they’ve been invited to join the core team.
-ames Bennett
James is one of Django’s release managers,, TX.
- virtualenv and pip.
He has worked on Django’s auth, admin and staticfiles apps as well as the form, core, internationalization and test systems. He currently works at Mozilla.
Jannis lives in Berlin, German San Francisco, CA, US’s the CTO of Oscaro, an e-commerce company based.
- Anssi Kääriäinen
Anssi works as a developer at Finnish National Institute for Health and Welfare. He is also a computer science student at Aalto University. In his work he uses Django for developing internal business applications and sees Django as a great match for that use case.
Anssi is interested in developing the object relational mapper (ORM) and all related features. He’s also a fanck
Jeremy was rescued from corporate IT drudgery by Free Software and, in part, Django. Many of Jeremy’s interests center around access to information.
Jeremy was the lead developer of Pegasus News, one of the first uses of Django outside World Online, and has since joined Votizen, a startup intent on reducing the influence of money in politics.
He serves as DSF Secretary, organizes and helps organize sprints, cares about the health and equity of the Django community. He has gone an embarrassingly long time without a working blog.
Jeremy lives in Mountain View, CA, USA.
- Bryan Veloso
Bryan found Django 0.96 through a fellow designer who was evangelizing its use. It was his first foray outside of the land that was PHP-based templating. Although he has only ever used Django for personal projects, it is the very reason he considers himself a designer/developer hybrid and is working to further design within the Django community.
Bryan works as a designer at GitHub by day, and masquerades as a vlogger and shoutcaster in the after-hours. Bryan lives in Los Angeles, Bistuer Manfre Anderson Christie
Tom has background in speech recognition, networking, and web development. He has a particular interest in Web API design and is the original author of Django REST framework.
Tom lives in the seaside city of Brighton, UK.
- Curtis Maloney Holtermann
Markus is a Master of Science student in Computer Science.
Markus lives in Berlin, Germany.
- Josh Smeaton.
- Preston Timmons
- Preston is a software developer with a background in mathematics. He enjoys Django because it enables consistent, simple, and tested systems to be built that even new programmers can quickly dive into. Preston lives in Dallas, TX.
Past team members.
- Alex Gaynor
Alex was involved in many parts of Django, he contributed to the ORM, forms, admin, amongst others; he is most known for his work on multiple-database support in Django.
Alex lives in San Francisco, CA, USA.
- Simon Meers
Simon discovered Django 0.96 during his Computer Science PhD research and has been developing with it full-time ever since. His core code contributions are mostly in Django’s admin application. developer in the SF Bay Area, CA, USA.
- Malcolm Tredinnick.
- Preston Holmes
Preston is a recovering neuroscientist who originally discovered Django as part of a sweeping move to Python from a grab bag of half a dozen languages. He was drawn to Django’s balance of practical batteries included philosophy, care and thought in code design, and strong open source.
- Matt Boersma
- Matt helped with Django’s Oracle support.
- Ian Kelly
- Ian also helped with Oracle support.
- Joseph Kocherhans
Joseph was the director of lead development at EveryBlock and previously developed at the Lawrence Journal-World. lives in Austin, Texas, USA.
- Brian Rosner
Brian enjoys learning more about programming languages and system architectures and contributing to open source projects.
He helped immensely in getting Django’s “newforms-admin” branch finished in time for Django 1.0.
Brian lives in Denver, Colorado, US. | https://docs.djangoproject.com/en/dev/internals/team/ | 2015-05-22T14:32:35 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.djangoproject.com |
Difference between revisions of "Creating a section and category hierarchy"
From Joomla! Documentation
Revision as of 04:56, 27 November 2008
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
I am a student at Al Basrah University. | https://docs.joomla.org/index.php?title=J1.5:Creating_a_section_and_category_hierarchy&diff=12017&oldid=7913 | 2015-05-22T14:42:30 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.joomla.org |
Jetty is a project at the Eclipse Foundation.
private support for your internal/customer projects ... custom extensions and distributions ... versioned snapshots for indefinite support ... scalability guidance for your apps and Ajax/Comet projects ... development services from 1 day to full product delivery
Intercepting Connections and Requests.
Interception is a technique of Aspect Oriented Programming that allows.
Connector
Behaviour may be attached to a particular type of connector (eg SelectChanelConnector ) by extending the class and adding implementations of one or more of:
- configure(Socket) - Configure a socket after it is accepted.
- customize(Endpoint,Request) - Customize a request for this connector / endpoint.
Server
All connections and requests are channeled via the Server instance, so it may be extended to add interception style behaviour on one or more of the following methods:
- handle:
- extending the handle method of existing handlers
- writing new handlers that can be linked into the chain of handlers where required.
Filter
The Filter API is the standard in-context interceptor defined by the servlet specification. These allow portable interception of requests within the servlet spec. | http://docs.codehaus.org/pages/viewpage.action?pageId=64179 | 2015-05-22T14:36:13 | CC-MAIN-2015-22 | 1432207925274.34 | [] | docs.codehaus.org |
A Scaling Activity is a long-running process that represents a change to your AutoScalingGroup, such as changing the size of the group. It can also be a process to replace an instance, or a process to perform any other long-running operations supported by the API.
Specifies whether the PutScalingPolicy ScalingAdjustment parameter is an absolute number or a percentage of the current capacity.
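To make the relationship concrete, the sketch below creates a simple scaling policy through the boto3 Python client, which wraps this API. The group name, policy name, and adjustment values are placeholders; the valid adjustment types are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Placeholder names: substitute an existing Auto Scaling group of your own.
response = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="scale-out-ten-percent",
    AdjustmentType="PercentChangeInCapacity",  # or ChangeInCapacity / ExactCapacity
    ScalingAdjustment=10,                      # +10% of the current capacity
    Cooldown=300,                              # seconds to wait before further scaling
)
print(response["PolicyARN"])
```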
The Alarm data type.
The AutoScalingGroup data type.
The AutoScalingInstanceDetails data type.
The BlockDeviceMapping data type.
Creates a new Auto Scaling group with the specified name. When the creation request is completed, the Auto Scaling group is ready to be used in other calls.
NOTE: The Auto Scaling group name must be unique within the scope of your AWS account, and under the quota of Auto Scaling groups allowed for your account.
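As a rough illustration (not part of the original reference), the following boto3 sketch creates a group against an existing launch configuration; the group name, launch configuration name, and Availability Zones are assumed placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# "my-launch-config" must already exist (see CreateLaunchConfiguration).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",               # must be unique within the account
    LaunchConfigurationName="my-launch-config",
    MinSize=1,
    MaxSize=3,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```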
Creates a new launch configuration. The launch configuration name must be unique within the scope of the client's AWS account. The maximum limit of launch configurations, which by default is 100, must not yet have been met; otherwise, the call will fail. When created, the new launch configuration is available for immediate use.
You can create a launch configuration with Amazon EC2 security groups or with Amazon VPC security groups. However, you can't use Amazon EC2 security groups together with Amazon VPC security groups, or vice versa.
NOTE: At this time, Auto Scaling launch configurations don't support compressed (e.g. gzipped) user data files.
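A minimal boto3 sketch of the call is shown below. The AMI ID, instance type, and security group name are placeholders, and the security groups must be either all EC2 groups or all VPC groups, as noted above.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="my-launch-config",
    ImageId="ami-12345678",                # placeholder AMI ID
    InstanceType="m1.small",               # placeholder instance type
    SecurityGroups=["my-security-group"],  # EC2 *or* VPC security groups, not both
    InstanceMonitoring={"Enabled": True},  # needed later for group metrics collection
    # UserData, if supplied, must be plain (uncompressed) text.
)
```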
Deletes the specified Auto Scaling group if the group has no instances and no scaling activities in progress.
NOTE: To remove all instances before calling DeleteAutoScalingGroup, you can call UpdateAutoScalingGroup to set the minimum and maximum size of the AutoScalingGroup to zero.
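The note above translates into a two-step teardown. Here is a hedged boto3 sketch; the group name is a placeholder and the polling interval is arbitrary.

```python
import time
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Step 1: shrink the group to zero so Auto Scaling terminates its instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg", MinSize=0, MaxSize=0, DesiredCapacity=0
)

# Step 2: wait until no instances remain, then delete the (now empty) group.
while True:
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-asg"]
    )["AutoScalingGroups"]
    if not groups or not groups[0].get("Instances"):
        break
    time.sleep(15)

autoscaling.delete_auto_scaling_group(AutoScalingGroupName="my-asg")
```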
Deletes the specified LaunchConfiguration.
The specified launch configuration must not be attached to an Auto Scaling group. When this call completes, the launch configuration is no longer available for use.
Deletes notifications created by PutNotificationConfiguration.
Deletes a policy created by PutScalingPolicy.
Deletes a scheduled action previously created using the PutScheduledUpdateGroupAction.
Returns policy adjustment types for use in the PutScalingPolicy action.
The output of the DescribeAdjustmentTypes action.
Returns a full description of each Auto Scaling group in the given list. This includes all Amazon EC2 instances that are members of the group. If a list of names is not provided, the service returns the full details of all Auto Scaling groups.
This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter.
The AutoScalingGroupsType data type.
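The token-based pagination described above looks roughly like this with boto3; the fields printed are only a small subset of what the AutoScalingGroupsType response carries.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

kwargs = {"MaxRecords": 50}
while True:
    page = autoscaling.describe_auto_scaling_groups(**kwargs)
    for group in page["AutoScalingGroups"]:
        print(group["AutoScalingGroupName"],
              group["DesiredCapacity"],
              len(group.get("Instances", [])))
    if "NextToken" not in page:
        break
    kwargs["NextToken"] = page["NextToken"]  # fetch the next page
```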
Returns a description of each Auto Scaling instance in the InstanceIds list. If a list is not provided, the service returns the full details of all instances up to a maximum of 50. By default, the service returns a list of 20 items.
The AutoScalingInstancesType data type.
Returns a list of all notification types that are supported by Auto Scaling.
Returns a full description of the launch configurations given the specified names.
If no names are specified, then the full details of all launch configurations are returned.
The LaunchConfigurationsType data type.
Returns a list of metrics and a corresponding list of granularities for each metric.
The output of the DescribeMetricCollectionTypes action.
Returns a list of notification actions associated with Auto Scaling groups for specified events.
The output of the DescribeNotificationConfigurations action.
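For orientation, a boto3 sketch that registers and then lists notification configurations might look like the following. The SNS topic ARN and group name are placeholders; the notification type strings are the ones reported by DescribeAutoScalingNotificationTypes.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_notification_configuration(
    AutoScalingGroupName="my-asg",
    TopicARN="arn:aws:sns:us-east-1:123456789012:my-asg-events",  # placeholder ARN
    NotificationTypes=[
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
    ],
)

configs = autoscaling.describe_notification_configurations(
    AutoScalingGroupNames=["my-asg"]
)["NotificationConfigurations"]
for config in configs:
    print(config["NotificationType"], "->", config["TopicARN"])
```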
Returns descriptions of what each policy does. This action supports pagination. If the response includes a token, there are more records available. To get the additional records, repeat the request with the response token as the NextToken parameter.
The PoliciesType data type.
Returns the scaling activities for the specified Auto Scaling group.
If the specified ActivityIds list is empty, all the activities from the past six weeks are returned. Activities are sorted by completion time. Activities still in progress appear first on the list.
This action supports pagination. If the response includes a token, there are more records available. To get the additional records, repeat the request with the response token as the NextToken parameter.
The output for the DescribeScalingActivities action.
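As an illustrative sketch (the group name is a placeholder), the recent activity record for a group can be read back with boto3 like this:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

activities = autoscaling.describe_scaling_activities(
    AutoScalingGroupName="my-asg", MaxRecords=20
)["Activities"]

# In-progress and most recently completed activities come first.
for activity in activities:
    print(activity["StartTime"], activity["StatusCode"], "-", activity["Description"])
```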
Returns scaling process types for use in the ResumeProcesses and SuspendProcesses actions.
The output of the DescribeScalingProcessTypes action.
Lists all the actions scheduled for your Auto Scaling group that haven't been executed. To see a list of actions already executed, see the activity record returned in DescribeScalingActivities.
A scaling action that is scheduled for a future time and date. An action can be scheduled up to thirty days in advance.
Starting with API version 2011-01-01, you can use recurrence to specify that a scaling action occurs regularly on a schedule.
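A hedged boto3 sketch of a recurring scheduled action follows; the recurrence uses cron syntax evaluated in UTC, and all names and sizes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="my-asg",
    ScheduledActionName="weekday-morning-scale-up",
    Recurrence="0 7 * * 1-5",    # 07:00 UTC, Monday through Friday
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=4,
)

# List the scheduled actions that have not yet run.
for action in autoscaling.describe_scheduled_actions(
    AutoScalingGroupName="my-asg"
)["ScheduledUpdateGroupActions"]:
    print(action["ScheduledActionName"], action.get("Recurrence"))
```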
Disables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of affected metrics with the Metrics parameter.
The Ebs data type.
The EnabledMetric data type.
Enables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of enabled metrics with the Metrics parameter.
Auto Scaling metrics collection can be turned on only if the InstanceMonitoring flag, in the Auto Scaling group's launch configuration, is set to True.
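Assuming the launch configuration was created with instance monitoring enabled, turning on a subset of group metrics with boto3 looks roughly like this (the group name is a placeholder; the metric names are standard group metrics):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.enable_metrics_collection(
    AutoScalingGroupName="my-asg",
    Granularity="1Minute",       # the granularity Auto Scaling supports
    Metrics=[
        "GroupMinSize",
        "GroupMaxSize",
        "GroupDesiredCapacity",
        "GroupInServiceInstances",
    ],
)

# Omitting Metrics enables collection for every available group metric:
# autoscaling.enable_metrics_collection(AutoScalingGroupName="my-asg", Granularity="1Minute")
```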
Runs the policy you create for your Auto Scaling group in PutScalingPolicy.
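For example, a policy created earlier, such as the hypothetical scale-out-ten-percent policy sketched above, can be triggered manually with boto3:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.execute_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="scale-out-ten-percent",  # placeholder policy name
    HonorCooldown=True,                  # respect the group's cooldown period
)
```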
The Instance data type.
The InstanceMonitoring Data Type.
The LaunchConfiguration data type.
The MetricCollectionType data type.
The MetricGranularityType data type.
The NotificationConfiguration data type.
There are two primary Auto Scaling process types--Launch and Terminate. The Launch process creates a new Amazon EC2 instance for an Auto Scaling group, and the Terminate process removes an existing Amazon EC2 instance.
The remaining Auto Scaling process types relate to specific Auto Scaling features:
IMPORTANT: If you suspend Launch or Terminate, all other process types are affected to varying degrees. The following descriptions discuss how each process type is affected by a suspension of Launch or Terminate.
The AddToLoadBalancer process type adds instances to the load balancer when the instances are launched. If you suspend this process, Auto Scaling will launch the instances but will not add them to the load balancer. If you resume the AddToLoadBalancer process, Auto Scaling will also resume adding new instances to the load balancer when they are launched. However, Auto Scaling will not add running instances that were launched while the process was suspended; those instances must be added manually using the RegisterInstancesWithLoadBalancer call in the Elastic Load Balancing API Reference.
The AlarmNotifications process type accepts notifications from Amazon CloudWatch alarms that are associated with the Auto Scaling group. If you suspend the AlarmNotifications process type, Auto Scaling will not automatically execute scaling policies that would be triggered by alarms.
Although the AlarmNotifications process type is not directly affected by a suspension of Launch or Terminate, alarm notifications are often used to signal that a change in the size of the Auto Scaling group is warranted. If you suspend Launch or Terminate, Auto Scaling might not be able to implement the alarm's associated policy.
The AZRebalance process type seeks to maintain a balanced number of instances across Availability Zones within a Region. If you remove an Availability Zone from your Auto Scaling group or an Availability Zone otherwise becomes unhealthy or unavailable, Auto Scaling launches new instances in an unaffected Availability Zone before terminating the unhealthy or unavailable instances. When the unhealthy Availability Zone returns to a healthy state, Auto Scaling automatically redistributes the application instances evenly across all of the designated Availability Zones.
IMPORTANT: If you call SuspendProcesses on the launch process type, the AZRebalance process will neither launch new instances nor terminate existing instances. This is because the AZRebalance process terminates existing instances only after launching the replacement instances. If you call SuspendProcesses on the terminate process type, the AZRebalance process can cause your Auto Scaling group to grow up to ten percent larger than the maximum size. This is because Auto Scaling allows groups to temporarily grow larger than the maximum size during rebalancing activities. If Auto Scaling cannot terminate instances, your Auto Scaling group could remain up to ten percent larger than the maximum size until you resume the terminate process type.
The HealthCheck process type checks the health of the instances. Auto Scaling marks an instance as unhealthy if Amazon EC2 or Elastic Load Balancing informs Auto Scaling that the instance is unhealthy. The HealthCheck process can override the health status of an instance that you set with SetInstanceHealth.
The ReplaceUnhealthy process type terminates instances that are marked as unhealthy and subsequently creates new instances to replace them. This process calls both of the primary process types--first Terminate and then Launch.
IMPORTANT: The HealthCheck process type works in conjunction with the ReplaceUnhealthy process type to provide health check functionality. If you suspend either Launch or Terminate, the ReplaceUnhealthy process type will not function properly.
The ScheduledActions process type performs scheduled actions that you create with PutScheduledUpdateGroupAction. Scheduled
actions often involve launching new instances or terminating existing instances. If you suspend either Launch or Terminate ,
your scheduled actions might not function as expected.
Configures an Auto Scaling group to send notifications when specified events take place. Subscribers to this topic can have messages
for events delivered to an endpoint such as a web server or email address.
A new PutNotificationConfiguration
overwrites an existing configuration.
Creates or updates a policy for an Auto Scaling group. To update an existing policy, use the existing policy name and set the
parameter(s) you want to change. Any existing parameter not changed in an update to an existing policy is not changed in this update
request.
The PolicyARNType data type.
Creates a scheduled scaling action for a Auto Scaling group. If you leave a parameter unspecified, the corresponding value remains
unchanged in the affected Auto Scaling group.
Resumes Auto Scaling processes for an Auto Scaling group. For more information, see SuspendProcesses and ProcessType.
The ScalingPolicy data type.
This data type stores information about an scheduled update to an Auto Scaling group.
Adjusts the desired size of the AutoScalingGroup by initiating scaling activities. When reducing the size of the group, it is not
possible to define which EC2 instances will be terminated. This applies to any Auto Scaling decisions that might result in terminating
instances.
There are two common use cases for SetDesiredCapacity :
one for users of the Auto Scaling triggering system, and another for developers who write their own triggering systems. Both use
cases relate to the concept of cooldown.
In the first case, if you use the Auto Scaling triggering system,
SetDesiredCapacity changes the size of your Auto Scaling group without regard to the cooldown period. This could be useful, for
example, if Auto Scaling did something unexpected for some reason. If your cooldown period is 10 minutes, Auto Scaling would normally reject
requests to change the size of the group for that entire 10 minute period. The SetDesiredCapacity command allows you to circumvent
this restriction and change the size of the group before the end of the cooldown period.
In the second case, if you write
your own triggering system, you can use SetDesiredCapacity to control the size of your Auto Scaling group. If you want the same
cooldown functionality that Auto Scaling offers, you can configure SetDesiredCapacity to honor cooldown by setting the
HonorCooldown parameter to true .
Sets the health status of an instance.
An Auto Scaling process that has been suspended. For more information, see ProcessType.
Suspends Auto Scaling processes for an Auto Scaling group. To suspend specific process types, specify them by name with the
ScalingProcesses.member.N parameter. To suspend all process types, omit the ScalingProcesses.member.N parameter.
IMPORTANT: Suspending either of the two primary process types, Launch or Terminate, can prevent other process types from
functioning properly. For more information about processes and their dependencies, see ProcessType.
To resume processes that
have been suspended, use ResumeProcesses.
Terminates the specified instance. Optionally, the desired group size can be adjusted.
NOTE: This call simply
registers a termination request. The termination of the instance cannot happen immediately.
The output for the TerminateInstanceInAutoScalingGroup action.
Updates. Triggers that are currently in progress aren't affected.
NOTE: If the new values are specified for the MinSize or MaxSize parameters, then there will be an implicit call to
SetDesiredCapacity to set the group to the new MaxSize. All optional parameters are left unchanged if not passed in the request. | http://docs.amazonwebservices.com/sdkfornet/latest/apidocs/html/N_Amazon_AutoScaling_Model.htm | 2012-05-26T17:11:29 | crawl-003 | crawl-003-016 | [] | docs.amazonwebservices.com |
Of course, a language feature would not be worthy of the name ``class'' without supporting inheritance. The syntax for a derived class definition looks as follows:
class DerivedClassName(BaseClassName): <statement-1> . . . <statement-N>
The name BaseClassName must be defined in a scope containing the derived class definition. Instead of a base class name, an expression is also allowed. This is useful when the base class is defined in another module, e.g.,, it is searched in fact end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are ``virtual functions''.) defined or imported directly in the global scope.)
[email protected]@python.org | http://docs.python.org/release/1.5/tut/node65.html | 2012-05-26T18:49:20 | crawl-003 | crawl-003-016 | [] | docs.python.org |
Python supports a limited form of multiple inheritance as well. A class definition with multiple base classes looks as follows:
class DerivedClassName(Base1, Base2, Base3): <statement-1> . . . <statement-N>
The only rule necessary to explain the semantics is the resolution rule used for class attribute references. This is depth-first, left-to-right. Thus, if an attribute is not found in DerivedClassName, it is searched in Base1, then (recursively) in the base classes of Base1, and only if it is not found there, it is searched in Base2, and so on.
(To some people breadth first -- searching Base2 and Base3 before the base classes of Base1 -- looks more natural. However, this would require you to know whether a particular attribute of Base1 is actually defined in Base1 or in one of its base classes before you can figure out the consequences of a name conflict with an attribute of Base2. The depth-first rule makes no differences between direct and inherited attributes of Base1.). | http://docs.python.org/release/1.5/tut/node66.html | 2012-05-26T18:49:30 | crawl-003 | crawl-003-016 | [] | docs.python.org |
There is now limited support for class-private identifiers.., e.g. for the debugger, and that's one reason why this loophole is not closed. (Buglet: derivation of a class with the same name as the base class makes use of private variables of the base class possible.)
Notice that code passed to exec, eval() or evalfile().
Here's an example of a class that implements its own __getattr__ and __setattr__ methods and stores all attributes in a private variable, in a way that works in Python 1.4 as well as in previous versions:
class VirtualAttributes: __vdict = None __vdict_name = locals().keys()[0] def __init__(self): self.__dict__[self.__vdict_name] = {} def __getattr__(self, name): return self.__vdict[name] def __setattr__(self, name, value): self.__vdict[name] = value
[email protected]@python.org | http://docs.python.org/release/1.5/tut/node67.html | 2012-05-26T18:49:39 | crawl-003 | crawl-003-016 | [] | docs.python.org |
If you have changed the contents of the Pattern Registry using the Pattern Organizer (created new shortcuts, exported or created shared folders), these changes are synchronized with the Registry automatically. When you close the Pattern Organizer, you are prompted to save changes. Each time you start UML modeling, the contents of the available storage is scanned for patterns. The contents of the registry is synchronized with the actual availability of the pattern folders. If you have made changes to the patterns outside the Organizer, these changes will be synchronized when UML modeling is started. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devcommon/savechangesinpatternorganizer_xml.html | 2012-05-26T19:17:09 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
About events
This documentation does not apply to the most recent version of Splunk. Click here for the latest version.
About events
Events are records of activity within log files, and they are what is primarily indexed by Splunk. They provide information about the systems that have produced these log files. We often refer to the output of the indexing process as "event data."
Here's a sample event:
172.26.34.223 - - [01/Jul/2005:12:05:27 -0700] "GET /trade/app?action=logout HTTP/1.1" 200 2953
When Splunk indexes events, it:
- Identifies event timestamps (and applies timestamps to events if they do not exist).
- Performs event segmentation.
- Recognizes multi-line events and performs linebreaking as appropriate.
- Extracts a set of useful standard fields such as
host,
source, and
sourcetype.
In this topic we'll provide brief overviews of these activities and show you where to go for more information about them.
For an overview of the Splunk indexing process, see the "Indexing and event processing". | http://docs.splunk.com/Documentation/Splunk/4.0/Knowledge/Aboutevents | 2012-05-26T23:34:12 | crawl-003 | crawl-003-016 | [] | docs.splunk.com |
Help Center
Local Navigation
Search This Document
Use a USB or serial port connection
Using a serial or USB connection, BlackBerry® device applications can communicate with desktop applications when they are connected to a computer using a serial or USB port. This type of connection also lets BlackBerry device applications communicate with a peripheral device that plugs into the serial or USB port.
- Import the following classes:
- Import the javax.microedition.io.StreamConnection interface.
- Invoke Connector.open(), and specify comm as the protocol and COM1 or USB as the port to open a USB or serial port connection, .
private StreamConnection _conn = (StreamConnection)Connector.open( "comm:COM1;baudrate=9600;bitsperchar=8;parity=none;stopbits=1");
- To send data on the USB or serial port connection, invoke openDataOutputStream() or openOutputStream().
DataOutputStream _dout = _conn.openDataOutputStream();
- Use the write methods on the output stream to write data.
private String data = "This is a test"; _dout.writeChars(data);
- To receive data on the USB or serial port connection, use a non-main event thread to read data from the input stream. Invoke openInputStream() or openDataInputStream().
DataInputStream _din = _conn.openInputStream(); Use the read methods on the input stream to read data.
- Use the read methods on the input stream to read data.
String contents = _din.readUTF();
- To close the USB or serial port connection, invoke close() on the input and output streams, and on the port connection object. The close() method can throw IOExceptions. Make sure the BlackBerry device application implements exception handling.
_din.close(); _dout.close(); conn.close();
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/5779/Use_a_USB_or_serial_port_connection_508962_11.jsp | 2012-05-26T22:44:35 | crawl-003 | crawl-003-016 | [] | docs.blackberry.com |
Help Center
Local Navigation
Search This Document
Use a datagram connection
Datagrams are independent packets of data that applications send over networks. A Datagram object is a wrapper for the array of bytes that is the payload of the datagram. Use a datagram connection to send and receive datagrams.
To use a datagram connection, you must have your own infrastructure to connect to the wireless network, including an APN for GPRS networks. Using UDP connections requires that you work closely with service providers. Verify that your service provider supports UDP connections.
- Import the following classes and interfaces:
- Import the following interfaces:
- Use the CoverageInfo class and the CoverageStatusListener interface of the net.rim.device.api.system package to make sure that the BlackBerry device is in a wireless network coverage area.
- Invoke Connector.open(), specify udp as the protocol and cast the returned object as a DatagramConnection object to open a datagram connection.
(DatagramConnection)Connector.open("udp://host:dest_port[;src_port]/apn");
where:
- To receive datagrams from all ports at the specified host, omit the destination port in the connection string.
- To open a datagram connection on a non-GPRS network, specify the source port number, including the trailing slash mark.For example, the address for a CDMA network connection would be udp://121.0.0.0:2332;6343/. You can send and receive datagrams on the same port.
- To create a datagram, invoke DatagramConnection.newDatagram().
Datagram outDatagram = conn.newDatagram(buf, buf.length);
- To add data to a diagram, invoke Datagram.setData().
byte[] buf = new byte[256]; outDatagram.setData(buf, buf.length);
- To send data on the datagram connection, invoke send() on the datagram connection.
conn.send(outDatagram);If a BlackBerry® Java® Application attempts to send a datagram on a datagram connection and the recipient is not listening on the specified source port, an IOException is thrown. Make sure that the BlackBerry Java Application implements exception handling.
- To receive data on the datgram connection, invoke receive() on the datagram connection. The receive() method blocks other operations until it receives a data packet. Use a timer to retransmit the request or close the connection if a reply does not arrive.
byte[] buf = new byte[256]; Datagram inDatagram = conn.newDatagram(buf, buf.length); conn.receive(inDatagram);
- To extract data from a datagram, invoke getData(). If you know the type of data that you are receiving, convert the data to the appropriate format.
String received = new String(inDatagram.getData());
- Close the datagram connection, invoke close() on the input and output streams, and on the datagram connection object.
conn.close();
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/5779/Use_a_datagram_connection_508961_11.jsp | 2012-05-26T22:44:42 | crawl-003 | crawl-003-016 | [] | docs.blackberry.com |
Help Center
Local Navigation
Search This Document
Use an HTTPS connection
If your BlackBerry device is associated with a BlackBerry® Enterprise Server and uses an HTTPS proxy server that requires authentication, you will not be able to use end-to-end TLS.
- Import the following classes:
- Import the following interfaces:
- Use the CoverageInfo class and CoverageStatusListener interface of the net.rim.device.api.system package to make sure that the BlackBerry device is in a wireless network coverage area.
- Invoke Connector.open(), specifying HTTPS as the protocol and cast the returned object as an HttpsConnection object to open an HTTP connection.
HttpsConnection stream = (HttpsConnection)Connector.open("");
- To specify the connection mode, add one of the following parameters to the connection string that passes to Connector.open()
- Specify that an end-to-end HTTPS connection must be used from the BlackBerry device to the target server: EndToEndRequired
- Specify that an end-to-end HTTPS connection should be used from the BlackBerry device to the target server. If the BlackBerry device does not support end-to-end TLS, and the BlackBerry device user permits proxy TLS connections, then a proxy connection is used: EndToEndDesired.
HttpsConnection stream = (HttpsConnection)Connector.open(";EndToEndDesired");
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/developers/deliverables/5779/Use_an_HTTPS_connection_508959_11.jsp | 2012-05-26T22:44:48 | crawl-003 | crawl-003-016 | [] | docs.blackberry.com |
Header File
stdlib.h
Syntax
extern char ** _environ;
extern wchar_t ** _wenviron
Description
_environ is an array of pointers to strings; it is used to access and alter the operating system environment variables. Each string is of the form:
envvar = varvalue
where envvar is the name of an environment variable (such as PATH), and varvalue is the string value to which envvar is set (such as C:\Utils;C:\MyPrograms). The string varvalue can be empty.
When a program begins execution, the operating system environment settings are passed directly to the program. Note that env, the third argument to main, is equal to the initial setting of _environ.
The _environ array can be accessed by getenv; however, the putenv function is the only routine that should be used to add, change or delete the _environ array entries. This is because modification can resize and relocate the process environment array, but _environ is automatically adjusted so that it always points to the array.
Portability | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devwin32/environ_xml.html | 2012-05-26T21:15:32 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
Warning: This module is obsolete. As of Python 1.5a4, package support (with different semantics for __init__ and no support for __domain__ or __) is built in the interpreter. The ni module is retained only for backward compatibility. As of Python 1.5b2, it has been renamed to ni1; if you really need it, you can use import ni1, but the recommended approach is to rely on the built-in package support, converting existing packages if needed. Note that mixing ni and the built-in package support doesn't work: once you import ni, all packages use it.
The ni module defines a new importing scheme, which supports packages containing several Python modules. To enable package support, execute import ni before importing any packages. Importing this module automatically installs the relevant import hooks. There are no publicly-usable functions or variables in the ni module.
To create a package named spam containing sub-modules ham, bacon and eggs, create a directory `spam' somewhere on Python's module search path, as given in sys.path. Then, create files called `ham.py', `bacon.py' and `eggs.py' inside `spam'.
To import module ham from package spam and use function hamneggs() from that module, you can use any of the following possibilities:
import spam.ham # *not* "import spam" !!! spam.ham.hamneggs()
from spam import ham ham.hamneggs()
from spam.ham import hamneggs hamneggs()
__.spam_inited = 1 # Set a package-level variable | http://docs.python.org/release/1.5/lib/node40.html | 2012-05-26T18:15:47 | crawl-003 | crawl-003-016 | [] | docs.python.org |
This message occurs when you try to call a symbol from within a procedure or function that has been tagged with the local directive.
The local directive, which marks routines as unavailable for export, is platform-specific and has no effect in Windows programming.
On Linux, the local directive is used for routines that are compiled into a library but are not exported. This directive can be specified for standalone procedures and functions, but not for methods. A routine declared with local, for example,
function Contraband(I: Integer): Integer; local;
does not refresh the EBX register and hence | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/devcommon/cm_cannot_take_address_xml.html | 2012-05-26T17:31:22 | crawl-003 | crawl-003-016 | [] | docs.embarcadero.com |
This module provides a subset of the operating system dependent functionality provided by the optional built-in module posix. It is best accessed through the more portable standard module os.: xstat(). This function returns the same information as stat(), but with three extra values appended: the size of the resource fork of the file and its 4-char creator and type. | http://docs.python.org/release/1.5/lib/node204.html | 2012-05-26T17:54:12 | crawl-003 | crawl-003-016 | [] | docs.python.org |
The modules in this chapter are available on the Apple Macintosh only.
Aside from the modules described here there are also interfaces to various MacOS toolboxes, which are currently not extensively described. The toolboxes for which modules exist are: AE (Apple Events), Cm (Component Manager), Ctl (Control Manager), Dlg (Dialog Manager), Evt (Event Manager), Fm (Font Manager), List (List Manager), Menu (Moenu Manager), Qd (QuickDraw), Qt (QuickTime), Res (Resource Manager and Handles), Scrap (Scrap Manager), Snd (Sound Manager), TE (TextEdit), Waste (non-Apple TextEdit replacement) and Win (Window Manager). or similar works. | http://docs.python.org/release/1.5/lib/node203.html | 2012-05-26T17:54:03 | crawl-003 | crawl-003-016 | [] | docs.python.org |
') '\2534\363' >>> rt.encryptmore('bar') '\357\375$' >>> rt.encrypt('bar') '\2534\363' >>> rt.decrypt('\2534\363') 'bar' >>> rt.decryptmore('\357\375$') 'bar' >>> rt.decrypt('\357\375$') 'l(\315' >>> del rt
The original Enigma cipher was broken in 1944. The version implemented here is probably a good deal more difficult to crack (especially if you use many rotors), but it won't be impossible for a truly skil.
[email protected]@python.org | http://docs.python.org/release/1.5/lib/node202.html | 2012-05-26T17:53:52 | crawl-003 | crawl-003-016 | [] | docs.python.org |
Ipswitch, Inc.
In this File
Note: New Premium AntiVirus installations will now default to delete for Infected File Action, all upgrade installations will remain with bounce.
Symantec AntiVirus for IMail Server is an add-on product for IMail Server.
Ipswitch IMail Server provides Symantec's Scan Engine technology as the Premium AntiVirus Edition.
Premium AntiVirus for IMail
IMail AntiVirus IMail AntiVirus AntiVirus Scan Engine Web Administrator. You can access the Scan Engine Web Administrator at the IP address entered in the Proxy Server IP Address on the AntiVirus Settings page followed by :8004 (the default port for the Scan Engine Web Administrator).
For example:. The default password for the Scan Engine Web Administrator is admin. The Symantec Anti-Virus Scan Engine Administration page appears.
The new Symantec Scan Engine has several added features:
Scan Engine changes:
Known Issues
IMail AntiVirus Premium Edition is included separately from the IMail Server installation. IMail AntiVirus Installation will enable the antivirus features within IMail Server and install the AntiVirus Server.
To begin the installation, do one of the following:
On the installation screen, select the components you want to install, then click Install. Follow the on-screen instructions.
Upon successful installation, open IMail Administrator, and click on the new AntiVirus tab at the top of the page.. | http://docs.ipswitch.com/_Messaging/Antivirus/PremiumAV/v5.1.4/ReleaseNotes/Index.htm | 2008-05-16T05:12:08 | crawl-001 | crawl-001-006 | [] | docs.ipswitch.com |
Promotion types are returned by the PromotionCategory element and include:
FreeShipping--The item is shipped free of charge.
BuyQuantityXGetAmounOffX--If you buy at least the specified number of items, the cost of the next item is discounted. For example, if a customer buys three shirts, the fourth shirt is half off.
ForEachQuantityXGetAmountOffX--Each item is discounted by the specified amount. For example, all shirts are 30% off. | http://docs.amazonwebservices.com/AWSECommerceService/2008-03-03/DG/PromotionDetails_PromotIonTypes.html | 2008-05-16T05:16:45 | crawl-001 | crawl-001-006 | [] | docs.amazonwebservices.com |
For assistance with Yahoo! products please refer to our help pages.
If you are reporting a breach of our Terms
of Service please refer to our abuse
reporting pages.
· Add
my Site!: Want to add your site to the Yahoo! database?
· Fix this Listing:
Want to change some information on your current listing (i.e. URL,
comment, email address, etc.)?
· Where's
my Site?: You submitted your site to be added over a month
ago and there's still is no sign of it? | http://uk.docs.yahoo.com/writeus/suggest.html | 2008-05-16T07:07:37 | crawl-001 | crawl-001-006 | [] | uk.docs.yahoo.com |
Customizable and skinnable social platform dedicated to (open)data.
We split the documentation given your profile (help us to improve it if you do not feel comfortable with those!):
I want to launch the project to play with it locally¶
The prefered way to test udata is to use the docker images to easily have an up and ready udata instance.
I’m a regular developer of the platform¶
Install system dependencies
and then read the
Developping sections starting with Development environment.
Once the project is up and running, it’s time to customize it!
Take a look at our advanced documentation on adapting settings,
creating a custom theme, testing your code,
adding translations and so on.
I want to use it for my country¶
The project is currently in production for France and in development for Luxembourg. We can help you to set up your infrastructure, you can contact the team via a Github issue or through Gitter to chat directly.
Take a look at the governance section to know how we deal with feature voting from all the community.
To perform a full installation for production purpose, follow the System dependencies and installation sections.! | http://udata.readthedocs.io/en/stable/ | 2018-07-16T04:22:27 | CC-MAIN-2018-30 | 1531676589179.32 | [] | udata.readthedocs.io |
Troubleshoot company resource access problems with Microsoft Intune
Use the error and status codes in this topic to help you troubleshoot problems when a Microsoft Intune action returns an error code.
If this information does not solve your problem, see How to get support for Microsoft Intune to find more ways to get help.
Status codes for MDM managed Windows devices
Company resource access (common errors)
Errors returned by iOS devices
Company Portal errors
Service errors
OMA response codes
Next steps
If this troubleshooting information didn't help you, contact Microsoft Support as described in How to get support for Microsoft Intune. | https://docs.microsoft.com/en-us/intune/troubleshoot-company-resource-access-problems | 2018-07-16T05:10:49 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
Remote Installation Services, and can exist as two types of extensions:
Server-side extension (SSE). A Microsoft Management Console (MMC) snap-in that is used for administration and configuration.
Client-side extension (CSE). An extension that runs on the client computer and interprets and applies the MMC-type settings to the client computer. The client computer is also known as the target computer.
The GPO settings are configured through the server-side extension, and are applied to individual computers and users by the Group Policy engine and the client-side extension (CSE). The Group Policy engine is the framework used to manage client side-extensions.
Remote Installation Services (RIS) is different from most other Group Policy extensions. RIS has no CSE or dynamic-link library (DLL) providing a CSE because RIS is used by the client before the client has an operating system installed.
In this subject | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc776851(v=ws.10) | 2018-07-16T05:19:35 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
There are four database schemas available in datazilla: datazilla, schema_hgmozilla.sql.tmpl, schema_objectstore.sql.tmpl, and schema_perftest.sql.tmpl. Three of these are template schemas that are used by manage commands to create new databases with the storage engine specified by the user. Access to each schema is provided through the Model layer. The model layer is used by controllers to retrieve data in each of the schemas and is exposed to the user through a set of web service methods.
This schema is accessed using the django ORM, the model for it is defined here. and consists of a single table with the following structure.
All databases storing data used by datazilla are stored as a row in this table. Each database has three classifiers associated with it: project, contenttype, and dataset. The name of the database is typically these three classifiers joined on an underscore but there is no requirement for this, the name can be any string. There is no physical requirement for the databases referenced in this table to be co-located. The only requirement is that both the web service and machine that run’s the cron jobs have access to each of the databases in this table. Any database can have OAuth credentials associated with it but they are not required so the field can be null. Currently the only databases that require OAuth are the objectstore and only for the storage of the JSON object. Each database can also have a cron batch interval associated with it. This interval specifies the time interval of cron jobs run.
project - A descriptive string associated with the project: talos, b2g, schema etc... This string becomes the location field in the url for related web service methods.
contenttype - A string describing the content type associated with the database. The perftest content type stores performance test results, the objectstore content type stores JSON objects in it. A project can have any number of contenttypes associated with it.
dataset - An integer that can be enumerated. This allows more than one database to exist for the same project/contenttype pair.
host - Name of the database host.
read_only_host - A read only host associated with the database.
name - Name of the database.
type - Type of storage engine associated with the database. This is automatically added to the template schema when a user runs a manage command that creates a database schema. There is currently support for MariaDB and MySQL storage engines.
oauth_consumer_key - The OAuth consumer key. This is created for databases with objectstores automatically by the create_project manage command.
oauth_consumer_secret - The OAuth consumer secret. This is created for databases with objectstores automatically by the create_project manage command.
creation_date - Date the database was created.
cron_batch - The cron interval to use when running cron jobs on this database.
The hgmozilla schema currently holds the mozilla mercurial push log data. However, the only part of it that’s specific to mercurial is the web service method used to retrieve data to populate it. The data used to populate the schema is generated by the json-pushes web service method. The manage command, update_pushlog, calls this web service method and populates the associated schema. The data can be used to create an ordered list of code base changes pushed to the build/test system. This is required for any statistical method that requires a comparison between a push and its parent.
The objectstore schema holds the unprocessed json objects submitted to the project. When objects are successfully processed into a corresponding index the test_run_id field is populated with an integer. The test_run_id corresponds to the test_run.id field in the perftest schema.
This perftest schema translates the JSON structure in the objectstore into a relational index. It also contains tables for the storage of statistical data generated post object submission.
The model layer found in /datazilla/model provides an interface for getting/setting data in a database. The datazilla model classes rely on a module called datasource. This module encapsulates SQL manipulation. All of the SQL used by the system is stored in JSON files found in /datazilla/model/sql. There can be any number of SQL files stored in this format. The JSON structure allows SQL to be stored in named associative arrays that also contain the host type to be associated with each statement. Any command line script or web service method that requires data should use a derived model class to obtain it.
ptm = PerformanceTestModel(project) products = ptm.get_product_test_os_map()
The ptm.get_product_test_os_map() method looks like this:
def get_product_test_os_map(self): proc = 'perftest.selects.get_product_test_os_map' product_tuple = self.sources["perftest"].dhub.execute( proc=proc, debug_show=self.DEBUG, return_type='tuple', ) return product_tuple
perftest.selects.get_product_test_os_map found in datazilla/model/sql/perftest.json looks like this:
{ "selects":{ "get_product_test_os_map":{ "sql":"SELECT b.product_id, tr.test_id, b.operating_system_id FROM test_run AS tr LEFT JOIN build AS b ON tr.build_id = b.id WHERE b.product_id IN ( SELECT product_id FROM product ) GROUP BY b.product_id, tr.test_id, b.operating_system_id", "host":"master_host" }, "...more SQL statements..." }
The string, perftest, in perftest.selects.get_product_test_os_map refers to the SQL file name to load in /datazilla/model/sql. The SQL in perftest.json can also be written with placeholders and a string replacement system, see datasource for all of the features available. | http://datazilla.readthedocs.io/en/latest/architecture/ | 2018-07-16T04:30:54 | CC-MAIN-2018-30 | 1531676589179.32 | [] | datazilla.readthedocs.io |
Create a knowledge base An administrator can set knowledge base field values when creating a new knowledge base. Knowledge managers can set field values for knowledge bases they manage. Table 1. Knowledge Base form fields Field Description Title The name of the knowledge base. Icon The image that is displayed next to an article from this knowledge base on the search and browse result pages. Disable commenting A check box for preventing users from commenting on articles in the knowledge base. You can override this setting for specific articles using the Disable commenting check box on the Knowledge form. Disable suggesting A check box for preventing users from suggesting edits to articles in the knowledge base. You can override this setting for specific articles using the Disable suggesting check box on the Knowledge form. Disable category editing A check box for preventing contributors from creating and editing categories when selecting a category. When this check box is selected, only knowledge managers can define knowledge categories. Owner The user responsible for the knowledge base.Note: Knowledge managers cannot change the Owner field value. Managers A list of knowledge base managers. Publish workflow The publishing workflow for articles within the knowledge base. Retire workflow The retiring workflow for articles within the knowledge base. Active A check box that indicates if the knowledge base is active. Enable social questions and answers A check box that indicates if the knowledge base allows Social Q&A questions. Description A text description of the knowledge base. Set default knowledge field values Default values for knowledge articles in this publication. To define a default value, select a field in the left column, then use the right column to enter the data to automatically populate the selected field. Knowledge contributors can choose to apply default field values when selecting a knowledge base for an article.Note: You cannot set a default value for the Author field. Related TasksEnable social Q&A for a knowledge baseCreate a custom knowledge homepageRelated ConceptsI18N - Knowledge internationalizationRelated ReferenceKnowledge workflowsKnowledge properties | https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/product/knowledge-management/reference/r_KnowledgeBaseFields.html | 2018-07-16T04:36:33 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.servicenow.com |
<![CDATA[ ]]>User Guide > Drawing > Drawing with Line Texture > Erasing Textured Lines
Erasing Textured Lines
When you use the Eraser tool to erase a portion of a textured line, the vector frame is cut straight and you lose the feather created while drawing with the Brush tool. In Toon Boom Animate Pro, a special option in the Eraser
tool lets you create a soft edge on your textured lines. You can also cut or keep the vector frame as is.
The Brush Properties panel opens.
Related Topics | https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/004_Drawing/074_H2_Erasing_Textured_Lines_.html | 2018-07-16T04:28:08 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../../../Resources/Images/SBP/stroke_preview.png', None],
dtype=object)
array(['../../../Resources/Images/HAR/Stage/Drawing/Steps/brush_properties.png',
None], dtype=object)
array(['../../../Resources/Images/SBP/tip_shape.png', None], dtype=object)
array(['../../../Resources/Images/SBP/an_brushshape_01.png', None],
dtype=object) ] | docs.toonboom.com |
When you apply the vRealize Code Stream standalone license to a vRealize Automation appliance, you enable the vRealize Code Stream functions.
About this task
You can use the vRealize Code Stream standalone license to enable the Artifact Management, Release Management, Release Dashboard, Approval Services, and Advanced Service Designer features.
Prerequisites
Verify that the vRealize Automation appliance is set up. See Configure the vRealize Automation appliance.
Procedure
- Open the vRealize Automation Appliance management console with the fully qualified domain name, https:// vrcs-va-hostname.domain.name:5480/.
- Log in as the root user.
- Enter a valid vRealize Code Stream license key and click Submit Key.
The default Artifactory server is enabled when the valid license key is accepted.
The vRealize Code Stream license includes the Artifactory Pro version.
- Confirm that you can log in to the vRealize Automation console.
- Open a Web browser.
- Navigate to.
What to do next
Configure a repository in the Artifactory server. See the JFrog Web site. | https://docs.vmware.com/en/vRealize-Code-Stream/2.1/com.vmware.vrcs.overview-install.doc/GUID-D8811925-C5DF-4E1E-9A7B-5B2B6BC878F9.html | 2018-07-16T05:10:01 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
Creating Your First Extension¶
Plugins consist of two things. First, a meta-data file describing the extension which includes things like a name, the author, and where to find the extension. Second, the code which can take the form of a shared library or python module.
Builder supports writing extensions in C, Vala, or Python. We will be using Python for our examples in this tutorial because it is both succinct and easy to get started with.
First, we will look at our extension meta-data file. The file should have the file-suffix of “.plugin” and it’s format is familiar. It starts with a line containing “[Plugin]” indicating this is extension metadata. Then it is followed by a series of “Key=Value” key-pairs.
Note
We often use the words “extension” and “plugin” interchangeably.
# my_plugin.plugin [Plugin] Name=My Plugin Loader=python3 Module=my_plugin Author=Angela Avery
Now we can create a simple plugin that will print “hello” when Builder starts and “goodbye” when Builder exits.
# my_plugin.py import gi from gi.repository import GObject from gi.repository import Ide class MyAppAddin(GObject.Object, Ide.ApplicationAddin): def do_load(self, application): print("hello") def do_unload(self, application): print("goodbye")
In the python file above, we define a new extension called
MyAppAddin.
It inherits from
GObject.Object (which is our base object) and implements the interface
Ide.ApplicationAddin.
We wont get too much into objects and interfaces here, but the plugin manager uses this information to determine when and how to load our extension.
The
Ide.ApplicationAddin requires that two methods are implemented.
The first is called
do_load and is executed when the extension should load.
And the second is called
do_unload and is executed when the plugin should cleanup after itself.
Each of the two functions take a parameter called
application which is an
Ide.Application instance.
Now place the two files in
~/.local/share/gnome-builder/plugins as
my_plugin.plugin and
my_plugin.py.
If we run Builder from the command line, we should see the output from our plugin!
[angela@localhost ~] gnome-builder hello
Now if we close the window, we should see that our plugin was unloaded.
[angela@localhost ~] gnome-builder hello goodbye
Embedding Resources¶
Sometimes plugins need to embed resources. Builder will automatically
load a file that matches the name
$module_name.gresource if it
placed alongside the
$module_name.plugin file.
Note
If you are writing an extension in C or Vala, simply embed GResources as normal.
<?xml version="1.0" encoding="UTF-8"?> <gresources> <gresource prefix="/org/gnome/builder/plugins/my-plugin"> <file preprocess="xml-stripblanks" compressed="true">gtk/menus.ui</file> </gresource> </gresources>
Next, compile the resources using
glib-compile-resources.
glib-compile-resources --generate my-plugin.gresource my-plugin.gresource.xml
Now you should have a file named
my-plugin.gresource in the current directory.
Ship this file along with your
my-plugin.plugin and Python module.
Next, continue on to learn about other interfaces you can implement in Builder to extend it’s features! | http://builder.readthedocs.io/en/latest/plugins/creating.html | 2018-07-16T04:49:04 | CC-MAIN-2018-30 | 1531676589179.32 | [] | builder.readthedocs.io |
Replication Stored Procedures (Transact-SQL)
.
Important
Only the replication stored procedures documented in SQL Server Books Online are supported. Undocumented stored procedures are only for the use of internal replication components and should not be used to administer replication.
See Also
Replication Management Objects Concepts
Replication Programming Concepts | https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/replication-stored-procedures-transact-sql?view=sql-server-2017 | 2018-07-16T05:11:31 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../../includes/media/yes.png?view=sql-server-2017', 'yes'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object)
array(['../../includes/media/no.png?view=sql-server-2017', 'no'],
dtype=object) ] | docs.microsoft.com |
Combine Regular and Custom Filters in Grid for ASP.NET MVC
Environment
Description
I have an ASP.NET MVC Grid with regular filters and local filtering. How can I add custom filters and combine the regular filters with the custom ones?
Solution
To allow the application of both filter types, wrap the custom filter in an additional filter with the
"OR" criteria.
For the complete implementation of the approach, refer to this runnable example, which applies the
filter method of the DataSource. | https://docs.telerik.com/kendo-ui/knowledge-base/add-custom-filters | 2018-07-16T05:07:07 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.telerik.com |
The File module¶
The FILE > Filelist module is where you can manage all the media associated with the TYPO3 CMS web site.
Managing files in TYPO3 CMS¶
This module is very similare to the WEB > List module. It displays a navigation tree, which corresponds to the file structure on the server, and a list of all files for the selected directory. You can choose to always display thumbnails (this might get slow if you have a lot of files in the chose directory).
For admin users, the folder displayed by default is called
"fileadmin/ (auto-generated)" and corresponds to the
fileadmin/ folder located under the root folder of your web
server directory.
Using those files inside content elements to display them or link to them in your web site is covered in the Editors Tutorial.
Note
There exists extensions which make it possible to connect to remote storages (like a WebDAV server or an Amazon S3 account) and work with the files as if they were on the TYPO3 CMS server.
There's a clipboard just like in the List module.
Using the action icons, files can be renamed or replaced (just hover over the icons and you will get a help text). | https://docs.typo3.org/typo3cms/GettingStartedTutorial/FileModule/Index.html | 2018-07-16T04:59:02 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../_images/BackendFileModule.png', 'The File module'],
dtype=object)
array(['../_images/BackendFileClipboard.png',
"The File module's clipboard"], dtype=object)] | docs.typo3.org |
Management for Natural Disasters
There are so many things that can happen in this world that can really destroy and damage a lot of cities and towns such as natural disasters. If you are really concerned about the people living in your country and how you can really lessen the damage made by natural disasters, you should really look for good strategies in order to fight off these disasters. When it comes to finding strategies how you can really help when natural disasters happen, there are now many people who have come up with really good and helpful strategies. We are now going to look at some of the well thought of ideas that managements have come up with in order to really lessen the damage and the destruction that natural disaster can give to homes, towns, and people; if you are curious to find out what these wonderful and very beneficial strategies are, please read on down below and we will have all these things made clear to you.
When it comes to natural disasters, one thing that you can do is to be prepared for this disaster to happen. If you are not prepared for these things to happen, you can loose your life of you can really experience a lot of bad damage to your houses and to your properties. There are many types of natural disasters and you should be prepared for each one. In cases such as fires, there are many fire extinguishers that you can find in buildings and there are also many fire alarms. These things are really helpful when it comes to disasters that can easily happen. It is really good to be aware of what can happen and to be prepared to deal with anything.
Today, there are so many technologies that have been created to help out when natural disaster strike. These technologies are now able to track the path of a hurricane or other natural disasters so this can really help people get out of places because of these storms and hurricanes.. Natural disaster can be really bad and it can really do a lot of damage and hurt a lot of people and families but if you are prepared and if you use good technologies to manage these disasters, you can really be more well off than those people who are not aware and who do not use these modern technologies to help track these natural disasters out. We hope you had a good read today and that you have learned something really important. Industrial pumps. | http://docs-prints.com/2018/01/10/6-facts-about-services-everyone-thinks-are-true/ | 2018-07-16T04:27:47 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs-prints.com |
<![CDATA[ ]]>User Guide > Effects > Using Effects > Highlight
Highlight
Use the Highlight effect to turn a drawing's area lighter to simulate a light source. To produce the highlight effect, you must draw a shape to control where the highlight will appear on the original drawing.
_2<<
Related Topics | https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/019_Effects/075_H2_Highlight_.html | 2018-07-16T04:46:38 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../../../Resources/Images/HAR/Stage/Effects/an_highlight.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Effects/an_highlight_network.png',
None], dtype=object)
array(['../../../Resources/Images/HAR/Stage/Effects/Steps/010_highlight_002.png',
None], dtype=object) ] | docs.toonboom.com |
- Citrix MCS for Nutanix AHV connector configuration
- Nutanix AHV connector configuration
- Network File Share connector configuration
A Nutanix AHV Connector Configuration contains the credentials and storage container the appliance needs to connect to Nutanix Acropolis.
You can use this connector configuration to access a specific location in your Nutanix environment when you:
You can use your Nutanix Acropolis environment for creating Layers, and publishing Layered Images. Each Connector Configuration accesses a specific storage container in your Nutanix Acropolis environment where you can create your layers or publish layered images.
You may need more than one Nutanix Acropolis Connector Configuration to access the correct container for each purpose. Further, you may want to publish each Layered Image to a container convenient to the systems you will be provisioning with the published image. For more about Connectors, and Connector Configurations, see About Connectors.
If this is your first time using the App Layering service
When publishing Layered Images to Acropolis, you will need at least one Connector Configuration for each storage container you plan to publish to. You can add Connector Configurations when creating an Image Template from which you will publish Layered Images. If you don't yet have the right Connector Configuration for the task, you can create one by clicking New on the Connector wizard tab (see details below).
Required information for Acropolis Connector Configuration settings
The Acropolis Connector Configuration wizard let's you define the credentials and container to use for a new configuration.
Important: The fields are case sensitive, so any values that you enter manually must match the case of the object in Acropolis, or the validation will fail.:
When viewing virtual machines through the Nutanix web console, you can search for virtual machines by filtering on:.
To enter values:
To add a new Connector Configuration:
When creating a new Connector Configuration, you can configure an optional Powershell script to run on any Windows machine running the App Layering Agent. These scripts must be stored on the same machine that the Agent is installed on, and will only be executed.
Script Configuration fields
Other Script Configuration values
Powershell variables
When the script is executed the following variables will. | https://docs.citrix.com/zh-cn/citrix-app-layering/4/nutanix-ahv/configure/connector-essentials/nutanix-ahv-connector-configuration.html | 2018-07-16T04:40:28 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.citrix.com |
With the Real-Time Audio-Video feature, you can use the local client system's webcam or microphone on a remote desktop. Real-Time Audio-Video is compatible with standard conferencing applications and browser-based video applications, and supports standard webcams, audio USB devices, and analog audio input.
For. | https://docs.vmware.com/en/VMware-Horizon-Client-for-Mac/4.7/horizon-client-mac-installation/GUID-7B9E57BF-CF17-4B30-B3E0-08072C7CAEC6.html | 2018-07-16T05:20:25 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.vmware.com |
Specifies.
Provides access to the axis label settings.
Specifies the value to be raised to a power when generating ticks for an axis of the logarithmic type.
Coupled with the BootstrapChartValueAxisBuilder.MinValue method, focuses the widget on a specific chart segment. Applies only to the axes of the "continuous" and "logarithmic" type.
Coupled with the BootstrapChartValueAxisBuilder.MaxValue method,.
Adds a pixel-measured empty space between two side-by-side value axes. Applies if several value axes are located on one side of the chart.
Specifies the name of the value axis.
Binds the value axis to a pane.
Relocates the axis.
Specifies whether to show zero on the value axis.
Declares a collection of strips belonging to the argument axis.
Synchronizes two or more value axes with each other at a specific value..
Sets the type of values.
Specifies whether the axis line is visible. | https://docs.devexpress.com/ASPNETCoreBootstrap/DevExpress.AspNetCore.Bootstrap.BootstrapChartValueAxisBuilder._members | 2018-10-15T10:55:55 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.devexpress.com |
Contents Now Platform Administration Previous Topic Next Topic Instance Usage ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Instance Usage The System Usage modules track usage for ServiceNow applications and for ServiceNow Store apps. The usage analytics process collects data on all your instances and regularly updates the reports in the Usage Overview and ServiceNow Store Usage Overview reports. System Usage Overview reportsThe Usage Overview modules display reports on usage of ServiceNow applications and ServiceNow Store apps. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/subscription-management/concept/c_UsageAnalyticsModule.html | 2018-10-15T11:03:25 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
Contents Now Platform User Interface Previous Topic Next Topic Control access to a top search widget ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Control access to a top search widget You can control who has access to top searches widgets by restricting who can add content to homepages or by applying roles to the widget. Before you beginRole required: admin Procedure Navigate to System UI > Widgets. Select Text Search. Click the edit user roles icon and select the required access rights, then click Done. Click Update. Related TasksUse a top search widgetDefine a table for a top searchView user preference for time periodUpdate a top search statistic On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-platform-user-interface/page/administer/homepage-administration/task/t_ControlAccessToATopSearchWidget.html | 2018-10-15T11:03:50 | CC-MAIN-2018-43 | 1539583509170.2 | [] | docs.servicenow.com |
You can get an idea of the relative performance of inserts, updates, and selects by looking at the underlying statistics.
An example follows the table.
These VSD charts show the rates of reads and writes to the various tables in the application.
You can use these statistics to see that the NEW_ORDER table is growing over time. After the initial orders are loaded, new orders are being placed (created) faster than customers are paying for (destroying) them. | http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/manage_guide/Topics/vsd_tables.html | 2019-02-16T02:13:23 | CC-MAIN-2019-09 | 1550247479729.27 | [] | gemfirexd.docs.pivotal.io |
Connecting to a Cluster¶
Galaxy is designed to run jobs on your local system by default, but it can be configured to run jobs on a cluster. The front-end Galaxy application runs on a single server as usual, but tools are run on cluster nodes instead.
A general reference for the job configuration file is also available.
Distributed Resources Managers¶
Galaxy is known to work with:
- TORQUE Resource Manager
- PBS Professional
- Open Grid Engine
- Univa Grid Engine (previously known as Sun Grid Engine and Oracle Grid Engine)
- Platform LSF
- HTCondor
- Slurm
-must have a real shell configured in your name service (
/etc/passwd, LDAP, etc.). System accounts may be configured with a disabled shell like
/bin/false(Debian/Ubuntu) or
/bin/nologinFedora/RedHat.
- If Galaxy is configured to submit jobs as real user (see below) then the above must be true for all users of Galaxy.
-¶:
galaxy_user@galaxy_server% git clone /clusterfs/galaxy/galaxy-app
Then that directory should be accessible from all cluster nodes:
galaxy_user@galaxy_server% qsub -I qsub: waiting for job 1234.torque.server to start qsub: job 1234.torque.server ready galaxy_user@node1% cd /clusterfs/galaxy/galaxy-app galaxy_user@node1%
If your cluster nodes have Internet access (NAT is okay) and you want to run the data source tools (upload, ucsc, etc.) on the cluster (doing so is highly recommended), set
new_file_path in
galaxy.yml to a directory somewhere in your shared filesystem:
new_file_path: /clusterfs/galaxy/tmp
Additionally some of the runners including DRMAA may use the
cluster_files_directory for sharing files with the cluster, which defaults to
database/pbs. You may need to create this folder.
cluster_files_directory: database/pbs_job_output_collection option in
galaxy.yml first to see if this solves the problem.
Runner Configuration¶
This documentation covers configuration of the various runner plugins, not how to distribute jobs to the various plugins. Consult the job configuration file documentation for full details on the correct syntax, and for instructions on how to configure tools to actually use the runners explained below.
Local¶
Runs jobs locally on the Galaxy application server (no DRM).
Workers¶
It is possible to configure the number of concurrent local jobs that can be run by using the
workers attribute on the plugin.
<plugins> <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="8"/> </plugins>
Slots¶
For each destination using the local runner, it is possible to specify the number of CPU slots to assign (default is 1).
<destinations> <destination id="local_1slot" runner="local"/> <destination id="local_2slots" runner="local"> <param id="local_slots">2</param> </destination> </destinations>
The value of local_slots is used to define GALAXY_SLOTS.
DRMAA¶
Runs jobs via any DRM which supports the Distributed Resource Management Application API. Most commonly used to interface with PBS Professional, Sun Grid Engine, Univa Grid Engine, Platform LSF, and SLURM.
Dependencies¶
Galaxy interfaces with DRMAA via drmaa-python. The
drmaa-python module is provided with Galaxy, but you must tell it where your DRM’s DRMAA library is located, via the
$DRMAA_LIBRARY_PATH environment variable, for example:
galaxy_server% export DRMAA_LIBRARY_PATH=/galaxy/lsf/7.0/linux2.6-glibc2.3-x86_64/lib/libdrmaa.so galaxy_server% export DRMAA_LIBRARY_PATH=/galaxy/sge/lib/lx24-amd64/libdrmaa.so
DRM Notes¶¶).
<plugins> <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/> </plugins> <destinations default="sge_default"> <destination id="sge_default" runner="drmaa"/> <destination id="big_jobs" runner="drmaa"> <param id="nativeSpecification">-P bignodes -R y -pe threads 8</param> </destination> </destinations>
PBS¶
Runs jobs via the TORQUE Resource Manager. For PBS Pro, use DRMAA.
Dependencies¶
Galaxy uses the pbs_python module to interface with TORQUE. pbs_python must be compiled against your TORQUE installation, so it cannot be provided with Galaxy. You can install the package as follows:
galaxy_user@galaxy_server% git clone galaxy_user@galaxy_server% cd pbs-python galaxy_user@galaxy_server% source /clusterfs/galaxy/galaxy-app/.venv/bin/activate galaxy_user@galaxy_server% python setup.py install¶).
<plugins> <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/> </plugins> <destinations default="pbs_default"> <destination id="pbs_default" runner="pbs"/> <destination id="other_cluster" runner="pbs"> <param id="destination">@other.cluster</param> </destination> <destination id="long_jobs" runner="pbs"> <param id="Resource_List">walltime=72:00:00,nodes=1:ppn=8</param> <param id="-p">128</param> </destination> </destinations>
The value of ppn= is used by PBS to define the environment variable
$PBS_NCPUS which in turn is used by galaxy for GALAXY_SLOTS.
Condor¶
Runs jobs via the HTCondor DRM. There are no configurable parameters. Galaxy’s interface is via calls to HTCondor’s command line tools, rather than via an API.
<plugins> <plugin id="condor" type="runner" load="galaxy.jobs.runners.condor:CondorJobRunner"/> </plugins> <destinations> <destination id="condor" runner="condor"/> </destinations>”).
If you need to add additional parameters to your condor submission, you can do so by supplying
<param/>s:
<destinations> <destination id="condor" runner="condor"> <param id="Requirements">(machine == some.specific.host)</param> <param id="request_cpus">4</param> </destination> </destinations>
Pulsar¶.
CLI¶¶
The cli runner requires, at a minimum, two parameters:
shell_plugin
- This required parameter should be [a cli_shell class]()</a> currently one of:
LocalShell,
RemoteShell,
SecureShell,
ParamikoShell, or
GlobusSecureShelldescribing which shell plugin to use.
job_plugin
- This required parameter should be [a cli_job class]() currently one of
Torque,
SlurmTorque, or
Slurm.
All other parameters are specific to the chosen plugins. Parameters to pass to the shell plugin begin with the id
shell_ and parameters to pass to the job plugin begin with the id
job_.
Shell Plugins¶
The
RemoteShell plugin uses
rsh(1) to connect to a remote system and execute shell commands.
shell_username
- Optional user to log in to the remote system as. If unset uses
rsh’s default behavior (attempt to log in with the current user’s username).
shell_hostname
- Remote system hostname to log in to.
shell_rsh
rsh-like command to excute (e.g.
<param id="shell_rsh">/opt/example/bin/remsh</param>) - just defaults to
rst.defaultdefault value is
gsi-ssh
The
ParamikoShell option was added in 17.09 with this pull request from Marius van den Beek.
Job Plugins¶
The
Torque plugin uses
qsub(1) and
qstat(1) to interface with a Torque server on the command line.
job_<PBS_JOB_ATTR></td>
<PBS_JOB_ATTR>refers to a
qsub(1B)or
pbs_submit(3B)argument/attribute (e.g.
<param id="job_Resource_List">walltime=24:00:00,ncpus=4</param>).>
<plugins> <plugin id="cli" type="runner" load="galaxy.jobs.runners.cli:CLIJobRunner"/> </plugins> <destinations default="cli_default"> <destination id="cli_default" runner="cli"> <param id="shell_plugin">SecureShell</param> <param id="job_plugin">Torque</param> <param id="shell_hostname">cluster.example.org</param> </destination> <destination id="long_jobs" runner="cli"> <param id="job_Resource_List">walltime=72:00:00,nodes=1:ppn=8</param> <param id="job_-p">128</param> </destination> </destinations>
Most options available to
qsub(1b) and
pbs_submit(3b) are supported. Exceptions include
-o/Output_Path,
-e/Error_Path, and
-N/Job_Name since these PBS job attributes are set by Galaxy.
Submitting Jobs as the Real User¶¶ prior to running the job, and back to the Galaxy user once the job has completed. It does this by executing a site-customizable script via sudo.
- Two possibilities to determine the system user that corresponds to a galaxy user are implemented: i) the user whos name matches the Galaxy user’s email address (with the @domain stripped off) and ii) the user whos name is equal to the galaxy user name. Until release 17.05 only the former option is available. The latter option is suitable for Galaxy installations that user external authentification (e.g. LDAP) against a source that is also the source of the system users.
- The script accepts a path and does nothing to ensure that this path is a Galaxy working directory per default (and not at all up to release 17.05). So anyone who has access to the Galaxy user could use this script and sudo to change the ownership of any file or directory. Furthermore, anyone with write access to the script could introduce arbitrary (harmful) code – so it might be a good idea to give write access only to trustworthy users, e.g., root.
Configuration¶).
For releases later than 17.05 you can configure the method how the system user is determined in
config/galaxy.yml via the variable
real_system_username. For determining the system user from the email adress stored in Galaxy set it to
user_email, otherwise for determining the system user from the Galaxy user name set it to
username.
Once these are set, you must set the
drmaa_external_* and
external_chown_script settings in the Galaxy config and configure
sudo(8) to allow them to be run. A sudo config using the three scripts set in the sample
galaxy.yml would be:
galaxy ALL = (root) NOPASSWD: SETENV: /opt/galaxy/scripts/drmaa_external_runner.py galaxy ALL = (root) NOPASSWD: SETENV: /opt/galaxy/scripts/drmaa_external_killer.py galaxy ALL = (root) NOPASSWD: SETENV: /opt/galaxy/scripts/external_chown_script.py
If your sudo config contains
Defaults requiretty, this option must be disabled.
For Galaxy releases > 17.05, the sudo call has been moved to
galaxy.yml and is thereby configurable by the Galaxy admin. This can be of interest because sudo removes
PATH,
LD_LIBRARY_PATH, etc. variables per default in some installations. In such cases the sudo calls in the three variables in galaxy.yml can be adapted, e.g.,
sudo -E PATH=... LD_LIBRARY_PATH=... /PATH/TO/GALAXY/scripts/drmaa_external_runner.py. In order to allow setting the variables this way adaptions to the sudo configuration might be necessary. For example, the path to the python inside the galaxy’s python virtualenv may have to be inserted before the script call to make sure the virtualenv is used for drmaa submissions of real user jobs.
drmaa_external_runjob_script: sudo -E .venv/bin/python scripts/drmaa_external_runner.py --assign_all_groups
Also for Galaxy releases > 17.05: In order to allow
external_chown_script.py to chown only path below certain entry points the variable
ALLOWED_PATHS in the python script can be adapted. It is sufficient to include the directorries
job_working_directory and
new_file_path as configured in
galaxy.yml.
It is also a good idea to make sure that only trusted users, e.g. root, have write access to all three scripts.
Some maintenance and support of this code will be provided via the usual Support channels, but improvements and fixes would be greatly welcomed, as this is a complex feature which is not used by the Galaxy Development Team.
Special environment variables for job resources¶
Galaxy tries to define special environment variables for each job that contain the information on the number of available slots and the amount of available memory:
GALAXY_SLOTS: number of available slots
GALAXY_MEMORY_MB: total amount of available memory in MB
GALAXY_MEMORY_MB_PER_SLOT: amount of memory that is available for each slot in MB
More precisely Galaxy inserts bash code in the job submit script that tries to determine these values. This bash code is defined here:
- lib/galaxy/jobs/runners/util/job_script/CLUSTER_SLOTS_STATEMENT.sh
- lib/galaxy/jobs/runners/util/job_script/MEMORY_STATEMENT.sh
If this code is unable to determine the variables, then they will not be set.
Therefore in the tool XML files the variables should be used with a default,
e.g.
\${GALAXY_SLOTS:-1} (see also).
In particular
GALAXY_MEMORY_MB and
GALAXY_MEMORY_MB_PER_SLOT are currently
defined only for a few cluster types. Contributions are very welcome, e.g. let
the Galaxy developers know how to modify that file to support your cluster. | https://docs.galaxyproject.org/en/master/admin/cluster.html | 2019-02-16T02:11:45 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.galaxyproject.org |
Proof-of-stake (PoS)
A consensus protocol ensures that every transaction is replicated and recorded in all the machines in the network in the same order.
In this section of the documentation we will look at a special category of consensus protocols, proof-of-stake protocols, and describe Kowala's approach to the problem and the main properties of the project elements related to the consensus protocol. This section is heavily based on the work of Ethan Buchman as well as on other resources provided by the Tendermint/Cosmos team and resources provided by the Ethereum project.
The Kowala project has its own implementation of the Tendermint protocol. Tendermint is a weakly synchronous, Byzantine fault tolerant, state machine replication protocol, with optimal Byzantine fault tolerance and additional accountability guarantees in the event the BFT assumptions are violated. There are varying ways to implement Proof-of-Stake algorithms, but the two major tenets in Proof-of-Stake design are chain-based PoS and Byzantine Fault Tolerant-based PoS. Kowala implements a hybrid of both - strictly choosing consistency over availability. Some of the main properties of the Kowala project are:
- The codebase is based on Ethereum's go.
- Fast-finality (1 second confirmations) - We believe that fast confirmations will be essential for mass adoption.
- On-chain dynamic validator set management (registry/in-protocol penalties) via genesis smart contracts.
The order at which information is presented is intentional: | https://docs.kowala.tech/consensus/proof-of-stake/ | 2019-02-16T02:11:02 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.kowala.tech |
Contents Now Platform User Interface Previous Topic Next Topic Disable user presence Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Disable user presence You can disable user presence globally by enabling a system property. Before you beginRole required: admin About this taskEnabling the property turns off all user presence features. Procedure Navigate to sys_properties.list. Locate the property named glide.ui.presence.disabled. Set the Value to true. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-platform-user-interface/page/use/navigation/task/t_DisableUserPresence.html | 2019-02-16T01:53:22 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.servicenow.com |
When you reload an existing application, it shuts down and then reloads.
Procedure
- In the Deployed Applications section of the Application Management page of a tc Runtime instance or group, select the checkbox to the left of one or more applications you want to reload.
- Click Reload.
Results
The application begins to run and the status of the application changes to Running. | https://docs.vmware.com/en/vRealize-Hyperic/5.8.4/com.vmware.hyperic.resource.configuration.metrics.doc/GUID-B721F425-3B62-49DF-B0D4-966EBBAC53B3.html | 2019-02-16T01:15:26 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.vmware.com |
On a fresh macOS installation there are three empty directories for add-ons available to all users:
/Library/Ruby
/Library/Python
/Library/Perl
Starting with OS X Lion (10.7), you need
sudo to install to these like so:
sudo gem install,
sudo easy_install or
sudo cpan -i.
An option to avoid sudo is to use an access control list. For example:
chmod +a 'user:<YOUR_NAME_HERE> allow add_subdirectory,add_file,delete_child,directory_inherit' /Library/Python/3.y/site-packages
will let you add packages to Python 3.y as yourself, which is probably safer than changing the group ownership of the directory.
Habit maybe?
One reason is executables go in
/usr/local/bin. Usually this isn’t a writable location. But if you installed Homebrew as we recommend,
/usr/local will be writable without sudo. So now you are good to install the development tools you need without risking the use of sudo.
Rather than changing the rights on
/Library/Python, we recommend the following options:
Note,
easy_install is deprecated. We install
pip (or
pip2 for Python 2) along with python/python2.
We set up distutils such that
pip install will always put modules in
$(brew --prefix)/lib/pythonX.Y/site-packages and scripts in
$(brew --prefix)/share/python. Therefore, you won’t need sudo!
Do
brew info python or
brew info python@2 for precise information about the paths. Note, a brewed Python still searches for modules in
/Library/Python/X.Y/site-packages and also in
~/Library/Python/X.Y/lib/python/site-packages.
This is only recommended if you don’t use a brewed Python.
On macOS, any Python version X.Y also searches in
~/Library/Python/X.Y/lib/python/site-packages for modules. That dir might not yet exist, but you can create it:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
To teach
easy_install and
pip to install there, either use the
--user switch or create a
~/.pydistutils.cfg file with the following content:
[install] install_lib = ~/Library/Python/$py_version_short/lib/python/site-packages
Virtualenv ships
pip and creates isolated Python environments with separate site-packages, therefore you won’t need sudo.
If you use rbenv or RVM then you should ignore this stuff
Brewed Ruby installs executables to
$(brew --prefix)/opt/ruby/bin without sudo. You should add this to your path. See the caveats in the
ruby formula for up-to-date information.
To make Ruby install to
/usr/local, we need to add
gem: -n/usr/local/bin to your
~/.gemrc. It’s YAML, so do it manually or use this:
echo "gem: -n/usr/local/bin" >> ~/.gemrc
However, all versions of RubyGems before 1.3.6 are buggy and ignore the above setting. Sadly a fresh install of Snow Leopard comes with 1.3.5. Currently the only known way to get around this is to upgrade rubygems as root:
sudo gem update --system
Just install everything into the Homebrew prefix like this:
echo "export GEM_HOME=\"$(brew --prefix)\"" >> ~/.bashrc
Note, maybe you shouldn’t do this on Lion, since Apple has decided it is not a good default.
If you ever did a
sudo gem, etc. before then a lot of files will have been created owned by root. Fix with:
sudo chown -R $USER /Library/Ruby /Library/Perl /Library/Python
The Perl module
local::lib works similarly to rbenv/RVM (although for modules only, not perl installations). A simple solution that only pollutes your
/Library/Perl a little is to install
local::lib with sudo:
sudo cpan local::lib
Note that this will install some other dependencies like
Module::Install. Then put the appropriate incantation in your shell’s startup, e.g. for
.bash_profile you insert the below, for others see the
local::lib docs.
eval $(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib)
Now (after you restart your shell)
cpan or
perl -MCPAN -eshell etc. will install modules and binaries in
~/perl5 and the relevant subdirectories will be in your
PATH and
PERL5LIB etc.
If you don’t even want (or can’t) use sudo for bootstrapping
local::lib, just manually install
local::lib in
~/perl5 and add the relevant path to
PERL5LIB before the
.bashrc eval incantation.
Another alternative is to use
perlbrew to install a separate copy of Perl in your home directory, or wherever you like:
curl -L | bash perlbrew install perl-5.16.2 echo ".~/perl5/perlbrew/etc/bashrc" >> ~/.bashrc
© 2009–present Homebrew contributors
Licensed under the BSD 2-Clause License. | https://docs.w3cub.com/homebrew/gems,-eggs-and-perl-modules/ | 2019-02-16T00:47:58 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.w3cub.com |
Troubleshooting¶
Unable to install pycurl library, getting main.ConfigurationError: Could not run curl-config?
Luckily, we have faced this issue. If you ran the install script and still got this error, you can let us know. If not, check this issue on how to fix it.
Unable to run OWTF because of ImportError: No module named cryptography.hazmat.bindings.openssl.binding?
This actually means you do not have cryptography python module installed. It is recommended to rerun the install script (or) to just install the missing python libraries using the following command.
pip2 install --upgrade -r install/owtf.pip
Unable to run OWTF because of TypeError: parse_requirements() missing 1 required keyword argument: ‘session’
This is because of an older version of pip installed in your System. To resolve this run the following commands
pip install --upgrade pip (run as root if required) python install/install.py | http://docs.owtf.org/en/develop/troubleshooting.html | 2019-02-16T02:16:03 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.owtf.org |
How Client/Server Connections Work
The server pools in your Pivotal GemFire client processes manage all client connection requests to the server tier. To make the best use of the pool functionality, you should understand how the pool manages the server connections.
Client/server communication is done in two distinct ways. Each kind of communication uses a different type of connection for maximum performance and availability.
- Pool connections. The pool connection is used to send individual operations to the server to update cached data, to satisfy a local cache miss, or to run an ad hoc query. Each pool connection goes to a host/port location where a server is listening. The server responds to the request on the same connection. Generally, client threads use a pool connection for an individual operation and then return the connection to the pool for reuse, but you can configure to have connections owned by threads. This figure shows pool connections for one client and one server. At any time, a pool may have from zero to many pool connections to any of the servers.
Subscription connections. The subscription connection is used to stream cache events from the server to the client. To use this, set the client attribute
subscription-enabledto true. The server establishes a queue to asynchronously send subscription events and the pool establishes a subscription connection to handle the incoming messages. The events sent depend on how the client subscribes.
How the Pool Chooses a Server Connection
The pool gets server connection information from the server locators or, alternately, from the static server list.
- Server Locators. Server locators maintain information about which servers are available and which has the least load. New connections are sent to the least loaded servers. The pool requests server information from a locator when it needs a new connection. The pool randomly chooses the locator to use and the pool sticks with a locator until the connection fails.
- Static Server List. If you use a static server list, the pool shuffles it once at startup, to provide randomness between clients with the same list configuration, and then runs through the list round robin connecting as needed to the next server in the list. There is no load balancing or dynamic server discovery with the static server list.
How the Pool Connects to a Server
When a pool needs a new connection, it goes through these steps until either it successfully establishes a connection, it has exhausted all available servers, or the
free-connection-timeout is reached.
- Requests server connection information from the locator or retrieves the next server from the static server list.
- Sends a connection request to the server.
If the pool fails to connect while creating a subscription connection or provisioning the pool to reach the
min-connections setting, it logs a fine level message and retries after the time indicated by
ping-interval.
If an application thread calls an operation that needs a connection and the pool can’t create it, the operation returns a
NoAvailableServersException.
How the Pool Manages Pool Connections
Each
Pool instance in your client maintains its own connection pool. The pool responds as efficiently as possible to connection loss and requests for new connections, opening new connections as needed. When you use a pool with the server locator, the pool can quickly respond to changes in server availability, adding new servers and disconnecting from unhealthy or dead servers with little or no impact on your client threads. Static server lists require more close attention as the client pool is only able to connect to servers at the locations specified in the list.
The pool adds a new pool connection when one of the following happens:
- The number of open connections is less than the
Pool’s
min-connectionssetting.
- A thread needs a connection, all open connections are in use, and adding another connection would not take the open connection count over the pool’s
max-connectionssetting. If the max-connections setting has been reached, the thread blocks until a connection becomes available.
The pool closes a pool connection when one of the following occurs:
- The client receives a connectivity exception from the server.
- The server doesn’t respond to a direct request or ping within the client’s configured
read-timeoutperiod. In this case, the pool removes all connections to that server.
- The number of pool connections exceeds the pool’s
min-connectionssetting and the client doesn’t send any requests over the connection for the
idle-timeoutperiod.
When it closes a connection that a thread is using, the pool switches the thread to another server connection, opening a new one if needed.
How the Pool Manages Subscription Connections
The pool’s subscription connection is established in the same way as the pool connections, by requesting server information from the locator and then sending a request to the server, or, if you are using a static server list, by connecting to the next server in the list.
Subscription connections remain open for as long as needed and are not subject to the timeouts that apply to pool connections.
How the Pool Conditions Server Load
When locators are used, the pool periodically conditions its pool connections. Each connection has an internal lifetime counter. When the counter reaches the configured
load-conditioning-interval, the pool checks with the locator to see if the connection is using the least loaded server. If not, the pool establishes a new connection to the least loaded server, silently puts it in place of the old connection, and closes the old connection. In either case, when the operation completes, the counter starts at zero. Conditioning happens behind the scenes and does not affect your application’s connection use. This automatic conditioning allows very efficient upscaling of your server pool. It is also useful following planned and unplanned server outages, during which time the entire client load will have been placed on a subset of the normal set of servers. | http://gemfire.docs.pivotal.io/92/geode/topologies_and_comm/topology_concepts/how_the_pool_manages_connections.html | 2019-02-16T01:11:18 | CC-MAIN-2019-09 | 1550247479729.27 | [] | gemfire.docs.pivotal.io |
Install Camunda Cycle
This document describes the installation procedure for Camunda Cycle. You can download a prepackaged distribution which includes Camunda Cycle and a Tomcat server. For information on how to configure the prepackaged distribution, refer to the section on Configure the Pre-packaged Distribution. You can also install Camunda Cycle on a vanilla Tomcat server. This procedure is explained in the section Install Camunda Cycle on a Vanilla Tomcat 7.
This installation guide also details how to configure the Cycle installation, including the setup of the email service, password encryption and the installation of custom connectors.
Installation Environment
We do not recommend to install Camunda Cycle together with the other platform components (webapps, engine, REST API) on the same runtime environment. A combined installation of designtime and runtime components on a single environment is not supported.
Download
Prepackaged Distribution
Download a prepackaged distribution of Camunda Cycle. This distribution includes Camunda Cycle deployed in an Apache Tomcat as well as the SQL scripts. Enterprise subscription customers use the enterprise download page.
Cycle Only
Download a Camunda Cycle from our NEXUS repository. Choose the correct version named
$CYCLE_VERSION/camunda-cycle-tomcat-$CYCLE_VERSION.war.
Database Scripts
Download scripts to create the database schema from our NEXUS repository. Choose the correct version named
$CYCLE_VERSION/camunda-cycle-sql-scripts-$CYCLE_VERSION.jar.
Create the Database Schema
Unless you are using the pre-packaged distribution and do not want to exchange the packaged H2 database, you have to first create a database schema for Camunda Cycle. The Camunda Cycle distribution ships with a set of SQL create scripts that can be executed by a database administrator.
The database creation scripts reside in the
sql/create folder:
camunda-cycle-distro-$CYCLE_VERSION.zip/sql/create/*_cycle.sql
There is an individual SQL script for each supported database. Select the script appropriate for your database and run it with your database administration tool (e.g., SqlDeveloper for Oracle).
We recommend to create a separate database or database schema for Camunda Cycle.
If you have not got the distro at hand, you can also download a file that packages these scripts from our server.
Choose the correct version named
$CYCLE_VERSION/camunda-cycle-sql-scripts-$CYCLE_VERSION.jar.
Install Camunda Cycle on a Vanilla Tomcat 7
You can download the Camunda Cycle web application from our server.
Choose the correct version named
$CYCLE_VERSION/camunda-cycle-tomcat-$CYCLE_VERSION.war.
Create a Datasource
The Cycle datasource is configured in the Cycle web application in the file
META-INF/context.xml. It should be named
jdbc/CycleDS.
In order to use a custom datasource name, you have to edit the file
WEB-INF/classes/META-INF/cycle-persistence.xml in the Cycle web application file.
In order to use the
org.apache.tomcat.jdbc.pool.DataSourceFactory, you need to add the driver of the database you use to the
$TOMCAT_HOME/lib folder.
For example, if you plan to use the H2 database, you would have to add the h2-VERSION.jar.
Tomcat 6.x
On Tomcat 6, you will also have to add the tomcat-jdbc.jar, which comes with Tomcat 7 and the pre-packaged Camunda Cycle distribution, to
$TOMCAT_HOME/lib.
Install the Web Application
- Copy the Cycle war file to
$TOMCAT_HOME/webapps. Optionally you may rename it or extract it to a folder to deploy it to a specific context like
/cycle.
- Start Tomcat.
- Access Camunda Cycle on the context you configured. If Cycle is installed correctly, a screen should appear that allows you to create an initial user. The initial user has administrator privileges and can be used to create more users once you have logged in.
Configure the Pre-packaged Distribution
The distribution comes with a preconfigured H2 database used by Cycle.
The H2 JDBC driver is located at
camunda-cycle-distro-$CYCLE_VERSION.zip/server/apache-tomcat-VERSION/lib/h2-VERSION.jar.
Exchange the Database
To exchange the preconfigured H2 database with your own, e.g., Oracle, you have to do the following:
- Copy your JDBC database driver JAR file to
$TOMCAT_HOME/lib.
- Open
$TOMCAT_HOME/webapps/cycle/META-INF/context.xmland edit the properties of the
jdbc/CycleDSdatasource definition.
Configuration
Configure Email
Note: This step is optional and can be skipped if you do not require Cycle to send a welcome email to newly created users.
Java Mail Library
You need to install the java mail library when NOT using the prepackaged distribution. Download version 1.4.x manually from and copy it into your
$TOMCAT_HOME/lib folder.
In order to use the Cycle email service, you have to configure a mail session in the
META-INF/context.xml file in the Cycle web application.
By default, Cycle looks up a mail session using the JNDI Name
mail/Session.
The name of the mail session to look up can be changed by editing the following file in the Cycle web application:
WEB-INF/classes/spring/configuration.xml
The file defines a Spring Bean named
cycleConfiguration. On this spring bean, set the JNDI name of the Mail Session to a custom name:
<bean id="cycleConfiguration" class="org.camunda.bpm.cycle.configuration.CycleConfiguration"> <!-- ... --> <!-- Cycle email service configuration --> <property name="emailFrom" value="cycle@localhost" /> <property name="mailSessionName" value="my/mail/Session" /> <!-- ... --> </bean>
Configure Connector Password Encryption
Connector passwords are encrypted before they are stored in the Cycle database using the PBEWithMD5AndDES algorithm implementation.
Encryption Key
Cycle uses a default key to encrypt passwords (contained in the source code and hence not really secure). If you want to improve security you can exchange the encryption password by creating a file
$USER_HOME/cycle.password containing a self chosen plain ASCII password.
Add Connectors
You can add own Connectors in form of JAR files to your Camunda Cycle installation. Just follow these steps to add a new Connector.
- Copy the JAR file which contains the Connector implementation to
$TOMCAT_HOME/webapps/cycle/WEB-INF/lib.
- Edit the
$TOMCAT_HOME/webapps/cycle/WEB-INF/classes/spring/connector-configurations.xmlfile and include a variation of the following snippet:
<div class="app-source" data-</div>
After adding the JAR file and updating the Connector configuration file, you can start the server. The added Connector appears in the Add Connector dialog and can be used to create roundtrips.
<div class="bootstrap-code"> <script type="text/xml" id="connector-configurations.xml"> <bean name="svnConnectorDefinition" class="org.camunda.bpm.cycle.entity.ConnectorConfiguration"> <property name="name" value="Subversion Connector"/> <property name="connectorClass" value="org.camunda.bpm.cycle.connector.svn.SvnConnector"/> <property name="properties"> <map> <entry key="repositoryPath" value=""></entry> </map> </property> </bean> </script> <script type="text/ng-template" id="code-annotations"> { "connector-configurations.xml": { "svnConnectorDefinition": "The name of the bean handling the Connector. Choose one which represents the functionality of the Connector and is not taken yet." , "Subversion Connector": "The name of the Connector as it appears in the Add Connector dialog.", "org.camunda.bpm.cycle.connector.svn.SvnConnector": "The qualified name of the class which contains the implementation of the Connector.", "entry" : "Properties which are needed by the Connector (e.g. service URL, proxy settings, etc.)" } } </script> </div>
Migration
Migrate from 3.0 to 3.1
We updated the database schema of Camunda Cycle in the version 3.1.0. So please update your database schema using the migration scripts provided in the
sql/upgrade folder of the Camunda Cycle distribution:
camunda-cycle-distro-$CYCLE_VERSION.zip/sql/upgrade/*_cycle_3.0_to_3.1.sql
There is an individual SQL script for each supported database. Select the script appropriate for your database and run it with your database administration tool (e.g. SqlDeveloper for Oracle). | https://docs.camunda.org/manual/7.4/installation/cycle/ | 2019-02-16T01:55:42 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.camunda.org |
...
||Select V-Ray Volume Grid|| > Modify panel > Rendering rollout > Volumetric Options... button > Volumetric Render Settings window
Parameters
...
Based on.
Texture.
Opacity Diagram | transp_t, transp_s, transp_v, transp_f – This diagram specifies the transparency/opacity at given point as a function of the channel selected in the Based on parameter. The selected channel's data range is denoted by a blue-green line.
... | https://docs.chaosgroup.com/pages/diffpagesbyversion.action?pageId=38572102&selectedPageVersions=2&selectedPageVersions=3 | 2019-02-16T02:02:06 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.chaosgroup.com |
Configuring & Troubleshooting our Amazon Web Services (AWS) Addon
Covered in this article:
This add-on is available in the Plus and Pro editions of MemberPress.
Overview
MemberPress AWS can be downloaded from here: MemberPress Add-Ons
What is Amazon AWS?
Amazon Web Services (AWS) is a set of tools provided by Amazon to help people host websites, files and do many other things. Amazon S3 is one of the services offered in Amazon AWS ... it allows you to upload, host and protect files and be backed by the reliability, security and speed of Amazon.com! Amazon AWS is widely used by individuals, fortune 500 companies and every other type of entity in between.
You can sign up for Amazon AWS very easily -- it's pay by usage and quite inexpensive.
Uploading Your Files To Amazon S3
Once you've signed up and are accessing your AWS Management Console you can click on the S3 button here:
Then you'll want to create a "Bucket" ... which is basically like a folder:
Make sure your bucket names only contain *lowercase* letters, numbers and dashes ... and that it starts & ends with a lowercase letter. You can read more about these restrictions on Amazon's AWS developer resource website.
Once your bucket is in place, you can click on it ... from within your bucket you can then upload files, create folders and generally organize your files how ever you want:
Alternatively ... There are now some services & programs that connect with Amazon S3 ... a couple of these are Cyberduck (Mac only) and FTP2Cloud (similar to DropBox's interface).
Try to make sure that any of your folder names and files only contain uppercase letters, lowercase letters, numbers or dashes ... please try to avoid spaces & special characters in your folder and file names. While this isn't specifically forbidden by Amazon S3's guidelines, special characters (including spaces) have been known to interfere with MemberPress AWS's ability to work properly.
Retrieving Your Amazon Security Credentials
Before you can start using MemberPress AWS you have to enter the Amazon Security credentials associated with your account into your MemberPress website's Options page.
To get your security credentials, go to the upper right hand corner of your AWS management console, click your name and select "Security Credentials":
You might then see this pop-up if you haven't accessed this area before. Choose "Get Started with IAM Users".
Next choose to add a user and enter a username with Access type* selected as "Programmatic access".
Now you'll set permissions for this new user. Under the filter search for "AmazonS3" to bring up the S3 permissions options.
Select the "AmazonS3ReadOnlyAccess" option and click "Next".
Review everything and make sure it looks like the screenshot below. If so, click "Create User".
Lastly you'll need to download the CSV file and keep it safe somewhere.
Now you're ready to paste these two keys into the AWS tab in MemberPress -> Options:
It's IMPORTANT to make sure that your buckets and the files in your buckets are not publicly accessible, as that would defeat any protection of direct access to those files.
Using MemberPress AWS
Now that you've got MemberPress and the MemberPress AWS add-on installed, you can start using the following shortcodes in your pages, posts and custom post types:
A shortcode that displays an expiring url to your protected file:
[mepr-s3-url src="coolbucket/coolfile.zip"]
A shortcode that creates a link to an expiring url to your protected file:
[mepr-s3-link src="coolbucket/anothercoolfile.pdf"]Download My E-Book[/mepr-s3-link]
A shortcode that embeds a protected audio file (using an encoded Media Element Player and expiring urls):
[mepr-s3-audio src="coolbucket/coolaudiofile.mp3"]
A shortcode that embeds a protected video file (using an encoded Media Element Player and expiring urls):
[mepr-s3-video src="coolbucket/coolmp4s/coolvideofile.mp4"]
Shortcode Options
Common Options
These options are available for all of the MemberPress AWS shortcodes:
src: This is a way to identify the Amazon S3 bucket and path to the protected Amazon file. These values are formatted "<bucket>/<file path>" -- for example if I had an S3 bucket named 'mycoolzips' and I had a file within that bucket called 'funny.zip' and wanted to create a temporary link or embedded media resource then you'd have an src equal to 'mycoolzips/funny.zip' ...
rule: This identifies the id of the rule that should be used to determine whether or not to display the shortcode.
expires: The time that the amazon link will be valid. This can be any value that would be accepted by PHP's time functions. A good example would be "+5 minutes" for a link that expires within 5 minutes or "+30 seconds" for a link expiring in 30 seconds
target: Set to "new" to open links in a new tab. This option only applies to the mepr-s3-link shortcode.
download: Set to "force" to force the user's browser to download the file when they click the link instead of opening it in the browser. This option only applies to the mepr-s3-url and mepr-s3-link shortcodes.
mepr-s3-url
This shortcode is used to calculate and print out an Amazon S3 expiring link.
mepr-s3-link
This shortcode is used to calculate an Amazon S3 expiring link and display it as the href of a link. This shortcode needs to wrap the text you want displayed for this link.
mepr-s3-audio & mepr-s3-video
These shortcodes are used to embed protected audio and video files onto your pages and posts. Aside from the expire and rule attributes ... and the fact that the src attributes represent file paths for Amazon AWS S3 these shortcodes behave exactly like the audio and video shortcodes built-in to WordPress itself ... including the ability to include fallback video files to maximize compatibility across all browsers.
You can also use any of the shortcode attributes supported by WordPress video shortcodes here:
Considerations about Audio and Video
Since the underlying links to audio and video content displayed with these shortcodes are expiring links ... if any file lasts longer than the expiration time there can be issues with users scrubbing video forward or back.
When determining your expire time take these facts into account:
- The shorter the expiration of the links is the more chance there is for issues with the user experience ...
- The longer the expiration of the links is the more chance there is for users to download your video content.
AWS V4 Signatures and so we still recommend you stay with the old signature formats if you can. Some newer Amazon AWS regions only support V4 signatures though. If you decide to use V4 signatures you'll need to be prepared to also supply MemberPress AWS with the AWS region that you're using. | https://docs.memberpress.com/article/91-amazon-web-services | 2019-02-16T01:46:51 | CC-MAIN-2019-09 | 1550247479729.27 | [array(['https://www.memberpress.com/wp-content/uploads/2013/02/s3.png',
's3'], dtype=object)
array(['https://www.memberpress.com/wp-content/uploads/2013/02/s3bucket.png',
's3bucket'], dtype=object)
array(['https://www.memberpress.com/wp-content/uploads/2013/02/s3upload.png',
's3upload'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f21f042863379ddca96a/file-HP68dZoHK7.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f2682c7d3a40f0ed743e/file-67JmwxTz4c.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f2aa042863379ddca96f/file-7M6VKhkoeq.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f2ee2c7d3a40f0ed7442/file-2mdQdR30gr.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f34d2c7d3a40f0ed7445/file-XiwDrUMRGb.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f3842c7d3a40f0ed7448/file-Eu3O2LJRi4.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/588bba722c7d3a784630623a/images/59e0f3ba042863379ddca980/file-zN0u4K9J6k.png',
None], dtype=object)
array(['https://www.memberpress.com/wp-content/uploads/2013/02/skitch-1.png',
'skitch-1'], dtype=object) ] | docs.memberpress.com |
Lock Escalation
Lock escalation is the process of converting many fine-grain locks, such as a row, into fewer coarse-grain locks, such as a table. Using lock escalation reduces system overhead.
Microsoft SQL Server Compact 4.0 automatically escalates row locks and page locks into table locks when a transaction exceeds its escalation threshold. In SQL Server Compact 4.0, lock escalation can occur from row to table or from page to table, but not from row to page. When the escalation occurs at the table level, no requests can be made for any lock lower than a table.
For example, when a transaction operates on rows from a table, SQL Server Compact 4.0 automatically acquires locks on those rows affected and puts higher-level intent locks on the pages and table which contain those rows. Any relevant index pages are also locked. When the number of locks held by the transaction exceeds its threshold, SQL Server Compact 4.0 tries to change the intent lock on the table to a stronger lock. For example, an intent exclusive (IX) lock would change to an exclusive (X) lock. After acquiring the stronger lock, all page- and row-level locks held by the transaction on the table are released.
Lock escalation occurs on a per-table basis when a request for a lock causes a specific lock escalation threshold to be exceeded. All sub-table level locks, regardless of type, are counted toward the threshold. The escalation threshold should only be considered as an approximate value, because any locks required by internal operations count toward that threshold. Escalation might occur earlier than expected.
If escalation is not possible because of a lock conflict, the transaction will continue and might try to escalate again later.
Note
Intent locks, row locks, and page locks all count toward the escalation count, unless they are temporary table locks. When the total of intent locks, row locks and page locks on a specific table exceeds the escalation threshold, escalation occurs.
You can control the lock escalation per session by setting the lock escalation threshold, as the following code example shows:
SET LOCK_ESCALATION 1000;
This setting affects all tables in the database. The default is 100.
See Also
Concepts
Displaying Locking Information | https://docs.microsoft.com/en-us/previous-versions/sql/compact/sql-server-compact-4.0/ms172010(v=sql.110) | 2019-02-16T01:49:52 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.microsoft.com |
Contains settings used to customize the Data Source wizard used to create new data sources in the DashboardDesigner.
Namespace: DevExpress.DashboardWin
Assembly: DevExpress.Dashboard.v18.2.Win.dll
public class DashboardDataSourceWizardSettings : SqlWizardSettings
Public Class DashboardDataSourceWizardSettings Inherits SqlWizardSettings
The DashboardDesigner exposes the DashboardDesigner.DataSourceWizardSettings property returning the DashboardDataSourceWizardSettings and allowing you to customize the Data Source wizard settings. To learn more, see How to Customize the Data Source Wizard. | https://docs.devexpress.com/Dashboard/DevExpress.DashboardWin.DashboardDataSourceWizardSettings | 2019-02-16T01:26:34 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.devexpress.com |
Contents Now Platform Capabilities Previous Topic Next Topic Define an example wizard variable Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Define an example wizard variable Define an example On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-servicenow-platform/page/administer/wizards/task/t_DefineAWizardVariable_1.html | 2019-02-16T02:03:53 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.servicenow.com |
Recently Viewed Topics
Configure Splunk to Forward Data
The following procedure is performed on the Splunk Indexer that you want to forward data to the LCE Splunk Client.
Steps
- Access Splunk Web as a user with Administrator privileges.
At the top of the Splunk Web interface, click Settings, and then click Forwarding and receiving.
The Forwarding and receiving page appears.
In the Configure forwarding row, in the Actions column, click the Add new link.
The Add new page appears.
In the Host box, type the IP address of the LCE Splunk Client host, and then click the Save button.
The IP address is saved. On the Splunk Web interface, the IP address appears on the Forward data page.
- Access the Splunk Indexer as the root user.
Edit the outputs.conf file, usually located at /opt/splunk/etc/system/local/outputs.conf. The lines you must add appear in bold.
[tcpout]
defaultGroup = default
disabled = 0
indexAndForward = 1
[tcpout-server://LCE_IP_OR_Hostname:9800]
[tcpout:default]
disabled = 0
server = LCE_IP_OR_Hostname:9800
sendCookedData = false
Save the file, and then restart the Splunk services.
Data will now be forwarded to the LCE Splunk Client. | https://docs.tenable.com/lce/Content/LCE_SplunkClient/SPL_ConfigureSplunk.htm | 2019-02-16T02:00:38 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.tenable.com |
4.2.2.8. ConnectionMaxLife¶
4.2.2.8.1. Synopsis¶
Limit the sessions’ connection time. By default the value is 0 which means unlimited, there is no deconnection at all, but you can force users to be disconnected. The default unit value is second, but you can change it by adding a suffix to the value (s: seconds, m: minutes, etc…).
4.2.2.8.3. Examples¶
Here is a basic example where the maximum connection time is 24h, the group admin is unlimited and managers groups inherits of the default settings
<Default> Home /home ConnectionMaxLife 24h </Default> <Group admins> ConnectionMaxLife 0 </User> <Group managers> Home /home/managers </User> | https://mysecureshell.readthedocs.io/en/latest/tags/childs/connectionmaxlife.html | 2019-02-16T01:47:18 | CC-MAIN-2019-09 | 1550247479729.27 | [] | mysecureshell.readthedocs.io |
Amazon Redshift
Amazon Redshift is an enterprise-level, petabyte-scale, fully managed, data warehousing service. Redshift is unique in the following ways:
You can launch an Amazon Redshift cluster only in an EC2-VPC platform. EC2-Classic is not supported.
Database encryption is supported only when using a default encryption key assigned by Amazon Redshift or when using an on-site hardware security module (HSM). AWS Key Management Service (AWS KMS) and AWS CloudHSM are not supported.
Cross-region features, such as snapshot copying across regions and COPY from another region, are currently supported only between the Beijing Region and the Ningxia Region.
The following node types are available:
ds2.xlarge
ds2.8xlarge
dc1.large
dc1.8xlarge
The Amazon Redshift Getting Started guide and some tutorials in the Amazon Redshift Database Developer Guide use sample data hosted on Amazon S3 buckets that are not accessible. | http://docs.amazonaws.cn/en_us/aws/latest/userguide/redshift.html | 2019-02-16T02:09:48 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.amazonaws.cn |
In this section you find all referenced software downloads and configuration profiles that are needed to operate the VP75.
The Open Source slicing engine Slic3rPE (Slic3r Prusa Edition) is recommended to be used with the VP75 Additive Manufacturing System. On the project is maintained by PRUSA RESEARCH released as packages for all major platforms for download:
Download latest release at:
For Slic3r to be used with the VP75, the software has to be configured for the specific machine parameters. These parameters are provided by Kühling&Kühling as a ready to install profile bundle. Make sure to select the matching profile bundle for your specific hardware revision:
VP75 Rev. 1.x.x | http://docs.kuehlingkuehling.de/vp75/downloads | 2019-02-16T02:16:22 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.kuehlingkuehling.de |
Tutorials
Disclaimer
Please note that some of the tutorials and how-tos listed here use internal API. Please be aware that these may change in future releases and therefore may be unstable. This is due to the fact that we do not guarantee backward compatibility for internal API.
On this page we have listed several tutorials and how-tos. Our goal in providing these is to ease your development efforts and help you understand some concepts that we apply. | https://docs.camunda.org/manual/7.4/examples/tutorials/ | 2019-02-16T01:50:05 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.camunda.org |
How to duplicate existing page
To duplicate an existing page, please use a great free plugin: Wordpress Post Duplicator
After you install and activate the plugin (in WP Admin > Plugins > Post Duplicator > Activate),. | https://docs.lumbermandesigns.com/article/10-how-to-duplicate-existing-page | 2019-02-16T01:02:06 | CC-MAIN-2019-09 | 1550247479729.27 | [] | docs.lumbermandesigns.com |
Put Your Pen to the Paper
Write documentation alongside your code, reflecting changes to your code with updates to your documentation. Tight coupling of code and docs lowers the bar for developers and technical writers to maintain documentation.
Read the Docs provides the ability to write prose documentation that links to referential or API documentation, including other projects hosted in your organization, or open source projects hosted on readthedocs.org.
- reStructuredText support
- Markdown support
- Sphinx generated documentation
- Supports linking to Python, Ruby, JavaScript, Java, and many more languages.
Continuous Documentation
Your documentation is rebuilt with each new commit to your repository. We consider this Continuous Documentation. It allows for process to be added to your development workflow to ensure your documentation is consistently receiving the attention it deserves.
Documentation is hosted by each version or branch of your software, so that your documentation always reflects the software you support. Your documentation can now evolve alongside your software.
- Pull from Github
- Pull from Bitbucket
- Supports Git, Mercurial, Subversion, and CVS
Documentation You'll Use
Great documentation exists at the intersection of collecting and sharing information. Having the right tools can facilitate writing, which is the first step to increasing the value of your documentation. When you have documentation worth reading, it's still important to have the correct tools to find and use what your wrote.
With all your documentation in one spot, information can link projects together and everything is searchable and easily discoverable.
- Private internal documentation
- Public documentation for your website
- Securely Hosted
- Localization support
- Offline PDF and ePUB | https://readthedocs.com/features/ | 2019-02-16T00:56:04 | CC-MAIN-2019-09 | 1550247479729.27 | [] | readthedocs.com |
Warning! This page documents an old version of Telegraf, which is no longer actively developed. Telegraf v1.2 is the most recent stable version of Telegraf.
Telegraf is a plugin-driven server agent for collecting & reporting metrics... | http://docs.influxdata.com/telegraf/v0.10/ | 2017-03-23T06:15:41 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.influxdata.com |
“creature_battleground” table¶
The
creature_battleground table holds information on creatures
spawns which are used in battlegrounds.
guid¶
This references the “creature” table tables unique
guid for which the entry
is valid.
event1¶
The identifier for the event node in the battleground. Event nodes usually are defined in the battleground’s script.
Nodes are locations in a battleground where characters of each faction can perform actions, such as capturing a tower in Alterac Valley, or taking control over the stables in Arathi Basin.
event2¶
The state of the event node. Node status is defined differently in every battleground script.
Node events can occur for every node and usually describe changes due to character interaction. E.g. if stables in Arathi Basin are taken over by Alliance, the stables note state would become Alliance controlled. Similar node states exist for every battleground note.
Note
If you update battleground scripts and make changes to node status values ensure that you provide database update scripts which update the battleground events accordingly. | http://docs.getmangos.com/en/latest/database/world/creature-battleground.html | 2017-03-23T06:18:54 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.getmangos.com |
Edit this page
The same example also shows the declaration of custom cell renderers, namely and editor should be used for a cell. They can also define any different cell property that will be assumed for each matching cell.
For example, writing:
columns: [{ type: 'text' }]
Equals:
columns: [{ renderer: Handsontable.renderers.TextRenderer, editor: Handsontable.editors.TextEditor }]
This mapping is defined in file src/cellTypes.js | https://docs.handsontable.com/pro/1.5.1/tutorial-cell-types.html | 2017-03-23T06:16:21 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.handsontable.com |
Example Walkthroughs: Managing Access to Your Amazon S3 Resources
This topic provides the following introductory walkthrough examples for granting access to Amazon S3 resources. These examples use the AWS Management Console to create resources (buckets, objects, users) and grant them permissions. The examples then show you how to verify permissions using the command line tools, so you don't have to write any code. We provide commands using both the AWS Command Line Interface (CLI) and the AWS Tools for Windows PowerShell.
Example 1: Bucket Owner Granting Its Users Bucket Permissions
The IAM users you create in your account have no permissions by default. In this exercise, you grant a user permission to perform bucket and object operations.
Example 2: Bucket Owner Granting Cross-Account Bucket Permissions
In this exercise, a bucket owner, Account A, grants cross-account permissions to another AWS account, Account B. Account B then delegates those permissions to users in its account.
Managing object permissions when the object and bucket owners are not the same
The example scenarios in this case are about a bucket owner granting object permissions to others, but not all objects in the bucket are owned by the bucket owner. What permissions does the bucket owner need, and how can it delegate those permissions?
The AWS account that creates a bucket is called the bucket owner. The owner can grant other AWS accounts permission to upload objects, and the AWS accounts that create objects own them. The bucket owner has no permissions on those objects created by other AWS accounts. If the bucket owner writes a bucket policy granting access to objects, the policy does not apply to objects that are owned by other accounts.
In this case, the object owner must first grant permissions to the bucket owner using an object ACL. The bucket owner can then delegate those object permissions to others, to users in its own account, or to another AWS account, as illustrated by the following examples.
Example 3: Bucket Owner Granting Its Users Permissions to Objects It Does Not Own
In this exercise, the bucket owner first gets permissions from the object owner. The bucket owner then delegates those permissions to users in its own account.
Example 4: Bucket Owner Granting Cross-account Permission to Objects It Does Not Own
After receiving permissions from the object owner, the bucket owner cannot delegate permission to other AWS accounts because cross-account delegation is not supported (see Permission Delegation). Instead, the bucket owner can create an IAM role with permissions to perform specific operations (such as get object) and allow another AWS account to assume that role. Anyone who assumes the role can then access objects. This example shows how a bucket owner can use an IAM role to enable this cross-account delegation.
Before You Try the Example Walkthroughs
These examples use the AWS Management Console to create resources and grant permissions. And to test permissions, the examples use the command line tools, AWS Command Line Interface (CLI) and AWS Tools for Windows PowerShell, so you don't need to write any code. To test permissions you will need to set up one of these tools. For more information, see Setting Up the Tools for the Example Walkthroughs.
In addition, when creating resources these examples don't use root credentials of an AWS account. Instead, you create an administrator user in these accounts to perform these tasks.
About Using an Administrator User to Create Resources and Grant Permissions
AWS Identity and Access Management (IAM) recommends not using the root credentials of your AWS account to make requests. Instead, create an IAM user, grant that user full access, and then use that user's credentials to interact with AWS. We refer to this user as an administrator user. For more information, go to Root Account Credentials vs. IAM User Credentials in the AWS General Reference and IAM Best Practices in the IAM User Guide.
All example walkthroughs in this section use the administrator user credentials. If you have not created an administrator user for your AWS account, the topics show you how.
Note that to sign in to the AWS Management Console using the user credentials, you will need to use the IAM User Sign-In URL. The IAM console provides this URL for your AWS account. The topics show you how to get the URL. | http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access.html | 2017-03-23T06:20:42 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.aws.amazon.com |
Who were the Tuskegee Airmen?
During World War II
, the United
States Military, like so much of the nation, was segregated. Jim Crow Laws
kept blacks from entering public places such as libraries, restaurants, and
movie theaters. Although African Americans served in the armed forces, they
were restricted in the types of jobs and positions they could hold. On
April 3, 1939, Public Law 18 was passed which.
The program
for training an all black flying unit took place at the Tuskegee Institute
in Tuskegee, Alabama. The Institute, founded by Booker T. Washington in
1881, had a strong Civilian Pilot Training Program (CPTP) under the
direction of Charles Alfred Anderson, the nation's first African American
to earn a pilot's license. The Army chose Tuskegee as the training grounds
for the new segregated 99th Pursuit Squadron in January 1941 and the
"Tuskegee Airmen" took flight.
From 1941 to
1946 over 2000 African Americans completed training at Tuskegee and nearly
three quarters of them qualified as pilots while the remainder were trained
as navigators or support personnel. The 99th Pursuit Squadron was activated
and became the 99th Fighter Squadron in May 1942. The Tuskegee Airmen saw
combat in over 1500 missions in Europe and North Africa. Not one of the
bombers that the Tuskegee Airmen escorted was lost to enemy fire; the 99th
Fighter Squadron is the only U.S. squadron to hold that distinction during
the Second World War.
Although the
Tuskegee Airmen played an integral part in the outcome of World War II,
their most important victory was the one at home. Due to the bravery,
tenacity, and success of the Tuskegee Airmen, President Harry S. Truman
desegregated the United States Military in 1948.
Eleanor Roosevelt lends her support
First Lady
Eleanor Roosevelt | http://docs.fdrlibrary.marist.edu/tuskegee.shtml | 2017-03-23T06:11:06 | CC-MAIN-2017-13 | 1490218186780.20 | [] | docs.fdrlibrary.marist.edu |
Installation¶
You can install Gears with pip:
$ pip install Gears
If you want to use node.js-dependent compilers or compressors, you need to install other dependencies:
$ pip install gears-less # LESS $ pip install gears-stylus # Stylus $ pip install gears-handlebars # Handlebars $ pip install gears-coffeescript # CoffeeScript $ pip install gears-uglifyjs # UglifyJS $ pip install gears-clean-css # clean-css
Please note that all these compilers and compressors require node.js to be installed on your system.
It is strongly recommended to install Gears within activated virtualenv.
If you want to use one of available extensions (django-gears, Flask-Gears or gears-cli), please refer to its documentation instead. | http://gears.readthedocs.io/en/latest/installation.html | 2017-03-23T06:08:02 | CC-MAIN-2017-13 | 1490218186780.20 | [] | gears.readthedocs.io |
MVEL is very easy to use, and just as easy to integrate into your application. Let's take a quick look at a simple MVEL expression:
foo.name == "Mr. Foo"
This simple expression asks MVEL if the value of foo.name is equal to "Mr. Foo". Simple enough, but what exactly is foo in this case? Well, it can be at least two things:
A context object is something you can use as the base of your expression by which MVEL will attempt to map identifiers to. Consider the following example:
public class Person {
private String name;
public void setName(String name) { this.name = name; }
public String getName() { return this.name; }
}
Lets say we decide to make an instance of this particular class the context object for the expression, and we evaluate it like so:
Person personInst = new Person();
personInst.setName("Mr. Foo");
Object result = MVEL.eval("name == 'Mr. Foo'", personInst);
When we execute this expression using the eval() method, we will get a result. In this particular case, the value of result will be a Boolean true. The reason for this, is that when the expression name == 'Mr. Foo' is evaluated, MVEL looks at the context object to see if it contain a property/field called name and extracts it.
If we wanted to simply extract the value of name from the person instance, we could do that as well:
String result = (String) MVEL.eval("name", personInst);
assert "Mr. Foo".equals(result);
Pretty simple stuff. But what if we want to inject a bunch of variables? MVEL supports that too, and there is both and easy way and a more advanced way (dealing with resolvers – which we won't get to here). The easy way simply involves passing in a Map of variables (names and values), like so:
Map vars = new HashMap();
vars.put("x", new Integer(5));
vars.put("y", new Integer(10));
Integer result = (Integer) MVEL.eval("x * y", vars);
assert result.intValue() == 50; // Mind the JDK 1.4 compatible code :)
Now, so far we've just been looking at using MVEL as a purely interpreted tool. MVEL can also compile expressions to execute them much faster using a different API. Let's convert the last expression into a compiled version:
// The compiled expression is serializable and can be cached for re-use.
CompiledExpression compiled = MVEL.compileExpression("x * y");
Map vars = new HashMap();
vars.put("x", new Integer(5));
vars.put("y", new Integer(10));
// Executes the compiled expression
Integer result = (Integer) MVEL.executeExpression(compiled, vars);
assert result.intValue() == 50;
I hope that wets your appetite. Anyways, you can continue on to the Language Guide and Integration Guide's for lots more information.
Just FYI:
Code Listing #5 (HashMap) has an error. It should be using the overloaded version of eval that takes in a Map as the second argument.
Thanks, fixed! Remember, this documentation is a Wiki, and you're always welcome to register and edit the documents yourself if you see any errors =)
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/display/MVEL/Getting+Started+Guide | 2009-07-04T15:00:48 | crawl-002 | crawl-002-020 | [] | docs.codehaus.org |
):
class Dollar { int amount Dollar(int amount) { this.amount = amount } Dollar times(int value) { amount *= value; return this } Dollar divideBy(int value) { amount /= value; return this } }
With traditional JUnit code, we might test it as follows:
import org.junit.Test import org.junit.runner.JUnitCore class StandardTest { @Test void multiplyThenDivide() { assert new Dollar(10).times(6).divideBy(6).amount == 10 } } JUnitCore.main('StandardTest'):
import static net.saff.theories.assertion.api.Requirements.* import net.saff.theories.assertion.api.InvalidTheoryParameterException import net.saff.theories.runner.api.TheoryContainer class GroovyTheoryContainer extends TheoryContainer { def assume(condition) { try { assert condition } catch (AssertionError ae) { throw new InvalidTheoryParameterException(condition, is(condition)) } } def assumeMayFailForIllegalArguments(Closure c) { try { c.call() } catch (IllegalArgumentException e) { throw new InvalidTheoryParameterException(e, isNull()) } } }
Now, our test becomes:
import org.junit.* import org.junit.runner.* import net.saff.theories.methods.api.Theory import net.saff.theories.runner.api.* @RunWith(Theories) class PopperTest extends GroovyTheoryContainer { private log = [] // for explanatory purposes only public static int VAL1 = 0 public static int VAL2 = 1 public static int VAL3 = 2 public static int VAL4 = 5 @Theory void multiplyIsInverseOfDivide(int amount, int m) { assume m != 0 assert new Dollar(amount).times(m).divideBy(m).amount == amount log << [amount, m] } @After void dumpLog() { println log } } JUnitCore.main('PopperTest'):
JUnit version 4.3.1 .[[0, 1], [0, 2], [0, 5], [1, 1], [1, 2], [1, 5], [2, 1], [2, 2], [2, 5], [5, 1], [5, 2], [5, 5]] Time: 0.297 OK ):
// Java import net.saff.theories.methods.api.ParametersSuppliedBy; import java.lang.annotation.*; @Retention(RetentionPolicy.RUNTIME) @ParametersSuppliedBy(BetweenSupplier.class) public @interface Between { int first(); int last(); }
And the backing supplier (coded in Groovy):
import net.saff.theories.methods.api.* import java.util.* public class BetweenSupplier extends ParameterSupplier { public List getValues(test, ParameterSignature sig) { def annotation = sig.supplierAnnotation annotation.first()..annotation.last() } }
Now our Groovy test example could become:
import org.junit.* import org.junit.runner.* import net.saff.theories.methods.api.Theory import net.saff.theories.runner.api.* @RunWith(Theories) class PopperBetweenTest extends GroovyTheoryContainer { private int test, total // for explanatory purposes only @Theory void multiplyIsInverseOfDivide( @Between(first = -4, last = 2) int amount, @Between(first = -2, last = 5) int m ) { total++ assume m != 0 assert new Dollar(amount).times(m).divideBy(m).amount == amount test++ } @After void dumpLog() { println "$test tests performed out of $total combinations" } } JUnitCore.main('PopperBetweenTest')
When run, this yields:
JUnit version 4.3.1 .49 tests performed out of 56 combinations Time: 0.234 OK (1 test):
import net.saff.theories.methods.api.* import net.saff.theories.runner.api.* import org.junit.runner.* @RunWith(Theories.class) class BowlingTests extends GroovyTheoryContainer { public static Game STARTING_GAME = new Game() public static Game NULL_GAME = null public static Bowl THREE = new Bowl(3) public static Bowl FOUR = new Bowl(4) public static Bowl NULL_BOWL = null @DataPoint public Bowl oneHundredBowl() { new Bowl(100) } public static int ONE_HUNDRED = 100 public static int ZERO = 0 @Theory public void shouldBeTenFramesWithTwoRollsInEach(Game game, Bowl first, Bowl second) { assume game && first && second assume game.isAtBeginning() assume !first.isStrike() assume !second.completesSpareAfter(first) 10.times { game.bowl(first) game.bowl(second) } assert game.isGameOver() } @Theory public void maximumPinCountIsTen(Bowl bowl) { assume bowl assert bowl.pinCount() <= 10 } @Theory public void pinCountMatchesConstructorParameter(int pinCount) { assumeMayFailForIllegalArguments { assert new Bowl(pinCount).pinCount() == pinCount } } } JUnitCore.main('BowlingTests') | http://docs.codehaus.org/display/GROOVY/Using+Popper+with+Groovy | 2009-07-04T17:51:44 | crawl-002 | crawl-002-020 | [] | docs.codehaus.org |
Debian GNU/Linux distribution and some derivatives such as Raspbian already have included Salt packages to their repositories. However, current stable release codenamed "Jessie" contains old outdated Salt release. It is recommended to use SaltStack repository for Debian as described below.
Installation from official Debian and Raspbian repositories is described here.
Packages for Debian 8 (Jessie) and Debian 7 (Wheezy) are available in the Official SaltStack repository.
Instructions are at.
Note
Regular security support for Debian 7 ended on April 25th 2016. As a result, 2016.3.1 and 2015.8.10 will be the last Salt releases for which Debian 7 packages are created.
Stretch (Testing) and Sid (Unstable) distributions are already contain mostly up-to-date Salt packages built by Debian Salt Team. You can install Salt components directly from Debian.
On Jessie (Stable) Stretch:
apt-get install salt-minion/stretch
Install the Salt master, minion or other packages from the repository with the apt-get command. These examples each install one of Salt components, but more than one package name may be given at a time:
apt-get install salt-api
apt-get install salt-cloud
apt-get install salt-master
apt-get install salt-minion
apt-get install salt-ssh
apt-get install salt-syndic
Now, go to the Configuring Salt page. | https://docs.saltstack.com/en/latest/topics/installation/debian.html | 2017-05-22T17:23:07 | CC-MAIN-2017-22 | 1495463605485.49 | [] | docs.saltstack.com |
Dependency Management¶
Composer usage overview¶
Composer should be used to manage Drupal core, all contributed dependencies, and most third party libraries. The primary exception to this is front end libraries that may be managed via a front-end specific dependency manager, such as Bower or NPM.
Why do we use Composer for dependency management? It is the dependency manager used by Drupal core.
Make sure to familiarize yourself with basic usage of Composer, especially on how the lock file is used. In short: you should commit both
composer.json and
composer.lock to your project, and every time you update
composer.json, you must also run
composer update to update
composer.lock. You should never manually edit
composer.lock.
You should understand:
- Why dependencies should not be committed
- The role of composer.lock
- How to use version constraints
- The difference between
requireand
require-dev
Recommended tools and configuration¶
Globally install pretissimo for parallelized composer downloads:
composer global require "hirak/prestissimo:^0.3"
If you have xDebug enabled for your PHP CLI binary, it is highly recommended that you disable it to dramatically improve performance.
Contributed projects and third party libraries¶
All contributed projects hosted on drupal.org, including Drupal core, profiles, modules, and themes, can be found on Drupal packagist. Most non-Drupal libraries can be found on Packagist. For any required packaged not hosted on one of those two sites, you can define your own array of custom repositories for Composer to search.
Note that Composer versioning is not identical to drupal.org versioning.
Resources¶
- Composer Versions - Read up on how to specify versions.
- Drupal packagist site - Find packages and their current versions.
- Drupal packagist project - Submit issues and pull requests to the engine that runs Drupal packagist.
- Drupal packagist project - Submit issues and pull requests to the engine that runs Drupal packagist.
- Drupal Composer package naming conventions
- Packagist - Find non-drupal libraries and their current versions.
Add dependencies¶
To add a new package to your project, use the
composer require command. This will add the new dependency to your
composer.json and
composer.lock files, and download the package locally. E.g., to download the pathauto module run,
composer require drupal/pathauto
Commit
composer.json and
composer.lock afterwards.
Update dependencies (core, profile, module, theme, libraries)¶
To update a single package, run
composer update [vendor/package]. E.g.,
composer update drupal/pathauto
To update all packages, run
composer update.
Commit
composer.json and
composer.lock afterwards.
Remove dependencies¶
To remove a package from your project, use the
composer remove command:
composer remove drupal/pathauto
Commit
composer.json and
composer.lock afterwards.
Patch a project¶
Please see patches/README.md for information on patch naming, patch application, and patch contribution guidance.
Modifying BLT's default Composer values¶
BLT merges default values for composer.json using wikimedia/composer-merge-plugin:
"merge-plugin": { "require": [ "vendor/acquia/blt/composer.required.json", "vendor/acquia/blt/composer.suggested.json" ], "include": [ "blt/composer.overrides.json" ], "merge-extra": true, "merge-extra-deep": true, "merge-scripts": true, "replace": false, "ignore-duplicates": true },
This merges the
require,
require-dev,
autoload,
autoload-dev,
scripts, and
extra keys from BLT's own vendored files. The merged values are split into two groups
- composer.require.json: These packages are required for BLT to function properly. You may change their versions via comopser.overrides.json, but you should not remove them.
- composer.suggested.json: You may remove the suggested packages by deleting the
vendor/acquia/blt/composer.suggested.jsonline from your composer.json.
If you'd like to override the default version constraint for a package provided by BLT, you may simply define the desired version in your root composer.json file.
Merging in additional composer.json files¶
In situations where you have local projects, e.g. a custom module, that have their own composer.json files, you can merge them in by including the composer-merge-plugin. Reference these additional composer.json files in the
extra section of your root composer.json file.
"extra": { "merge-plugin": { "require": [ "docroot/modules/custom/example/composer.json" ] } }
Front end dependencies¶
Drupal 8 does not have a definitive solution for downloading front end dependencies. The following solutions are suggested:
- Load the library as an external library. See Adding stylesheets (CSS) and JavaScript (JS) to a Drupal 8 module.
- Use a front end package manager (e.g., NPM) to download your dependencies. Then use BLT's
frontend-buildand
post-deploy-buildtarget-hooks to trigger building those dependencies. E.g., call
npm installin your theme directory via these hooks.
- Commit the library to the repository, typically in
docroot/librares.
- Add the library to composer.json via a custom repository. Designate the package as a
drupal-libraryand define, then commit the dependency. You can use a custom .gitignore file for you project, ensure that it is copied to the deployment artifact, and supply your own, custom .gitignore file to be used in the deployment artifact. | http://blt.readthedocs.io/en/8.x/readme/dependency-management/ | 2017-05-22T17:25:14 | CC-MAIN-2017-22 | 1495463605485.49 | [] | blt.readthedocs.io |
public interface Driver extends OptionalFactory, Factory
Classes implementing this interface basically act as factory for creating connections to coverage sources like files, WCS services, WMS services, databases, etc...
This class also offers basic create / delete functionality (which can be useful for file based coverage formats).
Purpose of this class is to provide basic information about a certain coverage service/format as well as about the parameters needed in order to connect to a source which such a service/format is able to work against.
Notice that as part as the roll of a "factory" interface this class makes available an
isAvailable() method which should check if all the needed dependencies which can be jars as
well as native libs or configuration files.
getImplementationHints
String getName()
While the Title and Description will change depending on the users local this name will be consistent. Please note that a given file may be readable by several Drivers (the description of each implementation should be provided to the user so they can make an intellegent choice in the matter).
Driver
InternationalString getTitle()
InternationalString getDescription()
Driverimplementation.
A description of this
Driver type; the description should indicate the format or
service being made available in human readable terms.
Drivers.
boolean isAvailable()
Driveris available, if it has all the appropriate dependencies (jars or libraries).
One may ask how this is different than
#canConnect(Map), and basically available
can be used by finder mechanisms to list available
Drivers.
isAvailablein interface
OptionalFactory
boolean canAccess(Driver.DriverCapabilities operation, Map<String,Serializable> params)
Map<String,Parameter<?>> getParameterInfo(Driver.DriverCapabilities operation)
EnumSet<Driver.DriverCapabilities> getDriverCapabilities()
CoverageAccess access(Driver.DriverCapabilities opreation, Map<String,Serializable> params, Hints hints, ProgressListener listener) throws IOException
nullin case the delete succeds. TODO think about a neater approach
IOException | http://docs.geotools.org/stable/javadocs/org/geotools/coverage/io/Driver.html | 2020-03-28T22:04:22 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.geotools.org |
This Visio Agile Release Plan Template is designed to help Scrum Teams coordinate Product MVP release planning. Uses a Story Mapping method for MVP focus.
How can I plan my Agile Product rollout across multiple workstreams?
A combination of User Story Mapping and Agile Release Planning works well.
This Visio Agile Release Plan includes:-
- Timeline: Configurable.
- Milestones: Drag-able with “self-updating” date/time.
- Iteration / Sprint / Timebox indication: this example shows fortnightly iterations.
- One main team and 3 dependency teams.
- A clear goal for each iteration.
- Iteration Numbers.
- Themes covering 2+ iterations.
- Workstream lead person for each “swim lane” or “workstream”.
- A “Points” box if you want to include estimated iteration points.
- User Stories, Features and Epics for use as appropriate.
The Visio Agile Release Plan shows your key milestones on the timeline.
The Visio Agile Release Plan shows Iterations, and Themes that span iterations
The Agile Pyramid: Goals, EPICs, Features | https://business-docs.co.uk/downloads/visio-agile-release-plan-template/ | 2020-03-28T19:52:59 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['https://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/03/BDUK-83-Agile-Release-Plan-02-milestones-300x259.png',
'The Visio Agile Release Plan shows your key milestones on the timeline'],
dtype=object)
array(['https://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/03/BDUK-83-Agile-Release-Plan-02-themes-iterations-300x274.png',
'The Visio Agile Release Plan shows Iterations, and Themes that span iterations'],
dtype=object)
array(['https://i17yj3r7slj2hgs3x244uey9z-wpengine.netdna-ssl.com/wp-content/uploads/edd/2016/03/BDUK-83-Agile-Release-Plan-02-features-epics-300x259.png',
'The Agile Pyramid: Goals, EPICs, Features'], dtype=object) ] | business-docs.co.uk |
On this page
Invalid Shared Secret
Todo
Describe Network | DOCSIS | Invalid Shared Secret symptoms
The registration of this modem has failed because of an invalid MIC string.
Ensure that the shared secret that is in the configuration file is the same as the shared secret that is configured in the cable modem. | https://docs.getnoc.com/master/en/events/network-docsis-invalid-shared-secret.html | 2020-03-28T21:34:48 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.getnoc.com |
Détail de l'auteur
Documents disponibles écrits par cet auteur
Article : texte impriméNicole Bridges, Auteur ; Gwyneth Howell, Auteur ; Virginia Schmied, Auteur |"Employing an oline ethnographic research approach, the purpose of this study was to describe the nature of breastfeeding pee support that members seek and receive via closed Facebook groups facilitated by the Australian Breastfeeding Associatio[...] : document cartographique impriméNicole Bridges, Auteur |"The aim of this study was to advance understanding of the experiences of mothers using closed Facebook groups attached to the Australian Breastfeeding Association (ABA) and how these mothers find and share breastfeeding support and information [...] | https://docs.info-allaitement.org/opac_css/index.php?lvl=author_see&id=169 | 2020-03-28T20:42:20 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['./images/orderby_az.gif', 'Tris disponibles'], dtype=object)] | docs.info-allaitement.org |
Networking Provider¶
Overview
OpenLMI-Networking is CIM provider which manages local network devices. It exposes remotely accessible object-oriented API using WBEM set of protocols and technologies.
Clients
The API can be accessed by any WBEM-capable client. OpenLMI already provides:
- Python module lmi.scripts.networking, part of OpenLMI scripts.
- Command line tool: LMI metacommand, with ‘networking’ subcommand.
Features
- Enumerate all network devices
- Display current network IP configuration
- Manage available settings
- Settings activation/deactivation
- Support for bridging and bonding
The provider is based on concept of settings – a network profile that can be applied to the network device(s).
Examples
For examples how to use OpenLMI-Networking provider remotely from :ref:`LMIShell <lmi_shell>`_, see usage of OpenLMI-Networking.
Documentation
This provider is based on following DMTF standards:
The knowledge of these standards is not necessary, but it can help a lot.
Application developers and/or sysadmins should skip the DMTF standards and start at Networking API concepts.
Table of contents
- Networking API concepts
- Usage
- Enumeration of network devices
- Get parameters of network devices
- Get current IP configuration
- Bring up / take down a network device
- Enumerate available settings
- Obtaining setting details
- Create new setting
- Set DNS servers for given setting
- Manage static routes for given setting
- Delete setting
- Apply setting
- Bridging and bonding | https://openlmi.readthedocs.io/en/latest/openlmi-networking/index.html | 2020-03-28T21:16:58 | CC-MAIN-2020-16 | 1585370493120.15 | [] | openlmi.readthedocs.io |
Azure AD Object Attribute Uniqueness
You're a cloud engineer. You're organized, methodical, and thorough in your approach to the integration of your cloud deployment with your existing on-premises workloads. By now, you've successfully deployed DirSync for your organization and federated your Active Directory up to the cloud. Not only that but you've been in the game long enough, you've upgraded from DirSync to AD Sync, and now to Azure AD Connect – and you made it look easy. You have hundreds, if not thousands of users that utilize and rely on the products you support. Your skills as an administrator enable your business to be nimble, your users to be productive, and your career to be blazing a path into the Azure sky!
Sounds like you, right? I thought it might. After all, you are my target audience for today’s post!
While my primary focus is supporting Microsoft Azure, I sometimes work on Exchange Online and Office 365, as both ExO and O365 utilize Azure AD! I ran into a very interesting problem a few weeks back that I think deserves mention. Perhaps this post may help others in the same situation.
Within each Azure AD tenant, every User, Group, and Contact object has the following four attributes: UPN, proxyAddresses, TargetAddress, and Mail. There are more, but we’re going to focus on just these four. Why? These four attributes on each object must have unique values amongst all other objects. Wait, what? That doesn’t mean that your UPN and Mail attributes can’t be the same on a single object, but that these attribute values may not overlap between two objects. One object’s UPN attribute cannot match the contents of another’s proxyAddresses attribute, and so on. Any overlap will result in export errors seen within the Synchronization Service Manager on the Azure AD Connector.
I know that last paragraph was somewhat difficult to follow. The best way to explain this is to actually demonstrate what I’m talking about. Let’s use this example for reference: I am Jimmie L. Lightner, III. Yes, my father and grandfather are named Jimmie as well. If you think this is confusing, try being one of us during a holiday get together! Let’s create three user accounts in our on-premises Active Directory, one for each of us. Active Directory will force us to each have a unique userName as seen in the following screen shot. I've created 'Jim' for my Grandpa, 'JimmieL' for my Dad, and 'Jimmie' for myself.
My account is updated to contain my organizational email address, [email protected], in the Mail attribute of my User Object. This change will be synchronized to Azure AD successfully.
Later on, my grandfather's proxyAddresses attribute is updated to include the same SMTP address as my Mail attribute.
Our on-premises Active Directory does not care that my Mail and Grandpa’s proxyAddresses attribute values overlap with each other, but this is a problem when we’re synchronizing to an Azure AD tenant. Shortly after the change to grandpa's User Object is brought into the Metaverse via the Active Directory Connector, it will fail to be exported to Azure AD. If we look at the Synchronization Service Manager console, we will see export errors on our Windows Azure Active Directory Connector.
Clicking on the hyperlink in the error report will bring up the Object Properties within the Connector Space. Here we can see that our Pending Export is trying to add the proxyAddresses value to Grandpa's User Object in Azure AD, resulting in a collision.
I have yet to find documentation that explicitly defines the requirements for these attributes to be unique, but the below article gives us several methods that will enable us to track down the Object with which we're colliding if we do encounter this error.
KB2643629 One or more objects don't sync when using the Azure Active Directory Sync tool
I hope that you found this post helpful and informative. If you have any questions, please feel free to post them in the comments! Thank you for reading!
Until next time,
Jimmie. | https://docs.microsoft.com/en-us/archive/blogs/jimmielightner/azure-ad-object-attribute-uniqueness | 2020-03-28T22:21:23 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.microsoft.com |
Syslog - local
To get local syslog data, point Splunk at a directory of syslog files:
1. From the Home page in Splunk Web, click Add data.
2. Under the To get started... banner, click Syslog.
3. Click Next under Consume any syslog files or directories on this Splunk server.
4. On the Get data from files and directories page, specify the source of the data by clicking on one of the three available choices.
5. In the Full path to your data field, enter the path to the syslog directory:
- On *nix systems, this is usually
/var/log.
- On Windows, the path varies, depending on which third-party syslog daemon you have installed. By default, Windows doesn't provide a syslog facility. Instead, it relies on the Event Log service for logging.! | https://docs.splunk.com/Documentation/Splunk/6.1.3/Data/Sysloglocal | 2020-03-28T22:11:21 | CC-MAIN-2020-16 | 1585370493120.15 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
This guide is only for Yclas Self Hosted! There have been a lot of talking about HTTPS lately and the overall trend is saying that HTTPS is the future. So. in this guide, I will show you how to enable HTTPS on your classifieds site easily. What is HTTPS and why should I use it? HTTPS is the layering of HTTP on top of SSL/TLS. This basically means that the activities done on the website by users and owners are much more secure. So as a result, the HTTP traffic, including requested URLs, result pages, cookies, media and anything else sent over your HTTP connection, is encrypted when HTTPS is enabled. If an attacker is trying to interfere with your website’s connections – they cannot listen in and intercept traffic, or change it. HTTPS is also useful for providing authentication to users of the website; it allows your visitors to identify your website and verify that it exists, is legitimate, secure and can be trusted. This level of security is compatible with search engines and their web crawlers, so adopting HTTPS indicates that you have a lot to gain. Follow this link to read more about how and why Google is also encouraging this movement What tasks will you need to do? Get an SSL certificate and install it on your server. Go to your website’s panel to make necessary system changes. Follow this link Force a redirect from HTTP to HTTPS. Get an SSL certificate and install it on your server to do this, please note that seeking for assistance from a person with the technical experience/know-how OR contacting a hosting company for installation is advised. The use of CloudFlare is also another alternative. For a free certificate (to install it on your server), you can go to:. An example of a hosting company that offers paid certificates is Namecheap: The CloudFlare alternative allows you to make use of SSL, without going through the complicated installation process. It also has a free option. For more information on this, you can visit:. Also, there is this tutorial for a free SSL CloudFlare certificate with Wordpress. Once you have the SSL certificate and it is installed, proceed to the next task. Go to your website’s panel to make necessary system changes Go to your website. Login into the Admin Panel. Follow this link Change the Config Value from http:// to https:// Click on Submit/Save. Finally, go to Cache -> Delete All. Once you have completed this, proceed to the next task. Force a redirect from HTTP to HTTPS Log into your website’s cPanel. Click on File Manager (and tick to show hidden files). Locate your .htaccess file in the public_html directory. Open the .htaccess file for editing and add the following code on the top: RewriteEngine On RewriteCond %{HTTPS} !=on RewriteRule ^${HTTP_HOST} [L,R=301] Save the file and close it. You have finished every the task and your site should have its https:// enabled. Short Summary If all tasks were completed successfully, your domain name should now be prefixed with “https://” instead of “http://” whenever your website is visited. Depending on the type of SSL certificate you installed earlier, your website should now benefit from all the other new advantages, of using SSL Encryption. If errors are being displayed by your website, make sure that you retrace your steps; so that the troubleshooting process is easier. We hope that you were able to follow the instructions without much difficulty. Thank you for reading this guide and make sure that you subscribe to our blog for more useful posts. 
Author: Tana | https://docs.yclas.com/move-classifieds-site-http-https/ | 2020-03-28T19:52:13 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.yclas.com |
This guide is only for Yclas Self Hosted! One common error that users face is related to the license of their child theme. If you are trying to activate your child theme but you are getting an error indicating that your license number is already in use, then you need to follow the instructions below: Backup your website files and database. Login into your hosting cPanel, go to your Database, table oc2_config, find and delete the entry with “group_name” = “theme” and “config_key” = “your premium theme name”, if exists. Find the entry with config_key = theme and change config_value to default. Go into your cPanel File Manager and delete everything inside oc/cache/ Login into your admin panel and activate your theme! ;) If you still have any questions and need some help to activate your child theme, please use our professional support! In case you don’t have the required technical knowledge to follow this guide, we can fix the error for you by purchasing One time support. | https://docs.yclas.com/troubleshoot-child-theme/ | 2020-03-28T20:15:53 | CC-MAIN-2020-16 | 1585370493120.15 | [] | docs.yclas.com |
Usage¶
Examples for common use cases listed below are written in LMIShell. Where appropriate, an example for LMI metacommand is added.
Note
Examples below are written for openlmi-tools version 0.9.2.
Listing installed packages¶
Simple¶
Simple but slow way:
c = connect("host", "user", "pass") cs = c.root.cimv2.PG_ComputerSystem.first_instance() for identity in cs.associators( AssocClass="LMI_InstalledSoftwareIdentity", Role="System", ResultRole="InstalledSoftware", ResultClass="LMI_SoftwareIdentity"): print(identity.ElementName)
Note
Here we use PG_ComputerSystem as a class representing computer system.
See also
LMI_InstalledSoftwareIdentity
Faster¶
This is much faster. Here we enumerate association class LMI_InstalledSoftwareIdentity and get information from its key properties.
c = connect("host", "user", "pass") for iname in c.root.cimv2.LMI_InstalledSoftwareIdentity.instance_names(): print(iname.InstalledSoftware.InstanceID [len("LMI:LMI_SoftwareIdentity:"):])
Listing repositories¶
LMIShell¶
c = connect("host", "user", "pass") for repo in c.root.cimv2.LMI_SoftwareIdentityResource.instance_names(): print(repo.Name)
See also
LMI_SoftwareIdentityResource
Listing available packages¶
LMIShell¶
Enumerating of LMI_SoftwareIdentity is disabled due to a huge amount of data being generated. That’s why we enumerate them for particular repository represented by LMI_SoftwareIdentityResource:
c = connect("host", "user", "pass") for repo in c.root.cimv2.LMI_SoftwareIdentityResource.instances(): if repo.EnabledState != c.root.cimv2.LMI_SoftwareIdentityResource. \ EnabledStateValues.Enabled: continue # skip disabled repositories print(repo.Name) for identity in repo.associator_names( AssocClass="LMI_ResourceForSoftwareIdentity", Role="AvailableSAP", ResultRole="ManagedElement", ResultClass="LMI_SoftwareIdentity"): print(" " + identity.InstanceID[len("LMI:LMI_SoftwareIdentity:"):])
Note
This is not the same as running:
yum list available
which outputs all available, not installed packages. The example above yields available packages without any regard to their installation status.
Using installation service¶
This method is both simpler and more effective. It also does not list installed packages.
c = connect("host", "user", "pass") service = c.root.cimv2.LMI_SoftwareInstallationService.first_instance() ret = service.FindIdentity(Installed=False) for iname in ret.rparams["Matches"]: # we've got only references to instances print iname.Name[len("LMI:LMI_SoftwareIdentity:"):]
Listing files of package¶
Let’s list files of packages openlmi-tools. Note that package must be installed on system in order to list its files.
LMIShell¶
We need to know exact NEVRA [1] of package we want to operate on. If we don’t, we can find out using FindIdentity() method. See example under Searching for packages.
c = connect("host", "user", "pass") identity = c.root.cimv2.LMI_SoftwareIdentity.new_instance_name( {"InstanceID" : "LMI:LMI_SoftwareIdentity:openlmi-tools-0:0.5-2.fc18.noarch"}) for filecheck in identity.to_instance().associator_names( AssocClass="LMI_SoftwareIdentityChecks", Role="Element", ResultRole="Check", ResultClass="LMI_SoftwareIdentityFileCheck"): print("%s" % filecheck.Name)
See also
LMI_SoftwareIdentityFileCheck
Searching for packages¶
If we know just a fraction of informations needed to identify a package, we may query package database in the following way.
LMIShell¶
c = connect("host", "user", "pass") service = c.root.cimv2.LMI_SoftwareInstallationService.first_instance() # let's find all packages with "openlmi" in Name or Summary without # architecture specific code ret = service.FindIdentity(Name="openlmi", Architecture="noarch") for identity in ret.rparams["Matches"]: # we've got only references to instances print identity.Name[len("LMI:LMI_SoftwareIdentity:"):]
See also
FindIdentity() method
Please don’t use this method to get an instance of package you know precisely. If you know all the identification details, you may just construct the instance name this way:
c = connect("host", "user", "pass") iname = c.root.cimv2.LMI_SoftwareIdentity.new_instance_name( {"InstanceID" : "LMI:LMI_SoftwareIdentity:openlmi-software-0:0.1.1-2.fc20.noarch"}) identity = iname.to_instance()
Package installation¶
There are two approaches to package installation. One is synchronous and the other asynchronous.
Synchronous installation¶
This is a very simple and straightforward approach. We install package by creating a new instance of LMI_InstalledSoftwareIdentity with a reference to some available software identity.
c = connect("host", "user", "pass") identity = c.root.cimv2.LMI_SoftwareIdentity.new_instance_name( {"InstanceID" : "LMI:LMI_SoftwareIdentity:sblim-sfcb-0:1.3.16-3.fc19.x86_64"}) cs = c.root.cimv2.PG_ComputerSystem.first_instance_name() installed_assoc = c.root.cimv2.LMI_InstalledSoftwareIdentity.create_instance( properties={ "InstalledSoftware" : identity, "System" : cs })
If the package is already installed, this operation will fail with the pywbem.CIMError exception being raised initialized with CIM_ERR_ALREADY_EXISTS error code.
Downside of this approach is its slowness. It may block for a long time.
Asynchronous installation¶
Method InstallFromSoftwareIdentity() needs to be invoked with desired options. After the options are checked by provider, a job is returned representing installation process running at background. Please refer to Asynchronous Jobs for more details., # these options request to install available, not installed package InstallOptions=[4] # [Install] # this will force installation if package is already installed # (possibly in different version) #InstallOptions=[4, 3] # [Install, Force installation] )
The result can be checked by polling resulting job for finished status:
finished_statuses = { c.root.cimv2.CIM_ConcreteJob.JobState.Completed , c.root.cimv2.CIM_ConcreteJob.JobState.Exception , c.root.cimv2.CIM_ConcreteJob.JobState.Terminated } job = ret.rparams["Job"].to_instance() while job.JobStatus not in finished_statuses: # wait for job to complete time.sleep(1) job.refresh() print c.root.cimv2.LMI_SoftwareJob.JobStateValues.value_name(job.JobState) # get an associated job method result and check the return value print "result: %s" % job.first_associator( AssocClass='LMI_AssociatedSoftwareJobMethodResult').__ReturnValue # get installed software identity installed = job.first_associator( Role='AffectingElement', ResultRole='AffectedElement', AssocClass="LMI_AffectedSoftwareJobElement", ResultClass='LMI_SoftwareIdentity') print "installed %s at %s" % (installed.ElementName, installed.InstallDate)
You may also subscribe to indications related to LMI_SoftwareInstallationJob and listen for events instead of the polling done above
As you can see, you may force the installation allowing for reinstallation of already installed package. For more options please refer to the documentation of this method.
Combined way¶
We can combine both approaches by utilizing a feature of LMIShell. Method above can be called in a synchronous way (from the perspective of script’s code). It’s done like this:
# note the use of "Sync" prefix ret = service.SyncInstallFromSoftwareIdentity( Source=identity, Target=cs, # these options request to install available, not installed package InstallOptions=[4] # [Install] # this will force installation if package is already installed # (possibly in different version) #InstallOptions=[4, 3] # [Install, Force installation] ) print "result: %s" % ret.rval
The value of .__ReturnValue of LMI_SoftwareMethodResult is placed to the ret.rval attribute. Waiting for job’s completion is taken care of by LMIShell. But we lose the reference to the job itself and we can not enumerate affected elements (that contain, among other things, installed package).
Installation from URI¶
This is also possible with:
c = connect("host", "user", "pass") service = c.root.cimv2.LMI_SoftwareInstallationService.first_instance() cs = c.root.cimv2.PG_ComputerSystem.first_instance_name() ret = service.to_instance().InstallFromSoftwareURI( Source="", Target=cs, InstallOptions=[4]) # [Install]
Supported URI schemes are:
- http
- https
- ftp
- file
In the last case, the file must be located on the managed system.
See also
InstallFromURI() method
Please refer to Asynchronous installation above for the consequent procedure and how to deal with ret value.
Package removal¶
Again both asynchronous and synchronous approaches are available.
Synchronous removal¶
The aim is achieved by issuing an opposite operation than before. The instance of LMI_InstalledSoftwareIdentity is deleted here:
c = connect("host", "user", "pass") identity = c.root.cimv2.LMI_SoftwareIdentity.new_instance_name( {"InstanceID" : "LMI:LMI_SoftwareIdentity:sblim-sfcb-0:1.3.16-3.fc19.x86_64"}) installed_assocs = identity.to_instance().reference_names( Role="InstalledSoftware", ResultClass="LMI_InstalledSoftwareIdentity") if len(installed_assocs) > 0: for assoc in installed_assocs: assoc.to_instance().delete() print("deleted %s" % assoc.InstalledSoftware.InstanceID) else: print("no package removed")
Asynchronous removal¶, InstallOptions=[9]) # [Uninstall]
Again please refer to Asynchronous installation for examples on how to deal with the ret value.
Package update¶
Only asynchronous method is provided for this purpose. But with the possibility of synchronous invocation.
LMIShell¶
Example below shows the synchronous invocation of asynchronous method:.SyncInstallFromSoftwareIdentity( Source=identity, Target=cs, InstallOptions=[5] # [Update] # to force update, when installed package is same or higher version #InstallOptions=[4, 5] # [Install, Update] ) print "installation " + ("successful" if rval == 0 else "failed")
Package verification¶
Installed RPM packages can be verified. Attributes of installed files are compared with those stored in particular RPM package. If some value of attribute does not match or the file does not exist, it fails the verification test. Following attributes come into play in this process:
- File size - in case of regular file
- User ID
- Group ID
- Last modification time
- Mode
- Device numbers - in case of device file
- Link Target - in case the file is a symbolic link
- Checksum - in case of regular file
LMIShell¶
It’s done via invocation of VerifyInstalledIdentity(). This is an asynchronous method. We can not use synchronous invocation if we want to be able to list failed files."}) results = service.VerifyInstalledIdentity( Source=identity, Target=ns.PG_ComputerSystem.first_instance_name()) nevra = ( identity.ElementName if isinstance(identity, LMIInstance) else identity.InstanceID[len('LMI:LMI_SoftwareIdentity:'):]) if results.rval != 4096: # asynchronous job started msg = 'failed to verify identity "%s (rval=%d)"' % (nevra, results.rval) if results.errorstr: msg += ': ' + results.errorstr raise Exception(msg) job = results.rparams['Job'].to_instance() # wait by polling or listening for indication wait_for_job_finished(job) if not LMIJob.lmi_is_job_completed(job): msg = 'failed to verify package "%s"' % nevra if job.ErrorDescription: msg += ': ' + job.ErrorDescription raise Exception(msg) # get the failed files failed = job.associators( AssocClass="LMI_AffectedSoftwareJobElement", Role='AffectingElement', ResultRole='AffectedElement', ResultClass='LMI_SoftwareIdentityFileCheck') for iname in failed: print iname.Name # print their paths
Polling, as a way of waiting for job completion, has been already shown in the example under Asynchronous installation.
See also
LMI_SoftwareIdentityFileCheck
Enable and disable repository¶
LMIShell¶
c = connect("host", "user", "pass") repo = c.root.cimv2.LMI_SoftwareIdentityResource.first_instance_name( key="Name", value="fedora-updates-testing") # disable repository repo.to_instance().RequestStateChange( RequestedState=c.root.cimv2.LMI_SoftwareIdentityResource. \ RequestedStateValues.Disabled) repo = c.root.cimv2.LMI_SoftwareIdentityResource.first_instance_name( key="Name", value="fedora-updates") # enable repository repo.to_instance().RequestStateChange( RequestedState=c.root.cimv2.LMI_SoftwareIdentityResource. \ RequestedStateValues.Enabled)
Supported event filters¶
There are various events related to asynchronous job you may be interested in. All of them can be subscribed to with static filters presented below. Usage of custom query strings is not supported due to a complexity of its parsing. These filters should be already registered in CIMOM if OpenLMI-Software provider is installed. You may check them by enumerating CIM_IndicationFilter class located in root/interop namespace. All of them apply to two different software job classes you may want to subscribe to:
- LMI_SoftwareInstallationJob
- Represents a job requesting to install, update or remove some package.
- LMI_SoftwareVerificationJob
- Represents a job requesting verification of installed package.
Filters below are written for LMI_SoftwareInstallationJob only. If you deal with the other one, just replace the class name right after the ISA operator and classname in filter’s name.
Percent Updated¶
Indication is sent when the LMI_SoftwareJob.PercentComplete property of a job changes.
SELECT * FROM LMI_SoftwareInstModification WHERE SourceInstance ISA LMI_SoftwareInstallationJob AND SourceInstance.CIM_ConcreteJob::PercentComplete <> PreviousInstance.CIM_ConcreteJob::PercentComplete
Registered under filter name "LMI:LMI_SoftwareInstallationJob:PercentUpdated".
Job state change¶
Indication is sent when the LMI_SoftwareJob.JobState property of a job changes.
SELECT * FROM LMI_SoftwareInstModification WHERE SourceInstance ISA LMI_SoftwareInstallationJob AND SourceInstance.CIM_ConcreteJob::JobState <> PreviousInstance.CIM_ConcreteJob::JobState
Registered under filter name "LMI:LMI_SoftwareInstallationJob:Changed".
Job Completed¶
This event occurs when the state of job becomes COMPLETED/OK [2].
SELECT * FROM LMI_SoftwareInstModification WHERE SourceInstance ISA LMI_SoftwareInstallationJob AND SourceInstance.CIM_ConcreteJob::JobState = 17
Registered under filter name "LMI:LMI_SoftwareInstallationJob:Succeeded".
Error¶
This event occurs when the state of job becomes COMPLETED/Error [3].
SELECT * FROM LMI_SoftwareInstModification WHERE SourceInstance ISA LMI_SoftwareInstallationJob AND SourceInstance.CIM_ConcreteJob::JobState = 10
Registered under filter name "LMI:LMI_SoftwareInstallationJob:Failed".
New Job¶
This event occurs when the new instance of LMI_SoftwareJob is created.
SELECT * FROM LMI_SoftwareInstCreation WHERE SourceInstance ISA LMI_SoftwareInstallationJob
Registered under filter name "LMI:LMI_SoftwareInstallationJob:Created". | https://openlmi.readthedocs.io/en/latest/openlmi-providers/software/usage.html | 2020-03-28T21:27:47 | CC-MAIN-2020-16 | 1585370493120.15 | [] | openlmi.readthedocs.io |