| content | url | timestamp | dump | segment | image_urls | netloc |
|---|---|---|---|---|---|---|
| string (0 to 557k chars) | string (16 to 1.78k chars) | timestamp[ms] | string (9 to 15 chars) | string (13 to 17 chars) | string (2 to 55.5k chars) | string (7 to 77 chars) |
Parenting Objects
When modeling a complex object, such as a watch, you may choose to model the different parts as separate objects. To make all the parts move as one (“the watch”), you can designate one object as the parent of all the other parts. These other parts become its children, and any translation, rotation, or scale of the parent will also affect its children.
Contrary to most biological lifeforms, each object or bone in Blender has at most one parent. If an object already has a parent and you assign it another, Blender removes the previous parent relationship. When the plural “parents” is used in this chapter, it refers to the hierarchy of parents, that is, the parent, the grandparent, great-grandparent, and so on, of an object.
Make Parent
Reference
- Mode
Object Mode
- Shortcut
Ctrl-P
Selected objects will have their ‘parent’ set to the active object, and as a result will follow its transformations.
Tip
You can “move” a child object back to its parent by clearing its origin.
- Type
Blender supports many different types of parenting, listed below. Besides parenting the selected objects, some types add a Modifier or Constraint to the child objects, with the parent as the target object, or activate a parent property, e.g. Follow Path.
- Object
- Bone
- Vertex
- Vertex (Triangle)
- Keep Transform
The object’s current world transform (so its absolute location, rotation and scale in the world) is computed. The new parent is set, and then the Parent Inverse matrix is computed such that after setting the new parent the object is still at its previous world transform.
Hint
Use the Outliner
There is another way to see the parent-child relationship in groups and that is to use the Outliner view of the Outliner editor.
Parent Inverse
Blender can assign a parent without moving the child object. This is achieved via a hidden matrix called the Parent Inverse matrix, which sits between the transform of the parent and the child.
When objects are parented with Ctrl-P, the Parent Inverse matrix is updated. Depending on the choice in the Set Parent menu, the object’s local location, rotation, and scale are also updated. For more details, see Object Parent.
The Parent Inverse matrix can be cleared by using Clear Parent Inverse.
Note
When setting the parent via the Object Properties panel, the Parent Inverse matrix is always reset. This can cause an unexpected jump in the object’s position. To avoid this, use Ctrl-P to set the new parent.
Object Parent
Object Parent is the most general form of parenting that Blender supports. It will take selected objects and make the active object the parent object of all the selected objects. Each child object will inherit the transformations of the parent. The parent object can be of any type.
If the object has a pre-existing parent, that is cleared first. This moves the object to its own location, rotation and scale, without its parent’s influence.
There are three operators that allow you to set an object parent. They differ in the way they compute the Parent Inverse matrix and the local Transform of the object.
Example: Object Parent (Keep Transform)
Object Parent with Keep Transform will keep any transformations the child objects inherited from their previous parent object.
Assume that we have a scene consisting of three objects: two empty objects named “EmptyA” and “EmptyB”, and a Monkey object. Fig. “Scene with no parenting” shows the three objects with no parenting relationships active on them.
Scene with no parenting.
The monkey is the child object of “EmptyA”.
If we then parent the Monkey to “EmptyB” and enable Keep Transform, the Monkey keeps the scale information it obtained from the old parent “EmptyA” when it is assigned to the new parent “EmptyB”.
The monkey is the child object of “EmptyB”.
The Object parent with Keep Transform.
If you want to follow along with the above description here is the blend-file:
File:Parent_-_Object_(Keep_Transform)_(Demo_File).blend.
Bone Parent
Bone parenting allows you to make a certain bone in an armature the parent of another object. This means that when transforming the armature, the child object will only move if the specific bone it is the child of moves.
Three pictures of armatures with four bones.
Single armature bone which has a child object cube parented to it using bone parenting.
Single bone with bone relative parent to a cube.
Make Parent without Inverse
Reference
- Mode
Object Mode
This sets the parent, and then resets the Parent Inverse matrix and the object’s local location. As a result, the object will move to the location of the parent, but keep its rotation and scale.
Clear Parent
Reference
- Mode
Object Mode
- Shortcut: Alt-P
Known Limitations
Non-Uniform Scale.

Source: https://docs.blender.org/manual/en/3.0/scene_layout/object/editing/parent.html (retrieved 2022-09-24, crawl CC-MAIN-2022-40)
Eyedropper
Reference
- Mode
Draw Mode
- Tool
The Eyedropper tool is used to create materials or palette colors based on colors sampled in the 3D Viewport.
Tool Settings
- Material
Create a new material with the Stroke Base Color set to the sampled color.
- Palette
Add a new color to the color palette based on the sampled color.
Usage
LMB Create a stroke material.
Shift-LMB Create a fill material.
Shift-Ctrl-LMB Create both a stroke and fill material. | https://docs.blender.org/manual/en/latest/grease_pencil/modes/draw/tools/eyedropper.html | 2022-09-24T19:51:14 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.blender.org |
Use this page to define access control entries selected from a filtered list of CloudBees CD/RO users, groups, or projects, or by providing an explicit name. The following instructions are generic. For example, if you chose Add User, the Filter field will be labeled "User Filter".
Click OK to retrieve your information. If you entered filter criteria, the page will refresh with a list of matches. Select any row in this table to create an access control entry for that principal. | https://docs.cloudbees.com/docs/cloudbees-cd/10.1/automation-platform/help-selectaclprincipal | 2022-09-24T18:54:17 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.cloudbees.com |
dhtmlxCombo is a cross-browser JavaScript combo box with an autocomplete feature. It extends basic select box functionality and provides the ability to display suggestions while a user is typing in a text field.
dhtmlxCombo can be converted from existing instances of HTML Select, or populated with JavaScript. With Ajax data loading, it can get a list of values dynamically from a server data source. | https://docs.dhtmlx.com/combo__index.html | 2022-09-24T19:59:19 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.dhtmlx.com |
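As a rough sketch of both setup styles (element IDs, width and the data URL are placeholders; check the dhtmlxCombo API reference for the exact constructor and loading options of your version):

// Convert an existing select element with id "combo_select" into a combo with autocomplete.
var combo = dhtmlXComboFromSelect("combo_select");
combo.enableFilteringMode(true);

// Or create a combo from scratch inside a container div and load its options via Ajax.
var combo2 = new dhtmlXCombo("combo_zone", "country", 200);
combo2.load("data/options.xml");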
Recent Activities
First Activity
Coordinated speaker content, included LITA info shared by rep, marketed session on social media, and facilitated a session at ALA Annual
Meets LITA’s strategic goals for Education and Professional Development, Member Engagement
Second Activity
Reviewed feedback from LITA and provided comments. Posted pictures and a thread of tweets from people who used the comment or hashtag for the program in the ALA Connect Community site for the Instructional Technologies IG.
Meets LITA’s strategic goals for Member Engagement
Third Activity
Posted a welcome message to Chrishelle Thomas (LITA's new Membership and Marketing Manager). Excited to have an opportunity to seek marketing help with fellow IGs.
Meets LITA’s strategic goals for Organizational Stability and Sustainability
What will your group be working on for the next three months?
For the last two years I did a program with this interest group instead of a business meeting. I decided not to submit a program this year. Started discussions with a frequent collaborator on other ideas (meetups, meetings, etc.).
Is there anything LITA could have provided during this time that would have helped your group with its work?
Thank you for asking. I did have some feedback relating to the program session.
Marketing on social media and web pages? At least retweets from speakers who are promoting their LITA sessions on their own? (Example: Bree, Bobbi and I did this for our session.)
Although our session was LITA sponsored, we were not included in either the Interest Group Discussion Sessions for ALA Annual 2019 or the general ALA LITA highlights pages.
The session was on a Monday at the same time as George Takei, but we still had a pretty great turnout.
Maybe some extra pre-conference tips for IGs/presenters (like what technology you have to bring for the session, who to contact for technology issues, the role of the LITA IG person at the session) and for people working with IGs/presenters (sending communication early enough for presentation edits) might be helpful.
All that said, I am grateful we had this opportunity and session accepted by ALA and LITA.
Additional comments or concerns:
Thank you, especially Jenny Levine, for support.
Submitted by Lilly Ramin on 10/01/2019 | https://docs.lita.org/2019/10/instructional-technologies-interest-group-september-2019-report/ | 2022-09-24T19:46:27 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.lita.org |
If you enjoy MiKTeX and want to support the project, then
please become a known MiKTeX user by giving back something. It
encourages me to continue, and is the perfect way to say thank
you!
Visit the MiKTeX Give Back page,
for more information. | https://docs.miktex.org/manual/registering.html | 2022-09-24T19:03:27 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.miktex.org |
Scheduler: Grading
From Carleton Moodle Docs (revision of 16:32, 6 July 2017)
Return to Schedule module page
When the appointment has been attended, a grade can be given for the student's performance. This is achieved by clicking on the student name in the slot list. A screen with two tabs will appear. The first (left) tab leads to the "grading summary" screen for the student:
File:Student top choice EN.jpg
All appointment instances for that student are shown in the form, displaying comments and allowing you to give a grade for that appointment.
File:Student grading EN.jpg
The grade can be distributed to all the students appointed in the slot, replacing any older grade individually set for some of the students. | https://docs.moodle.carleton.edu/index.php?title=Scheduler:_Grading&diff=prev&oldid=3949 | 2022-09-24T20:06:57 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.moodle.carleton.edu |
Add-ons
In this section, we can install add-ons and integrated software already supported and provided by Splynx, on your Splynx server.
This is the easiest method of installing any add-on or integrated software on Splynx.
To install an add-on, navigate to
Config / Integrations / Addons:
Once here, you will be provided with a list of add-ons which you can install, reinstall or delete. You can also click on the i icon to retrieve information on the add-on.
You can either locate the add-on you wish to install manually or simply use the search bar provided to search for the add-on by specific text.
The list will also display add-ons which you have already installed by highlighting the "Package" name in blue. Add-ons with a "Version" number highlighted in yellow indicate that an update is available; if the version is highlighted in green, the add-on is up to date.
Updates can be installed by clicking on the install icon or via the "Update(apt)" button:
Source: https://docs.splynx.com/configuration/integrations/addons (retrieved 2022-09-24, crawl CC-MAIN-2022-40)
Deprecation: #90803 - ObjectManager::get in Extbase context¶
See forge#90803
Description¶
To help understand the deprecation of
$objectManager->get(Service::class) let's first have a look at its domain: Dependency Injection
and its history as well as the culprits to deal with.
With the introduction of Extbase over one decade ago, a lot of modern software development paradigms have been introduced into TYPO3. One of that paradigms is Dependency Injection (DI) which is an approach of handling dependencies different than the one the TYPO3 core followed ever since.
Given there is an EmailService class, which is responsible for sending emails, the usual approach of creating such a service was to create it
the moment it was needed. TYPO3 never used the
new keyword to create new objects, but
GeneralUtility::makeInstance(), which pretty much does the same thing.
So, one approach of creating dependencies is creating them in the current scope where the dependency is needed.
Tip
As a rule of thumb, you can remember the following:
Whenever you are creating dependencies yourself with
new or
GeneralUtility::makeInstance(), you are not using Dependency Injection.
Extbase introduced the concept of Dependency Injection (DI), which means that all dependencies are declared in a way that the dependency chain is known before runtime.
The most common way of implementing DI is to declare dependencies as constructor arguments. This means, in the scope of the current class, all dependencies are made visible as constructor arguments.
As those dependencies need to be created outside the current scope, a service container implementation is responsible for the creation and management of service instances.
Then, instead of calling
new Service(...), the container needs to be queried for the needed service, e.g. by calling
$container->get(Service::class).
This also assures that the container provides the requested services with their dependencies, as they are created the same way.
There is a service container in Extbase, but it's not exposed to the public. Instead, there is the
ObjectManager class, which acts as a proxy for the container and also has a
get method, to query instances of services.
Exactly that
get() method is now deprecated in the extbase context because it should never be called directly.
The usual extbase context is a controller. All controllers are created by the object manager and therefore support DI. Whenever a dependency is needed in an extbase context,
instead of calling
$objectManager->get(Service::class), the usual DI approaches have to be used. Those approaches are constructor, method and property injection.
Migration¶
If you are using code similar to the following example, you should migrate to dependency injection:
class MainController
{
    public function listAction()
    {
        $service = $this->objectManager->get(Service::class);
        $service->doSomething();
    }
}
Examples how to use dependency injection:
Constructor Injection¶
class MainController
{
    private $service;

    public function __construct(Service $service)
    {
        $this->service = $service;
    }

    public function listAction()
    {
        $this->service->doSomething();
    }
}
Tip
Constructor injection is the preferred type of injection for dependencies.
Method Injection¶
class MainController
{
    private $service;

    public function injectService(Service $service)
    {
        $this->service = $service;
    }

    public function listAction()
    {
        $this->service->doSomething();
    }
}
Property Injection¶
class MainController
{
    /**
     * @var Service
     * @TYPO3\CMS\Extbase\Annotation\Inject
     */
    public $service;

    public function listAction()
    {
        $this->service->doSomething();
    }
}
Unfortunately, there is even more to consider here. Dependencies usually are services and services are objects which are shareable. TYPO3 users might be more used to the term
Singleton, which means,
that there is just one instance of a service during runtime which is shared across all scopes. Singletons are a great way to save resources but there is more to Singletons than just that.
To be able to share the same instance of a class across all scopes, the instance cannot store information about its state in its properties.
The idea of Singletons is to have an object that always behaves the same, no matter where it is used.
Let's have a look at classes that are no services. We can borrow the term prototype from the Java world. A commonly used prototype object is a model. Each instance of a model clearly has a different state and therefore a different functionality.
Those objects can theoretically be injected but it's very uncommon to do so. Still, in Extbase, instances of prototypes (e.g. instances of models, or other instances that hold state) are very often created with the object manager,
which is bad practice.
new or
GeneralUtility::makeInstance() should be used for instantiating prototypes.
However, when it comes to prototypes, there is a mechanic which cannot be implemented differently yet: the override of an implementation.
It means, that it's possible to tell the
ObjectManager to create an instance of a different class than the one which is requested.
One example of that is class
TYPO3\CMS\Extbase\Persistence\Generic\Storage\Typo3DbBackend, which can be fetched from the
ObjectManager by requesting an instance of the
TYPO3\CMS\Extbase\Persistence\Generic\Storage\BackendInterface interface.
This feature should only be used for services as well but it is often used to override models of other extensions. For models you can either decide to simply instantiate via
new, or if you want to provide support for overwriting models
via XCLASSes configured in
ext_localconf.php (configuration variable:
$GLOBALS['TYPO3_CONF_VARS']['SYS']['Objects']) you may also use
GeneralUtility::makeInstance().
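As a sketch (the model class names are made up for illustration), such an override is registered in ext_localconf.php, and the prototype is then created through GeneralUtility::makeInstance(), which honors the override, while a plain new does not:

// ext_localconf.php of the overriding extension
$GLOBALS['TYPO3_CONF_VARS']['SYS']['Objects'][\Vendor\Shop\Domain\Model\Product::class] = [
    'className' => \OtherVendor\ShopExtras\Domain\Model\ExtendedProduct::class,
];

// Creating the prototype somewhere in your own code, without the ObjectManager:
$product = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance(
    \Vendor\Shop\Domain\Model\Product::class
);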
Tip
Conclusion:
Singletons (services without state) should be provided by Dependency Injection wherever possible.
To create prototypes (instances with state), use
new or
GeneralUtility::makeInstance().
ObjectManager->get() must no longer be used.
Impact¶
There is no impact yet. No PHP
E_USER_DEPRECATED error is triggered in TYPO3 10. This will probably change in TYPO3 11.x.
Affected Installations¶
All installations that use
ObjectManager->get() directly to create instances of dependencies in a scope that supports native Dependency Injection. | https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/10.4/Deprecation-90803-DeprecationOfObjectManagergetInExtbaseContext.html | 2022-09-24T20:18:51 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.typo3.org |
The following behavior is important to remember when using the TRUNCATE TABLE statement in VoltDB:
Executing a TRUNCATE TABLE query on a partitioned table within a single-partitioned stored procedure will only delete the records within the current partition. Records in other partitions will be unaffected.
You cannot execute a TRUNCATE TABLE query on a replicated table from within a single-partition stored procedure. To truncate a replicated table you must execute the query within a multi-partition stored procedure or as an ad hoc query. | https://docs.voltdb.com/v7docs/UsingVoltDB/sqlref_truncate.php | 2022-09-24T19:52:53 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.voltdb.com |
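For illustration (the table name is hypothetical), truncating a replicated table as an ad hoc statement keeps you clear of the single-partition restriction described above:

-- Run from sqlcmd or another ad hoc interface; for a replicated table this
-- must not be issued from inside a single-partition stored procedure.
TRUNCATE TABLE reservations;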
Thermistors
Over Temperature Current Limiting
There are two temperature-sensitive current limiting modules on the ODrive Pro,
<axis>.motor.fet_thermistor and
<axis>.motor.motor_thermistor.
The temperature reading (in Celsius) from each thermistor can be monitored from
<thermistor>.temperature.
When enabled, either module will start limiting the motor current when the
<thermistor>.config.temp_limit_lower threshold is exceeded.
Once
<thermistor>.config.temp_limit_upper is reached the ODrive will exit
CLOSED_LOOP_CONTROL
and return either
INVERTER_OVER_TEMP or
MOTOR_OVER_TEMP, depending on which module reached the upper limit.
Warning
The lower and upper thresholds for
<axis>.motor.fet_thermistor can be changed, but this is not recommended.
Connecting Motor Thermistors
Both the D5065 and D6374 motors come with a built in thermistor. For all other thermistors make sure they are an NTC type thermistor before use.
Connect the thermistor wires to THERMISTOR+ and THERMISTOR- (polarity does not matter)
Single Wire Thermistor
Make sure that the thermistor shares a common ground with the ODrive, we suggest connecting to J8 pin 4 or 11.
Set the thermistor coefficients.
Set both temp_limit_lower and temp_limit_upper according to your motor's datasheet.
Set
enabled = True.
Note
For users migrating from ODrive v3.*, no external circuitry is required to use a motor thermistor. The ODrive Pro has a built in 1k ohm voltage divider.
Thermistor Coefficients
Every thermistor is different, and thus it's necessary to let the ODrive know how to relate voltage and temperature. This is done by setting the polynomial coefficients poly_coefficient_[0-3].
The suggested method of setting these is to run
set_motor_thermistor_coeffs(axis, Rload=1000, R_25, Beta, Tmin=-10, Tmax=150, thermistor_bottom=True)
These parameters describe the thermistor characteristics and are defined as follows (an example call is shown after this list):
axis: Which axis to set the motor thermistor coefficients for (
odrv0.axis0 or
odrv0.axis1).
Rload: The Ohm value of the resistor used in the voltage divider circuit. (advanced users only)
R_25: The resistance of the thermistor when the temperature is 25 degrees Celsius.
Can be found in the datasheet of your thermistor.
Can also be measured manually with a multimeter.
Beta: A constant specific to your thermistor.
Can be found in the datasheet of your thermistor.
Tmin and
Tmax: The temperature range that is used to create the coefficients.
Make sure to set this range to be wider than what is expected during operation, for example -10 to 150. | http://docs.odriverobotics.com/v/latest/thermistors.html | 2022-09-24T20:16:05 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.odriverobotics.com |
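As an illustration, here is how those settings might be applied from odrivetool for a hypothetical NTC thermistor with R_25 = 10 kΩ and Beta = 3435 (take the real coefficient values, and your temperature limits, from the datasheets; only set_motor_thermistor_coeffs and the config fields described above are given by the documentation):

ax = odrv0.axis0

# Compute and store the polynomial coefficients for this thermistor.
set_motor_thermistor_coeffs(ax, Rload=1000, R_25=10000, Beta=3435,
                            Tmin=-10, Tmax=150, thermistor_bottom=True)

# Example limits; use the values from your motor's datasheet.
ax.motor.motor_thermistor.config.temp_limit_lower = 100
ax.motor.motor_thermistor.config.temp_limit_upper = 120
ax.motor.motor_thermistor.config.enabled = True

odrv0.save_configuration()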
Defines GridFunctions to be used in operators, boundary-conditions, interpolation and integration.
GridFunctions are expressions built up of some elementary terms and can be used to construct a GridFunctionOperator, can be interpolated to a DOFVector, and can be integrated over a GridView.
Thus, GridFunctions are an important ingredient to formulate the bilinear and linear forms and to postprocess the solutions.
Examples:
Remarks:
Expression is anything a GridFunction can be created from, sometimes also called PreGridFunction. It includes constants, functors callable with GlobalCoordinates, and any combination of GridFunctions.
Anything that needs a quadrature formula, e.g., makeOperator() and integrate(), needs to determine the (approximate) polynomial degree of the GridFunctions. If the GridFunction builds a polynomial expression, it can be deduced automatically, i.e. if it includes constants, DOFVectors, and the arithmetic operators operator+, operator-, or operator*.
If the polynomial order can not be deduced, the compiler gives an error. Then, these functions accept an additional argument, to provide either the polynomial degree of the expression, or a quadrature rule explicitly.
Examples:
auto op1 = makeOperator(B, 1.0 + pow<2>(prob.solution(_0)));
auto op2 = makeOperator(B, sin(X(0)), 4);
auto op3 = makeOperator(B, sin(X(0)), Dune::QuadratureRules(Dune::GeometryType::simplex, 4));
auto value1 = integrate(sin(X(0)), 4);
Applies Operation::AbsMax to GridFunctions.
Applies Operation::AbsMin to GridFunctions.
Applies Operation::Clamp to GridFunction.
Applies a distance-functor to two vector-valued GridFunctions.
Applies Operation::Dot to two vector-valued GridFunctions.
Applies Operation::Get<I> to GridFunction.
Applies Operation::Get_ to GridFunction.
Generates a Gridfunction representing the gradient of a GridFunction. See DerivativeGridFunction.
Examples:
gradientOf(prob.solution(_0))
gradientOf(X(0) + X(1) + prob.solution(_0))
Applies a infty_norm() functor to a vector-valued GridFunction.
Generator function for FunctorGridFunction.
Applies the functor
f to the grid-functions
gridFcts.... See FunctorGridFunction.
Examples:
invokeAtQP([](Dune::FieldVector<double, 2> const& x) { return two_norm(x); }, X());
invokeAtQP([](double u, auto const& x) { return u + x[0]; }, 1.0, X());
invokeAtQP(Operation::Plus{}, X(0), X(1));
References AMDiS::Concepts::Functor.
Generator for Gridfunctions from Expressions (PreGridfunctions)
Create an evaluable GridFunction from an expression that itself can not be evaluated. Therefore, it binds the GridFunction to a GridView.
Example:
In contrast to Expressions, GridFunctions can be evaluated and thus be interpolated to a DOFVector or integrated over a GridView.
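A minimal usage sketch (how the GridView is obtained may differ in your setup; the expression is taken from the examples above):

// Bind an expression to a grid view so that it becomes an evaluable GridFunction.
auto expr    = 1.0 + pow<2>(prob.solution(_0));   // an Expression / PreGridFunction
auto gridFct = makeGridFunction(expr, gridView);  // gridView, e.g. the leaf grid view of the grid

// gridFct can now be evaluated, interpolated to a DOFVector, or integrated over the GridView.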
Referenced by ProblemStat< Traits >::addDirichletBC(), GridFunctionMarker< Grid, GridFct >::GridFunctionMarker(), AMDiS::integrate(), DiscreteFunction< Coeff, GB, TreePath >::interpolate_noalias(), AMDiS::makeLocalOperator(), and GridFunctionMarker< Grid, GridFct >::markElement().
Applies Operation::Max to GridFunctions.
Applies Operation::Min to GridFunctions.
Applies a one_norm() functor to a vector-valued GridFunction.
Applies Operation::Multiplies to GridFunctions.
References AMDiS::Concepts::ConstantToGridFunction.
Applies Operation::Plus to GridFunctions.
Applies Operation::Negate to GridFunctions.
Applies Operation::Minus to GridFunctions.
Applies Operation::Divides to GridFunctions.
Applies Operation::Pow to GridFunction.
Applies Operation::Pow_ to GridFunction.
Applies Operation::Sqr to GridFunction.
Applies Operation::Trans to a matrix-valued GridFunction.
Applies Operation::TwoNorm to a vector-valued GridFunction.
Applies Operation::UnaryDot to a vector-valued GridFunction. | https://amdis-test.readthedocs.io/en/develop/api/group__GridFunctions.html | 2022-09-24T20:04:09 | CC-MAIN-2022-40 | 1664030333455.97 | [] | amdis-test.readthedocs.io |
Writing Periodic Scripts¶
Periodic Scripts are useful for running tasks that need to occur on a repeated basis. Possible uses include collecting data from external sources, running reports, or performing maintenance tasks. Check out the Periodic Scripts page for more detail on their uses and configuration in BMON. The intent of this page is to provide some guidance to Developers wishing to write a custom Periodic Script that can be run by BMON.
When writing a custom Periodic Script, it is helpful to look at an example
script. The
bmon/bmsapp/periodic_scripts/okofen.py is one such example. A somewhat more complicated example is
bmon/bmsapp/periodic_scripts/ecobee.py.
Basic Requirements of a Custom Periodic Script¶
There are a few requirements for a custom Periodic Script:
- The script must reside in a Python file located in the bmon/bmsapp/periodic_scripts/ directory.
- That Python file must have a run() function that, at a minimum, accepts arbitrary keyword arguments; i.e. it must have a **kwargs parameter.
- The return value from the run() function must be a Python dictionary, although it can be an empty dictionary if there is no need for a return value.
More details on these requirements are presented below.
Arguments Passed to the
run() Function¶
The Periodic Script resides in a Python file located in the
bmon/bmsapp/periodic_scripts/ directory. The name of the Python file,
excluding the “.py” extension is the File Name that is entered in the
Periodic Script configuration inputs. The
run() function in that
file is called at the periodic interval specified when the BMON System
Administrator configures the script.
Here is the minimum signature of the
run() function, which must allow for arbitrary keyword arguments:
def run(**kwargs): # Script code goes here
The
run() function can also include specific keyword parameters with default values, such as:
def run(account_num='', units='metric', **kwargs): # Script code goes here
When the
run() function is called, it is passed a number of keyword
arguments, and the arguments are generated from these sources:
The
Script Parameters in YAML form input from the Periodic Script configuration inputs. As an example, if the Script Parameters input is:
account_num: 1845236
include_occupancy: True
the
run() function will be called with these arguments:
run(account_num=1845236, include_occupancy=True)
The Results returned from the prior run of the Script. As discussed in more detail below, the Periodic Script returns a Python dictionary. Each one of the key/value pairs in that dictionary are converted to keyword arguments and passed to the next run of the script. Continuing the example above, if the Script returned the following Python dictionary:
{'last_record': 2389, 'last_run_ts': 143234423}
the next call to the
run() function of the Periodic Script will look like this:
run(account_num=1845236, include_occupancy=True, last_record=2389, last_run_ts=143234423)
This example shows the arguments combined from the two sources mentioned so far.
There is special treatment of return values that are in the
hidden key of the return dictionary. The purpose of the hidden key is discussed in more detail below, but the return values in that key are processed differently than other keys. The hidden key should contain another dictionary of key/value pairs, and those key/value pairs are extracted from the hidden value and passed to the run() function as separate arguments. Continuing the above example, if run() returns the following dictionary:
{'last_record': 2389, 'last_run_ts': 143234423, 'hidden': {'auth_key': 'x4ab72i'}}
the next call to the
run() function of the Periodic Script will look like this:
run(account_num=1845236, include_occupancy=True, last_record=2389, last_run_ts=143234423, auth_key='x4ab72i')
If the same keyword argument appears in more than one of the above sources, the highest priority is
Script Parameters in YAML form, then visible results from the prior run of the script, and finally hidden results from the prior run of the script.
The Return Value from the
run() Function¶
There are a few different purposes for the Python dictionary that is returned from the
run() function:
- As stated before, values in that dictionary are passed as arguments to the next call to the
run() function. This can be useful for tracking things like the time or ID of the last record extracted from a data source, so that future calls only extract newer data. (Note that storing the same sensor reading multiple times in BMON does not cause an error.)
- The values returned by run() are displayed in the Django Admin interface, so are useful for debugging script problems or displaying status messages. The values appear in the Script results in YAML form field on the form used to configure the Periodic Script. The exception to this are the values that appear in the special hidden key in the return dictionary; they are not displayed in the configuration form, but are passed to the next call to the run() function. This feature is useful for storing authorization keys that should not be readily viewed by the System Administrator. The feature is also useful if some of the return values from the script would be confusing or not useful if viewed in the System Admin interface.
- Sensor readings acquired by the Periodic Script can be returned in the special readings key in the return dictionary, and these readings will be automatically stored in the BMON sensor reading database (more detail later).
- A list of Script Parameter names can be returned in the special delete_params key, and these parameters will automatically be deleted from the Script Parameters in YAML form input on the Periodic Script configuration form. This can be useful for deleting authorization keys that are no longer valid or should be hidden from the System Administrator. An example use of the delete_params key in a return dictionary is: {'last_record': 2389, 'delete_params': ['access_token', 'refresh_token']}. After this dictionary is returned, the script parameters access_token and refresh_token will be deleted from the Script Parameters in YAML form input, if they exist there. Also, this delete_params key/value pair will not be passed to the next call of the run() function and will not be displayed in the Script Results field in the Admin interface.
A common use of a Periodic Script is to collect sensor readings from an external source. A
special feature has been built into the Periodic Script framework to allow for easy
storage of those collected readings. If the Script returns the sensor readings as a list
of 3-element-tuples, and that list is stored in the
readings key of the return dictionary,
the readings will automatically be stored in BMON’s sensor reading database. Here is an example
return dictionary that contains three sensor readings that will be stored by BMON:
{ 'readings': [(1479769950, '311015614158_temp', 70.1),
               (1479769950, '311015614158_heat_setpoint', 69.0),
               (1479769950, '311015614158_rh', 23)] }
Each reading is formatted in a 3-element tuple:
(Unix Timestamp of reading, Sensor ID, Reading Value)
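Putting these pieces together, a complete (if simplistic) periodic script might look like the sketch below. The file name, sensor ID pattern and the fake data source are placeholders, not part of BMON:

# bmon/bmsapp/periodic_scripts/my_script.py  (hypothetical file name)
import time
import random   # stands in for a real external data source


def run(site_id='', last_run_ts=0, **kwargs):
    """Collect one reading per call and return it for storage in BMON."""
    now = int(time.time())

    # Pretend we fetched a temperature value from an external API here.
    temperature = 68.0 + random.random() * 4.0

    readings = [
        (now, '%s_outdoor_temp' % site_id, temperature),  # (timestamp, Sensor ID, value)
    ]

    return {
        'readings': readings,   # stored automatically in the sensor reading database
        'last_run_ts': now,     # passed as a keyword argument to the next run() call
        'hidden': {},           # anything here is kept out of the Admin display
    }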
These reading values are not displayed in the Script Results field of the configuration screen, but the storage message returned by the BMON sensor reading database is displayed. Here is an example (see the Script Results screenshot, script_results.png): the reading_insert_message indicates that on the last run of this Periodic Script, 15 readings were collected and stored in the BMON sensor reading database.
The Script Results example above also shows some other data that is added to the Script Results for display to the System Admin. The time when the script ran last is shown, and the amount of time required to run the script is shown. Had an error been raised by the script, the traceback from that error would be shown here as well.
Note that the Periodic Script can collect sensor readings and have them stored in the BMON sensor reading database. However, those readings will not be displayed in charts and reports without configuring each Sensor ID in the Sensors table using the Admin interface. This process is described in the “Adding Sensors” section of the Add Buildings and Sensors document.

Source: https://bmon-documentation.readthedocs.io/en/latest/writing-periodic-scripts.html (retrieved 2022-09-24, crawl CC-MAIN-2022-40)
Use this page to run a workflow.
The "star" icon allows you to save this job information to your Home page.
The "bread crumbs" Project: Upgrade-End to End / Workflow: Upgrade-Linux provide links to a previous web page.
The name after the Run Workflow page title is the name of the workflow you intend to run.
Starting State—This is the name of the current starting state for this workflow. If this workflow has multiple starting states and this is not the one you want to use, return to the Run Workflow pop-up dialog and select a different starting state.
Parameters:
Any parameters previously specified for this starting start are displayed. If no parameters were defined, this area will be blank.
If values are supplied, these are the default values specified when the parameter was created. If necessary, you can "type-over" these values to change them before running this workflow.
You must enter a value for any blank value field labeled "Required".
Click OK to run the workflow when your selections are complete. | https://docs.cloudbees.com/docs/cloudbees-cd/10.0/automation-platform/help-runworkflow | 2022-09-24T19:30:14 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.cloudbees.com |
Cluster operations is a facility to perform maintenance operations on various items in operations center, such as client controllers and update centers. Different operations are applicable to different items, such as performing backups or restarts on client controllers, or upgrading or installing plugins in update centers. A safe restart, for example, waits for anything currently running to finish before restarting.
Figure 3. Ad-hoc cluster operation
Figure 4. Ad-hoc cluster operation: select Run ad-hoc
Figure 5. Ad-hoc cluster operation: Run now
Figure 6. Ad-hoc cluster operation: from a client controller

Source: https://docs.cloudbees.com/docs/cloudbees-ci/2.303.2.6/traditional-admin-guide/cluster-operations (retrieved 2022-09-24, crawl CC-MAIN-2022-40)
Bitbucket
The Bitbucket add-on allows you to manage your Platform.sh environments directly from your Bitbucket repository.
Supported:
- Create a new environment when creating a branch or opening a pull request on Bitbucket.
- Rebuild the environment when pushing new code to Bitbucket.
- Delete the environment when merging a pull request.
Install the add-on
On your Bitbucket account, click on your avatar, select
Manage Account, and simply install the Platform.sh add-on by selecting
Find new add-ons from the left menu. The Platform.sh add-on is under the Deployment category.
note We recommend you install the add-on at the team level (select
Manage Team instead) so that every repository that belongs to the team can use the add-on.
note If you have created your account using the Bitbucket OAuth login, then in order to use the Platform CLI you will need to set up a password, which you can do by visiting this page.
To connect your Bitbucket repository to Platform.sh, go to the repository page as an administrator on Bitbucket and click on the
Settings icon. Then click on
Platform.sh integration under
PLATFORM.SH.
You can then Create a new project or even connect to an existing project on Platform.sh if you are the owner of it.
The add-on needs access to some information on your repository. Click on
Grant access. Choose the region where you want your Platform.sh project to be hosted and click
Create free project.
That's it! The bot will build your Platform.sh project and connect it to your Bitbucket repository.
You can already start pushing code (branch, pull request, ...) to your Bitbucket repository and see those changes automatically deployed on Platform.sh.
Types of environments
Environments based on Bitbucket pull requests will have the correct 'parent' environment on Platform.sh and will be activated automatically with a copy of the parent's data.
However, environments based on (non-pull-request) branches cannot have parents and will inherit directly from
master and start inactive by default. | https://docs.platform.sh/administration/integrations/bitbucket.html | 2017-08-16T19:42:30 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.platform.sh |
You can get the address of a label defined in the current function (or a containing function) with the unary operator ‘
&&’. The value has type void *. This value is a constant and can be used wherever a constant of that type is valid. To use these values, you need to be able to jump to one. This is done with the computed goto statement[1], goto *exp;.
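For example (standard usage of this GCC extension):

void *ptr;
/* ... */
ptr = &&foo;

/* Later, jump to the stored label address with a computed goto. */
goto *ptr;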
[1] The analogous feature in Fortran is called an assigned goto, but that name seems inappropriate in C, where one can do more than simply store label addresses in label variables.
© Free Software Foundation
Licensed under the GNU Free Documentation License, Version 1.3. | http://docs.w3cub.com/gcc~4/labels-as-values/ | 2017-08-16T19:20:08 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.w3cub.com |
The documentation lives in a folder within the main PresideCMS source repository.
You'll find information on ways in which you can contribute in the Content and Build sections.
Technology
Lucee
The documentation build is achieved using Lucee code. The only dependency required to build and locally run the documentation is CommandBox.
Markdown. | https://docs.presidecms.com/docs.html | 2017-08-16T19:23:02 | CC-MAIN-2017-34 | 1502886102393.60 | [] | docs.presidecms.com |
Sharing Your Account
Your Magento account contains information that can be useful to trusted employees and service providers who help to manage your site. As the primary account holder, you have authority to grant limited access to your account to other Magento account holders. When your account is shared, all sensitive information—such as your billing history or credit card information—remains protected. It is not shared at any time with other users.
All actions taken by users with shared access to your account are your sole responsibility. Magento Inc. is not responsible for any actions taken by users to whom you grant shared account access.
- Account ID: Enter the Acct ID of the new user's Magento account.
- Email: Enter the Email address that is associated with the new user's Magento account.
- Your Email
- Your Phone
You are notified when the new role is saved, and the new user record appears in the Manage Permissions section of the Shared Access page. Magento also sends an email invitation with instructions for accessing the shared account to the new user.
Your account dashboard has a new Switch Accounts control in the upper-right corner, with options for “My Account” and the name of the shared account.
The shared account displays a welcome message and contact information. The left panel includes only the items that you have permission to use.

Source: http://docs.magento.com/m2/ce/user_guide/magento/magento-account-share.html (retrieved 2017-08-16, crawl CC-MAIN-2017-34)
- Plexus »
- Render Objects »
Points Render Object
Renders Points and Sprites from vertices. You can choose between different Sprite Sampling modes.
Points Size: Sets the size of the points being rendered.
Get Scale From Vertices: Scales the sizes of the points by the amount of scale of each vertex.
X Offset: Offsets each point by this distance in the X direction.
Y Offset: Offsets each point by this distance in the Y direction.
Z Offset: Offsets each point by this distance in the Z direction.
Get Color From Vertices: Gets the color from the vertices. If unchecked, you can set the color of the points.
Get Opacity From Vertices: Gets the opacity from the vertices. If unchecked, you can set the opacity of the points.
Textured Sprite: A Custom Sprite is rendered instead of a circular point in the plexus. If no layer is selected, the default particle is drawn.
Sprite Controls:
Time Sampling: Sample the sprites at the current time of the composition or at “Random Still” or “Random Loop” modes.
Random Still: Random Still mode samples the sprite layer at random times, and those times don't change throughout the length of the composition.
Random Loop: Random Loop mode samples the sprite layer at random times, and those times progress linearly throughout the length of the composition. If the sprite layer ends prematurely, the sprite sampling times loop from their respective 'starting' random time.
Random Seed: Set the seed for the sprite sampling randomization.
Max No. of Samples: Max samples restricts the number of 'times' the sprite layer is sampled in a given frame. For example, if you have a structure with a vertex count of 100,000 and are using sprites, sampling the layer 100k times will be a performance hog. So it only samples the layer 'Max No. of Samples' times and randomly assigns those samples to all the vertices.
Points Perspective Aware: Makes the points perspective aware. If selected, the size of the points varies depending on the perspective of the camera, i.e. near points appear bigger and farther points appear smaller.
Draw Only Connected Points: Only Points that are connected to alteast one other different point are rendered. If they are not connected to any other point, they are not rendered.
Effect Only Group: Only vertices that belong to this Group will be rendered. If “All Groups” is selected, all vertices in the Plexus are rendered.

Source: http://docs.rowbyte.com/plexus/render_objects/points_render_object/ (retrieved 2017-08-16, crawl CC-MAIN-2017-30)
Following this tutorial will create a demo installation of Mender, appropriate for testing and experimenting. When you are ready to install for production, please follow the Production installation documentation.
Mender consists of a server and updater client. The Mender server is using the microservices design pattern, meaning that multiple small, isolated services make up the server. The Mender updater client is designed to run on embedded Linux devices and connects to the server so that deployments can be managed across many devices.
In order to make it easy to test Mender as a whole, we have created a docker compose environment that brings all of these components up and connects them together. It even includes a service that runs a virtual device using Quick Emulator (QEMU), which is handy because it means that you can test the client without having to configure any hardware.
We assume you are using Ubuntu 16.04 with Google Chrome as web browser and at least 5 GB disk and 2 GB RAM available.
Follow the documentation to install Docker Engine, version 1.11 or later (17.03 or later in Docker's new versioning scheme).
Follow the documentation to install Docker Compose, version 1.6 or later.
While bringing up the environment, several hundred megabytes of Docker images will be downloaded.
In a working directory, clone the Mender integration environment:
curl -L | tar xz
cd integration-1.2.0
You should see a file
docker-compose.yml inside it, which defines the
Mender test environment.
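Bring the environment up from inside the integration directory using the up helper script (the same script referenced in the restart instructions further below):

./up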
This terminal will be locked while Mender is running as it will output logs from all the services.
As the Mender services start up, you will see a lot of log messages from them in your terminal. This includes output from the Mender virtual QEMU device, similar to the following:
...
mender-client_1 | Hit any key to stop autoboot: 0
mender-client_1 | 3485592 bytes read in 579 ms (5.7 MiB/s)
mender-client_1 | 14249 bytes read in 169 ms (82 KiB/s)
mender-client_1 | Kernel image @ 0x70000000 [ 0x000000 - 0x352f98 ]
mender-client_1 | ## Flattened Device Tree blob at 6fc00000
mender-client_1 | Booting using the fdt blob at 0x6fc00000
mender-client_1 | Loading Device Tree to 7fed9000, end 7fedf7a8 ... OK
mender-client_1 |
mender-client_1 | Starting kernel ...
mender-client_1 |
mender-client_1 | Booting Linux on physical CPU 0x0
mender-client_1 | Initializing cgroup subsys cpuset
...
mender-client_1 | Poky (Yocto Project Reference Distro) 2.2.1 vexpress-qemu ttyAMA0
After a few minutes, the logs will stop coming except for some periodic log messages from the Mender authentication service similar to the following:
mender-api-gateway_1 | 172.18.0.4 - - [07/Oct/2016:03:59:50 +0000] "POST /api/devices/1.0/authentication/auth_requests HTTP/2.0" 401 150 "-" "Go-http-client/2.0" "-"
mender-device-auth_1 | time="2016-10-07T03:59:55Z" level=error msg="unauthorized: dev auth: unauthorized" file="api_devauth.go" func="main.(DevAuthHandler).SubmitAuthRequestHandler" http_code=401 line=142 request_id=df3bc374-060b-4b15-af89-76c85975ab25
mender-device-auth_1 | time="2016-10-07T03:59:55Z" level=info msg="401 4438μs POST /api/1.0/auth_requests HTTP/1.0 - Go-http-client/2.0" file=middleware.go func="accesslog.(AccessLogMiddleware).MiddlewareFunc.func1" line=58 request_id=df3bc374-060b-4b15-af89-76c85975ab25
These messages show that the Mender client running inside the virtual QEMU device is asking to be authorized to join the server. We will come back to this shortly.
First, create a user you can log in to the Mender server with, by running:
docker-compose exec mender-useradm /usr/bin/useradm create-user [email protected] --password=mysecretpassword
Your email and password are currently only used to log in to the Mender server. You will not receive any email from Mender. However, this might change in future versions so we recommend to input your real email address.
Congratulations! You have the Mender server and a virtual Mender client successfully running! Please proceed to Deploy to virtual devices.
You can find some steps for maintaining your test environment below.
When you are done testing Mender, simply press Ctrl-C in the terminal you started Mender in, where the log output is shown. Stopping all the services may take about a minute.
Mender can be started again with the same steps as above.
You will lose all state data in your Mender environment by running the commands below, which includes devices you have authorized, software uploaded, logs, deployment reports and any other changes you have made.
If you want to remove all state in your Mender environment and start clean,
run the following commands in the
integration directory:
./stop
./reset
./up
If you just lost the login credentials, you can run the
reset-user script. | https://docs.mender.io/1.2/getting-started/create-a-test-environment | 2017-09-19T16:51:49 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.mender.io |
An Act to amend 71.07 (9e) (af) (intro.) and 71.07 (9e) (aj) (intro.) of the statutes; Relating to: repealing the changes made to the earned income tax credit in 2011 Wisconsin Act 32. (FE)
Bill Text (PDF)
Fiscal Estimates
AB233 ROCP for Committee on Ways and Means (PDF)
Wisconsin Ethics Commission information | https://docs.legis.wisconsin.gov/2013/proposals/ab233 | 2017-09-19T17:14:40 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.legis.wisconsin.gov |
Leaf
In a MemSQL cluster, a
leaf node functions as a storage and compute node. Leaves are responsible for storing slices of data in the MemSQL cluster.
To optimize performance, the MemSQL system automatically distributes data across leaf nodes into partitions. Each leaf is a MemSQL server consisting of several partitions. Each partition is just a database on that server.
Leaf States
Each leaf is in one of four states:
The following diagram summarizes the leaf states and the transitions between them:
Leaf Commands
Starting a Leaf
A leaf node is started as a MemSQL node without additional parameters. Once started, you can run ADD LEAF on the Master Aggregator to add the node (as a leaf) to the cluster.
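For example (host, port and credentials are placeholders; check the ADD LEAF reference for the exact syntax of your MemSQL version):

-- Run on the Master Aggregator once the new MemSQL node is up.
ADD LEAF root:'s3cret'@'192.168.1.110':3306;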
Source: https://docs.memsql.com/concepts/v5.8/leaf/ (retrieved 2017-09-19, crawl CC-MAIN-2017-39)
Byte order signatures are known integer values that are available in the foreign system's byte order. By examining these on a byte by byte basis, the byte ordering of the remote system can be determined. While typically systems have a consistent ordering of bytes in words and words in longwords, the library does not assume that.
A system that is supplying data to be converted with this library must provide a 16 bit and a 32 bit signature. These are the values 0x0102 and 0x01020304 respectively, written in that system's native byte ordering.
The function
makecvtblock takes those signatures
and delivers a DaqConversion block. The
DaqConversion block is a data structure that provides
conversion tables for foreign to host and host to foreign conversions.
Suppose you have data in a structure that contains these signatures
as fields named
s_ssig and
s_lsig. The example below shows how to
create a conversion block:
Example 48-4. Creating a DaqConversion
#include <cvt.h>
...
DaqConversion conversionBlock;
makecvtblock(data.s_lsig, data.s_ssig, &conversionBlock);
...
The DaqConversion block
conversionBlock
can be used in subsequent calls to do byte order conversions. | http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.0/x9412.html | 2017-07-20T20:24:56 | CC-MAIN-2017-30 | 1500549423486.26 | [] | docs.nscl.msu.edu |
Graph Analytics¶
Overview¶
It is common to find problems with graphs that have not been constructed fully noded, or in graphs with z-levels at intersections that have been entered incorrectly. Another problem is one way streets that have been entered in the wrong direction. We cannot detect errors with respect to “ground” truth, but we can look for inconsistencies and some anomalies in a graph and report them for additional inspection.
We do not currently have any visualization tools for these problems, but I have used mapserver to render the graph and highlight potential problem areas. Someone familiar with graphviz might contribute tools for generating images with that.
Analyze a Graph¶
With pgr_analyzeGraph the graph can be checked for errors. For example for table “mytab” that has “mytab_vertices_pgr” as the vertices table:
SELECT pgr_analyzeGraph('mytab', 0.000002);
NOTICE: Performing checks, pelase wait...
NOTICE: Analyzing for dead ends. Please wait...
NOTICE: Analyzing for gaps. Please wait...
NOTICE: Analyzing for isolated edges. Please wait...
NOTICE: Analyzing for ring geometries. Please wait...
NOTICE: Analyzing for intersections. Please wait...
NOTICE: ANALYSIS RESULTS FOR SELECTED EDGES:
NOTICE: Isolated segments: 158
NOTICE: Dead ends: 20028
NOTICE: Potential gaps found near dead ends: 527
NOTICE: Intersections detected: 2560
NOTICE: Ring geometries: 0
pgr_analyzeGraph
----------
OK
(1 row)
In the vertices table “mytab_vertices_pgr”:
- Deadends are identified by cnt=1
- Potential gap problems are identified with chk=1.
SELECT count(*) as deadends FROM mytab_vertices_pgr WHERE cnt = 1;
 deadends
----------
    20028
(1 row)

SELECT count(*) as gaps FROM mytab_vertices_pgr WHERE chk = 1;
 gaps
-----
  527
(1 row)
For isolated road segments, for example a segment where both ends are deadends, you can find these with the following query:
SELECT * FROM mytab a, mytab_vertices_pgr b, mytab_vertices_pgr c WHERE a.source=b.id AND b.cnt=1 AND a.target=c.id AND c.cnt=1;
If you want to visualize these on a graphic image, then you can use something like mapserver to render the edges and the vertices and style based on cnt or if they are isolated, etc. You can also do this with a tool like graphviz, or geoserver or other similar tools.
Analyze One Way Streets¶
pgr_analyzeOneway analyzes one way streets in a graph and identifies any flipped segments. Basically if you count the edges coming into a node and the edges exiting a node the number has to be greater than one.
This query will add two columns to the vertices_tmp table ein int and eout int and populate it with the appropriate counts. After running this on a graph you can identify nodes with potential problems with the following query.
The rules are defined as an array of text strings that, if they match the col value, would be counted as true for the source or target in or out condition.
Example¶
Let's assume we have a table “st” of edges and a column “one_way” that might have values like:
- ‘FT’ - oneway from the source to the target node.
- ‘TF’ - oneway from the target to the source node.
- ‘B’ - two way street.
- ‘’ - empty field, assume twoway.
- <NULL> - NULL field, use two_way_if_null flag.
Then we could form the following query to analyze the oneway streets for errors.
SELECT pgr_analyzeOneway('mytab',
    ARRAY['', 'B', 'TF'],
    ARRAY['', 'B', 'FT'],
    ARRAY['', 'B', 'FT'],
    ARRAY['', 'B', 'TF']
);

-- now we can see the problem nodes
SELECT * FROM mytab_vertices_pgr WHERE ein=0 OR eout=0;

-- and the problem edges connected to those nodes
SELECT gid FROM mytab a, mytab_vertices_pgr b WHERE a.source=b.id AND ein=0 OR eout=0
UNION
SELECT gid FROM mytab a, mytab_vertices_pgr b WHERE a.target=b.id AND ein=0 OR eout=0;
Typically these problems are generated by a break in the network, the one way direction set wrong, maybe an error related to z-levels or a network that is not properly noded.
The above tools do not detect all network issues, but they will identify some common problems. There are other problems that are hard to detect because they are more global in nature like multiple disconnected networks. Think of an island with a road network that is not connected to the mainland network because the bridge or ferry routes are missing. | http://docs.pgrouting.org/2.3/en/doc/src/tutorial/analytics.html | 2017-07-20T16:27:02 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.pgrouting.org |
Medical Devices Industry and Risk Management 2017
Start Date : April 27, 2017
End Date : April 28, 2017
Time : 9:00 am to6:00 pm
Phone : 800-447-9407
Location :
Hotel: Courtyard Seattle Sea-Tac Area 16038 West Valley Highway Tukwila Washington 98188 USA Phone: (425) 255-0300
Description ‘Safety Case’ or ‘Assurance:
- Senior Quality Managers
- Quality Professionals
- Regulatory Professionals
- Compliance Professionals
- Project Managers
- Design Engineers
- Software Engineers
- Process Owners
- Quality Engineers
- Quality Auditors
- Medical Affairs
- Legal Professionals
Agenda:
- Tips and tricks:
Software Risk Management (IEC62304 / FDA software reviewers’ guidance):
- Critical Software Issues
- Software Hazard Mitigation Strategies
- Software Item, Unit and System Definition
- Software Failures as Hazard Sources
- Software Requirements and Design Specification
- Software Tools and Development Environment
-
- Q&A
Lecture 2:: Emergency Medicine, Headache / Migraine, Health & Nutrition, Pain Management, and Physical Medicine. | http://meetings4docs.com/event/medical-devices-industry-and-risk-management-2017/ | 2017-07-20T16:33:07 | CC-MAIN-2017-30 | 1500549423269.5 | [] | meetings4docs.com |
SDV generates several different types of output and results files. This section describes the files and how to use them.
This section includes:
When evaluating the results of a SDV verification, you must examine all of the output carefully and investigate any errors.
Send comments about this topic to Microsoft | https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/static-driver-verifier-output-files | 2017-07-20T17:11:37 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.microsoft.com |
This section provides guidelines for improving the efficiency of the authorization process.
For each new connection request from the client, the system authenticates the connection, and instantiates the callback for authorization of data requests and data updates coming in on that connection. Program to cache as much information as you can in the AccessControl.init method phase for quick authorization of each operation on the connection. Then you can use the cached information in AccessControl.authorizeOperation, which is called for every client operation. The efficiency of the authorizeOperation method directly affects the overall throughput of the GemFire cache.
Authorization in the post-operation phase occurs after the operation is complete and before the results are sent to the client. If the operations are not using FunctionService, the callback can modify the results of certain operations, such as query, get and keySet. For example, a post-operation callback for a query operation can filter out sensitive data or data that the client should not receive. For all operations, the callback can completely disallow the operation. However, if the operations are using FunctionService, the callback cannot modify the results of the operations, but can only completely allow or disallow the operation.
With querying, regions used in the query are obtained in the initial parsing phase. The region list is then passed to the post-operation callback unparsed. In addition, this callback is invoked for updates that are sent by the server to the client on the notification channel. This includes updates from a continuous query registered on the server by the client. The operation proceeds if it is allowed by the callback; otherwise a NotAuthorizedException is sent back to the client and the client throws the exception back to the caller.
For more advanced requirements like per-object authorization, you could modify the cache value in a put operation by the callback in the pre-operation phase to add an authorization token. This token would be propagated through the cache to all cache servers. The token can then be used for fast authorization during region get and query operations, and it can be removed from the object by changing the operation result. This makes the entire process completely transparent to the clients. | http://gemfire702.docs.pivotal.io/userguide/managing/security/authorization_whats_next.html | 2017-07-20T16:32:37 | CC-MAIN-2017-30 | 1500549423269.5 | [] | gemfire702.docs.pivotal.io |
Client Server Server Native Client ODBC driver to connect to an instance of SQL Server.
DB-Library clients
These applications include the SQL Server isql command prompt utility and clients written to DB-Library. SQL Server support for client applications using DB-Library is limited to Microsoft SQL Server 7.0 features.
Note
Although the SQL Server does not include the DB-Library DLL required to run these applications. To run DB-Library or Embedded SQL applications you must have available the DB-Library DLL from SQL Server version 6.5, SQL Server 7.0, or SQL Server 2000. Server.
Setup
Run SQL Server setup to install the network components on a client computer. Individual network libraries can be enabled or disabled during setup when Setup is started from the command prompt.
ODBC Data Source Administrator
The ODBC Data Source Administrator lets you create and modify ODBC data sources on computers running the Microsoft Windows operating system.
In This Section
Configure Client Protocols
Create or Delete a Server Alias for Use by a Client (SQL Server Configuration Manager)
Open the ODBC Data Source Administrator
Check the ODBC SQL Server Driver Version (Windows)
Related Content
Server Network Configuration
Manage the Database Engine Services | https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/client-network-configuration | 2017-07-20T17:36:13 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.microsoft.com |
Troubleshooting problems between CGM and the rig¶
First, know how you get data from BG to your rig¶
There are a few ways to get your BG data to your rig:
- Medtronic CGM users: you just upload your BG data with the other pump information to the rig.
- Dexcom CGM users:
- G4/G5 -> Share Servers -> Nightscout -> rig
- G4/G5 -> plug the receiver in to the rig with a second power source
- xdrip+ -> Nightscout -> rig
- xdrip+ -> xdripAPS -> rig
Depending on how you’re getting BG to the rig, you’ll need to do some basic troubleshooting.
Second, troubleshoot the specific components of that setup¶
Medtronic CGM users¶
- If you haven’t been uploading CGM data for a while, or looping for a while, you may need to run
openaps first-uploadto get Nightscout to show CGM readings again.
If you’re using Nightscout:¶
- Make sure your BGs are getting TO Nightscout. If you’re using something to upload, check the uploader. If you’re using the Share bridge to Nightscout, the #1 reason BGs don’t get to Nightscout is because of Share. Make sure a) that you are getting BGs from the receiver/transmitter to the Share app; then b) that the Share app is open (i.e. re-open the app after your phone is restarted); then c) make sure the Dexcom follow app is getting data. Checking all of those usually resolves data to Nightscout.
- To get data FROM Nightscout, the most common problem is if your rig is offline. If your rig is not connected to the internet, it can’t pull BGs from Nightscout. Troubleshoot your internet connectivity (i.e. ping google.com and do what you need to do to get the rig online). After that, also make sure your NS URL and API secret are correct. If they’re not, re-run the setup script with those things corrected.
If you’re using xdrip+ or xdripAPS¶
- For Xdrip+ users If you have no data in Nightscout - first check your uploader phone for data in xdrip+ - If you uploader phone has data then there is a likley problem getting data from the uploader phone to Night Scout - check wifi and/or cellular connectivity of the phone/device similarily to the section above outlining getting BGs to Nightscout. Also, make sure your Xdripbridge-wixel has a charge - you should see a flashing red light on the wixel if it is searching to connect to the uploader device.
- If the Xdrip+ app on your uploader shows stale data (greater than 5 minutes since your last data point), go to ‘System Status’ to see the status of the connection between your xbridge-wixel and your uploader phone. If you show ‘connected’, but you do not have data, you may wish to use the ‘Restart Collector’ button and see if your data restarts. Be mindful that your CGM data is broadcast in 5 minute intervals - so you will see data appear on the ‘5’s‘ if reconnect works.
- It is possible that ‘Restart Collector’ button will not work - in this case you will need to ‘Forget Device’ to reset the connection between the phone and your Xbridge-wixel setup. Once forgetting the connection is done, you will need to go into the menu and select ‘Bluetooth Scan’ - you can now SCAN and select your Xbridge-wixel device. In some cases you will need a complete power-off of your wixel to successfully reset your system - this may require you to unplug your battery if you have not installed a power switch on your Xbridge-wixel device. If you wish to do a hard reboot of your system, turn off/unplug your wixel. Turn back on or replug, then rescan via ‘Bluetooth Scan’, select your Xbridge-wixel in blutooth selection window. Once selected, your wixel name will disappear from the bluetooth scan options. You may wish to do a double check of your system status to ensure you have a connection to your wixel device.
- Infrequently, in addition to the above, you may find your uploader phone needs a complete poweroff and restart as well to get you back up and running.
- Finally, increased frequency in difficulties with no data may indicate a troubled wire in your Xbridge-wixel - carefully double check all your soldered joints and ensure they continue to be good. | http://openaps.readthedocs.io/en/latest/docs/Troubleshooting/CGM-rig-communications-troubleshooting.html | 2017-07-20T16:25:56 | CC-MAIN-2017-30 | 1500549423269.5 | [] | openaps.readthedocs.io |
Setting Up Your Raspberry Pi¶
WARNING - THE RASPBERRY PI IS A DEPRECATED (NOT-RECOMMENDED) SETUP OPTION. We suggest you look to the top of the docs for information on the currently recommended hardware setup instead. (July 2017)¶
Note 1: This page talks about setting up the Raspberry Pi with a Carelink USB stick. If you chose the TI stick for your first setup, you’ll need to utilize directions in the mmeowlink wiki for flashing your TI stick, then return here to continue on with the OpenAPS setup process.
Note 2: Setting up a Raspberry Pi is not specific to OpenAPS. Therefore, it’s very easy to Google and find other setup guides and tutorials to help with this process. This is also a good way to get comfortable with using Google if you’re unfamiliar with some of the command line tools. Trust us - even if you’re an experienced programmer, you’ll be doing this throughout the setup process.
Note 3: Since bluetooth was included on the Raspberry Pi 3, changes were made to the UART configuration that require additional steps. Detailed RPi3-specific OpenAPS setup instructions can be found here.
In order to use the RPi2 with openaps development tools, the RPi2 must have an operating system installed and be set up in a very specific way. There are two paths to the initial operating system installation and WiFI setup. Path 1 is recommended for beginners that are very new to using command prompts or “terminal” on the Mac. Path 2 is considered the most convenient approach for those with more experience with coding and allows the RPi2 to be set up without the use of cables, which is also known as a headless install. Either path will work and the path you choose is a matter of personal preference. Either way, it is recommended that you purchase your RPi2 as a CanaKit, which includes everything you will need for a GUI install.
For the Path 1 GUI install you will need:
- A Raspberry Pi 2 CanaKit or similar, which includes several essential accessories in one package
- USB Keyboard
- USB Mouse
- A TV or other screen with HDMI input
For the Path 2 Headless install, you will need:
- Raspberry Pi 2
- 8 GB micro SD Card [and optional adapter so that you can plug in the micro SD Card into your computer]
- Low Profile USB WiFi Adapter
- 2.1 Amp USB Power Supply
- Micro USB cable
- Raspberry Pi 2 CanaKit
- Console cable, Ethernet cable, or Windows/Linux PC that can write ext4 filesystems
Download and Install Raspbian Jessie¶
Note: If you ordered the recommended CanaKit, your SD card will already come imaged. However, if you don’t already know whether it’s Raspbian 8 Jessie or newer (see below), just treat it as a blank SD card and download and install the latest version of Raspbian (currently version 8.0, codename Jessie).
Download Raspbian¶
Raspbian is the recommended operating system for OpenAPS.
If you don’t plan on running a graphical user interface on your Raspberry Pi, you can download the ‘lite’ version of Raspbian here; the image is much smaller and will download and write to your SD card more quickly.
If you require a full graphical user interface on your Raspberry Pi, download the latest version of Raspbian here.
Make sure to extract the disk .img from the ZIP file. If you downloaded the full GUI version above, note that the large size of the Raspbian Jessie image means its .zip file uses a different format internally, and the built-in unzipping tools in some versions of Windows and MacOS cannot handle it. The file can be successfully unzipped with 7-Zip on Windows and The Unarchiver on Mac (both are free). You can also unzip it from the command line on a Mac, by opening the Terminal application, navigating to the directory where you download the ZIP file, and typing
unzip <filename.zip>.
Write Raspbian to the Micro SD Card¶
Write the Raspbian .img you extracted from the ZIP file above to the SD card using the Installing OS Images instructions
If necessary, you can erase (format) your SD card using
Detailed Windows Instructions¶
- First, format your card to take advantage of the full size it offers
- If you got your through CanaKit, when you put it in your PC it will look like it is 1GB in size despite saying it is 8GB
- Download and install:
- Run SDFormatter
- Make sure your Micro SD Card is out of your Raspberry PI (shut it down first) and attached to your computer
- Choose the drive where your card is and hit “Options”
- Format Type: Change to Full (Erase)
- This will erase your old Raspbian OS and make sure you are using the full SD card’s available memory
- Format the card
- Download Raspbian 8 / Jessie
-
- Extract the IMG file
- Follow the instruction here to write the IMG to your SD card
-
- After writing to the SD card, safely remove it from your computer and put it back into your RPi2 and power it up
Connect and configure WiFi¶
- Insert the included USB WiFi into the RPi2.
- Next, insert the Micro SD Card into the RPi2.
Path 1: Keyboard, Mouse, and HDMI monitor/TV¶
- First, insert your USB keyboard and USB mouse into the RPi2.
- Next, connect your RPi2 to a monitor or T.V. using the included HDMI cable.
- Finally connect your RPi2 using the power adapter.
- You should see the GUI appear on screen.
- As of 12/11/2016 the Raspberry Pi Foundation is disabling SSH by default in Raspbian as a security precaution. To enable SSH from within the GUI, open up the terminal window and type
sudo raspi-config. On the configuartion menu that opens, scroll down and choose
Interfacing Optionsand then navigate to
ssh, press
Enterand select
Enablessh server.
- Configure WiFi per the instruction pamphlet included with your CanaKit. For those not using the CanaKit, click the computer monitors next to the volume control in the upper-right side and there will be a drop-down menu of available WiFi networks. You should see your home network. If you have trouble connecting to the RPi2 via WiFi, check your router settings. The router may need to be switched from WEP to WPA2.
- Once you have installed Raspbian, connected to WiFI, and enabled SSH you can disconnect the mouse, keyboard and HDMI cable.
Remember to keep your RPi2 plugged in, just disconnect the peripherals. Also remember to never disconnect your RPi2 without shutting it down properly using the
sudo shutdown -h now command. If you are unable to access the Pi and must power it off without a shutdown, wait until the green light has stopped flashing (indicating the Pi is no longer writing to the SD card).
You can now skip to Test SSH Access and SSH into your RPi2.
Path 2: Console or Ethernet cable¶
- Get and connect a console cable (use this guide),
- Temporarily connect RPi to a router with an Ethernet cable and SSH in (see below), or
- Connect the RPi directly to your computer with an Ethernet cable (using this guide) and SSH in (see below)
- As of 12/11/2016 the Raspberry Pi Foundation is disabling SSH by default in Raspbian as a security precaution. To enable SSH, create a file called ssh and save it to the boot directory of the mounted drive. The file can be blank, and it has no extensions. This will tell your Pi to enable SSH.
Configure WiFi Settings¶
Once you connect to the Pi, you’ll want to set up your wifi network(s). It is recommended to add both your home wifi network and your phone’s hotspot network if you want to use OpenAPS on the go.
To configure wifi:
Type
sudo bash and hit enter
Input
wpa_passphrase "<my_SSID_hotspot>" "<my_hotspot_password>" >> /etc/wpa_supplicant/wpa_supplicant.conf and hit enter (where
<my_SSID_hotspot> is the name of your phone’s hotspot and
<my_hotspot_password> is the password).
(It should look like:
wpa_passphrase "OpenAPS hotspot" "123loveOpenAPS4ever" >> /etc/wpa_supplicant/wpa_supplicant.conf)
Input your home wifi next:
wpa_passphrase "<my_SSID_home>" "<my_home_network_password>" >> /etc/wpa_supplicant/wpa_supplicant.conf (and hit enter)
You will also want to edit
/etc/network/interfaces to change the following line from
iface wlan0 inet manual to
iface wlan0 inet dhcp
To accomplish this input
sudo nano /etc/network/interfaces and change
manual to
dhcp on the line that has
iface wlan0 inet
The
dhcp tells the ifup process to configure the interface to expect some type of dhcp server on the other end, and use that to configure the IP/Netmask, Gateway, and DNS addresses on your Pi. The
manual indicates to the ifup process that that interface is not to be configured at all. For further reading on the
interfaces and
wpa_supplicant.conf files, type
man 5 interfaces or
man 5 wpa_supplicant when logged into your Pi.
If you are not familiar with nano (the text editor) you may want to check out this tutorial
You can now skip to Test SSH Access and SSH into your RPi2.
Path 3: Headless WiFi configuration (Windows/Linux only)¶
Keep the SD card in the reader in your computer. In this step, the WiFi interface is going to be configured in Raspbian, so that we can SSH in to the RPi2 and access the device remotely, such as on a computer or a mobile device via an SSH client, via the WiFi connection that we configure. Go to the directory where your SD card is with all of the files for running Raspbian on your RPi2, and open this file in a text editor.
/path/to/sd/card/etc/wpa_supplicant/wpa_supplicant.conf
In this file you will list your known WiFi networks so your Pi can connect automatically when roaming (e.g., between your home WiFi and your mobile hotspot).
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev update_config=1 network={ ssid="YOURMOBILESSID" psk="YOURMOBILEPASS" } network={ ssid="YOURHOMESSID" psk="YOURHOMEPASS" }
You can add as many network as you need, the next reboot your system will connect to the first available network listed in your config files. Once the network to which your board is connected becomes unavailable, it start looking for any other known network in the area, and it connects to it if available.
If you want to connect to a router which doesn’t broadcast an SSID, add a line with
scan_ssid=1 after the
ssid and
psk lines for that network. (More info and examples for the options you can specify for each network are here.)
Boot your Pi. (Put the SD card into the RPi2. Plug in the compatible USB WiFi adapter into a RPi2 USB port. Get a micro USB cable and plug the micro USB end into the side of the RPi2 and plug the USB side into the USB power supply.)
If you are unable to access this file on your computer:
- Connect your Pi to your computer with an Ethernet cable and boot your Pi
- Log in using PuTTY. The Host Name is
raspberrypi.localand the Port is 22. The login is
piand the password is
raspberry.
- Type
sudo nano /etc/wpa_supplicant/wpa_supplicant.confand edit the file as described above.
Test SSH Access¶
Windows¶
Make sure that the computer is connected to the same WiFi router that the RPi2 is using. Download PuTTY here. Hostname is
[email protected] and default password for the user
pi is
raspberry. The port should be set to 22 (by default), and the connection type should be set to SSH. Click
Mac OS X / Linux¶
Make sure that the computer is connected to the same WiFi router that the RPi2 is using.
Open Terminal and enter this command:
ssh [email protected]
Default password for the user
pi is
raspberry
iOS¶
Make sure that the iOS device is connected to the same WiFi network that the RPi2 is using. Download Serverauditor or Prompt 2 (use this if you have a visual impairment). Hostname is
[email protected] and the default password for the user
pi is
raspberry. The port should be set to 22 (by default), and the connection type should be set to SSH.
You probably also want to make your phone a hotspot and configure the WiFi connection (as above) to use the hotspot.
Android¶
Make sure that the Android device is connected to the same WiFi network that the RPi2 is using. Download an SSH client in the Google Play store. Hostname is
[email protected] and the default password for the user
pi is
raspberry. The port should be set to 22 (by default), and the connection type should be set to SSH. You may need to ssh using the ip address instead; the app “Fing - Network Tools” will tell you what the address is if needed.
You probably also want to make your phone a hotspot and configure the WiFi connection (as above) to use the hotspot.
Note: If connecting to the RPi2 fails at this point, the easiest alternative is to temporarily connect RPi to your router with an Ethernet cable and SSH in, making sure both the computer and the RPi2 are connected to the same router.
Configure the Raspberry Pi¶
Verify your Raspbian Version¶
- In order to do this, you must have done Path 1 or Path 2 above so that you have an environment to interact with
- Go to the shell / Terminal prompt. If running the GUI, look at the Menu in the upper left and click the icon three to the right of it (looks like a computer)
- Type
lsb_release -a
- If it says anything about Release 8 / Jessie, you have the correct version and can continue.
- If it says anything else, you need to go back to Download and Install Raspbian Jessie
Run raspi-config¶
Run
sudo raspi-config
Here you can expand filesystem to maximize memory, change user password and set timezone (in internationalization options). This will take effect on the next reboot, so go ahead and reboot if prompted, or run
sudo reboot when you’re ready.
Confirm that your keyboard settings are correct. Click on Menu (upper left corner of the screen, with raspberry icon). Mouse down to Preferences, and over to Mouse and Keyboard Settings. Click on Mouse and Keyboard Settings, then click on the Keyboard tab. Click on Keyboard Layout and be sure your country and variant are correct. For the US, it should be United States and English (US).
Note on Time Zone¶
It is imperative that you set the correct time zone at this step of the configuration process. OpenAPS will look at the timestamp of your CGM data, and the local time on the pump, when making recommendations for basal changes. The system also uses local time on the pi; so times and time zone need to match, or you will run into issues later. If the time zone is incorrect, or you haven’t done this yet, run
sudo dpkg-reconfigure tzdata from the prompt and choose your local zone.
Setting up an SSH key for Password-less Login [optional]¶
You can setup a public/private key identity, and configure your local computer and the Raspberry Pi to automatically use it. This will allow SSH access to the Pi without requiring a password. Some people find this feature very convenient.
Windows¶
If you don’t already have an SSH key, follow this guide from GitHub to create one.
Create a .ssh directory on the Pi: run
mkdir .ssh
Log out by typing
exit
and copy your public SSH key into your RPi2 by entering
ssh-copy-id [email protected]
Now you should be able to log in without a password. Try to SSH into the RPi2 again, this time without a password.
Mac and Linux¶
In this section some of the commands will be run on your local computer and some will be run on your pi. This will be identified in parenthesis after each command.
If you don’t already have an ssh key, then run
ssh-keygen (on your local computer - keep hitting enter to accept all the defaults).
If you created a new key identity and accepted all of the defaults, then the name of the newly generated identity will be
id_rsa. However, if you set a custom name for the new identity (e.g.
id_mypi), then you will need to add it to your local ssh keyring, via
ssh-add ~/.ssh/id_mypi (on your local computer).
Next create a .ssh directory on the Pi:
ssh [email protected] (on your local computer), enter the password for the
pi user on the Pi, and run
mkdir .ssh (on your pi).
Next, add your new identity to the list of identities for which the Pi’s
pi user grants access via ssh:
cat ~/.ssh/<id_name>.pub | ssh [email protected] 'cat >> .ssh/authorized_keys' (on your local computer)
Instead of appending it to the list of authorized keys, you may simply copy your public key to the Pi, overwriting its existing list of authorized keys:
scp ~/.ssh/<id_name>.pub [email protected]:~/.ssh/authorized_keys (on your local computer)
Finally,
ssh [email protected] (on your local computer) to make sure you can log in without a password.
Wifi reliability tweaks [optional]¶
Many people have reported power issues with the 8192cu wireless chip found in many wifi adapters when used with the Raspberry Pi. As a workaround, we can disable the power management features (which this chip doesn’t have anyway) as follows:
sudo bash -c 'echo "options 8192cu rtw_power_mgnt=0 rtw_enusbss=0" >> /etc/modprobe.d/8192cu.conf'
Watchdog [optional]¶
Now you can consider installing watchdog, which restarts the RPi2 if it becomes unresponsive.
Enable the built-in hardware watchdog chip on the Raspberry Pi:
Install the watchdog package, which controls the conditions under which the hardware watchdog restarts the Pi:
sudo apt-get install watchdog
sudo modprobe bcm2708_wdog - If this command does not work, it appears to be ok to skip it.
sudo bash -c 'echo "bcm2708_wdog" >> /etc/modules'
Note: On the RPi3, the kernel module is bcm2835_wdt and is loaded by default in Raspbian Jessie.
Edit the config file by opening up nano text editor
sudo nano /etc/watchdog.conf
Uncomment the following: (remove the # from the following lines, scroll down as needed to find them):
max-load-1 = 24 watchdog-device = /dev/watchdog
Next, add watchdog to startup applications:
sudo update-rc.d watchdog defaults
Finally, start watchdog by entering:
sudo service watchdog start
Note: The init system which handles processes going forward in most Linux systems is systemd. Rc.d may be depreciated in the future, so it may be best to use systemd here. Unfortunately, the watchdog package in Raspbian Jessie(as of 12/10/2016) does not properly handle the systemd unit file. To fix it, do the following:
echo "WantedBy=multi-user.target" | sudo tee --append /lib/systemd/system/watchdog.service > /dev/null
this should place it in the service file under the [Install] heading.
and then to enable it to start at each boot:
sudo systemctl enable watchdog
To start process without rebooting:
sudo systemctl start watchdog
Update the Raspberry Pi [optional]¶
Update the RPi2.
sudo apt-get update && sudo apt-get -y upgrade
The packages will take some time to install.
Disable HDMI to conserve power [optional]¶
Via Raspberry Pi Zero - Conserve power and reduce draw to 80mA:
If you’re running a headless Raspberry Pi, there’s no need to power the display circuitry, and you can save a little power by running
/usr/bin/tvservice -o(
-pto re-enable).
To disable HDMI on boot, use
sudo nano /etc/rc.local to edit the rc.local file. Add
/usr/bin/tvservice -o to the file and save.
Configure Bluetooth Low Energy tethering [optional]¶
The Raspberry Pi can be tethered to a smartphone and share the phone’s internet connection. Bluetooth tethering needs to be enabled and configured on the phone device and your carrier/plan must allow tethering. The Raspberry Pi 3 has an inbuilt Bluetooth Low Energy (BLE) chip, while a BLE USB dongle can be used with the other Pi models.
The main advantages of using BLE tethering are that it consumes less power on the phone device than running a portable WiFi hotspot and it allows the Raspberry Pi to use whatever data connection is available on the phone at any given time - e.g. 3G/4G or WiFi. Some have also found that power consumption on the Raspberry Pi is lower when using BLE tethering compared to using a WiFi connection, although this may vary depending on BLE USB dongle, WiFi dongle, etc.
First, we clone a repository which contains scripts which are used later in the setup -
cd /home/pi git clone
We then copy the required scripts into a ‘bin’ directory -
mkdir -p /home/pi/bin cp /home/pi/RaspberryPi_BTPAN_AutoConnect/bt-pan /home/pi/bin cp /home/pi/RaspberryPi_BTPAN_AutoConnect/check-and-connect-bt-pan.sh /home/pi/bin
To configure a connection from the command line -
sudo bluetoothctl
Enter the following commands to bring up the adapter and make it discoverable -
power on discoverable on agent on default-agent. Instead, the phone may ask you to enter a PIN. If so, enter ‘0000’ and when bluetoothctl asks for a PIN, enter the same code again. Either way, bluetoothctl should inform you that pairing was successful. It will then ask you to authorize the connection - enter ‘yes’.
Execute the paired-devices command to list the paired devices -
paired-devices Device AA:BB:CC:DD:EE:FF Nexus 6P
Your paired phone should be listed (in this example, a Google Nexus 6P). Copy the bluetooth address listed for it; we will need to provide this later.
Now trust the mobile device (notice that bluetoothctl features auto-complete, so you can type the first few characters of the device’s bluetooth address (which we copied previously) and hit
NOTE: Whenever you see ‘AA:BB:CC:DD:EE:FF’ or ‘AA_BB_CC_DD_EE_FF’ in this guide, replace it with the actual address of your mobile Bluetooth device, in the proper format (colons or underscores).
trust AA:BB:CC:DD:EE:FF
Quit bluetoothctl with ‘quit’.
Now, we create a service so that a connection is established at startup. Execute the following commands to create a net-bnep-client.service file and open it for editing in Nano -
cd /etc/systemd/system sudo nano net-bnep-client.service
In the editor, populate the file with the text below, replacing AA:BB:CC:DD:EE:FF with the address noted earlier -
[Unit] After=bluetooth.service PartOf=bluetooth.service [Service] ExecStart=/home/pi/bin/bt-pan client AA:BB:CC:DD:EE:FF [Install] WantedBy=bluetooth.target
Save the file, then enable the service -
sudo systemctl enable net-bnep-client.service
Open your crontab for editing -
crontab -e
...and add an entry to check the connection every minute and reconnect if necessary -
* * * * * /home/pi/bin/check-and-connect-bt-pan.sh
Save the file, then restart -
sudo shutdown -r now
or
sudo systemctl reboot | http://openaps.readthedocs.io/en/latest/docs/Resources/Deprecated-Pi/Pi-setup.html | 2017-07-20T16:24:03 | CC-MAIN-2017-30 | 1500549423269.5 | [] | openaps.readthedocs.io |
Upgrading from MMC 3.4.1 (Only) for MS-SQL Server to the Latest Version of MMC for MS-SQL Server
Overview
MMC version 3.4.1 for MS-SQL Server allows you to persist MMC data to MS SQL Server instead of on MMC’s default internal databases. Users of this version of MMC should upgrade to the latest version of MMC, which includes support for MS-SQL Server.
Upgrading to the latest version of MMC requires that you run a migration script on MS-SQL Server. Optionally, you can also run an additional script to drop indexes not used in MMC versions later than 3.4.1. This page describes the procedure for running both scripts.
Downloading the Migration SQL Scripts
There are two migration SQL scripts for MS-SQL Server:
The migration script,
sqlServerCustomersMigrate_341_to_342.sql, required for all installations
The index drop script,
sqlServerCustomersOptionalIndexDrop_341_to_342.sql, which is optional
Download these from the support portal or see the scripts for copy-paste.
See the optional drop indexes script for copy-paste
Running the Scripts on MS-SQL Server
Before running the scripts, follow these steps:
Ensure that no instance of MMC is connected to the MS-SQL Server – in other words, make sure that you have stopped your MMC 3.4.1, and that your new MMC is not started, or not connected to the database on the MS-SQL Server
In the MMC tracking (Business Events) database, back up the current value of the
SEQUENCE_TABLEfield (located in the only row of
OPENJPA_SEQUENCES_TABLE)
Both scripts use the database
MMC_PERSISTENCY. If your database for MMC data has a different name, modify the name in the scripts by following these steps:
Locate the statement
USE [MMC_PERSISTENCY].
Replace
MMC_PERSISTENCYwith the name of your database.
After completing the above steps, run the migration script
sqlServerCustomersMigrate_341_to_342.sql up to the following statement:
After running the above statement, verify that
SEQUENCE_VALUE in
OPENJPA_SEQUENCE_TABLE is the same as
SEQUENCE_VALUE in
OPENJPA_SEQUENCES_TABLE. After this, run the rest of the script.
See Also
Read an overview of configuring MMC for external databases, which includes links to detailed instructions for each supported database server.
Read the configuration details for Persisting MMC Data to MS SQL Server. | https://docs.mulesoft.com/mule-management-console/v/3.8/upgrading-from-mmc-3.4.1-for-ms-sql-server-to-latest-mmc-for-ms-sql-server | 2017-07-20T16:28:56 | CC-MAIN-2017-30 | 1500549423269.5 | [] | docs.mulesoft.com |
The:
Full support for any kind of dependency between .java and .groovy files is supported.
Since JDT can now understand groovy source (to a degree), it applies some generics checking to the groovy code and will report correct warnings that
groovyc would not normally show:
Changes made to either .java or .groovy files will result in an incremental compilation of the containing project. Even if the dependencies are in a different language, the impact of the change will correctly propagate...
Basic support for Grape and the Grab annotation is included, but the code using the annotation may show a compile error (due to a missing dependency), however it will run as a groovy script:
The wizards available mimic their Java counterparts.
The new Groovy project wizard allows you to create an Eclipse project pre-configured with Groovy support:
It supports all of the options available for Java projects:
This wizard creates a new Groovy class, with many of the same options available for the Java Class creation wizard:
This wizard assists you in creating a new Groovy JUnit 3 or JUnit 4 test case:
On startup, you are reminded to convert any legacy groovy projects in your workspace:
If you can decide to permanently dismiss this dialog box, you can still do the conversion from the preference:
Additionally, the Groovy editor provides the following capabilities.
Errors are highlighted as you type. This functionality is provided by hooking into JDT's reconciling capability::
And now, the inferencing engine is able to determine that the type of
someObject is now a
String:..
Again, notice the affect of reconciling in that the search results view contains both references to
numberOfSides even though the text has not yet been written to disk.
There is an option in the Groovy Preferences page to show the JUnit results pane in monospace font:
This is helpful for viewing the results of Spock tests. With monospace font, the results pane is much easier to read:
than it is without:
There are two facets to M1's debug support: launching and debugging.
There are three launch options for your Groovy files and projects:
This is similar to launching Java applications. A main method of a Groovy or Java class is launched and you are provided with options similar to launching a standard Java application::
And from here, it is still possible to execute a groovy script:
This launch configuration allows the use of the
@Grab annotation since scripts do not need to be compilable before they are executed as Groovy Scripts.:
Note that there is a bug on windows where the
groovy> prompt appears twice on each line. See GRECLIPSE-434..
Once a breakpoint is reached, (as in Java code) you can evaluate variables:
This lets you inspect the current state of the program and even look at the Groovy meta-classes and closures accessible in the current scope.
It is also possible to evaluate code snippets from the display view while in a Groovy scope:
Java syntax is required for evaluating code snippets. What this means is that explicit casting of Groovy objects will be necessary for local variables and dynamically typed references.!
M1 contains several new refactoring and formatting facilities.
The Groovy editor will automatically indent poorly indented code on a paste. So, this:
Becomes this:
It is simple to convert from a
.java file to a
.groovy file and back using the context menu:.
We have fixed over 130 issues for this milestone release: | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=133464433 | 2014-11-21T04:08:09 | CC-MAIN-2014-49 | 1416400372634.37 | [] | docs.codehaus.org |
java.lang.Object
org.springframework.orm.jdo.LocalPersistenceManagerFactoryBeanorg.springframework.orm.jdo.LocalPersistenceManagerFactoryBean
public class LocalPersistenceManagerFactoryBean
FactoryBean that creates a local JDO EntityManagerFactory provider for JTA transactions (which might involve JCA).
NOTE: This class is compatible with both JDO 1.0 and JDO 2.0,
as far as possible. It uses reflection to adapt to the actual API present
on the class path (concretely: for the
getPersistenceManagerFactory
method with either a
Properties or a
Map argument).
Make sure that the JDO API jar on your class path matches the one that
your JDO provider has been compiled against! corresponding JDO properties).
For example, in case of JPOX:
<bean id="persistenceManagerFactory" class="org.jpox.PersistenceManagerFactoryImpl" destroy- <property name="connectionFactory" ref="dataSource"/> <property name="nontransactionalRead" value="true"/> </bean>
Note that such direct setup of a PersistenceManagerFactory implementation is the only way to pass an external connection factory (i.e. a JDBC DataSource) into a JDO PersistenceManagerFactory..Map), method.
A custom implementation could prepare the instance in a specific way,
or use a custom PersistenceManagerFactory implementation.
Implemented to work with either the JDO 1.0
getPersistenceManagerFactory(java.util.Properties) method or
the JDO 2.0
getPersistenceManagerFactory(java.util.Map) method,
detected through reflection.
props- the merged Properties prepared by this LocalPersistenceManagerFactoryBean
javax.jdo.JDOHelper#getPersistenceManagerFactory(java.util.Properties),
JDOHelper.getPersistenceManagerFactory(java.util.Map)
public Object getObject()()
destroyin interface
DisposableBean | http://docs.spring.io/spring/docs/1.2.9/api/org/springframework/orm/jdo/LocalPersistenceManagerFactoryBean.html | 2014-11-21T04:59:16 | CC-MAIN-2014-49 | 1416400372634.37 | [] | docs.spring.io |
The AWS Toolkit for Visual Studio is a plug-in for the Visual Studio 2010, 2012, and 2013 IDE that makes it easier for developers to develop, debug, and deploy .NET applications that use Amazon Web Services. Some of the features of the AWS Toolkit that enhance the development experience are:
AWS Explorer, and deployment to AWS CloudFormation.
AWS Explorer supports multiple AWS accounts, including IAM user accounts, and enables you to easily change the displayed view from one account to another.
From AWS Explorer, you can view available Amazon Machine Images (AMIs), create Amazon EC2 instances from those AMIs, and then connect to those instances using Windows Remote Desktop. AWS Explorer also enables supporting functionality such as the capability to create and manage key pairs and security groups.
Amazon DynamoDB is a fast, highly scalable, highly available, cost-effective, nonrelational database service. The AWS Toolkit for Visual Studio provides functionality for working with Amazon DynamoDB in a development context. With the Toolkit, you can create and edit attributes in Amazon DynamoDB tables and run Scan operations on tables.
AWS CloudFormation makes it easy for you to deploy your .NET Framework application to AWS. AWS CloudFormation provisions the AWS resources needed by your application, which frees you to focus on developing the application's functionality. The AWS Toolkit for Visual Studio includes two ready-to-use AWS CloudFormation templates.
From AWS Explorer, you can create new IAM users and policies, and attach policies to users.
The AWS Toolkit for Visual Studio installs the latest version of the AWS SDK for .NET. From Visual Studio, you can easily modify, build, and run any of the samples included in the SDK.
Note
Toolkit for Visual Studio for Visual Studio 2008 is still available, but not supported. For more information, see Installation.
The Toolkit for Visual Studio. If.
The AWS Toolkit for Visual Studio adds the following new features.
The AWS Toolkit for Visual Studio includes the AWS Standalone Deployment Tool. The deployment tool is a command line tool that enables you to deploy your application to AWS CloudFormation from outside of the Microsoft Visual Studio development environment. With the deployment tool, you can make deployment an automatic part of your build process or include deployment in other scripting scenarios.
Both the deployment wizard and the deployment tool can redeploy a new instance of your application over an already-running instance.
You can designate AWS accounts as AWS GovCloud users. These users are then able to use the AWS GovCloud region.
You can specify whether an Amazon S3 object should use server-side encryption. You can specify this feature at the time that you upload the object or afterwards in the object's properties dialog box.
In AWS Explorer, you can customize which columns are displayed when you are viewing Amazon Machine Images (AMIs), Amazon EC2 instances, and EBS volumes.
From AWS Explorer, you can add tags and tag values to AMIs, Amazon EC2 instances, and EBS volumes. Tags that you add are automatically added as columns in AWS Explorer views, and as with other columns, you can hide these columns if you choose.
When you execute a query in Amazon SimpleDB, the Toolkit for Visual Studio displays only a single "page" of results—either the first 100 results or the number of results specified by the LIMIT parameter, if it is included in the query. The Toolkit for Visual Studio now enables you to fetch either an additional page of results or an additional ten pages of results.
When you send an Amazon SQS message from the Toolkit for Visual Studio, you can now specify a time delay before the message appears in the Amazon SQS queue.
You can export the results of your Amazon SimpleDB queries to a CSV file.. Also, to make AWS more approachable as a platform for prototyping and experimentation, AWS offers a free usage tier. On this tier, services are free below a certain level of usage. For more information about AWS costs and the Free Tier, go to AWS Free Usage Tiers. To obtain an AWS account, go to the AWS home page and click the button. | http://docs.aws.amazon.com/AWSToolkitVS/latest/UserGuide/welcome.html | 2014-11-21T04:05:18 | CC-MAIN-2014-49 | 1416400372634.37 | [] | docs.aws.amazon.com |
Branches are something we break out for serious RnD development that effects the whole library. While creating a branch is hard work and merging it in after more then two weeks even harder.
If at all possible avoid creating a branch, working within the module system offers a more collaborative experience that is easier on everyone.
github branches
We also have developers using github as an alternative to an svn branch.
WFS2 / ResourceID / GeoGIT work:
-
-
- (contains the above two and geogit datastore implementation)
CouchDB work:
Company specific staging area:
gitconfig
If you are checking out any of the above branches please be advised to pay attention to your .gitconfig file (in order to prevent encoding difficulties).
Here is an example of a test failure caused by an encoding difficulty (where "°" is expected rather than "<A1>"):
The fix it to remove "core.autocrlf input" from your .gitconfig and redo your clone from scratch. Asking git to do any kind of line processing results in it tripping up over file encoding. Removing this setting asks it to just blindly copy the bytes into a file - this will mean windows users will need to edit from within Eclipse or some other editor like Notepad++ that can handle linefeeds.
You can check your configuration with: | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=54386&selectedPageVersions=12&selectedPageVersions=13 | 2014-11-21T04:41:56 | CC-MAIN-2014-49 | 1416400372634.37 | [] | docs.codehaus.org |
...
For example, to use the source from the Groovy
1.1-beta-2:
...
To build a Groovy distribution archive:
This will build everything, generate documentation and create distribution archives under
target/dist:
...
To publish artifacts to the Maven 2 repository, run:
This will either publish to the release repository or the snapshot repository based on whether the POM version contains
SNAPSHOT or not.
... | http://docs.codehaus.org/pages/diffpages.action?pageId=228177335&originalId=228177331 | 2014-11-21T04:20:42 | CC-MAIN-2014-49 | 1416400372634.37 | [] | docs.codehaus.org |
- User Text Display - A global widget that displays your text / HTML / Mardown. Optionally includes a title.
- Global Differential DropDown - A global widget that allows you to put a differential dropdown on a global dashboard. (since version 1.3)
- Project Alerts - A widget that reports on current project alerts (but not past ones), showing them in a list with links to drilldown views. (since version 1.3)
Usage & Installation
Install via the Update Center or download the jar manually and copy it into the extensions/plugins directory. Restart Sonar and you'll be able to add the new widgets to your dashboards.
Configuration
none
Known Limitations
none
Change Log
Release 1.3 (3 issues)
Release 1.2 (3 issues)
| http://docs.codehaus.org/pages/viewpage.action?pageId=229741584 | 2014-11-21T04:39:05 | CC-MAIN-2014-49 | 1416400372634.37 | [array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object)
array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif',
None], dtype=object) ] | docs.codehaus.org |
A frontend for multiple backends¶
A small historical note might help to make this section clearer. So bear with
with me for a couple of lines. Originally PyVISA was a Python wrapper to the
VISA library. More specifically, it was
ctypes wrapper around the
NI-VISA. This approach worked fine but made it difficult to develop other ways
to communicate with instruments in platforms where NI-VISA was not available.
Users had to change their programs to use other packages with different API.
Since 1.6, PyVISA is a frontend to VISA. It provides a nice, Pythonic API and can connect to multiple backends. Each backend exposes a class derived from VisaLibraryBase that implements the low-level communication. The ctypes wrapper around NI-VISA is the default backend (called ni) and is bundled with PyVISA for simplicity.
You can specify the backend to use when you instantiate the resource manager
using the
@ symbol. Remembering that ni is the default, this:
>>> import visa >>> rm = visa.ResourceManager()
is the same as this:
>>> import visa >>> rm = visa.ResourceManager('@ni')
You can still provide the path to the library if needed:
>>> import visa >>> rm = visa.ResourceManager('/path/to/lib@ni')
Under the hood, the
ResourceManager looks for the requested backend and
instantiate the VISA library that it provides.
PyVISA locates backends by name. If you do:
>>> import visa >>> rm = visa.ResourceManager('@somename')
PyVISA will try to import a package/module named
pyvisa-somename which
should be installed in your system. This is a loosely coupled configuration
free method. PyVISA does not need to know about any backend out there until you
actually try to use it.
You can list the installed backends by running the following code in the command line:
python -m visa info
Developing a new Backend¶
What does a minimum backend looks like? Quite simple:
from pyvisa.highlevel import VisaLibraryBase class MyLibrary(VisaLibraryBase): pass WRAPPER_CLASS = MyLibrary
Additionally you can provide a staticmethod named get_debug_info that should
return a dictionary of debug information which is printed when you call
python -m visa info or
pyvisa-info
Note
Your backend name should not end by
-script or it will be discarded.
This is because any script generated by setuptools containing the name
pyvisa will be named
pyvisa-*-script and they are obviously not
backends. Examples are the
pyvisa-shell and
pyvisa-info scripts.
An important aspect of developing a backend is knowing which VisaLibraryBase method to implement and what API to expose.
A complete implementation of a VISA Library requires a lot of functions (basically almost all level 2 functions as described in Architecture (there is also a complete list at the bottom of this page). But a working implementation does not require all of them.
As a very minimum set you need:
- open_default_resource_manager: returns a session to the Default Resource Manager resource.
- open: Opens a session to the specified resource.
- close: Closes the specified session, event, or find list.
- list_resources: Returns a tuple of all connected devices matching query.
(you can get the signature below or here Visa Library)
But of course you cannot do anything interesting with just this. In general you will also need:
- get_attribute: Retrieves the state of an attribute.
- set_atribute: Sets the state of an attribute.
If you need to start sending bytes to MessageBased instruments you will require:
- read: Reads data from device or interface synchronously.
- write: Writes data to device or interface synchronously.
For other usages or devices, you might need to implement other functions. Is really up to you and your needs.
These functions should raise a
pyvisa.errors.VisaIOError or emit a
pyvisa.errors.VisaIOWarning if necessary.
Complete list of level 2 functions to implement:
def read_memory(self, session, space, offset, width, extended=False): def write_memory(self, session, space, offset, data, width, extended=False): def move_in(self, session, space, offset, length, width, extended=False): def move_out(self, session, space, offset, length, data, width, extended=False): def peek(self, session, address, width): def poke(self, session, address, width, data): def assert_interrupt_signal(self, session, mode, status_id): def assert_trigger(self, session, protocol): def assert_utility_signal(self, session, line): def buffer_read(self, session, count): def buffer_write(self, session, data): def clear(self, session): def close(self, session): def disable_event(self, session, event_type, mechanism): def discard_events(self, session, event_type, mechanism): def enable_event(self, session, event_type, mechanism, context=None): def flush(self, session, mask): def get_attribute(self, session, attribute): def gpib_command(self, session, data): def gpib_control_atn(self, session, mode): def gpib_control_ren(self, session, mode): def gpib_pass_control(self, session, primary_address, secondary_address): def gpib_send_ifc(self, session): def in_8(self, session, space, offset, extended=False): def in_16(self, session, space, offset, extended=False): def in_32(self, session, space, offset, extended=False): def in_64(self, session, space, offset, extended=False): def install_handler(self, session, event_type, handler, user_handle): def list_resources(self, session, query='?*::INSTR'): def lock(self, session, lock_type, timeout, requested_key=None): def map_address(self, session, map_space, map_base, map_size, def map_trigger(self, session, trigger_source, trigger_destination, mode): def memory_allocation(self, session, size, extended=False): def memory_free(self, session, offset, extended=False): def move(self, session, source_space, source_offset, source_width, destination_space, def move_asynchronously(self, session, source_space, source_offset, source_width, def move_in_8(self, session, space, offset, length, extended=False): def move_in_16(self, session, space, offset, length, extended=False): def move_in_32(self, session, space, offset, length, extended=False): def move_in_64(self, session, space, offset, length, extended=False): def move_out_8(self, session, space, offset, length, data, extended=False): def move_out_16(self, session, space, offset, length, data, extended=False): def move_out_32(self, session, space, offset, length, data, extended=False): def move_out_64(self, session, space, offset, length, data, extended=False): def open(self, session, resource_name, def open_default_resource_manager(self): def out_8(self, session, space, offset, data, extended=False): def out_16(self, session, space, offset, data, extended=False): def out_32(self, session, space, offset, data, extended=False): def out_64(self, session, space, offset, data, extended=False): def parse_resource(self, session, resource_name): def parse_resource_extended(self, session, resource_name): def peek_8(self, session, address): def peek_16(self, session, address): def peek_32(self, session, address): def peek_64(self, session, address): def poke_8(self, session, address, data): def poke_16(self, session, address, data): def poke_32(self, session, address, data): def poke_64(self, session, address, data): def read(self, session, count): def read_asynchronously(self, session, count): def read_stb(self, session): def read_to_file(self, session, filename, count): def set_attribute(self, session, 
attribute, attribute_state): def set_buffer(self, session, mask, size): def status_description(self, session, status): def terminate(self, session, degree, job_id): def uninstall_handler(self, session, event_type, handler, user_handle=None): def unlock(self, session): def unmap_address(self, session): def unmap_trigger(self, session, trigger_source, trigger_destination): def usb_control_in(self, session, request_type_bitmap_field, request_id, request_value, def usb_control_out(self, session, request_type_bitmap_field, request_id, request_value, def vxi_command_query(self, session, mode, command): def wait_on_event(self, session, in_event_type, timeout): def write(self, session, data): def write_asynchronously(self, session, data): def write_from_file(self, session, filename, count): | https://pyvisa.readthedocs.io/en/1.10.0/advanced/backends.html | 2021-07-24T00:21:33 | CC-MAIN-2021-31 | 1627046150067.87 | [] | pyvisa.readthedocs.io |
Policies and profiles on Citrix Gateway
Policies and profiles on Citrix Gateway allow you to manage and implement configuration settings under specified scenarios or conditions. An individual policy states or defines the configuration settings that go into effect when a specified set of conditions is met. Each policy has a unique name and can have a profile bound to the policy.
How policies work
A policy consists of a Boolean condition and a collection of settings called a profile. The condition is evaluated at runtime to determine whether the policy should be applied, and the profile supplies the settings that then take effect, such as the number of times users can stay logged on.
If you are using Citrix Gateway with Citrix Virtual Apps, Citrix Gateway policy names are sent to Citrix Virtual Apps as filters. When configuring Citrix Gateway to be compatible with Citrix Endpoint Management, see Configuring Settings for Your Citrix Endpoint Management Environment.
For more information about configuring Citrix Gateway to be compatible with Citrix Virtual Apps and Desktops, see Accessing Citrix Virtual Apps and Citrix Virtual Desktops Resources with the Web Interface and Integrating with Citrix Endpoint Management or StoreFront.
For more information about preauthentication policies, see Configuring Endpoint Policies. For example, you can configure users who are connecting with the Citrix Gateway plug-in from outside the internal network, such as from their home computer or by using Micro VPN from a mobile device, to be authenticated by using LDAP, and users who are connecting through the WAN to be authenticated using RADIUS.
Note: You cannot use policy conditions based on endpoint analysis results if the policy rule is configured as part of security settings in a session profile.
Create policies on Citrix Gateway

If you are using Citrix Endpoint Management or StoreFront as part of your deployment, you can use the Quick Configuration wizard to configure the settings for this deployment. For more information about the wizard, see Configuring Settings with the Quick Configuration Wizard.
Cmdlet.BeginProcessing Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
When overridden in the derived class, performs initialization of command execution. Default implementation in the base class just returns.
protected: virtual void BeginProcessing();
virtual void BeginProcessing();
protected virtual void BeginProcessing ();
abstract member BeginProcessing : unit -> unit override this.BeginProcessing : unit -> unit
Protected Overridable Sub BeginProcessing ()
Exceptions
This method is overridden in the implementation of individual Cmdlets, and can throw literally any exception. | https://docs.microsoft.com/sv-se/dotnet/api/System.Management.Automation.Cmdlet.BeginProcessing?view=powershellsdk-7.0.0 | 2021-07-24T02:56:01 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.microsoft.com |
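As a hedged illustration (not taken from this reference page), a derived cmdlet typically overrides BeginProcessing to perform one-time setup before any pipeline input is processed; the WidgetStore type below is hypothetical:

```csharp
using System.Management.Automation;

[Cmdlet(VerbsCommon.Get, "Widget")]
public class GetWidgetCommand : PSCmdlet
{
    private WidgetStore _store;                  // hypothetical helper type

    protected override void BeginProcessing()
    {
        // Runs once, before the first ProcessRecord call.
        WriteVerbose("Opening the widget store...");
        _store = new WidgetStore();
    }

    protected override void ProcessRecord()
    {
        WriteObject(_store.GetAll(), enumerateCollection: true);
    }

    protected override void EndProcessing()
    {
        _store.Dispose();
    }
}
```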
This installation guide describes the integration of ThreatSTOP and A10 ADC / TPS devices
Warning: This version has been superseded by the TSCM Based version. This documentation applies to devices created through 2018. New deployments must use the new release - TSCM Web Automation or TSCM CLI
Add a Device
- Go to the portal () and add your device.
- Click on Devices.
- Click on + Add Device.
- The Add Device window will display.
- Fill out the required fields:
- Nickname: Internal device reference
- Manufacturer: A10 Networks.
- Model: Thunder TPS
- IP Address: External IP address that will query the service. This can be determined by visiting
- IP Type (Static or Dynamic) used by the device.
- Policy: A10-TPS
- Location: country of residence (optional)
- Postal Code: The ZIP code, or other postal designator for your location (optional)
- Click Next.
Note: You need to make certain you are always coming from the same IP address and not a NAT pool of IPs. This utility determines the external IP address of the device which is unlikely to rotate.
Prepare the Threat Intelligence Proxy
Create a Virtual Machine (VM) in your preferred Virtual Machine host. The VM will need to have the following system requirements:
- OS: Ubuntu 14.04
- RAM: 10 GB
- Internet connectivity, for DNS requests
- Log into the VM, and update the distribution to the most recent version with the following command:
sudo su -c 'apt-get update && apt-get -y upgrade && apt-get install libwww-perl libcrypt-ssleay-perl'
- Access the ThreatSTOP FTP through the VM using the following commands, for credential information simply type anonymous:
ftp
cd /pub
get ts-a10_2.37-03_all.deb
- Download the .deb file from the ThreatSTOP FTP server ()
- Install the downloaded file with the following command:
sudo dpkg -i ts-a10_2.37-03_all.deb
Configure the ThreatSTOP System
- SSH into the VM with the following credentials:
- username: threatstop
- password: threatstop
- Issue the following command:
wget -qO -
- Record the IP address, this is your external IP address.
- Run the automated setup script using the following command:
/opt/threatstop/setup.sh

Provide the following information as displayed at the prompts:

DDOS zones : <accept the default>
Enable NTP (y/n) ? [y] ==> <accept the default>
Enable DNS Resolvers (y/n) ? [y] ==> <accept the default>
Enable SSDP (y/n) ? [y] ==> <accept the default>
Enable SNMP (y/n) ? [y] ==> <accept the default>
Enable Drones (y/n) ? [y] ==> <accept the default>
Please enter the block_list to use: [] ==> A10-TPS-001-netb.ANetwork.threatstop.local
Please enter the allow_list to use: [] ==> A10-TPS-001-neta.ANetwork.threatstop.local
Please enter the external IP address of the A10 device: [] ==> <Enter the IP address from steps 2-3>
Please enter port to use for DNS queries : [53] ==>
Please enter the internal IP address of the A10 device: [] ==> <Device IP>
Please enter the directory to store the A10 class lists: [/etc/threatstop/lists] ==>
This will download the ThreatSTOP block and allow lists to your Virtual Machine which will then upload the policies directly to your A10 device.
Configure TPS using ACOS 3.2 or greater
- Import the class list with the following command
import-periodic class-list a10-ddos-block use-mgmt-port scp://threatstop:[email protected]/etc/threatstop/list/block-000-nsp.txt period 7200
- Create the source based policy with the following command
ddos src-based-policy A10-Threat-Intel policy-class-list a10-ddos-block
- Bind the policy to the zone config with the following commands.
ddos dst zone ip 10.10.10.10 operational-mode monitor port 80 tcp src-based-policy A10-Threat-Intel policy-class-list a10-ddos-block deny | https://docs.threatstop.com/a10.html | 2021-07-24T01:27:49 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.threatstop.com |
Introduction
This block provides high-resolution 8-bit Hexagon aerial images, either as downloaded or streamed products. These images are acquired with Leica ADS100 and Leica DMC III large format digital airborne sensors on a yearly basis in the USA. The datasets are available starting with 2014.
Technical Information
Block versions
Compatible blocks
Geographic coverage
The datasets are mostly covering urban areas from the following regions in USA:
For more information about data availability and acquisition per year, please refer to the Hexagon website.
Dataset Information
Restrictions
The maximum AOI size is 25.
Examples
Example based on a workflow created with the data block HxGN Content Program, 15 cm (Download):
Example based on a workflow created with the data block HxGN Content Program, 15 cm (Download) and NDVI:
Example based on a workflow created with the data block HxGN Content Program, 15 cm (Streaming) and Vectorization: | https://docs.up42.com/blocks/data/hexagon-aerial-15cm-download/ | 2021-07-24T01:40:30 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.up42.com |
QEMU virtio-fs shared file system daemon
Description
-o OPTION
- debug - Enable debug output.
- flock|no_flock - Enable/disable flock. The default is no_flock.
- modcaps=CAPLIST Modify the list of capabilities allowed; CAPLIST is a colon separated list of capabilities, each preceded by either + or -, e.g. ‘’.
--fd=FDNUM
Accept connections from vhost-user UNIX domain socket file descriptor FDNUM. The file descriptor must already be listening for connections.
--thread-pool-size=NUM
Restrict the number of worker threads per request queue to NUM. The default is 64.
--cache=none|auto|always
Select the desired trade-off between coherency and performance.

- none forbids the FUSE client from caching to achieve best coherency at the cost of performance.
- auto acts similar to NFS with a 1 second metadata cache timeout.
- always sets a long cache lifetime at the expense of coherency.

The default is auto.
xattr-mapping

Using ‘:’ as the separator, a rule is of the form:
:type:scope:key:prepend:
scope is:
- ‘client’ - match ‘key’ against a xattr name from the client for setxattr/getxattr/removexattr
- ‘server’ - match ‘prepend’ against a xattr name from the server for listxattr
- ‘all’ - can be used to make a single rule where both the server and client matches are triggered.
type is one of:
- ‘prefix’ - is designed to prepend and strip a prefix; the modified attributes then being passed on to the client/server.
- ‘ok’ - Causes the rule set to be terminated when a match is found while allowing matching xattr’s through unchanged. It is intended both as a way of explicitly terminating the list of rules, and to allow some xattr’s to skip following rules.
- ‘bad’ - If a client tries to use a name matching ‘key’ it’s denied using EPERM; when the server passes an attribute name matching ‘prepend’ it’s hidden. In many ways it’s use is very like ‘ok’ as either an explicit terminator or for special handling of certain patterns.
key is a string tested as a prefix on an attribute name originating on the client. It maybe empty in which case a ‘client’ rule will always match on client names.
prepend is a string tested as a prefix on an attribute name originating on the server, and used as a new prefix. It may be empty in which case a ‘server’ rule will always match on all names from the server.
e.g.:
:prefix:client:trusted.:user.virtiofs.:
will match ‘trusted.’ attributes in client calls and prefix them before passing them to the server.
:prefix:server::user.virtiofs.:
will strip ‘user.virtiofs.’ from all server replies.
:prefix:all:trusted.:user.virtiofs.:
combines the previous two cases into a single rule.
:ok:client:user.::
will allow get/set xattr for ‘user.’ xattr’s and ignore following rules.
:ok:server::security.:
will pass ‘securty.’ xattr’s in listxattr from the server and ignore following rules.
:ok:all:::
will terminate the rule search passing any remaining attributes in both directions.
:bad:server::security.:
would hide ‘security.’ xattr’s in listxattr from the server.
A simpler ‘map’ type provides a shorter syntax for the common case:
:map:key:prepend:
The ‘map’ type adds a number of separate rules to add prepend as a prefix to the matched key (or all attributes if key is empty). There may be at most one ‘map’ rule and it must be the last rule in the set.
xattr-mapping Examples
- Prefix all attributes with ‘user.virtiofs.’
-o xattrmap=":prefix:all::user.virtiofs.::bad:all:::"
This uses two rules, using : as the field separator; the first rule prefixes and strips ‘user.virtiofs.’, the second rule hides any non-prefixed attributes that the host set.
This is equivalent to the ‘map’ rule:
-o xattrmap=":map::user.virtiofs.:"
- Prefix ‘trusted.’ attributes and pass everything else through. The first rule causes the prefixing of ‘trusted.’ attributes and the stripping of ‘user.virtiofs.’. The second rule hides unprefixed ‘trusted.’ attributes on the host. The third rule stops a guest from explicitly setting the ‘user.virtiofs.’ path directly. Finally, the fourth rule lets all remaining attributes through.
This is equivalent to the ‘map’ rule:
-o xattrmap="/map/trusted./user.virtiofs./"
- Hide ‘security.’ attributes, and allow everything else
"/bad/all/security./security./ /ok/all///'
The first rule combines what could be separate client and server rules into a single ‘all’ rule, matching ‘security.’ in either client arguments or lists returned from the host. This stops the client seeing any ‘security.’ attributes on the server and stops it setting any.
Examples
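A typical invocation looks roughly like the sketch below. This is an illustration only: the socket path, source directory, and QEMU device flags are assumptions to verify against your QEMU/virtiofsd version.

```
host# virtiofsd --socket-path=/var/run/vm001-vhost-fs.sock -o source=/var/lib/fs/vm001 -o cache=auto
host# qemu-system-x86_64 \
        -chardev socket,id=char0,path=/var/run/vm001-vhost-fs.sock \
        -device vhost-user-fs-pci,chardev=char0,tag=myfs \
        -object memory-backend-memfd,id=mem,size=4G,share=on \
        -numa node,memdev=mem \
        ... other guest options ...
guest# mount -t virtiofs myfs /mnt
```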
Cancel Pending Amount Request
A user can cancel an amount request.
A user can cancel the request if they don't have a sufficient amount in their account.
How to cancel the request?
The POST request will be sent over HTTPS to the endpoint.
Sample Request
Sample Response
NOTE:
- customerId – You get the customerId from the list of pending requests.
- requestId – You get the requestId from the list of pending requests.
- verificationHash – SHA256Algorithm.generateSHA256Hash(secKey.trim()+customerId.trim()+requestId.trim())
- walletOwnerId – Provided by
How to generate verification hash?
The verification hash has to be calculated with the following combination using the SHA-256 algorithm and needs to be sent along with the authentication parameters in each server-to-server request:
<secKey><customerId><requestId>
Sample Code
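The original sample is not reproduced here; the following minimal Python sketch shows the same SHA-256 calculation over <secKey><customerId><requestId> (all values below are placeholders):

```python
import hashlib

sec_key     = "your-secret-key"   # placeholder
customer_id = "100234"            # placeholder
request_id  = "501"               # placeholder

payload = sec_key.strip() + customer_id.strip() + request_id.strip()
verification_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
print(verification_hash)
```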
Request Parameters
This reference lists all the standard flow parameters to be sent in the request.
Response Parameters
This reference lists all the standard flow parameters to be received in the response.
Shape TK 1.7.1

New features

- Added new types in OEBOOrientation. All of these are designed to provide a more deterministic search over the reference molecule for those cases where the size of the fit molecule is much smaller than the reference, for example, when trying to match a fragment into part of a reference molecule.
  - OEBOOrientation.InertialAtHeavyAtoms moves the center of mass of the fit molecule to each reference molecule heavy atom and performs 4 inertial starts at the position. This results in many more starting positions, but provides a more direct way to search over an entire reference molecule, without resorting to random starts.
  - OEBOOrientation.InertialAtColorAtoms performs a similar search as above, but just moves to the location of each reference molecule color atom.
  - OEBOOrientation.UserInertialStarts, used in conjunction with OEBestOverlay.SetUserStarts, allows the user to pick specific points in space to perform the 4 inertial starts.
- Fixed a bug when calculating Tanimoto while using a grid as the reference object.
- Added more functions to manipulate the color atoms on a molecule. These include the ability to add color atoms one at a time (OEAddColorAtom) and the ability to get an iterator of color atoms from a molecule (OEGetColorAtoms).
- Added a pair of functions (OEShape::OEColorAtomsToString and OEShape::OEStringToColorAtoms) that allow converting the color atoms of a molecule into a compressed string representation (that is attached to the molecule) and then to restore the actual color atoms from that string.

Bug fixes

- Fixed a bug that could cause a crash when passing an empty molecule into OECalcVolume or OECalcShapeMultipoles.
- Fixed a bug that could cause a crash when passing large molecules to OECalcVolume.
WhatsUp Gold Logging Quick Start provides rich summaries, dashboard-like views, interactivity, and charting capabilities needed to leverage event, monitor, resource availability, and other system data along with the necessary export and scheduling controls to distribute this data to stakeholders.
This section outlines easy steps and best practices for tracking health, status, and performance data for your network devices, infrastructure, and applications.
Begin your logging and reporting efforts:
Note: For scheduled reports, it is best practice to create an additional WhatsUp Gold user to ensure that the scheduled export and email of report data maintains consistent settings, graphing modes, and format. WhatsUp Gold Report Settings
(settings for graphing, top n, and thresholding, for example) persist based on the report instance, the current device selected, and the WhatsUp Gold login you use. | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/81638.htm | 2021-07-24T02:39:01 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.ipswitch.com |
Containers
A container is a block of code which will provide functionality. Containers are able to communicate with any other containers present in the Luos network.
A container can be an application or a driver.
Each container provides a particular set of tasks such as managing a motor, handling a laser range finder, or more complex operations like computing an inverse-kinematics.
Each container is hosted in a single node (a node is the hardware element, or MCU, that hosts and runs Luos and can host one or several containers), but a node can handle several containers at the same time and manage communication between them and between other containers hosted in other nodes, using the same network interface.
As a developer you will always develop your functionalities into containers, and never into the main() program. The only information that should be put in the main() code is MCU setup parameters and containers' run functions.
Container properties
To properly work, each container has some properties allowing other containers to recognize and access it: | https://docs.luos.io/pages/embedded/containers.html | 2021-07-24T01:58:07 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.luos.io |
Configured a BizTalk 2013R2 Receive Location to SharePoint Online.
The port is using a host running off an AD account associated with Office 365.
I'm entering the same credentials into the Location configs SharePoint Online Username and Password fields.
There's a simple Notes.xml file in the library we're trying to connect to.
BizTalk doesn't pick the file up (it doesn't vanish from the library as you'd expect it to), and instead generates this Windows Application event warning every second it's polled;
'The adapter "Windows SharePoint Services" raised an error message. Details "Sequence contains no elements".'
Can anyone advise on the correct way to setup a SharePoint Online connection?
Note; we have plenty of SharePoint 2010 connections with BizTalk 2013R2 and none of them have any difficulty in connecting; it's only with this SP Online port.
Thanks | https://docs.microsoft.com/en-us/answers/questions/428873/biztalk-2013r2-to-sharepoint-online-39the-adapter.html | 2021-07-24T02:50:28 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.microsoft.com |
JavaScript Proxy SDK
In this guide we explain how to use feature toggles in a Single Page App via The Unleash Proxy. You can also checkout the source code for the JavaScript Proxy SDK.
Introduction
For single-page apps we have a tiny proxy client in JavaScript, without any external dependencies apart from browser APIs. This client will store toggles relevant for the current user in local storage and synchronize with the Unleash Proxy in the background. This means we can bootstrap the toggles for a specific user the next time the user visits the web page.
We are also looking into supporting React Native with this SDK. Reach out if you want to help us validate the implementation.
Step 1: Install
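Assuming the npm distribution of this SDK, installation is a single command:

```
npm install unleash-proxy-client
```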
Step 2: Initialize the SDK
You need to have an Unleash-hosted instance, and the proxy needs to be enabled. In addition, you will need a proxy-specific clientKey in order to connect to the Unleash-hosted proxy.
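A minimal initialization sketch (the URL, client key, and app name below are placeholders for your own values):

```js
import { UnleashClient } from 'unleash-proxy-client';

const unleash = new UnleashClient({
  url: 'https://eu.unleash-hosted.com/hosted/proxy', // your proxy URL
  clientKey: 'your-proxy-client-key',                // proxy-specific client key
  appName: 'my-webapp',
});

// Starts background polling and local-storage synchronization.
unleash.start();
```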
Step 3: Check if feature toggle is enabled
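For example (the toggle name is a placeholder):

```js
if (unleash.isEnabled('proxy.demo')) {
  // render the feature
}
```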
...or get toggle variant:
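A variant lookup follows the same pattern:

```js
const variant = unleash.getVariant('proxy.demo');
if (variant.name === 'blue') {
  // show the "blue" treatment
}
```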
Listen for updates via the EventEmitter
The client is also an event emitter. This means that your code can subscribe to updates from the client. This is a neat way to update a single page app when toggle state updates. | https://docs.getunleash.io/sdks/proxy-javascript/ | 2021-07-24T00:24:10 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.getunleash.io |
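For example, re-render the relevant UI whenever toggle state changes (the render function is a placeholder for your own update logic):

```js
unleash.on('update', () => {
  const enabled = unleash.isEnabled('proxy.demo');
  render(enabled);
});
```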
DPL export throws in WASM
Environment
Description
When I try to Generate or Export documents through the Telerik Document Processing in a WASM app, even relatively simple or small documents don't work.
Error Message
WASM: Error: Garbage collector could not allocate 16384 bytes of memory for major heap section
WASM: * Assertion at /mnt/jenkins/workspace/test-mono-mainline-wasm/label/ubuntu-1804-amd64/mono/utils/lock-free-alloc.c:145, condition `sb_header' not met, function:alloc_sb, Failed to allocate memory for the lock free allocator
dotnet.js:1 Uncaught RuntimeError: abort(undefined). Build with -s ASSERTIONS=1 for more info. at abort () at _abort ()
Cause\Possible Cause(s)
It looks like, at the time of writing, the MONO runtime has issues with allocating memory in a WASM scenario. The same code works perfectly fine in a server-side Blazor app or in a console app.
Suggested Workarounds
You can try reducing the size of the file. For example, looping over worksheet.Columns.Count makes the file size dramatically larger because it has to affect all columns that are available in the sheet; you can replace it with worksheet.UsedCellRange.ColumnCount to work only with the cells you use.
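A minimal sketch of that change, using only the property names quoted above (the loop body is a placeholder for your own per-column work):

```csharp
// Before: for (int i = 0; i < worksheet.Columns.Count; i++) { ... }
int usedColumns = worksheet.UsedCellRange.ColumnCount;
for (int i = 0; i < usedColumns; i++)
{
    // apply values/formatting only to columns that are actually used
}
```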
In some cases, however, this may not help or may not be possible. For such scenarios we can suggest generating the files on the server and returning them to the client through a web request. | https://docs.telerik.com/blazor-ui/knowledge-base/dpl-allocate-memory-error | 2021-07-24T02:35:54 | CC-MAIN-2021-31 | 1627046150067.87 | [] | docs.telerik.com |
Game Creator is a set of tools that will help you kickstart your game in a matter of minutes, and lets you install, distribute and update extension packages.
For example, the Inventory module allows to add a complete Inventory to your game with crafting recipes, usable items and customizable effects. The Dialogue module allows to have cinematic conversations between characters with branching options, timed choices, ... You name it!
Join the Game Creator Discord server! | https://docs.gamecreator.io/ | 2019-11-12T05:13:27 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.gamecreator.io |
Account balance top up in the test network
The balance of an account in the test network is topped up by 10 WAVES at a time.
From one IP-address it is allowed to top up the balance of any one address once in 15 minutes.
- Go to.
- Enter the address of an account to the Address field.
- Press Request 10 WAVES. | http://docs.wavesplatform.com/en/waves-explorer/account-balance-top-up-in-the-test-network.html | 2019-11-12T05:33:34 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.wavesplatform.com |
Deleting Pipelines
Deleting a pipeline removes an existing pipeline from the config.
Warning: Pipeline history is not removed from the database and artifacts are not removed from artifact storage, which may cause conflicts if a pipeline with the same name is later re-created.
To delete a pipeline:
- Navigate to the “Admin” menu and click the “Pipelines” item.
- Locate the pipeline that needs to be deleted.
- In that row, click on the “Delete” icon. | https://docs.gocd.org/current/configuration/deleting_pipelines.html | 2019-11-12T06:53:52 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.gocd.org |
ActiveCampaign
Triggers
- Contact Subscribe List
Triggers when a contact subscribes to a specific list
- Contact Unsubscribe List
Triggers when a contact unsubscribes from a specific list
- Contact Updated
Triggers when a contact updated
- Campaign Opened
Triggers when a recipient opens the email sent via a campaign
- Campaign Starts Sending
Triggers when you hit the send button for any email campaign
- Link Clicked
Triggers when a recipient clicks on the campaign link given in the email
Actions
- Add New Campaign
Create and send a new campaign
- Add New Contact
Add a new contact
- Add New List
Create a new mailing list
- Get Campaigns List
Retrieve list of all existing campaigns along with settings information
- Get Contact Details
Retrieve more information on a particular contact
- Get Contact List
Retrieve list of all existing contacts
- Get List of Mailing Lists
Retrieve list of all existing mailing lists
- Get Message Details
Retrieve details of all messages in the mailing list
- Get Messages List
Retrieve list of all existing messages
- Re-send Campaign
Resend a particular campaign
Configure permissions for VDAs earlier than XenDesktop 7
May 28, 2016
If users have VDAs earlier than XenDesktop 7 installed on their devices, the following setting applies:

```
Service.Connector.WinRM.Identity = Service
```

You can configure these permissions in one of two ways:

1. Add the service account to the local Administrators group on the desktop machine.
2. Run the ConfigRemoteMgmt.exe tool from the Director tools folder. You must grant permissions to all Director users. To grant the permissions to an Active Directory security group, user, computer account, or for actions like End Application and End Process, run the tool with administrative privileges from a command prompt using the following arguments:

```
ConfigRemoteMgmt.exe /configwinrmuser domain\name
```

where name is a security group, user, or computer account.

To grant the required permissions to a user security group:

```
ConfigRemoteMgmt.exe /configwinrmuser domain\HelpDeskUsers
```

To grant the permissions to a specific computer account:

```
ConfigRemoteMgmt.exe /configwinrmuser domain\DirectorServer$
```

For End Process, End Application, and Shadow actions:

```
ConfigRemoteMgmt.exe /configwinrmuser domain\name /all
```

To grant the permissions to a user group:

```
ConfigRemoteMgmt.exe /configwinrmuser domain\HelpDeskUsers /all
```

To display help for the tool:

```
ConfigRemoteMgmt.exe
```
Entitlements
Subscriptions
Subscriptions used by Kapacitor work in a cluster. Writes to any node will be forwarded to subscribers across all supported subscription protocols.
PProf Endpoints
The meta nodes now expose the /debug/pprof endpoints for profiling and troubleshooting.
The control.Client provides a Go client to access this functionality as well.

Enterprise clusters support backup and restore functionality starting with version 0.7.1. See Backup and Restore for more information.
Features Under Development
HTTP API for performing all cluster and user management functions | https://docs.influxdata.com/enterprise_influxdb/v1.1/features/clustering-features/ | 2019-11-12T06:55:02 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.influxdata.com |
February 2010
Cloud Computing: Microsoft Azure for Enterprises
Learn
M
VPL, part of Robotics Developer Studio is intended for novice programmers, but is also useful for testing and prototyping. We write a simple serial port service that allows you to send and receive data.
Trevor Taylor
Columns
Editor's Note:
Not Your Father's MSDN
Changes are coming to MSDN Magazine. They begin this month, with the unveiling of a number of new, monthly columns.
Keith Ward
UI Frontiers:
Sound Generation in WPF Applications
A good case could be made that computers should not make noise except in response to a specific user command. We’re going to ignore that and show you how to play custom sounds in a WPF application.
Charles Petzold
Test Run:
WCF Service Testing with Sockets
There are many ways to test WCF services, but the socket-based approach is flexible and very useful for security and performance testing. We show you show you how to test a WCF service using a network socket based approach.
James McCaffrey
CLR Inside Out:
Formatting and Parsing Time Intervals in the .NET Framework 4
Learn about enhanced TimeSpan formatting and parsing features coming in the .NET Framework 4, and some helpful tips for working with TimeSpan values.
Ron Petrusha
Cutting Edge:
Predictive Fetch with jQuery and the ASP.NET Ajax Library
Dino Esposito builds upon his exploration of new data binding features coming in the ASP.NET Ajax Library, explaining how to implement the predictive fetch design pattern.
Dino Esposito
Security Briefs:
Security Compliance as an Engineering Discipline
Many companies starting out with the SDL are doing so in combination with a security compliance program. We’ll show you some best practices and pitfall we’ve seen when employing SDL principles for compliance.
Brad Hill
Don't Get Me Started:
The Human Touch
People aren't computers; keep this in mind when developing software. When developers confuse people and computers, bad things happen.
David Platt
Going Places:
Gesture Magic
Windows Mobile 6.5 is the first version of the OS to expose gesture support to developers. Marcus Perryman explains how five touch screen gestures are handled, detailing message routing, the physics engine and some handy tips and tricks.
Marcus Perryman | https://docs.microsoft.com/en-us/archive/msdn-magazine/2010/february/msdn-magazine-february-2010-issue | 2019-11-12T06:29:53 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.microsoft.com |
Game Timing and Multicore Processors
With power management technologies becoming more commonplace in today's computers, a commonly-used method to obtain high-resolution CPU timings, the RDTSC instruction, may no longer work as expected. This article suggests a more accurate, reliable solution to obtain high-resolution CPU timings by using the Windows APIs QueryPerformanceCounter and QueryPerformanceFrequency.
Background
Since the introduction of the x86 P5 instruction set, many game developers have made use of read time stamp counter, the RDTSC instruction, to perform high-resolution timing. The Windows multimedia timers are precise enough for sound and video processing, but with frame times of a dozen milliseconds or less, they don't have enough resolution to provide delta-time information. Many games still use a multimedia timer at start-up to establish the frequency of the CPU, and they use that frequency value to scale results from RDTSC to get accurate time. Due to the limitations of RDTSC, the Windows API exposes the more correct way to access this functionality through the routines of QueryPerformanceCounter and QueryPerformanceFrequency.
This use of RDTSC for timing suffers from these fundamental issues:
- Discrepancy between processors. The time stamp counters of different processors or cores are not guaranteed to be synchronized, so RDTSC results can differ depending on which processor the thread happens to be running on.
- Availability of dedicated hardware. RDTSC locks the timing information that the application requests to the processor's cycle counter. For many years this was the best way to get high-precision timing information, but newer motherboards are now including dedicated timing devices which provide high-resolution timing information without the drawbacks of RDTSC.
- Variability of the CPU's frequency. The assumption is often made that the frequency of the CPU is fixed for the life of the program. However, with modern power management technologies, this is an incorrect assumption. While initially limited to laptop computers and other mobile devices, technology that changes the frequency of the CPU is in use in many high-end desktop PCs; disabling its function to maintain a consistent frequency is generally not acceptable to users.
Recommendations
Games need accurate timing information, but you also need to implement timing code in a way that avoids the problems associated with using RDTSC. When you implement high-resolution timing, take the following steps:
Use QueryPerformanceCounter and QueryPerformanceFrequency instead of RDTSC. These APIs may make use of RDTSC, but might instead make use of a timing devices on the motherboard or some other system services that provide high-quality high-resolution timing information. While RDTSC is much faster than QueryPerformanceCounter, since the latter is an API call, it is an API that can be called several hundred times per frame without any noticeable impact. (Nevertheless, developers should attempt to have their games call QueryPerformanceCounter as little as possible to avoid any performance penalty.)
When computing deltas, the values should be clamped to ensure that any bugs in the timing values do not cause crashes or unstable time-related computations. The clamp range should be from 0 (to prevent negative delta values) to some reasonable value based on your lowest expected framerate. Clamping is likely to be useful in any debugging of your application, but be sure to keep it in mind if doing performance analysis or running the game in some unoptimized mode.
Compute all timing on a single thread. Computation of timing on multiple threads — for example, with each thread associated with a specific processor — greatly reduces performance of multi-core systems.
Set that single thread to remain on a single processor by using the Windows API SetThreadAffinityMask. Typically, this is the main game thread. While QueryPerformanceCounter and QueryPerformanceFrequency typically adjust for multiple processors, bugs in the BIOS or drivers may result in these routines returning different values as the thread moves from one processor to another. So, it's best to keep the thread on a single processor.
All other threads should operate without gathering their own timer data. We do not recommend using a worker thread to compute timing, as this will become a synchronization bottleneck. Instead, worker threads should read timestamps from the main thread, and because the worker threads only read timestamps, there is no need to use critical sections.
Call QueryPerformanceFrequency only once, because the frequency will not change while the system is running.
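A minimal sketch of the steps above in Win32 C++ (the clamp range and the affinity mask value are illustrative choices, not requirements):

```cpp
#include <windows.h>

static LARGE_INTEGER g_frequency;      // queried once at startup

void InitGameTimer()
{
    // Keep the timing thread on one processor.
    SetThreadAffinityMask(GetCurrentThread(), 1);
    QueryPerformanceFrequency(&g_frequency);
}

double GetDeltaSeconds()
{
    static LARGE_INTEGER s_last = {};
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    double delta = double(now.QuadPart - s_last.QuadPart) / double(g_frequency.QuadPart);
    s_last = now;

    // Clamp to guard against glitches in the timing values.
    if (delta < 0.0) delta = 0.0;
    if (delta > 0.1) delta = 0.1;   // based on the lowest expected frame rate
    return delta;
}
```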
Application Compatibility
Many developers have made assumptions about the behavior of RDTSC over many years, so it is quite likely that some existing applications will exhibit problems when run on a system with multiple processors or cores due to the timing implementation. These problems will usually manifest as glitching or slow-motion movement. There is no easy remedy for applications that are not aware of power management, but there is an existing shim for forcing an application to always run on a single processor in a multiprocessor system.
To create this shim, download the Microsoft Application Compatibility Toolkit from Windows Application Compatibility.
Using the Compatibility Administrator, part of the toolkit, create a database of your application and associated fixes. Create a new compatibility mode for this database and select the compatibility fix SingleProcAffinity to force all of the threads of the application to run on a single processor/core. By using the command-line tool Fixpack.exe (also part of the toolkit), you can convert this database into an installable package for installation, testing, and distribution.
For instruction on using Compatibility Administrator, see the toolkit's documentation. For syntax for and examples of using Fixpack.exe, see its command-line help.
For customer-oriented information, see the following knowledge base articles from Microsoft Help and Support:
- Programs that use the QueryPerformanceCounter function may perform poorly in Windows Server 2003 and in Windows XP (article 895980)
- Game performance may be poor on a Windows XP-based computer that is using a dual-core processor (article 909944) | https://docs.microsoft.com/en-us/windows/win32/dxtecharts/game-timing-and-multicore-processors?redirectedfrom=MSDN | 2019-11-12T06:40:13 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.microsoft.com |
Podcasts App
Namespaces
The Perch Podcasts App uses the namespace perch:podcasts.
Master templates
Template IDs
Editing templates
The default templates are stored inside the perch_podcasts/templates folder, however you should not edit these directly.

To modify templates, copy the templates from /perch/addons/apps/perch_podcasts/templates/podcasts to /perch/templates/podcasts and then make your changes.

If a template has the same name in this folder as the template in the perch_podcasts folder it will be used rather than the default. You can also create your own templates with any name you like and pass in the name of the template in the function's options array.
Catalog Files and Digital Signatures
A digitally-signed catalog file (.cat) can be used as a digital signature for an arbitrary collection of files. A catalog file contains a collection of cryptographic hashes, or thumbprints. Each thumbprint corresponds to a file that is included in the collection.
Plug and Play (PnP) device installation recognizes the signed catalog file of a driver package as the digital signature for the driver package, where each thumbprint in the catalog file corresponds to a file that is installed by the driver package. Regardless of the intended operating system, cryptographic technology is used to digitally-sign the catalog file.
PnP device installation considers the digital signature of a driver package to be invalid if any file in the driver package is altered after the driver package was signed. Such files include the INF file, the catalog file, and all files that are copied by INF CopyFiles directives. For example, even a single-byte change to correct a misspelling invalidates the digital signature. If the digital signature is invalid, you must either resubmit the driver package to the Windows Hardware Quality Labs (WHQL) for a new signature or generate a new Authenticode signature for the driver package.
Similarly, changes to a device's hardware or firmware require a revised device ID value so that the system can detect the updated device and install the correct driver. Because the revised device ID value must appear in the INF file, you must either resubmit the package to WHQL for a new signature or generate a new Authenticode signature for the driver package. You must do this even if the driver binaries do not change.
The CatalogFile directive in the INF Version section of the driver's INF file specifies the name of the catalog file for the driver package. During driver installation, the operating system uses the CatalogFile directive to identify and validate the catalog file. The system copies the catalog file to the %SystemRoot%\CatRoot directory and the INF file to the %SystemRoot%\Inf directory.
Guidelines for Catalog Files
Starting with Windows 2000, if the driver package installs the same binaries on all versions of Windows, the INF file can contain a single, undecorated CatalogFile directive. However, if the package installs different binaries for different versions of Windows, the INF file should contain decorated CatalogFile directives. For more information about the CatalogFile directive, see INF Version Section.
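For illustration, an INF [Version] section might reference its catalog as in the hedged sketch below; the names and values are placeholders, and the decorated form is only needed when different binaries are installed per platform:

```
[Version]
Signature   = "$WINDOWS NT$"
Provider    = %ProviderName%
DriverVer   = 01/01/2020,1.0.0.0
CatalogFile = MyDriver.cat            ; undecorated: one catalog for all Windows versions
;CatalogFile.NTamd64 = MyDriver64.cat ; decorated: platform-specific catalog
```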
If you have more than one driver package, you should create a separate catalog file for each driver package and give each catalog file a unique file name. Two unrelated driver packages cannot share a single catalog file. However, a single driver package that serves multiple devices requires only one catalog file.
How to: Search a Document Incrementally
You can use an incremental search to find text in the current document as you type the characters of the search string.
Note
You cannot use wildcards or regular expressions in search strings for incremental searches.
Incremental searches are performed by default from the current location in the document downward, and from left to right. To move to the next match, press CTRL+I. To reverse the direction of the search, press CTRL+SHIFT+I..
Note
The dialog boxes and menu commands you see might differ from those described in Help depending on your active settings or edition. To change your settings, choose Import and Export Settings on the Tools menu. For more information, see Visual Studio Settings.:
See Also
Tasks
How to: Search Interactively
How to: Search Using Results Lists
Reference
Quick Find, Find and Replace Window
Find in Files, Find and Replace Window | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/f27e8wzh%28v%3Dvs.90%29 | 2019-11-12T06:13:42 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.microsoft.com |
Twitter App
Namespaces
The Perch Twitter App uses the namespace perch:twitter.
Master templates
Tweets are imported rather than input in the Control Panel so the Twitter App has no Master Template used for data entry.
Default templates
Template IDs
Editing Templates
The default templates are stored inside the perch_twitter/templates folder, however you should not edit these directly.

To modify templates, copy the templates from /perch/addons/apps/perch_twitter/templates/twitter to /perch/templates/twitter and then make your changes.

If a template has the same name in this folder as the template in the perch_twitter folder it will be used rather than the default. You can also create your own templates with any name you like and pass in the name of the template in the function's options array.
Source code for modelhublib.imageconverters.imageConverter
```python
class ImageConverter(object):
    """ Abstract base class for image converters, following chain of responsibility design pattern.

    For each image loader derived from :class:`~modelhublib.imageloaders.imageLoader.ImageLoader`
    you should implement a corresponding image converter using this as base class.

    Args:
        sucessor (ImageConverter): Next converter in chain to attempt loading the image if this one fails.
    """

    def __init__(self, successor = None):
        self._successor = successor

    def setSuccessor(self, successor):
        """ Setting the next converter in chain of responsibility.

        Args:
            sucessor (ImageConverter): Next converter in chain to attempt loading the image if this one fails.
        """
        self._successor = successor

    def convert(self, image):
        """ Tries to convert image to numpy and on fail forwards convert request to next handler
        until sucess or final fail.

        There should be no need to overwrite this. Overwrite only :func:`~_convert` to convert the
        image type you want to support and let this function as it is to handle the chain of
        responsibility and errors.

        Args:
            image: Image object to convert.

        Returns:
            Numpy array as converted by :func:`~_convert` or a successor converter.

        Raises:
            IOError if image could not be converted by any converter in the chain.
        """
        try:
            npArr = self._convert(image)
        except:
            if self._successor:
                return self._successor.convert(image)
            else:
                raise IOError("Could not convert image of type \"%s\" to Numpy array." % type(image).__name__)
        return npArr

    def _convert(self, image):
        """ Abstract method. Overwrite to implement image conversion to numpy array from the
        image object type you want to support.

        When overwriting this, make sure to raise IOError if image cannot be converted.

        Args:
            image: Image object to convert.

        Returns:
            Should return image object converted to numpy array with 4 dimensions [batchsize, z/color, height, width]
        """
        raise NotImplementedError("This is a method of an abstract class.")
```
Getting Started with GoCD on Kubernetes
Docker workflows
Using docker containers to execute docker commands can be done in the following ways. This section identifies the approaches and the drawbacks to keep in mind when using these approaches.
Docker in Docker (DinD)
Docker in Docker involves setting up a docker binary and running an isolated docker daemon inside the container. This requires that the host docker container be run in privileged mode. The privileged flag enables the host container to do almost all of the things that the underlying host machine can do. We have provided the GoCD Agent DinD image that can be used to run docker related tasks in a GoCD agent.
Drawbacks:
As explained by jpetazzo in his blogpost, there are some cases where DinD may not work for you. Additionally, there is a security risk of running a container in privileged mode as well.
Docker Outside of Docker (DooD)
Docker outside of Docker involves volume mounting the host’s docker socket onto the GoCD agent container and use the host’s docker daemon to execute docker related commands from the CI.
This requires the docker binary to be installed in the gocd agent image because the Docker Engine is no longer distributed as (almost) static libraries.
This can be achieved by doing:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e GO_SERVER_URL="https://<go-server-ip>:8154/go" <gocd-agent-image-with-docker>
Drawbacks:
- Maintaining a custom gocd agent image with docker.
- Name conflicts may occur if there are two containers with the same name that the GoCD agents bring up.
- Consider the cleanup of the containers after a build completes. The GoCD agent container is brought up and down by an elastic agent plugin. However containers brought up by these ephemeral GoCD agents for build and test are not automatically terminated by the plugin at the end of a build. They must be explicitly cleaned up before the GoCD agent is brought down. In addition, layers of images are cached and reused. Build isolation is lost.
- The containers brought up this way are outside of the helm scope and not easily accessible.
Using a single docker GoCD agent image
In cases where DinD and DooD both don’t work for your use case, an alternative is to package all the build time dependencies into a single docker image. Use this docker image with the GoCD Elastic Agents to run the builds. This works only if you are not choosing to containerize your application builds and tests. In other words, this works well for a workflow that doesn’t involve running docker related commands using elastic agents. | https://docs.gocd.org/current/gocd_on_kubernetes/docker_workflows.html | 2019-11-12T06:41:43 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.gocd.org |
Table of Contents
Introduction
Step 1.
Default Ports
By default, Kong Enterprise listens on the following ports:
:8000: incoming HTTP traffic from Consumers, and forwarded to upstream Services.
:8443: incoming HTTPS traffic. This port behaves similarly to the :8000 port, except that it expects HTTPS traffic only.
:8003: Dev Portal listens for HTTP traffic, assuming Dev Portal is enabled.
:8446: Dev Portal listens for HTTPS traffic, assuming Dev Portal is enabled.
:8004: Dev Portal /files traffic over HTTP, assuming the Dev Portal is enabled.
:8447: Dev Portal /files traffic over HTTPS, assuming the Dev Portal is enabled.
:8001: Admin API listens for HTTP traffic.
:8444: Admin API listens for HTTPS traffic.
:8002: Kong Manager listens for HTTP traffic.
:8445: Kong Manager listens for HTTPS traffic.
Next Steps
With Kong Enterprise started and the Super Admin logged in, it is now possible to create any entity in Kong.
Next, see how to segment the Kong cluster into Workspaces. | https://docs.konghq.com/enterprise/0.36-x/getting-started/start-kong/ | 2019-11-12T06:56:58 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.konghq.com |
The Serialize method is used to populate a NetworkWriter stream from a message object.
Developers may implement this method for precise control of serialization, but they do not have to. An implementation of this method will be generated for derived classes.
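A hedged example of a manual implementation in a derived message class (UNet's UnityEngine.Networking API; the fields are placeholders):

```csharp
using UnityEngine.Networking;

public class ScoreMessage : MessageBase
{
    public int score;
    public string playerName;

    public override void Serialize(NetworkWriter writer)
    {
        writer.Write(score);
        writer.Write(playerName);
    }

    public override void Deserialize(NetworkReader reader)
    {
        score = reader.ReadInt32();
        playerName = reader.ReadString();
    }
}
```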
The Particle Switch operator gives you control over which particles will be affected by the particle bind solver.
If a particle has bindings attached to it, but you don’t want that particle to be affected by the bind solver (for example, you want that particle to stay attached to a static object), you should always be sure to de-activate that particle with a Particle Switch operator. When you de-activate a particle, the particle bind solver will treat it as though it has infinite mass, which will allow attached bindings to react to it properly. If you do not de-activate particles properly, your bindings may experience undesirable stretching artifacts.
Activate bindings: activates particle bindings, causing the bind solver to treat them normally.
De-activate bindings: de-activates particle bindings, causing the bind solver to treat them as though they have infinite mass.
Put particles to sleep: forces particles to sleep at the current frame.
Activate wobble: activates the wobble solver for relevant particles.
De-activate wobble: de-activates the wobble solver for affected particles.
Switch on bind overstretch: causes particles to activate only if attached bindings stretch past a certain point.
Stretch %: the threshold stretch percentage that will cause attached particles to activate.
Any: activates particles if any of their bindings stretch past the stretch threshold.
All: only activates particles if all of their bindings stretch past the stretch threshold. | http://docs.tyflow.com/tyflow_particles/operators/particle_switch/ | 2019-11-12T05:21:15 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.tyflow.com |
EmberZclCommandContext_t Struct Reference
#include <zcl-core-types.h>
This structure holds a command context.
Field Documentation
A cluster specification of a command.
CoAP code of a command.
A command identifier.
An endpoint identifier of a command.
A group identifier of a command.
EZ-Mode needs access to the request info structure
Payload of a command.
Payload length of a command.
A remote address of a command.
The documentation for this struct was generated from the following file:
zcl-core-types.h | https://docs.silabs.com/thread/2.6/structEmberZclCommandContext-t | 2019-11-12T05:52:22 | CC-MAIN-2019-47 | 1573496664752.70 | [] | docs.silabs.com |
Using Splunk for Monitoring
Splunk is a third-party monitoring tool that you can use with Exasol for your system monitoring. To know more about Splunk, see Splunk Documentation.
This section explains how you can install and configure Splunk to use with Exasol and collect logs and metrics.
Installing Splunk Server
Prerequisites
- CentOS 7 operating system
- Splunk installer package for Linux
Installing Splunk
Do the following to install Splunk:
- Run the following command to install the package.
rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm
The installation directory is /opt/splunk.
- Start Splunk and enter username and password.
/opt/splunk/bin/splunk start
- Skip this step if /etc/hosts is configured properly and the name resolution is working.
Create an SSH port forward to access the Splunk Web UI.
ssh root@HOST-IP -L8000:localhost:8000
- Open the Splunk Web UI and log in.
The username and password are the same as you created in earlier step.
Set Up Index to Store Data
Do the following to set up the index to store data:
- Log in to the Splunk Web UI.
- Go to Settings > Indexer.
- Click New Index.
- On the New Index page, enter index name (remotelogs), type (event), and Max Size (20 GB).
- Click Save.
Create a Listener to Receive Data
Do the following to create a listener to receive data:
- Log in to the Splunk Web UI.
- Go to Settings > Forwarding and receiving.
- Under Receiving Data, click +Add New.
- Enter the port number in Listen to this port.
- Click Save.
- Do one of the following to restart Splunk:
- Run command /opt/splunk/bin/splunk restart
- In Splunk Web UI, go to Settings > Server Controls and click Restart Splunk.
Install Splunk Universal Forwarder
Do the following to install the Splunk Universal Forwarder:
- Download the Splunk Forwarder package for Linux.
- Run the following command to install the package.
rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm
- Run the following command to start the Splunk Forwarder.
/opt/splunkforwarder/bin/splunk start
- Accept the EULA and enter the username and password (same as the Splunk Server).
Set Up Forward-server and Monitor
Do the following to set up a forward-server and monitor:
- Run the following command to add Splunk server as a server to receive forwarded log files:
/opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth <USER>:<PASSWORD>
The username and password in the above command are for the Splunk Server.
- Run the following command to add a log file to monitoring list.
/opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs
In the above command, log file name is added with its full path and index name is the one created in Set Up Index to Store Data.
- Run the following command to check if the forward server and log files are configured properly.
/opt/splunkforwarder/bin/splunk list monitor
Once you enter your username and password, you should see output with the list of monitored logs and files:
/health/log/watchdog/watchdog.log*
/opt/splunkforwarder/var/log/watchdog/watchdog.log
$SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
$SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
$SPLUNK_HOME/etc/splunk.version
/var/log/all.log
/var/log/audit/audit.log
Search and View Reports
Do the following to search and view reports:
- Log in to the Splunk Web UI.
- In the App bar, click Search and Reporting.
- Enter your search query in the search bar and press Enter.
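For example, a simple query over the data forwarded earlier in this guide (using the index and sourcetype configured above) could look like this:

```
index="remotelogs" sourcetype="linux_logs" | head 100
```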
Collect Metrics
You can install Splunk Add-on for Unix and Linux to collect metrics from your server. Do the following to install and configure the plug-in:
- Download the plug-in from Splunkbase.
- Unpack and copy the package to Splunk Forwarder folder.
tar xf splunk-add-on-for-unix-and-linux_602.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/
- Edit the inputs.conf file to enable the metrics you want to collect.
vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf
- Set disable = 0 to enable the metrics in the file and save it.
- Restart the Splunk forwarder.
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start | https://docs.exasol.com/db/6.2/administration/gcp/monitoring/splunk.htm | 2022-09-25T08:04:15 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.exasol.com |
Nodes can be placed into maintenance mode using the oc adm utility, or using NodeMaintenance custom resources (CRs).

Placing a node into maintenance mode marks it as unschedulable and evicts the workloads running on it; virtual machine instances that cannot be migrated are shut down. Virtual machines with a RunStrategy of Running or RerunOnFailure are recreated on another node. Virtual machines with a RunStrategy of Manual are not automatically restarted.
When installed as part of OKD. | https://docs.okd.io/latest/virt/node_maintenance/virt-about-node-maintenance.html | 2022-09-25T07:58:26 | CC-MAIN-2022-40 | 1664030334515.14 | [] | docs.okd.io |
How to sign up for the Translator Text API
- Don't have an account? You can create a free account to experiment at no charge.
- Already have an account? Sign in
Create a subscription to the Translator Text API
After you sign in to the portal, you can create a subscription to the Translator Text API as follows:
- Select + Create a resource.
- In the Search the Marketplace search box, enter Translator Text and then select it.
Authentication key
When you sign up for Translator Text, you get a personalized access key unique to your subscription. This key is required on each call to the Translator Text API.
- Retrieve your authentication key by first selecting the appropriate subscription.
- Select Keys in the Resource Management section of your subscription's details.
- Copy either of the keys listed for your subscription.
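As a quick, hedged test of the key (the v3 Translator endpoint is assumed here; the region header may also be required for regional resources):

```python
import json
import uuid
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": "de"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    # "Ocp-Apim-Subscription-Region": "<your-region>",  # if your resource is regional
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "Hello, world"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(json.dumps(response.json(), indent=2))
```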
Learn, test, and get support
Microsoft Translator will generally let your first couple of requests pass before it has verified the subscription account status. If the first few Microsoft Translator API requests succeed and then subsequent calls fail, the error response will indicate the problem. Please log the API response so you can see the reason.
Pricing options
Customization
Use Custom Translator to customize your translations and create a translation system tuned to your own terminology and style, starting from generic Microsoft Translator neural machine translation systems. Learn more
Additional resources
How To: Access a Service From a Workflow Application
This topic describes how to call a workflow service from a workflow console application. It depends on completion of the How to: Create a Workflow Service with Messaging Activities topic. Although this topic describes how to call a workflow service from a workflow application, the same methods can be used to call any Windows Communication Foundation (WCF) service from a workflow application.
Create a Workflow Console Application Project
Start Visual Studio 2012.
Load the MyWFService project you created in the How to: Create a Workflow Service with Messaging Activities topic.
Right click the MyWFService solution in the Solution Explorer and select Add, New Project. Select Workflow in the Installed Templates and Workflow Console Application from the list of project types. Name the project MyWFClient and use the default location as shown in the following illustration.
Click the OK button to dismiss the Add New Project Dialog.
After the project is created, the Workflow1.xaml file is opened in the designer. Click the Toolbox tab to open the toolbox if it is not already open and click the pushpin to keep the toolbox window open.
Press Ctrl+F5 to build and launch the service. As before, the ASP.NET Development Server is launched and Internet Explorer displays the WCF Help Page. Notice the URI for this page as you must use it in the next step.
Right click the MyWFClient project in the Solution Explorer and select Add > Service Reference. Click the Discover button to search the current solution for any services. Click the triangle next to Service1.xamlx in the Services list. Click the triangle next to Service1 to list the contracts implemented by the Service1 service. Expand the Service1 node in the Services list. The Echo operation is displayed in the Operations list as shown in the following illustration.
Keep the default namespace and click OK to dismiss the Add Service Reference dialog. The following dialog is displayed.
Click OK to dismiss the dialog. Next, press CTRL+SHIFT+B to build the solution. Notice in the toolbox a new section has been added called MyWFClient.ServiceReference1.Activities. Expand this section and notice the Echo activity that has been added as shown in the following illustration.
Drag and drop a Sequence activity onto the designer surface. It is under the Control Flow section of the toolbox.
With the Sequence activity in focus, click the Variables link and add a string variable named
inString. Give the variable a default value of
"Hello, world"as well as a string variable named
outStringas shown in the following diagram.
Drag and drop an Echo activity into the Sequence. In the properties window bind the
inMsgargument to the
inStringvariable and the
outMsgargument to the
outStringvariable as shown in the following illustration. This passes in the value of the
inStringvariable to the operation and then takes the return value and places it in the
outStringvariable.
Drag and drop a WriteLine activity below the Echo activity to display the string returned by the service call. The WriteLine activity is located in the Primitives node in the toolbox. Bind the Text argument of the WriteLine activity to the
outStringvariable by typing
outStringinto the text box on the WriteLine activity. The workflow should now look like the following illustration.
Right-click the MyWFService solution and select Set Startup Projects .... Select the Multiple startup projects radio button and select Start for each project in the Action column as shown in the following illustration.
Press Ctrl + F5 to launch both the service and the client. The ASP.NET Development Server hosts the service, Internet Explorer displays the WCF help page, and the client workflow application is launched in a console window and displays the string returned from the service ("Hello, world"). | https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-access-a-service-from-a-workflow-application | 2019-09-15T09:11:47 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.microsoft.com |
Configuring Authentication for Reporting Services
New: 12 December 2006
In Reporting Services, authentication is handled by Internet Information Services (IIS). Reporting Services uses the authentication method that is set at the virtual directory level to authenticate user connections to the report server. In most cases, the authentication type is inherited from the parent Web site, but you can specify a different authentication type on the virtual directory.
Reporting Services works with the following authentication methods in IIS:
- Integrated Windows authentication.
- Basic authentication.
- Anonymous authentication, recommended only for forwarding a logon request to a third-party or custom forms-based authentication provider.
Digest and .NET Passport authentication are not supported in Reporting Services.
If you are developing applications that integrate with Reporting Services, you need to know how calls to the Report Server Web service are authenticated. For more information, see Web Service Authentication.
Default Authentication Settings
By default, the report server and Report Manager virtual directories are configured to use Integrated Windows authentication. Anonymous access is not enabled on the virtual directory. No other authentication methods are selected.
If you are using default security,.
The default settings work best if all client and server computers are in the same domain or in a trusted domain, the browser type supports Integrated Windows authentication, and the report server is deployed for intranet access behind a corporate firewall. If you support Internet access to a report server or if you are using Workgroup security, you will most likely need to customize the default settings.
Trusted and single domains are a requirement for passing Windows credentials. Credentials can be passed more than once only if you enable Kerberos version 5 protocol for your servers. If Kerberos is not enabled, credentials can be passed only once before they expire. For more information about configuring credentials for multiple computer connections, see Specifying Credential and Connection Information.
Note
If the Web site that contains the report server virtual directory is configured for Kerberos authentication and you are using a domain user account on the application pool, you might need to create a Service Principal Name (SPN) for the account. For more information, see Configuring Constrained Delegation for Kerberos (IIS 6.0) on the Microsoft TechNet Web site.
Overview of Authentication Types
IIS authenticates a user connection to a report server and to Report Manager. The following list describes the IIS authentication options that you can use.
- Integrated Windows authentication with delegated or impersonated credentials
Connection to the report server uses encrypted domain credentials of the current user. Windows authentication (integrated security) is the default authentication method for the report server and Report Manager virtual directories. Reporting Services Configuration tool and Setup always configure directory security to use this method. If Kerberos authentication is enabled in the domain, the current security ticket can also be used to connect to external data sources that provide data to reports.
Basic authentication
Connection to the report server using a previously assigned Windows account user name and password. With Basic authentication, the user name and password are transmitted in clear text. However, you can make the transmission more secure by using Secure Sockets Layer (SSL) to encrypt user account information before it is sent across the network.
SSL provides an encrypted channel for sending a connection request from the client to the report server over an HTTP TCP/IP connection. For more information, see Using SSL to Encrypt Confidential Data on the Microsoft TechNet Web site.
- Anonymous access
Connection to the report server for all users is made under the Windows user account for Anonymous access. In IIS, this is IUSR_<computername> account by default. Users are not prompted for a user name or password. Anonymous access should be used only if you are using a custom security extension. If you are not using custom authentication, avoid using Anonymous access on the report server virtual directory. You will not be able to vary role assignments in a meaningful way. All users will access the report server under the Anonymous user account, and no one will have permission to administer the report server through Report Manager.
Changing Authentication Settings
Reporting Services uses Integrated Windows authentication by default. If you want to use a different authentication provider, use IIS Manager to specify directory security properties.
- Open IIS Manager.
- Right-click the report server virtual directory and click Properties.
- Click Directory Security.
- In Authentication and access control, click Edit to open the Authentication Methods dialog box.
- (Optional) Clear the Integrated Windows authentication check box.
If the report server virtual directory is configured for both Integrated Windows authentication and Basic authentication, the report server will try Windows authentication first. If you want to use only Basic authentication, you must clear the Integrated Windows authentication check box.
- Select Basic authentication.
- Set the default domain or realm used to authenticate clients to the Web server.
Do not enable Anonymous access unless you are deploying a custom authentication extension or you are enabling access to Report Builder through a report server that is configured for Basic authentication. Do not enable Digest or Passport; they are not supported authentication options in Reporting Services.
When configuring an authentication method for a report server, be sure to use the same method for all components. Do not specify a different authentication type for Report Manager. If you do, users must provide different logon credentials for both Report Manager and report server operations. Similarly, the authentication type for Report Builder should be identical to the authentication provider used by the report server, except when you configure the report server to use Basic authentication. If you use Basic authentication, you must allow Anonymous access on the Report Builder folder to forward a connection request to the ClickOnce application launcher. For more information, see Configuring a Report Server for Report Builder Access.
For more information about enabling Basic authentication and selecting an authentication type in IIS, see Enabling Basic Authentication and Configuring the Realm Name and Selecting a Web Site Authentication Method on the Microsoft TechNet Web site.
Configuring Authentication for Extranet and Internet Access
Integrated Windows authentication is seldom practical for deployment models that require Internet or extranet access. If you are deploying Reporting Services on an Internet-facing Web server, you should replace Windows authentication with a custom authentication extension that gives you more control over how external users are granted access to the report server. Creating a custom authentication extension requires custom code and expertise in ASP.NET security. For more information, see Implementing a Security Extension.
If you do not want to code a custom authentication extension, you can use Microsoft Active Directory groups and accounts, but you should greatly reduce the scope of a report server deployment. The following guidelines describe how to support this scenario:
- Create a low-privileged domain user account with read-only permissions. The account must have access to the computer hosting the report server. Provide a custom Web form so that users can log on using the low-privileged domain account.
- Create role assignments that map the user account to specific items in the report server folder hierarchy. You can limit access to read-only operations by choosing as the role assignment the Browser predefined role.
- Configure reports to use stored credentials to get data for the report. This approach is useful if you want to query the external data source using an account that is different from the account that allows access to the report server. For more information about these options, see Specifying Credential and Connection Information.
See Also
Concepts
Managing Permissions and Security for Reporting Services
Creating, Modifying, and Deleting Role Assignments
Specifying Credential and Connection Information
Connections and Accounts in a Reporting Services Deployment
Configuring a Report Server for Secure Sockets Layer (SSL) Connections
Configuring a Report Server for Report Builder Access
Other Resources
Implementing a Security Extension
Help and Information
Getting SQL Server 2005 Assistance | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/bb283249%28v%3Dsql.90%29 | 2019-09-15T08:49:45 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.microsoft.com |
Configure Print and Document Services
Updated: May 11, 2016. Windows PowerShell cmdlets.
In this document
Step 1: Install v4 drivers
Step 2: Install v3 drivers (if necessary)
Step 3: Create a shared print queue
Step 4: Connect to the print queue
Step 5: Print from Windows
Step 6: Print from a Windows app
Note
This topic includes sample Windows PowerShell cmdlets that you can use to automate some of the procedures described. For more information, see Using Cmdlets.
Step 1: Install v4 drivers
Using v4 drivers with Windows Server 2012 print servers is recommended wherever possible. If Windows 8 client computers are being used, users and IT administrators will get the new printing experience and all of the benefits of the v4 driver model. For some devices, using a v3 driver will be necessary; differences between Windows Server 2012 and previous versions of Windows are also discussed.
After the driver is installed, it should display in the list of installed drivers.
Figure 1: Installed Drivers
Once installed, V4 drivers are identified by the Version field displayed in the Driver Properties:
Figure 2: Driver Properties
The driver name will state Class Driver, the Config File should show PrintConfig.dll, and the driver path should be %systemroot%\system32\DriverStore.
Add-PrinterDriver -Name "HP Color LaserJet 5550 PS Class Driver”
Step 2: Install v3 drivers (if necessary).
Step 3: Create a shared print queue
Add-Printer -Name "HP Color LaserJet 4700" -DriverName "HP Color LaserJet 5550 PS Class Driver" -Shared -ShareName "HP Color LaserJet 4700" -PortName "192.168.100.100"
Important
The Port and Driver must already exist or the Add-Printer command will fail. Use the Add-PrinterDriver and Add-PrinterPort cmdlets to install the driver and the port before running the Add-Printer command.
Step 4: Connect to the print queue.
Figure 3: Enhanced Point and Print Compatibility Driver.
Step 5: Print from Windows
The Windows 8 Shell includes a new user interface that is designed to help users easily discover and install devices. Print, Fax, and Scan devices are installed from the Settings or Devices charms:
Figure 4: Charms Bar with Devices and Settings
To install a Printer using the Windows 8 Shell.
Figure 5: Searching for Devices
Click or touch the device that you want to install and it will be added to the list of devices. If the printer is using a v4 driver then no other user interaction is necessary to install the device.
Note
In some cases you may need to add a printer through the Control Panel, Devices and Printers, Add Printer user interface. This works the same way as in previous Windows versions.
Important.
Step 6: Print from a Windows app.
Note
Printing from desktop applications remains unchanged and works the same as previous Windows versions.
To print a document from a Windows app
Click or tap the Devices charm or use the CTRL + P hotkey to activate the Printing interface.
Choose a device to print to by clicking or touching the Print device icon:
Figure 6: Choosing a print device
A print preview and basic print settings are displayed. The document can be printed using the Print button:
Figure 7: Basic print settings
Click More Settings to activate the advanced print settings dialog where page layout, paper and quality, and output options can be specified:
Figure 8: More Settings
See also
Create Custom Separator Pages in Windows Server 2012
Assign Delegated Print Administrator and Printer Permission Settings in Windows Server 2012
Print and Document Services Overview
Print and Document Services Architecture
Install Print and Document Services
Print Management Windows PowerShell cmdlets | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj134163%28v%3Dws.11%29 | 2019-09-15T08:31:03 | CC-MAIN-2019-39 | 1568514570830.42 | [array(['images/jj134163.64b6cdbf-c207-4675-a202-be61ec6e5df6%28ws.11%29.jpeg',
None], dtype=object)
array(['images/jj134163.edf1add6-20b0-444f-b8f3-a9817177c75b%28ws.11%29.jpeg',
None], dtype=object) ] | docs.microsoft.com |
Download Debugging Tools for Windows
The Windows Debugger (WinDbg) can be used to debug kernel-mode and user-mode code, to analyze crash dumps, and to examine the CPU registers while the code executes.
Download WinDbg Preview
WinDbg Preview is a new version of WinDbg with more modern visuals, faster windows, a full-fledged scripting experience, built with the extensible debugger data model front and center. WinDbg Preview is using the same underlying engine as WinDbg today, so all the commands, extensions, and workflows still work as they did before.
Download WinDbg Preview from the Microsoft Store: WinDbg Preview.
Learn more about installation and configuration in WinDbg Preview - Installation.
Debugging Tools for Windows 10 (WinDbg)
If you just need the Debugging Tools for Windows 10, and not Windows Driver Kit (WDK) for Windows 10 or Visual Studio 2017, you can install the debugging tools as a standalone component from the Windows SDK. In the SDK installation wizard, select Debugging Tools for Windows, and deselect all other components.
Get Debugging Tools for Windows (WinDbg) from the SDK: Windows 10 SDK.
Learn more about WinDbg and other debuggers in Debugging Tools for Windows (WinDbg, KD, CDB, NTSD).
Tip
If the Windows SDK is already installed, open Settings, navigate to Apps & features, select Windows Software Development Kit, and then click Modify to change the installation to add Debugging Tools for Windows.
Looking for the debugging tools for earlier versions of Windows?
To download the debugger tools for previous versions of Windows, you need to download the Windows SDK for the version you are debugging from the Windows SDK and emulator archive. In the installation wizard of the SDK, select Debugging Tools for Windows, and deselect all other components.
Looking for related downloads?
Note: The HP-UX version of Splunk Enterprise does not register itself to auto-start on reboot. However, you can register it by running the following command in the $SPLUNK_HOME/bin directory in a shell prompt:

./splunk enable boot-start
Whitelisting

Splunk Enterprise defines whitelists and blacklists.
In the section "Wildcards and regular expression metacharacters"
the explanation for the last example
[monitor:///var/.../log[A-Z0-9]*.log]
currently reads:
-> The explanation sounds more like only [A-Z0-9] was treated as a regex, and * is a wildcard
-> If [A-Z0-9]* is indeed treated as a regex, then the explanation should read:
monitors all files in any subdirectory of the /var/ directory that:
begin with log, then
contain any combination of capital letters (from A-Z) and numbers (from 0-9) (including none), then
end in .log.
Hi Frankwayne,
> If the rules do not support regular expressions containing backslashes, why does the next sentence advise using backslashes?
Because that is how you are able to use the wildcards on Windows. Wildcards are not regular expressions, but special elements that let you specify input paths based on the type of element.
> Why would one escape wildcards?
Because that is the only choice you have on Windows.
> Do you mean we should escape the 'asterisk' or 'three periods in a row'? (With the forbidden backslashes?)
Yes.
> Does this mean we cannot evaluate Windows paths (which unavoidably contain backslashes) in whitelist or blacklist rules at all?
No, you just have to escape the path separators.
Assuming I have multiple log files (e.g. for many instances of the process), how to extract an exact filename and present as a field?
[monitor:///foo/bar*.log]
<filename field> = <???>
Thank you,
Jarek
>.
Waechtler_amasol, thanks for the detailed question! I've emailed you a response. | https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Specifyinputpathswithwildcards | 2019-09-15T08:53:29 | CC-MAIN-2019-39 | 1568514570830.42 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
These are the common messages. Also see and
alive/heartbeat message containing the MD5sum of the aircraft configuration
Answer to PING datalink message, to measure latencies
message has no fields
Datalink status reported by an aircraft for the ground
Waypoint with id wp_id has been updated/moved to the specified UTM coordinates.
Velocities in body axes (assuming small pitch/roll angles) as measured by the dragspeed module and by the INS.
Telemetry message for monitoring the status of the Distributed Circular Formation.
Relative localization data for other tracked MAVs in terms of x y and z in the body axis
Information about the trajectory followed by the Guidance Vector Field algorithm.
Debug message for the JeVois smart camera corresponding to the standardized messages
This message has the following fields:
- type (uint8): Standardized message type JEVOIS_MSG_[T1|N1|D1|T2|N2|D2|F2|T3|N3|D3|F3]
- id (char[]): Text string describing the reported object
- nb (uint8): Number of elements in the coordinates array
- coord (int16[]): List of coordinates corresponding to 1D, 2D or 3D messages
- dim (uint16[3]): 1, 2 or 3D dimension
- quat (float[4]): Quaternion that relates the object's frame to the camera's frame, if appropriate
Wind information returned to the ground station. The wind is reported as a vector, it gives the direction the wind is blowing to. This can be used to acknowledge data comming from the ground wind estimator or from an embedded algorithm. Flags field definition:
Generic message to send position measurement from computer vision
Logger status and error id (dependent of the logging system)
Rotorcraft rate control loop.
Extended Kalman Filter 2 status message which gives feedback about the input- and output states of the filter.
Airflow data returned by OTF and uADC 3D probes from Aeroprobe.
Minimalistic message to track Rotorcraft over very low bandwidth links
Rover status message
Generic message with pixel coordinates of detected targets
Message for key exchange during crypto initialization
This messages includes the messages send by the Optical Flow Hover module, providing data for all three axes.
Electronic Speed Controller data
RTOS monitoring
Air-to-air message for the Distributed Circular Formation algorithm. It transmits the ac's theta to its neighbor
Message for monitoring key exchange status
Wind information. The wind is reported as a vector, it gives the direction the wind is blowing to. This can be comming from the ground wind estimator or from an embedded algorithm. Flags field definition:
message has no fields
This message is used to send joystick input that can be used in any mode for control of the vehicle or the payload depending of the 'joystick_handler' function implementation. The scale of the inputs should be between -PPRZ_MAX (or 0) and PPRZ_MAX.
This message is used to set 3D desired vehicle's states such as accelerations or velocities. The 'flag' field can be used at the user convenience to provide indication about the type of value to track (like position, speed, acceleration, ...) or the reference frame (ENU / NED, LTP / body frame, ...)
Custom navigation pattern or action for mission controller. This will add the mission element correspond to the string identifier 'type' if it has been registered.
Set vehicle position or velocity in NED. Frame can be specified with the bits 0-3 Velocity of position setpoint can be specified with the bits 5-7 Flags field definition:
Position and speed in local frame from a remote GPS or motion capture system Global position transformations are handled onboard if needed
Global position, speed and ID a target for functions like Follow Me
Overcome setting ID and block ID problems in the case of multiboard autopilots like AP/FBW. With this message a KILL command can be sent to AP and FBW at the same time.
Init the table of an aircraft for the Distributed Circular Formation algorithm. If the nei_id is equal to zero, then you wipe out (clean) the whole table of the aircraft.
Message for key exchange during crypto initialization
message has no fields
message has no fields
Datalink status reported by Server for the GCS Combines DATLINK_REPORT (telemetry class) and LINK_REPORT (ground class)
Report a telemetry error
Encapsulated a telemetry class message (when using redundant link)
Encapsulated a datalink class message (when using redundant link)
Datalink status reported by Link for the Server
The SHAPE message used to draw shapes onto the Paparazzi GCS. Field name shape is used to define the type of shape i.e. Circle, Polygon, Line, or Text. This is indexed from 0-3 respectively.
Each shape drawn must have an id number associated with it. This id number in conjuction with the shapetype will be needed to update or delete the shape. A circle can be defined with the same id as a polygon but since they have different shape types they are considered unique.
linecolor and fillcolor take in a color string ie: "red", "blue"
opacity will change the level of transparency of the fill. 0 - Transparent 1 - Light Fill 2 - Medium Fill 3 - Opaque
Passing a status of 0 will create or update the shape specified by id and type. Passing a status of 1 will delete the shape specified by id and type.
latarr is an array of coordinates that contain the latitude coordinate for each point in the shape. The array is comma separated. lonarr is similar to latarr but contain the longitude coordinate for each point in the shape.
Circle and Text type will take the first coordinates given to place the shape. Polygon will take all the coordinates given. Line will take the first two coordinates given.
Radius is only used for the circle.
Text will always be populated with each message using the first set of coordinates. The text field can not be blank or have spaces. If text is not desired for a shape then pass "NULL" into the text field.
Ground reference provided by an external positioning system for instance
Generic 4 axis and 4 buttons joystick or joypad. This message can be provided by the 'input2ivy' tool for other ground agents. Standard joystick axis values are on 16 bits signed integers, but tools like 'input2ivy' may scale them on int8 type.
message has no fields
Raw data fromt the stereocamera. Type defines what kind of data it is. This can be raw image, disparity map, obstacle histogram, ect.
Velocity measured using optical flow and stereovision. All parameters are in the camera frame
Estimated state of the camera. As the stereocamera has no inertial sensors, this data should be sent to the stereocamera to enable onboard derotation of the optical flow
Forward FBW datalink to AP
Forward AP telemetry to FBW | http://docs.paparazziuav.org/latest/paparazzi_messages.html | 2019-09-15T07:37:58 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.paparazziuav.org |
This tab provides a simple file select field to select an add-on to be manually installed on your Open-Realty site. The file should be a compressed ZIP file and compatible with the add-on Manager to be successfully installed.
To update an add-on with the Manual Installation function, you must check the "Update Existing Add-On" option to enable the add-on Manager to overwrite the existing version of the add-on. | http://docs.transparent-tech.com/open-realty/latest/manualinstallation.html | 2019-09-15T08:30:40 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.transparent-tech.com |
DeregisterTaskFromMaintenanceWindow
Removes a task from a maintenance window.
Request Syntax
{ "WindowId": "
string", "WindowTaskId": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- WindowId
The ID of the maintenance window the task should be removed from.
Type: String
Length Constraints: Fixed length of 20.
Pattern:
^mw-[0-9a-f]{17}$
Required: Yes
- WindowTaskId
The ID of the task to remove from the maintenance window.
Type: String
Length Constraints: Fixed length of 36.
Pattern:
^[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}$
Required: Yes
Response Syntax
{ "WindowId": "string", "WindowTaskId": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- WindowId
The ID of the maintenance window the task was removed from.
Type: String
Length Constraints: Fixed length of 20.
Pattern:
^mw-[0-9a-f]{17}$
- WindowTaskId
The ID of the task removed from the maintenance window.
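For reference, the equivalent AWS CLI call looks roughly like this (the window ID and task ID below are made-up placeholders):

aws ssm deregister-task-from-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --window-task-id 01234567-89ab-cdef-0123-456789abcdef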
Squawk v2
Squawk is a realtime broadcast which includes important headlines, price movement, and rumors as stories develop, giving traders and investors news in the fastest and most convenient form.
Squawk is built on top of WebRTC, so you can connect to it through a WebRTC-supported browser using standard WebRTC API methods. Following are the methods that you will need to implement while writing a client for connecting to Squawk.
Create a socket connection
First, create a web socket connection to squawk. The Benzinga Squawk service web socket address is:
wss://squawk-lb.benzinga.com/squawk
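For a quick manual test of the endpoint, you can use any WebSocket client; for example, with the wscat utility (assuming it is installed, e.g. via npm):

wscat -c wss://squawk-lb.benzinga.com/squawk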
Authenticate
Once the socket connection is created, authenticate with your
apikey.
{ "apikey": "f5kec5x6gplwdv8o5dcn5aydtyx132u8", "name": "[email protected]", following format:
From
{ " }
userId, you can map the
iceCandidateto use has started playing from another session (could be logged in from another browser/client). So discard all peer connection. | https://docs.benzinga.io/benzinga/squawk-v2.html | 2019-09-15T08:07:25 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.benzinga.io |
1 Introduction
To use Mendix Studio Pro’s native app capabilities, you can use the Native Mobile Quickstart app from the Mendix App Store. This app is optimized to quickly build a native mobile app. Out of the box, this app gives you a native page, a native phone profile to enable native device navigation, a native layout with menus, and native widgets and actions which leverage device capabilities.
The Native Mobile Quickstart app also includes four modules:
- Administration – helps you manage users
- Atlas UI Resources package – allows for app styling
- Nanoflow Commons – contains generic useful nanoflow actions
- Native Mobile Actions – contains various native widgets and nanoflow actions that leverage device capabilities
2 Prerequisites
Before starting this how-to, make sure you have completed the following prerequisites:
- Have a mobile device to test your native app
- For information on device requirements, see System Requirements
- If you wish to use an emulator for Android mobile testing, install a product such as Bluestacks or Genymotion (your emulator must have Google Play services supported)
3 Creating a New App Project Based on the Quickstart App
To view and test your app while developing, download the Make It Native app from either the Google Play store or the Apple App Store. The Native Mobile Quickstart app template includes the latest version of Atlas UI, as well as the Native Mobile Resources module containing widgets and nanoflow actions for native mobile apps. For details on using the Make It Native app to view the changes to your Mendix app, see the sections below.
3.1 Starting a Quickstarter App Project
To start a new app based on a template, follow these steps:
Open Mendix Studio Pro. Select File > New Project , and then select the Native Mobile Quickstart app:
Next, click Use this starting point:
Click Create app to close the dialog box:
Click Run Locally to see the app in action:
After running your app, you may see a Windows Security Alert dialog box. Accept the permissions selected by default and click Allow access to close the dialog box:
If asked to create database ‘default’, click Yes:
At this point you have a running native app. To view your app on a mobile device, however, you need to download the Make It Native app.
3.2 Downloading and Installing the Make It Native App
3.2.1 Downloading for Android
To view your app on an Android device (or emulator), you must download and install the Make It Native app from the Google Play store:
3.2.2 Downloading for iOS
To view your app on a iOS device, you must download and install the Make It Native app from the Apple App Store:
3.3 Viewing Your App on Your Testing Device
Viewing your app on a mobile device will allow you to test native features and other aspects of your app. This section is written for mobile devices, but you may use an Android emulator mentioned in the Prerequisites section above. To view your app, follow these steps:
- Locate your app’s QR code in Mendix Studio Pro by clicking the drop-down menu next to the View button, then selecting View in the Mendix App and navigating to the Native mobile tab. Here you will see your test app’s QR code.
- Start the Make It Native app by tapping its icon on your device.
Tap the Scan a QR Code button:
If prompted, grant the app permission to access your device’s camera.
Point your mobile device’s camera at the QR code. It will automatically launch your test app on your mobile device.
Your mobile device has to be on the same network as your development machine for the Make It Native app to work. If this is the case and the connection still fails, make sure that communication between devices is allowed in the Wi-Fi access point.
Now you can see your app on your device. While this is just a template app, whenever you make changes you will be able to view them live on your Make It Native app.
You may notice an Enable dev mode toggle on the Make It Native app home page. Turning this toggle on will give you more detailed warning messages during error screens, as well as additional functionality on the developer app menu:
3.4 Viewing Changes to Your App on Your Testing Device
To see how changes made in Mendix Studio Pro are displayed live on your testing device, make a small change to your app.
Put a text widget on your app’s home page. Then, write some text into it. In this example, “Native rules!” has been added:
Click Run Locally to automatically update the running app on your device, and see your new text:
When you click Run Locally, your app will automatically reload while keeping state.
Should you get an error screen while testing your app, there are easy ways to restart it:
- Tap your test app with three fingers to restart your app
- With the Enable dev mode toggle turned on, hold a three-fingered tap to bring up the developer app menu – here you can access ADVANCED SETTINGS and ENABLE REMOTE JS DEBUGGING
For more detailed instructions on debugging a native app, see Debug Native Apps (Advanced). | https://docs.mendix.com/howto/mobile/getting-started-with-native-mobile | 2019-09-15T07:27:27 | CC-MAIN-2019-39 | 1568514570830.42 | [array(['attachments/getting-started-with-native-mobile/make-it-native-googleplay.png',
'native app on googleplay'], dtype=object)
array(['attachments/getting-started-with-native-mobile/make-it-native-ios.png',
'native app on app store'], dtype=object)
array(['attachments/getting-started-with-native-mobile/enable-dev-mode.png',
'enable dev mode'], dtype=object) ] | docs.mendix.com |
Scaling with Event Hubs
There are two factors which influence scaling with Event Hubs.
- Throughput units
- Partitions
Throughput units
The throughput capacity of Event Hubs is controlled by throughput units. Throughput units are pre-purchased units of capacity. A single throughput unit lets you:
- Ingress: Up to 1 MB per second or 1000 events per second (whichever comes first).
- Egress: Up to 2 MB per second or 4096 events per second.
For more information about the auto-inflate feature, see Automatically scale throughput units.

Partitions

The number of partitions you choose is a trade-off: for the same throughput you could use a single partition or, say, 32 partitions; with 32 partitions, downstream consumers will have to read events across all 32 partitions. In the latter case, there is no obvious additional cost apart from the extra configuration you have to make on Event Processor Host.
While partitions are identifiable and can be sent to directly, sending directly to a partition is not recommended. Instead, you can use higher level constructs introduced in the Event.
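The partition count is specified when the event hub is created; for example, with the Azure CLI (the resource names below are placeholders for your own resource group, namespace, and event hub):

az eventhubs eventhub create \
    --resource-group my-resource-group \
    --namespace-name my-eventhubs-namespace \
    --name my-event-hub \
    --partition-count 4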
Next steps
You can learn more about Event Hubs by visiting the following links:
'Event Hubs'], dtype=object)
array(['../includes/media/event-hubs-partitions/multiple-partitions.png',
'Event Hubs'], dtype=object) ] | docs.microsoft.com |
BinaryMessageEncodingBindingElement.MaxReadPoolSize Property
Definition
Gets or sets the maximum number of XML readers that are allocated to a pool and are ready for use to process incoming messages.
public: property int MaxReadPoolSize { int get(); void set(int value); };
public int MaxReadPoolSize { get; set; }
member this.MaxReadPoolSize : int with get, set
Public Property MaxReadPoolSize As Integer
Property Value
The maximum number of readers to be kept in the pool. The default value is 64 readers.
Exceptions
ArgumentOutOfRangeException
The value set is less than or equal to zero.
Examples
be.MaxReadPoolSize = 16;
be.MaxReadPoolSize = 16
Remarks
Increasing this number increases memory consumption, but prepares the encoder to deal with sudden bursts of incoming messages because it is able to use readers from the pool that are already created instead of having to create new ones.
Save-Windows
Image
Syntax
Save-WindowsImage -Path <String> [-CheckIntegrity] [-Append] [-LogPath <String>] [-ScratchDirectory <String>] [-LogLevel <LogLevel>] [<CommonParameters>].
Examples
Example 1: Save servicing changes made to a mounted image
PS C:\> Save-WindowsImage -Path "c:\offline"
This command saves the servicing changes made to the Windows image mounted to c:\offline. It does not unmounts the image.
Parameters
Indicates that this cmdlet specifies the location of an existing .wim file to add the Windows image to when you save it.. the full path to the root directory of the offline Windows image that you want to[]
Microsoft.Dism.Commands.ImageObject
Microsoft.Dism.Commands.ImageObjectWithState
Outputs
Microsoft.Dism.Commands.ImageObject
Related Links
Feedback | https://docs.microsoft.com/en-us/powershell/module/dism/save-windowsimage?view=win10-ps | 2019-09-15T09:01:30 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.microsoft.com |
Use discovery models to match software with vulnerabilities

A discovery model is a software model associated with a customer software installation. If your instance uses Software Asset Management or Discovery to search for vulnerable software, you can use discovery models in Vulnerability Response to match software with vulnerable items.

Before you begin
Role required: sn_vul.vulnerability_write

Procedure
1. Navigate to Vulnerability > Libraries > Vulnerable Software. A list of vulnerable software downloaded from the NVD is shown.
2. Click a vulnerable software record to open it.
3. Click the Match discovery model related link. A Possible Software Discovery Model pop-up window displays a possible match for the software.
4. If the suggestion is correct, click the software name. If the suggestion is not correct, close the pop-up window, click the magnifying glass list icon on the Software discovery match field, and select a discovery model.
5. Click Confirm Model Auto-Match to confirm that the correct discovery model has been selected.
6. Click Update to save the record.

Note: You can also select discovery models for multiple records from the Vulnerable Software list. Select the check boxes for the records you want to match to a discovery model, then select Match discovery model from the Actions on selected rows choice list. If you match discovery models from the list, review each of the matched discovery models to ensure that they are correct. To confirm that a discovery model is correct, open the record where the model was matched, then click the Confirm Model Auto-Match link at the bottom of the form. As each record is confirmed, the Auto-Matched Discovery Model and Auto-Match Confirmed check boxes are selected. The Vulnerable Items related list displays the vulnerable items discovered for this software.

Related tasks: Remediate vulnerabilities
Gateway MQTT Transport Callbacks
Detailed Description
These callbacks are contributed by the Gateway MQTT Transport plugin.
Function Documentation
This function will be called when the MQTT client for the gateway receives an incoming message on a topic. If the message is processed by the application, true should be returned; if the message is not processed, return false. This function is called on a separate thread, so no stack calls should be made within the implementation of this function. Instead, use a global variable in that function to communicate the message arrival to a stack event or timer running from the main loop.
- Parameters
-
This function will be called when the state of the MQTT client changes.
- Parameters
- | https://docs.silabs.com/thread/2.7/group-transport-mqtt-callbacks | 2019-09-15T07:33:47 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.silabs.com |
Industrial Security Licensing Requirements
Industrial Security Subscription
An Industrial Security subscription activation code is available to support a number of different asset counts. Currently the following activation code asset count limits are available:
- 150
- 800
- 2000
Activation Code
To obtain a Trial Activation Code for Industrial Security, contact [email protected]. Trial Activation Codes are handled the same way by Industrial Security as full Activation Codes, except that Trial Activation Codes allow monitoring for only 30 days. During a trial of Industrial Security, all features are available. | https://docs.tenable.com/generalrequirements/Content/IndustrialSecurityLicensingRequirements.htm | 2019-09-15T07:53:15 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.tenable.com |
PolicyTargetSummary
Contains information about a root, OU, or account that a policy is attached to.
Contents
- Arn
The Amazon Resource Name (ARN) of the policy target.
For more information about ARNs in Organizations, see ARN Formats Supported by Organizations in the AWS Organizations User Guide.
Type: String
Pattern:
^arn:aws:organizations::.+:.+
Required: No
- Name
The friendly name of the policy target.
The regex pattern that is used to validate this parameter is a string of any of the characters in the ASCII character range.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Required: No
- TargetId
The unique identifier (ID) of the policy target.
Type: String
Pattern: ^(r-[0-9a-z]{4,32})|(\d{12})|(ou-[0-9a-z]{4,32}-[a-z0-9]{8,32})$
Required: No
- Type
The type of the policy target.
Type: String
Valid Values:
ACCOUNT | ORGANIZATIONAL_UNIT | ROOT
Required: No
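Objects of this shape are returned by the ListTargetsForPolicy operation; for example, with the AWS CLI (the policy ID below is a placeholder):

aws organizations list-targets-for-policy --policy-id p-examplepolicyid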
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/organizations/latest/APIReference/API_PolicyTargetSummary.html | 2019-09-15T08:27:28 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.aws.amazon.com |
Contribution Guide¶
This is a guide aimed towards contributors of ChainerX which is mostly implemented in C++. It describes how to build the project and how to run the test suite so that you can get started contributing.
Note
Please refer to the Chainer Contribution Guide for the more general contribution guideline that is not specific to ChainerX. E.g. how to download the source code, manage git branches, send pull requests or contribute to Chainer’s Python code base.
Note
There is a public ChainerX Product Backlog.
Running the test suite¶
The test suite can be built by passing
-DCHAINERX_BUILD_TEST=ON to
cmake.
It is not built by default.
Once built, run the suite with the following command from within the
build directory.
$ cd chainerx_cc/build
$ ctest -V
Coding standards¶
The ChainerX C++ coding standard is mostly based on the Google C++ Style Guide and principles.
Formatting¶
ChainerX is formatted using clang-format.
To fix the formatting in-place, run the following command from
chainerx_cc directory:
$ cd chainerx_cc
$ scripts/run-clang-format.sh --in-place
Lint checking¶
ChainerX uses the cpplint and clang-tidy for lint checking.
Note that clang-tidy requires that you’ve finished running
cmake.
To run cpplint, run
scripts/run-cpplint.sh from
chainerx_cc directory:
$ cd chainerx_cc
$ scripts/run-cpplint.sh
To run clang-tidy, run
make clang-tidy from the build directory:
$ cd chainerx_cc/build
$ make clang-tidy
Thread sanitizer¶
The thread sanitizer can be used to detect thread-related bugs, such as data races.
To enable the thread sanitizer, pass
-DCHAINERX_ENABLE_THREAD_SANITIZER=ON to
cmake.
You can run the test with
ctest -V as usual and you will get warnings if the thread sanitizer detects any issues.
CUDA runtime is known to cause a thread leak error as a false alarm.
In such case, disable the thread leak detection using environment variable
TSAN_OPTIONS='report_thread_leaks=0'.
Python contributions and unit tests¶
To test the Python binding, run the following command at the repository root:
$ pytest
The above command runs all the tests in the repository, including Chainer and ChainerMN. To run only ChainerX tests, specify the test directory:
$ pytest tests/chainerx_tests
Run tests with coverage:
$ pytest --cov --no-cov-on-fail --cov-fail-under=80 tests/chainerx_tests
Run tests without CUDA GPU:
$ pytest -m 'not cuda' tests/chainerx_tests | https://docs.chainer.org/en/latest/chainerx/contribution.html | 2019-09-15T08:28:20 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.chainer.org |
Colors
Within the Configuration module, Supervisors can use the Colors view to configure the colors that WFM uses in the Supervisor Schedule views.
- You can configure default values for these schedule items: Activity Sets, Breaks, Days Off, Exceptions, Marked Times, Meals, Time Offs, and Work.
- You can configure specific colors for: Activity Sets, Exception Types, Marked Times, or Time-Off Types.
To find items in long lists, use Search. To sort the list in ascending or descending order, click the Sort icon or the Item column header.
Using the drop-down list, you can filter the list by Default, Activity Sets, Exception Types, Marked Times, or Time-Off Types to view specific items for the selected business unit and sites. If you choose Default, the default colors for the business unit are displayed and the Site column is empty.
For details about how to configure default and specific colors, see Configuring Colors.
Security Permissions
To configure colors, you must have the Configuration > Colors in Schedule security permission, which is assigned in WFM Web. See Configuration Role Privileges.
Servo tester
Has a setting to change the input to the servo in millisecond units.
You have to give the servo you want to test the name SERVO_TEST, and don't use it in the command laws.
Additionally, you can make the servo do a series of step maneuvers, with increasing amplitude. Tip: Open up the servo and measure the voltage on the potmeter with a high speed recorder device (like a saleae Logic analyzer with analog inputs). This will let you model the response of the servo.
Add to your firmware section: This example contains all possible configuration options, not all of them are mandatory!
TIME_PER_DEFLECTION value: 0.8
These initialization functions are called once on startup.
These functions are called periodically at the specified frequency from the module periodic loop.
The following headers are automatically included in modules.h | http://docs.paparazziuav.org/latest/module__servo_tester.html | 2019-09-15T08:20:40 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.paparazziuav.org |
ListClusters
Returns a list of existing clusters.
Request Syntax
{ "maxResults":
number, "nextToken": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- maxResults
The maximum number of cluster results returned by ListClusters in paginated output.
Type: Integer
Required: No
- nextToken
The nextToken value returned from a previous paginated ListClusters request where maxResults was used and the results exceeded the value of that parameter.
Response Syntax
{ "clusterArns": [ "string" ], "nextToken": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- clusterArns
The list of full Amazon Resource Name (ARN) entries for each cluster associated with your account.
- nextToken
The nextToken value to include in a future ListClusters request to retrieve the next page of clusters for your account.
Sample Request
POST / HTTP/1.1
Host: ecs.us-east-1.amazonaws.com
Accept-Encoding: identity
Content-Length: 2
X-Amz-Target: AmazonEC2ContainerServiceV20141113.ListClusters
X-Amz-Date: 20150429T170621Z
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS

{}
Sample Response
HTTP/1.1 200 OK
Server: Server
Date: Wed, 29 Apr 2015 17:06:21 GMT
Content-Type: application/x-amz-json-1.1
Content-Length: 126
Connection: keep-alive
x-amzn-RequestId: 123a4b56-7c89-01d2-3ef4-example5678f

{
  "clusterArns": [
    "arn:aws:ecs:us-east-1:012345678910:cluster/My-cluster",
    "arn:aws:ecs:us-east-1:012345678910:cluster/default"
  ]
}
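The same call can be made through the AWS CLI, which handles request signing and pagination for you:

aws ecs list-clusters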
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ListClusters.html | 2019-09-15T08:21:13 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.aws.amazon.com |
Contributing philosophy¶
Contents
Mission¶

The project is supported and maintained by the R3 Alliance, or R3 for short, which consists of over two hundred firms working together to build and maintain this open source enterprise-grade blockchain platform.
Community Locations¶
Current community maintainers:
Joel Dudley - Contact me:
- On the Corda Slack team, either in the
#communitychannel or by direct message using the handle
@joel
- By email: joel.dudley at r3.com
We anticipate additional maintainers joining the project in the future from across the community.
Existing Contributors¶
Over two hundred individuals have contributed to the development of Corda. You can find a full list of contributors in the CONTRIBUTORS.md list.
Transparency and Conflict Policy¶
The project is supported and maintained by the R3 Alliance, which consists of over two hundred firms working together to build and maintain this open source enterprise-grade blockchain platform. the R3 Alliance. | https://docs.corda.net/contributing-philosophy.html | 2019-09-15T08:23:20 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.corda.net |
PersonalizationDictionary Class
Definition
public ref class PersonalizationDictionary : System::Collections::IDictionary
public class PersonalizationDictionary : System.Collections.IDictionary
type PersonalizationDictionary = class interface IDictionary interface ICollection interface IEnumerable
Public Class PersonalizationDictionary Implements IDictionary
- Inheritance: Object → PersonalizationDictionary
- Implements: IDictionary
Remarks
A PersonalizationDictionary instance is a collection of PersonalizationEntry objects, which consist of a personalization scope and an object value. These entries are assigned a key in the PersonalizationDictionary object.
A good practice is to add all properties to the dictionary using the Save method, regardless of the scope of the page. The .NET Framework saves the information in the appropriate way; for example, shared data is saved when the page is in Shared scope. However, shared properties are not saved when a Web Parts value is being saved, the page is in User scope, and the WebPart control was added with the page in Shared scope. | https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.webparts.personalizationdictionary?view=netframework-4.8&viewFallbackFrom=netcore-2.2 | 2019-09-15T07:49:03 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.microsoft.com |
Module tracing::dispatcher

To set the default subscriber for all threads, users should use set_global_default instead.
Finally, tokio users should note that versions of tokio >= 0.1.22 support an experimental-tracing feature flag. When this flag is enabled, the tokio runtime's thread pool will automatically propagate the default subscriber. This means that if tokio::runtime::Runtime::new() or tokio::run() are invoked when a default subscriber is set, it will also be set by all worker threads created by that runtime.
Accessing the Default Subscriber
A thread's current default subscriber can be accessed using the get_default function, which executes a closure with a reference to the current default Dispatch. This is used primarily by tracing instrumentation.
examples package: Using generated HTTP client services
To make HTTP calls in Gloop, you will have to use one-liners from HttpMethods. Calls to these one-liners can easily be generated using Coder's HTTP client service wizard. Using this wizard, the Gloop service which calls the web service is generated along with the models required for the request and response.
The examples package contains a couple of models and services generated from the HTTP client service wizard to demonstrate how HTTP client services work. The generated models are in httpClient.model and the generated services are in httpClient.services. The services were modified so that they include additional steps relevant to achieve the desired flow of the program (e.g. error-handling).
Related articles
Please see the following articles for more information:
Try it!
Under the Coder Navigator, expand the examples package entry and navigate to the code folder. Afterwards, look for the httpClient package. This package contains models, Gloop services, and API files, shown below:

Simply run the services under the httpClient.services package to see HTTP client services in action. You can also open these services to inspect their contents and read their line comments to understand them better.
Explanation
The HTTP client services, which are under the httpClient.services package, use a one-liner from the HttpMethods class to make the HTTP call. We've configured these services to call the mock APIs defined in httpClient.mockApis.
Mock data only
The data generated by the services is mock data only. | https://docs.torocloud.com/integrate/quick-start/resources/examples-package/http-client/ | 2019-09-15T07:51:25 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs.torocloud.com |
An Act to amend 346.65 (2) (am) 5. of the statutes; Relating to: committing a fifth or sixth offense related to operating a vehicle while intoxicated and providing a penalty. (FE)
Bill Text (PDF)
LC Bill Hearing Materials
Wisconsin Ethics Commission information
2019 Assembly Bill 16 - A - Criminal Justice and Public Safety | https://docs-preview.legis.wisconsin.gov/2019/proposals/sb6 | 2019-09-15T07:39:11 | CC-MAIN-2019-39 | 1568514570830.42 | [] | docs-preview.legis.wisconsin.gov |
All content with label async+build+expiration+grid+gridfs+hotrod+infinispan+jboss_cache+listener+maven+release+user_guide+write_through.
Related Labels:
podcast, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, intro, pojo_cache, archetype, jbossas, lock_striping, nexus, guide,
schema, cache, s3, amazon, memcached, jcache, test, api, xsd, ehcache, documentation, roadmap, youtube, userguide, write_behind, 缓存, ec2, hibernate, interface, custom_interceptor, clustering, setup, eviction, out_of_memory, fine_grained, concurrency, index,, searchable, demo, cache_server, installation, scala, client, non-blocking, migration, jpa, filesystem, tx, article, gui_demo, eventing, client_server, infinispan_user_guide, murmurhash, standalone, repeatable_read, snapshot, webdav, docs, consistent_hash, batching, whitepaper, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - async, - build, - expiration, - grid, - gridfs, - hotrod, - infinispan, - jboss_cache, - listener, - maven, - release, - user_guide, - write_through )