SystemNavigationManager Class

Definition
public : sealed class SystemNavigationManager : ISystemNavigationManager, ISystemNavigationManager2
struct winrt::Windows::UI::Core::SystemNavigationManager : ISystemNavigationManager, ISystemNavigationManager2
public sealed class SystemNavigationManager : ISystemNavigationManager, ISystemNavigationManager2
Public NotInheritable Class SystemNavigationManager Implements ISystemNavigationManager, ISystemNavigationManager2
// This class does not provide a public constructor.
Remarks
The SystemNavigationManager lets you respond to user presses of the system-provided back button, such as a hardware button, or to gestures and voice commands that raise the same event.
To enable your app to respond to the system back-navigation event, call GetForCurrentView to get the SystemNavigationManager object associated with the current view, then register an event handler for the BackRequested event. Your app will receive the event only if it's the foreground app. If you handle the BackRequested event, set the BackRequestedEventArgs.Handled property to true to mark the event as handled. If you don't mark the event as handled, the system decides whether to navigate away from the app (on the Mobile device family) or ignore the event (on the Desktop device family).
If the device doesn't provide any back-navigation button, gesture, or command, the event is not raised.
Because a NUMA architecture provides a single system image, it can often run an operating system with no special optimizations.
The high latency of remote memory accesses can leave the processors under-utilized, constantly waiting for data to be transferred to the local node, and the NUMA connection can become a bottleneck for applications with high-memory bandwidth demands.
Furthermore, performance on such a system can be highly variable. It varies, for example, if an application has memory located locally on one benchmarking run, but a subsequent run happens to place all of that memory on a remote node. This phenomenon can make capacity planning difficult.
Some high-end UNIX systems provide support for NUMA optimizations in their compilers and programming libraries. This support requires software developers to tune and recompile their programs for optimal performance. Optimizations for one system are not guaranteed to work well on the next generation of the same system. Other systems have allowed an administrator to explicitly decide on the node on which an application should run. While this might be acceptable for certain applications that demand 100 percent of their memory to be local, it creates an administrative burden and can lead to imbalance between nodes when workloads change.
Ideally, the system software provides transparent NUMA support, so that applications can benefit immediately without modifications. The system should maximize the use of local memory and schedule programs intelligently without requiring constant administrator intervention. Finally, it must respond well to changing conditions without compromising fairness or performance.
The aim of the SOLIDWORKS PDM Professional plugin is to check in DriveWorks specification files (.xls and .xml), all documents, and their associated .html files. Then, after model generation, the plugin checks in the models, assemblies, and drawings, as well as any additional file formats that have been created (.jpg, .eprt, .edrw, etc.).
This plugin can either be used in DriveWorks Administrator (in which case it will run as part of the DriveWorks Addin inside SOLIDWORKS during model generation) or activated in DriveWorks Autopilot to run as part of the Autopilot function.
SOLIDWORKS PDM Professional Client must be installed on the same machine as DriveWorks.
With the SOLIDWORKS PDM Professional Client installed on the same machine as DriveWorks Administrator, DriveWorks User or DriveWorks Autopilot click the Settings button in the header bar of the DriveWorks application.
Click the Plugin Settings category from the settings dialog.
The SOLIDWORKS PDM Professional plugin settings are available from the Application Plugins section of the plugin list.
Selecting the plugin from this category will display a Settings button at the bottom of the list.
Click the Settings button to launch the settings dialog.
Uncheck this option to disable the plugin.
Enter the name of the PDM vault you wish to connect to.
Enter a valid User Name and Password for access to the vault.
This setting automatically checks in the specification files created and used by DriveWorks. Any documents created by DriveWorks will also be checked in with this setting.
Enables the model processing settings below.
This setting automatically checks in all Assemblies, Parts and Drawings as they are created by each new DriveWorks specification. Any additional file types associated with any assembly, part or drawing will also be checked in with this setting.
This setting will force SOLIDWORKS PDM to perform a 'Get Latest' on the reference files before each generation cycle.
This setting will repopulate variables used in the data card when the existing file has been overwritten.
This setting will force SOLIDWORKS PDM to overwrite existing drawings, allowing them to be regenerated without manually deleting the original out of the vault first.
There are three settings that activate various levels of logging during the running of the plugin.
The retry time delay used if a file check-in fails.
The number of seconds between retry attempts.
We have introduced a special custom property called DWMasterVersion. If you create this custom property in your SOLIDWORKS part, assembly or drawing, the PDM plugin will detect it during model generation preparation and attempt to retrieve that specific version of the model.
There are a number of additional Specification Tasks, specific to SOLIDWORKS PDM, that can be run as part of the Specification Flow.
More information on these can be found at the following links:
Click the Test button to test connection to the vault.
Each machine generating SOLIDWORKS models should have the settings outlined in the topic Info: SOLIDWORKS System Options (KB12121012) applied.
Using workbooks, users can combine multiple entities of any type (workflows and actions) into one document and upload it to the Mistral service. When uploading a workbook, Mistral parses it and saves its workflows and actions as independent objects which are accessible via their own API endpoints (/workflows and /actions). Once that is done, the workbook itself is no longer involved: users can start workflows and reference workflows/actions as if they had been uploaded without a workbook in the first place. However, to modify these individual objects, users can modify the same workbook definition and re-upload it to Mistral (or, of course, modify the objects independently).
Namespacing
One thing worth noting is that when using a workbook, Mistral uses the workbook name as a prefix when generating the final names of the workflows and actions included in it. To illustrate this principle, consider the workbook fragment below:
So after a workbook has been uploaded its workflows and actions become independent objects but with slightly different names.
---
  task2:
    workflow: global_workflow param1='val1' param2='val2'
    requires: [task1]
  ...

actions:
  local_action:
    input:
      - str1
      - str2
    base: std.echo output="<% $.str1 %><% $.str2 %>"
NOTE: Even though names of objects inside workbooks change upon uploading Mistral allows referencing between those objects using local names declared in the original workbook.
Attributes
For more details about Mistral Workflow Language itself, please see Mistral Workflow Language specification
revMacFromUnixPath("/usr/bin/stuff") -- returns "usr:bin:stuff"
revMacFromUnixPath(it)
revMacFromUnixPath(unixPathname[, convertOSX])
Use the revMacFromUnixPath function to convert a Revolution-style file path to the Mac OS file path format (for example, if you need to pass a pathname to an external).
Parameters:
The unixPathname is a file or folder pathname in the standard format used by Revolution for file paths.
The convertOSX is true or false. If you don't specify the convertOSX and OS X is running, Revolution assumes you want to convert an OS X-style path to a Mac OS-style path; otherwise, it assumes you don't want to convert between the OS X style and the Mac OS style.
Value:
The revMacFromUnixPath function returns a string with the file path in the format expected by the Mac OS.
The revMacFromUnixPath function converts slashes (/) to colons (:), the folder-level delimiter for Mac OS pathnames. It also deletes leading slashes, so that pathnames are rooted in the volume name (the standard for Mac OS pathnames). It also adjusts relative pathnames.
On Mac OS systems, absolute paths always begin with the name of the disk that the file or folder is on. On OS X systems, the startup disk's name does not appear in absolute file paths. Instead, if a file or folder is on the startup disk, the first part of the file path is the top-level folder that the file is in. If a file or folder is on a disk other than the startup disk, its absolute path starts with "Volumes", followed by the disk name.
The OS X path convention is used by Revolution, but the old Mac OS-style path convention is required by certain applications (such as AppleScript), even on OS X systems. If the convertOSX is true (or if you don't specify the convertOSX and the application is running under OS X), the revMacFromUnixPath function automatically converts absolute paths from the OS X standard to the Mac OS standard, adding the startup disk's name to paths that are on the startup disk, and stripping the "Volumes" element from paths that are not on the startup disk. If the convertOSX is false, the revMacFromUnixPath function does not make these changes to absolute paths.
Revolution always uses the Unix pathname standard for cross-platform compatibility, and automatically converts pathnames to the correct standard for the current platform when executing commands. You need to convert the pathname only if you are passing it to another program or external. If you are using only Revolution commands and functions, you do not need to convert the pathname, since Revolution does it for you.
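To make the conversion rules above concrete, here is a rough Python sketch of the basic transformation (illustrative only; this is not the LiveCode implementation, the function name is hypothetical, and it ignores the OS X absolute-path handling described above):

def mac_from_unix_path(unix_path):
    # Convert folder delimiters and drop leading slashes, as described above.
    return unix_path.lstrip('/').replace('/', ':')

print(mac_from_unix_path('/usr/bin/stuff'))  # prints "usr:bin:stuff"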
Note:
When included in a standalone application, the Common library is implemented as a hidden group and made available when the group receives its first openBackground message. During the first part of the application's startup process, before this message is sent, the revMacFromUnixPath function is not yet available. This may affect attempts to use this function in startup, preOpenStack, openStack, or preOpenCard handlers in the main stack. Once the application has finished starting up, the library is available and the revMacFromUnixPath function can be used in any handler.
Changes to Revolution:
The convertOSX parameter was introduced in version 2.1.1. In previous versions, the revMacFromUnixPath function did not attempt to convert between the Mac OS and OS X conventions described above.
Activating a license is the process of applying a license file to a specific AppDNA database – for example, after buying a full license after running in evaluation mode, or upgrading from AppDNA Standard to Enterprise edition. You can activate additional license files against a database that is already licensed – for example, to increase the number of applications you can view reports for. You can also activate an AppDNA database using a XenDesktop or XenApp Platinum license.
When you activate a license, it is imported into the AppDNA license server and applied to a specific AppDNA database.
To activate a license file:
When the activation has finished, the wizard displays a message if you must unlock the applications in the Apply Licenses screen.
You can configure ESXi to use a directory service such as Active Directory to manage users.
Creating local user accounts on each host presents challenges with having to synchronize account names and passwords across multiple hosts. Join ESXi hosts to an Active Directory domain to eliminate the need to create and maintain local user accounts. Using Active Directory for user authentication simplifies the ESXi host configuration and reduces the risk for configuration issues that could lead to unauthorized access.
When you use Active Directory, users supply their Active Directory credentials and the domain name of the Active Directory server when adding a host to a domain.
Prerequisites
Open a vSphere Client session to a vCenter Server.
Verify that you have sufficient permissions to create a host object.
Verify that a Datacenter, folder, or cluster exists in the inventory.
Obtain the user name and password for an account with administrative privileges on the host.
Verify that hosts behind a firewall are able to communicate with the vCenter Server system and all other hosts through port 902 or other custom-configured port.
Verify that all NFS mounts on the host are active.
Procedure
- Select a datacenter, cluster, or folder within a datacenter.
- Enter host name or IP address and administrator credentials and click Next.
- (Optional).
- Review host information and click Next.
- (Optional) Assign a license key to the host if needed and click Next.
- Do one of the following:
- Review the summary information and click Finish.
Results
The host and its virtual machines are added to the inventory.
18.5.9. Develop with asyncio¶
Asynchronous programming is different than classical “sequential” programming. This page lists common traps and explains how to avoid them.
18.5.9.1. Debug mode of asyncio¶
The implementation of
asyncio has been written for performance.
In order to ease the development of asynchronous code, you may wish to
enable debug mode.
To enable all debug checks for an application:
- Enable the asyncio debug mode globally by setting the environment variable PYTHONASYNCIODEBUG to 1, or by calling AbstractEventLoop.set_debug().
- Set the log level of the asyncio logger to logging.DEBUG. For example, call logging.basicConfig(level=logging.DEBUG) at startup.
- Configure the warnings module to display ResourceWarning warnings. For example, use the -W default command line option of Python to display them.
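As a minimal sketch, the steps above can be applied programmatically at startup (assuming a script that manages its own event loop):

import asyncio
import logging
import warnings

logging.basicConfig(level=logging.DEBUG)          # show asyncio's debug log output
warnings.simplefilter('always', ResourceWarning)  # display ResourceWarning warnings

loop = asyncio.get_event_loop()
loop.set_debug(True)   # same effect as PYTHONASYNCIODEBUG=1 for this loop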
Examples of debug checks:

- Log coroutines defined but never "yielded from"
- call_soon() and call_at() methods raise an exception if they are called from the wrong thread.
- Log the execution time of the selector
- Log callbacks taking more than 100 ms to be executed. The AbstractEventLoop.slow_callback_duration attribute is the minimum duration in seconds of "slow" callbacks.
- ResourceWarning warnings are emitted when transports and event loops are not closed explicitly.
See also
The
AbstractEventLoop.set_debug() method and the asyncio logger.
18.5.9.2. Cancellation¶
Cancellation of tasks is not common in classic programming. In asynchronous programming, not only is it something common, but you have to prepare your code to handle it.
Futures and tasks can be cancelled explicitly with their
Future.cancel()
method. The
wait_for() function cancels the waited task when the timeout
occurs. There are many other cases where a task can be cancelled indirectly.
Don’t call
set_result() or
set_exception() method
of
Future if the future is cancelled: it would fail with an exception.
For example, write:
if not fut.cancelled():
    fut.set_result('done')
Don’t schedule directly a call to the
set_result() or the
set_exception() method of a future with
AbstractEventLoop.call_soon(): the future can be cancelled before its method
is called.
If you wait for a future, you should check early if the future was cancelled to avoid useless operations. Example:
@coroutine
def slow_operation(fut):
    if fut.cancelled():
        return
    # ... slow computation ...
    yield from fut
    # ...
The
shield() function can also be used to ignore cancellation.
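For example, a sketch of using shield() so that cancelling the waiting coroutine does not cancel the inner future (hypothetical names, for illustration only):

@asyncio.coroutine
def wait_without_cancelling(inner_future):
    try:
        result = yield from asyncio.shield(inner_future)
    except asyncio.CancelledError:
        # This coroutine was cancelled, but ``inner_future`` keeps running.
        result = None
    return result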
18.5.9.3. Concurrency and multithreading¶
To schedule a callback from a different thread, the AbstractEventLoop.call_soon_threadsafe() method should be used. Example:
loop.call_soon_threadsafe(callback, *args).
To schedule a coroutine object from a different thread, the
run_coroutine_threadsafe() function should be used. It returns a
concurrent.futures.Future to access the result:
future = asyncio.run_coroutine_threadsafe(coro_func(), loop)
result = future.result(timeout)  # Wait for the result with a timeout
The
AbstractEventLoop.run_in_executor() method can be used with a thread pool
executor to execute a callback in different thread to not block the thread of
the event loop.
See also
The Synchronization primitives section describes ways to synchronize tasks.
The Subprocess and threads section lists asyncio limitations to run subprocesses from different threads.
18.5.9.4. Handle blocking functions correctly¶
Blocking functions should not be called directly from a coroutine: they block the event loop and delay all other tasks. To run such a function in a different thread or process without blocking the event loop, use the AbstractEventLoop.run_in_executor() method.
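A minimal sketch of running a blocking function in the default executor (the function names are hypothetical):

import time

def blocking_io():
    time.sleep(1.0)  # a blocking call that must not run in the event loop thread
    return 'done'

@asyncio.coroutine
def run_blocking(loop):
    # Passing None selects the default (thread pool) executor.
    result = yield from loop.run_in_executor(None, blocking_io)
    print(result)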
See also
The Delayed calls section details how the event loop handles time.
18.5.9.5. Logging¶
The
asyncio module logs information with the
logging module in
the logger
'asyncio'.
The default log level for the
asyncio module is
logging.INFO.
For those not wanting such verbosity from
asyncio the log level can
be changed. For example, to change the level to
logging.WARNING:
logging.getLogger('asyncio').setLevel(logging.WARNING)
18.5.9.6. Detect coroutine objects never scheduled¶
When a coroutine function is called and its result is not passed to
ensure_future() or to the
AbstractEventLoop.create_task() method,
the execution of the coroutine object will never be scheduled which is
probably a bug. Enable the debug mode of asyncio
to log a warning to detect it.
Example with the bug:
import asyncio

@asyncio.coroutine
def test():
    print("never scheduled")

test()
Output in debug mode:
Coroutine test() at test.py:3 was never yielded from
Coroutine object created at (most recent call last):
  File "test.py", line 7, in <module>
    test()
The fix is to call the
ensure_future() function or the
AbstractEventLoop.create_task() method with the coroutine object.
See also
18.5.9.7. Detect exceptions never consumed¶
Python usually calls
sys.excepthook() on unhandled exceptions. If
Future.set_exception() is called, but the exception is never consumed,
sys.excepthook() is not called. Instead, a log is emitted when the future is deleted by the garbage collector, with the
traceback where the exception was raised.
Example of unhandled exception:
import asyncio

@asyncio.coroutine
def bug():
    raise Exception("not consumed")

loop = asyncio.get_event_loop()
asyncio.ensure_future(bug())
loop.run_forever()
loop.close()
Output:
Task exception was never retrieved
future: <Task finished coro=<coro() done, defined at asyncio/coroutines.py:139> exception=Exception('not consumed',)>
Traceback (most recent call last):
  File "asyncio/tasks.py", line 237, in _step
    result = next(coro)
  File "asyncio/coroutines.py", line 141, in coro
    res = func(*args, **kw)
  File "test.py", line 5, in bug
    raise Exception("not consumed")
Exception: not consumed
Enable the debug mode of asyncio to get the traceback where the task was created. Output in debug mode:
Task exception was never retrieved
future: <Task finished coro=<bug() done, defined at test.py:3> exception=Exception('not consumed',) created at test.py:8>
source_traceback: Object created at (most recent call last):
  File "test.py", line 8, in <module>
    asyncio.ensure_future(bug())
Traceback (most recent call last):
  File "asyncio/tasks.py", line 237, in _step
    result = next(coro)
  File "asyncio/coroutines.py", line 79, in __next__
    return next(self.gen)
  File "asyncio/coroutines.py", line 141, in coro
    res = func(*args, **kw)
  File "test.py", line 5, in bug
    raise Exception("not consumed")
Exception: not consumed
There are different options to fix this issue. The first option is to chain the coroutine in another coroutine and use classic try/except:
@asyncio.coroutine
def handle_exception():
    try:
        yield from bug()
    except Exception:
        print("exception consumed")

loop = asyncio.get_event_loop()
asyncio.ensure_future(handle_exception())
loop.run_forever()
loop.close()
Another option is to use the
AbstractEventLoop.run_until_complete()
function:
task = asyncio.ensure_future(bug())
try:
    loop.run_until_complete(task)
except Exception:
    print("exception consumed")
See also
The
Future.exception() method.
18.5.9.8. Chain coroutines correctly¶

When a coroutine function calls other coroutine functions and tasks, they should be chained explicitly with yield from. Otherwise, the execution is not guaranteed to be sequential. Example with different bugs using asyncio.sleep() to simulate slow operations:

import asyncio

@asyncio.coroutine
def create():
    yield from asyncio.sleep(3.0)
    print("(1) create file")

@asyncio.coroutine
def write():
    yield from asyncio.sleep(1.0)
    print("(2) write into file")

@asyncio.coroutine
def close():
    print("(3) close file")

@asyncio.coroutine
def test():
    asyncio.ensure_future(create())
    asyncio.ensure_future(write())
    asyncio.ensure_future(close())
    yield from asyncio.sleep(2.0)
    loop.stop()

loop = asyncio.get_event_loop()
asyncio.ensure_future(test())
loop.run_forever()
print("Pending tasks at exit: %s" % asyncio.Task.all_tasks(loop))
loop.close()
Expected output:
(1) create file
(2) write into file
(3) close file
Pending tasks at exit: set()
Actual output:
(3) close file
(2) write into file
Pending tasks at exit: {<Task pending create() at test.py:7 wait_for=<Future pending cb=[Task._wakeup()]>>}
Task was destroyed but it is pending!
task: <Task pending create() done at test.py:5 wait_for=<Future pending cb=[Task._wakeup()]>>
The loop stopped before create() finished, and close() was called before write(), whereas the coroutine functions were called in this order: create(), write(), close().
To fix the example, tasks must be marked with
yield from:
@asyncio.coroutine
def test():
    yield from asyncio.ensure_future(create())
    yield from asyncio.ensure_future(write())
    yield from asyncio.ensure_future(close())
    yield from asyncio.sleep(2.0)
    loop.stop()
Or without
asyncio.ensure_future():
@asyncio.coroutine
def test():
    yield from create()
    yield from write()
    yield from close()
    yield from asyncio.sleep(2.0)
    loop.stop()
18.5.9.9. Pending task destroyed¶
If a pending task is destroyed, the execution of its wrapped coroutine did not complete. It is probably a bug and so a warning is logged.
Example of log:
Task was destroyed but it is pending!
task: <Task pending coro=<kill_me() done, defined at test.py:5> wait_for=<Future pending cb=[Task._wakeup()]>>
Enable the debug mode of asyncio to get the traceback where the task was created. Example of log in debug mode:
Task was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
  File "test.py", line 15, in <module>
    task = asyncio.ensure_future(coro, loop=loop)
task: <Task pending coro=<kill_me() done, defined at test.py:5> wait_for=<Future pending cb=[Task._wakeup()] created at test.py:7> created at test.py:15>
See also
Detect coroutine objects never scheduled.
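One common way to avoid this warning is to keep references to the tasks you create and give them a chance to finish, or cancel them, before closing the loop. A sketch (not from the original documentation):

pending = asyncio.Task.all_tasks(loop)
for task in pending:
    task.cancel()
# Let the cancellations propagate before closing the loop.
loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
loop.close()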
18.5.9.10. Close transports and event loops¶
When a transport is no more needed, call its
close() method to release
resources. Event loops must also be closed explicitly.
If a transport or an event loop is not closed explicitly, a
ResourceWarning warning will be emitted in its destructor. By default,
ResourceWarning warnings are ignored. The Debug mode of asyncio section explains how to display them.
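A typical pattern is to close the loop in a finally clause so it is released even if the program fails (sketch only; main() is a placeholder coroutine):

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()  # release resources and avoid a ResourceWarning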
Application programmer interface¶
Overview¶
In addition to the command line tool, all of cutplace's functionality is also available as a Python API. For a complete reference about all public classes and functions, refer to the Module Index.
This chapter describes how to perform a basic validation of a simple CSV file containing data about some customers. It also explains how to extend cutplace’s fields formats and checks to implement your own.
Logging¶
Cutplace uses Python’s standard
logging module. This provides a
familiar and powerful way to watch what cutplace is doing. However, it also
requires to setup the logging properly in order to gain most from it.
For a quick start, set up your application’s log messages to go to the console and show only information, warning and errors, but no debug messages:
>>> import logging >>> logging.basicConfig(level=logging.INFO)
Next trim cutplace’s logging to show only warnings and errors as you might not be particularly interested in whatever it is cutplace does during a validation:
>>> logging.getLogger('cutplace').setLevel(logging.WARNING)
This should be enough to get you going. To learn more about logging, take a look at logging chapter of the Python library documentation.
Basic usage¶
Reading a CID¶
The class
cutplace.Cid represents a CID. In case you have a CID
stored in a file and want to read it, use:
>>> import os.path
>>> import cutplace
>>>
>>> # Compute the path of a test file in a system independent manner,
>>> # assuming that the current folder is "docs".
>>> cid_path = os.path.join(os.pardir, 'examples', 'cid_customers.ods')
>>> cid = cutplace.Cid(cid_path)
>>> cid.field_names
['customer_id', 'surname', 'first_name', 'date_of_birth', 'gender']
This is the easiest way to describe an interface. The input document is human readable even for non coders and quite simple to edit and maintain. It also keeps declaration and validation in separate files.
Validating data¶
Now that we know how our data are supposed to look, we want to validate and
optionally process them. The easiest way to do so are two simple functions
called
cutplace.validate() and
cutplace.rows().
Both of them take to parameters: the path to a CID and the path to the data
to validate or read. For example:
>>> valid_data_path = os.path.join(os.pardir, 'examples', 'customers.csv')
>>> cutplace.validate(cid_path, valid_data_path)
If the data are valid,
cutplace.validate() seemingly does nothing.
For broken data, it raises
cutplace.error.DataError.
To also process the data after each row has been validated, use:
>>> for row in cutplace.rows(cid_path, valid_data_path):
...     pass  # We could also do something useful with the data in ``row`` here.
We could easily extend the loop body to process the data in some meaningful way such as inserting them in a database.
Instead of paths to files, both functions also take a
cutplace.Cid and / or filelike object ready to read:
>>> import io
>>> cid = cutplace.Cid(cid_path)
>>> with io.open(valid_data_path, 'r', encoding=cid.data_format.encoding, newline='') as data_stream:
...     cutplace.validate(cid, data_stream)
If you need more control over the validation or reading process, take a look
at the
cutplace.Reader class. It provides a simple generator function
cutplace.Reader.rows() that returns all data rows. If you are familiar
with Python’s
csv.reader(), you already know how to use it.
Dealing with errors¶
So far we only had to deal with valid data. But what happens if the data do not conform to the CID? Let’s take a look at it:
>>> import cutplace.errors
>>> broken_data_path = os.path.join(os.pardir, 'tests', 'data', 'broken_customers.csv')
>>> cutplace.validate(cid, broken_data_path)
Traceback (most recent call last):
  ...
cutplace.errors.FieldValueError: broken_customers.csv (R3C1): cannot accept field 'customer_id': value must be an integer number: 'abcd'
Apparently the first broken data item causes the validation to stop with an
cutplace.errors.FieldValueError, which is a descendant of
cutplace.errors.CutplaceError. In many cases this is what you want.
Sometimes however the requirements for an application will state that all
valid data should be processed and invalid data should be put aside for
further examination, for example by writing them to a log file. This is
easy to implement using
cutplace.rows() with the optional
parameter
on_error='yield'. With this enabled, the generator always
returns a value even for broken rows. The difference however is that broken
rows do not result in a list of values but in a result of type
cutplace.errors.DataError. It is up to you to detect this and
process the different kinds of results properly.
Here is an example that prints any data related errors detected during validation:
>>> broken_data_path = os.path.join(os.pardir, 'tests', 'data', 'broken_customers.csv')
>>> for row_or_error in cutplace.rows(cid, broken_data_path, on_error='yield'):
...     if isinstance(row_or_error, Exception):
...         if isinstance(row_or_error, cutplace.errors.CutplaceError):
...             # Print data related error details and move on.
...             print('%s' % row_or_error)
...         else:
...             # Let other, more severe errors terminate the validation.
...             raise row_or_error
...     else:
...         pass  # We could also do something useful with the data in ``row`` here.
broken_customers.csv (R3C1): cannot accept field 'customer_id': value must be an integer number: 'abcd'
broken_customers.csv (R4C5): cannot accept field 'gender': value is 'unknown' but must be one of: 'female' or 'male'
broken_customers.csv (R5C4): cannot accept field 'date_of_birth': date must match format YYYY-MM-DD (%Y-%m-%d) but is: '17.04.1954' (time data '17.04.1954' does not match format '%Y-%m-%d')
Note that it is possible for the reader to throw other exceptions, for example
IOError in case the file cannot be read at all or
UnicodeError
in case the encoding does not match. You should not continue after such errors as
they indicate a problem not related to the data but either in the specification
or environment.
The
on_error parameter can also take the values
'raise' (which is the
default and raises a
cutplace.errors.CutplaceError on encountering the
first error as described above) and
'continue', which silently ignores
any error and moves on with the next row. The latter can be useful during
prototyping a new application when CID’s and data are in a constant state of
flux. In production code
on_error='continue' mainly represents a very
efficient way to shoot yourself into the foot.
Processing data¶
As a first step, we should figure out where in each row we can find the first name and the surname. We need to do this only once so this happens outside of the processing loop. The names used to find the indices must match the names used in the CID:
>>> first_name_index = cid.field_index('first_name')
>>> surname_index = cid.field_index('surname')
Now we can read the data just like before. Instead of a simple
pass loop
we obtain the first name from
row and check if it starts with
'T'. If
so, we compute the full name and print it:
>>> for row in cutplace.rows(cid, valid_data_path):
...     first_name = row[first_name_index]
...     if first_name.startswith('T'):
...         surname = row[surname_index]
...         full_name = surname + ', ' + first_name
...         print(full_name)
Beck, Tyler
Lopez, Tyler
Rose, Tammy
Of course nothing prevents you from doing more glamorous things here like inserting the data into a database or rendering them to a dynamic web page.
Partial validation¶
If performance is an issue, validation of field formats and row checks can be
limited to a specified number of rows using the parameter
validate_until. Any integer value greater than 0 specifies the
number of rows after which validation should stop.
None means that the
whole input should be validated (the default) while the number
0
specifies that no row should be validated.
Functions that support
validate_until are:
cutplace.validate()
cutplace.rows()
cutplace.Reader.__init__()
Pure validation functions such as
cutplace.validate() completely
stop processing the input after reaching the limit while reading functions
such as
cutplace.rows() keep producing rows - just without
validating them.
A typical use case would be enabling full validation during testing and reducing validation to the first 100 rows in the production environment. Ideally this would detect all errors during testing (when performance is less of an issue) and quickly process the data in production while still detecting errors early in the data.
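For example, a short sketch of partial validation using the example paths from earlier (validate_until is passed as a keyword argument):

>>> # Validate only the first 100 rows, then stop.
>>> cutplace.validate(cid_path, valid_data_path, validate_until=100)
>>> # Read everything, but validate only the first 100 rows.
>>> for row in cutplace.rows(cid_path, valid_data_path, validate_until=100):
...     pass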
Putting it all together¶
To recapitulate and summarize the previous sections here is a little code fragment containing a complete example you can use as base for your own validation code:
>>> # Validate a test CSV file.
>>> import os.path
>>> from cutplace import Cid, Reader
>>> # Change this to use your own files.
>>> cid_path = os.path.join(os.pardir, 'examples', 'cid_customers.ods')
>>> data_path = os.path.join(os.pardir, 'examples', 'customers.csv')
>>> # Define the interface.
>>> cid = Cid(cid_path)
>>> # Validate the data.
>>> for row in cutplace.rows(cid, data_path):
...     pass  # We could also do something useful with the data in ``row`` here.
In case you want to process the data, simply replace the
pass inside the
loop by whatever needs to be done.
In case you want to continue even if a row was rejected, use the optional
parameter
on_error='yield' as described earlier.
Writing data¶
To validate written data, use :py:class`cutplace.Writer`. A
Writer needs
a CID to validate against and an output to write to. The output can be any
filelike object such as a file or an
io.StringIO. For example:
>>> import io
>>> out = io.StringIO()
Now you can create a writer and write a valid row to it:
>>> writer = cutplace.Writer(cid, out)
>>> writer.write_row(['38000', '234', 'John', 'Doe', 'male', '08.03.1957'])
Attempting to write broken data results in an
Exception derived
from
cutplace.errors.CutplaceError:
>>> writer.write_row(['not a number', 'Miller', 'John', '1978-11-27', 'male'])
Traceback (most recent call last):
FieldValueError: <io> (R1C2): field 'customer_id' must match format: value must be an integer number: 'not a number'
Note that after a
CutplaceError you can continue writing. For any other
Exception such as
IOError it is recommended to stop writing and
consider it an unrecoverable situation.
Once done, close both the writer and the output:
>>> writer.close()
>>> out.close()
As
cutplace.Writer implements the context manager protocol, you
can also use the
with statement to automatically
close() it when done.
Note that
cutplace.Writer.close() performs cutplace checks and
consequently can raise a
cutplace.errors.CheckError.
Advanced usage¶
In the previous section, you learned how to read a CID and use it to validate data using a few simple API calls. You also learned how to handle errors detected in the data.
With this knowledge, you should be able to write your own small validation scripts that process the results. For instance, you could add your own code to log errors, send validation reports via email or automatically insert accepted rows in a data base. The Python standard library offers powerful modules for all these tasks.
In case you are already happy and found everything you need, you can stop reading this chapter and move on with implementing your tasks.
If however you need more flexibility, suffer from API OCPD or just want to know what else cutplace offers in case you might need it one day, the following sections describe the lower level hooks of cutplace API. They are more powerful and flexible, but also more difficult to use.
Building a CID in the code¶
In some cases it might be preferable to include the CID in the code, for instance for trivial interfaces that are only used internally. Here is an example of a simple CID for CSV data with 3 fields:
First, import the necessary modules:
>>> from cutplace import data
>>> from cutplace import errors
>>> from cutplace import fields
>>> from cutplace import interface
Next create an empty CID:
>>> cid = Cid()
As the CID will not be read from an input file, error messages would not be able to refer to any file in case of errors. To have at least some reference, we need to tell the CID that it is declared from source code:
>>> cid.set_location_to_caller()
That way, error messages will refer you to the Python module where this call happened.
Next we can add rows as read from a CID file using
cutplace.Cid.add_data_format(),
cutplace.Cid.add_field_format() and
cutplace.Cid.add_check():
>>> # Use CSV as data format. This is the same as having a spreadsheet
>>> # with the cells:
>>> #
>>> # | F | Format         | Delimited |
>>> # | F | Item separator | ;         |
>>> cid.add_data_format_row([cutplace.data.KEY_FORMAT, data.FORMAT_DELIMITED])
>>> cid.add_data_format_row([cutplace.data.KEY_ITEM_DELIMITER, ';'])
>>>
>>> # Add a couple of fields.
>>> cid.add_field_format_row(['id', '', '', '1...5', 'Integer'])
>>> cid.add_field_format_row(['name'])
>>> cid.add_field_format_row(['date_of_birth', '', 'X', '', 'DateTime', 'YYYY-MM-DD'])
>>>
>>> # Make sure that the ``id`` field contains only unique values.
>>> cid.add_check_row(['id_must_be_unique', 'IsUnique', 'id'])
>>> cid.field_names
['id', 'name', 'date_of_birth']
If any of this methods cannot handle the parameters you passed, they raise a
cutplace.errors.CutplaceError with a message describing what went wrong.
For example:
>>> cid.add_check_row([])
Traceback (most recent call last):
InterfaceError: <source> (R1C2): check row (marked with 'c') must contain at least 2 columns
Adding your own field formats¶
Cutplace already ships with several field formats found in
cutplace.fields module that should cover most needs. If
however you have some very special requirements, you can write your own
formats.
Simply inherit from
cutplace.fields.AbstractFieldFormat and
optionally provide a constructor to parse the
rule parameter. Next,
implement
validated_value()
which validates that the text in
value conforms to
rule. If not,
raise a
FieldValueError with a descriptive error message.
Here is a very simple example of a field format that accepts values of “red”, “green” and “blue”:
>>> class ColorFieldFormat(fields.AbstractFieldFormat):
...     def __init__(self, field_name, is_allowed_to_be_empty, length, rule, data_format):
...         super(ColorFieldFormat, self).__init__(field_name, is_allowed_to_be_empty, length, rule, data_format, empty_value='')
...
...     def validated_value(self, value):
...         # Validate that ``value`` is a color and return it.
...         assert value
...         if value not in ['red', 'green', 'blue']:
...             raise errors.FieldValueError('color value is %r but must be one of: red, green, blue' % value)
...         return value
>>> color_field = ColorFieldFormat('roof_color', False, '', '', cid.data_format)
>>> color_field.validated('red')
'red'
The
value parameter is a string. Cutplace ensures that
validated_value() will never
be called with an empty
value parameter, hence the
assert value - it
will cause an
AssertionError if
value is
'' or
None
because that would mean that the caller is broken.
Of course you could have achieved similar results using
ChoiceFieldFormat. However, a custom field
format can do more. In particular,
validated_value() does not
have to return a string. It can return any Python type and even
None.
Here’s a more advanced :py:class`ColorFieldFormat` that returns the color as a tuple of RGB values between 0 and 1:
>>> class ColorFieldFormat(fields.AbstractFieldFormat):
...     def __init__(self, field_name, is_allowed_to_be_empty, length, rule, data_format):
...         super(ColorFieldFormat, self).__init__(field_name, is_allowed_to_be_empty, length, rule, data_format, empty_value='')
...
...     def validated_value(self, color_name):
...         # Validate that ``color_name`` is a color and return its RGB representation.
...         assert color_name
...         if color_name == 'red':
...             result = (1.0, 0.0, 0.0)
...         elif color_name == 'green':
...             result = (0.0, 1.0, 0.0)
...         elif color_name == 'blue':
...             result = (0.0, 0.0, 1.0)
...         else:
...             raise errors.FieldValueError('color name is %r but must be one of: red, green, blue' % color_name)
...         return result
For a simple test, let’s see this field format in action:
>>> color_field = ColorFieldFormat('roof_color', False, '', '', cid.data_format)
>>> color_field.validated('red')
(1.0, 0.0, 0.0)
>>> color_field.validated('yellow')
Traceback (most recent call last):
  ...
cutplace.errors.FieldValueError: color name is 'yellow' but must be one of: red, green, blue
Before you learned that
validated_value()
never gets called with an empty value. So what happens if you declare a color
field that allows empty values? For example:
>>> # Sets ``is_allowed_to_be_empty`` to ``True`` to accept empty values.
>>> color_field = ColorFieldFormat('roof_color', True, '', '', cid.data_format)
>>> color_field.validated('')
''
>>> # Not quite a color tuple...
Well, that’s not quite what we want. Instead of an empty string, a reasonable
default RGB tuple would be a lot more useful. Say,
(0.0, 0.0, 0.0) to
represent black.
Fortunately field formats can just specify that by using the
empty_value
parameter in the constructor. When passed to the
super constructor in
AbstractFieldFormat, everything will be taken
care of. So here’s a slightly modified version:
>>> class ColorFieldFormat(fields.AbstractFieldFormat):
...     def __init__(self, field_name, is_allowed_to_be_empty, length, rule, data_format):
...         super(ColorFieldFormat, self).__init__(field_name, is_allowed_to_be_empty, length, rule, data_format,
...                                                empty_value=(0.0, 0.0, 0.0))  # Use black as "empty" color.
...
...     def validated_value(self, color_name):
...         # (Exactly same as before)
...         assert color_name
...         if color_name == 'red':
...             result = (1.0, 0.0, 0.0)
...         elif color_name == 'green':
...             result = (0.0, 1.0, 0.0)
...         elif color_name == 'blue':
...             result = (0.0, 0.0, 1.0)
...         else:
...             raise cutplace.errors.FieldValueError('color name is %r but must be one of: red, green, blue' % color_name)
...         return result
Let’s give it a try:
>>> color_field = ColorFieldFormat('roof_color', True, '', '', cid.data_format)
>>> color_field.validated('red')
(1.0, 0.0, 0.0)
>>> color_field.validated('')
(0.0, 0.0, 0.0)
Adding your own checks¶
Writing checks is quite similar to writing field formats. However, the interaction with the validation is more complex.
Checks have to implement certain methods described in
cutplace.checks.AbstractCheck. For each check, cutplace performs
the following actions:
- When reading the CID, call the check's __init__().
- When starting to read a set of data, call the check's reset().
- For each row of data, call the check's check_row().
- When done with a set of data, call the check's check_at_end().
The remainder of this section describes how to implement each of these methods.
As an example, we implement a check to ensure that each customer’s full name
requires less than 100 characters. The field formats already ensure that
first_name and
last_name are at most 60 characters each. However,
assuming the full name is derived using the expression:
last_name + ', ' + first_name
this could lead to full names with up to 122 characters.
To implements this check, start by inheriting from
cutplace.checks.AbstractCheck:
>>> from cutplace import checks
>>> class FullNameLengthIsInRangeCheck(checks.AbstractCheck):
...     """Check that total length of customer name is within the specified range."""
Next, implement a constructor to which cutplace can pass the values found in the CID. For example, for our check the CID would contain:
When cutplace encounters this line, it will create a new check by calling
FullNameLengthIsInRangeCheck.__init__(), passing the following
parameters:
description='customer must be unique', which is just a human readable description of the check to refer to it in error messages
rule='...100', which describes what exactly the check should do. Each check can define its own syntax for the rule. In case of
FullNameLengthIsInRangethe rule describes a
cutplace.ranges.Range.
available_field_names=['branch_id', 'customer_id', 'first_name', 'last_name', 'gender', 'date_of_birth'](as defined in the CID and using the same order)
locationbeing the
cutplace.errors.Locationin the CID where the check was defined.
The constructor basically has to do 3 things:
- Call the super constructor
- Perform optional initialization needed by the check that needs to be done only once and not on each new data set. In most cases, this involves parsing the
ruleparameter and obtain whatever information the checks needs from it.
- Call
self.reset(). This is not really necessary for this check, but in most cases it will make your life easier because you can avoid redundant initializations in the constructor.
To sum it up as code:
>>> from cutplace import ranges
>>> class FullNameLengthIsInRangeCheck(checks.AbstractCheck):
...     """Check that total length of customer name is within the specified range."""
...     def __init__(self, description, rule, available_field_names, location=None):
...         super(FullNameLengthIsInRangeCheck, self).__init__(description, rule, available_field_names, location)
...         self._full_name_range = ranges.Range(rule)
...         self.reset()
Once cutplace is done reading the CID, it moves on to data. For each set of
data it calls the checks’
reset()
method. For our simple check, no actions are needed so we are good because
reset() already does nothing.
When cutplace validates data, it reads them row by row. For each row, it
calls
validated() on each
cell in the row. In case all cells are valid, it collects them in a
dictionary which maps the field name to its native value. Recall the interface
from the Tutorial, which defined the following fields:
Now consider a data row with the following values:
+-------------+------------+---------+---------------+--------+
| Customer id | First name | Surname | Date of birth | Gender |
+=============+============+=========+===============+========+
| 96          | Andrew     | Dixon   | 1913-10-02    | male   |
+-------------+------------+---------+---------------+--------+
The row map for this row would be:
row_map = {
    'customer_id': 96,
    'first_name': 'Andrew',
    'last_name': 'Dixon',
    'date_of_birth': time.struct_time(tm_year=1913, tm_mon=10, tm_mday=2, ...),
    'gender': 'male',
}
With this knowledge, we can easily implement a
check_row() that
computes the full name and checks that it is within the required range. If
not, it raises a
CheckError:
>>> def check_row(self, row_map, location):
...     first_name = row_map['first_name']
...     surname = row_map['surname']
...     full_name = surname + ', ' + first_name
...     full_name_length = len(full_name)
...     try:
...         self._full_name_range.validate('full name', full_name_length)
...     except cutplace.RangeValueError as error:
...         raise cutplace.errors.CheckError('full name length is %d but must be in range %s: %r'
...             % (full_name_length, self._full_name_range, full_name))
And finally, there is
cutplace.checks.AbstractCheck.check_at_end()
which is called when all data rows have been processed. Note that
check_at_end() does not have any parameters that contain actual
data. Instead you typically would collect all information needed by
check_at_end() in
check_row() and store them in
instance variables. For an example, take a look at the source code of
cutplace.checks.IsUniqueCheck.
Because our FullNameLengthIsInRangeCheck does not need to do anything here, we can omit it and simply inherit the empty implementation from cutplace.checks.AbstractCheck.check_at_end().
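As an illustration of that pattern (collect state in check_row() and evaluate it in check_at_end()), here is a rough sketch of a hypothetical check; the exact method signatures should be taken from cutplace.checks.AbstractCheck and cutplace.checks.IsUniqueCheck rather than from this example:

>>> class AtLeastOneRowCheck(checks.AbstractCheck):
...     """Toy check: remember in check_row() that data were seen, complain in check_at_end() if not."""
...     def __init__(self, description, rule, available_field_names, location=None):
...         super(AtLeastOneRowCheck, self).__init__(description, rule, available_field_names, location)
...         self.reset()
...
...     def reset(self):
...         self._row_count = 0
...
...     def check_row(self, row_map, location):
...         self._row_count += 1
...
...     def check_at_end(self, location):
...         if self._row_count == 0:
...             raise errors.CheckError('data must contain at least one row')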
Using your own checks and field formats¶
Now that you know how to write our own checks and field formats, it would be nice to actually utilize them in a CID. For this purpose, cutplace lets you import plugins that can define their own checks and field formats.
Plugins are standard Python modules that define classes based on
cutplace.fields.AbstractCheck and
cutplace.fields.AbstractFieldFormat. For our example, create a
folder named
~/cutplace_plugins and store a Python module named
myplugins.py in it with the following contents:
""" Example plugins for cutplace. """ from cutplace import checks from cutplace import errors from cutplace import fields from cutplace import ranges class ColorFieldFormat(fields.AbstractFieldFormat): """ Field format representing colors as their names. """ def __init__(self, field_name, is_allowed_to_be_empty, length, rule, data_format): # HACK: Use super() in a way that works both in Python 2 and 3. If the code only has to work with Python 3, # use the cleaner `super().__init__(...)`. # FIXME: super(ColorFieldFormat, self).__init__( fields.AbstractFieldFormat.__init__(self, field_name, is_allowed_to_be_empty, length, rule, data_format, empty_value=(0.0, 0.0, 0.0)) # Use black as "empty" color. def validated_value(self, color_name):() def check_row(self, field_name_to_value_map, location): full_name = field_name_to_value_map["last_name"] + ", " + field_name_to_value_map["first_name"] full_name_length = len(full_name) self._full_name_range.validate("length of full name", full_name_length, location)
The CID can now refer to
ColorFieldFormat as
Color (without
FieldFormat) and to
FullNameLengthIsInRangeCheck as
FullNameLengthIsInRange (without
Check). For example:
See:
cid_colors.ods
Here is a data file where all but one row conforms to the CID:
Item,Color
flower,red
tree,green
sky,blue
gras,green
See:
colors.csv
To tell cutplace where the plugins folder is located, use the command line
option
--plugins. Assuming that your
myplugins.py is stored in
~/cutplace_plugins you can run:
cutplace --plugins ~/cutplace_plugins cid_colors.ods colors.csv
The output is:
ERROR:cutplace:field error: colors.csv (R5C2): field 'color' must match format: color name is 'yellow' but must be one of: red, green, blue
colors.csv: rejected 1 of 5 rows. 0 final checks failed.
If you are unsure which plugins exactly cutplace imports, use
--log=info. For example, the output could contain:
INFO:cutplace:import plugins from "/Users/me/cutplace_plugins"
INFO:cutplace:  import plugins from "/Users/me/cutplace_plugins/myplugins.py"
INFO:cutplace:    fields found: ['ColorFieldFormat']
INFO:cutplace:    checks found: ['FullNameLengthIsInRangeCheck']
Backup and Restore NAS Shares
Phoenix Editions:
Business
Enterprise
Elite
This guide helps you to configure Phoenix to back up and restore data from NAS shares. It gives an overview of the configuration, details about the hardware and software specifications, the configuration procedures, description of the UI, and troubleshooting information.
- Introduction to Phoenix for NAS Shares
- Provides an overview of the Phoenix support to back up shares of a NAS device, system requirements, configuration steps, and backup workflows.
- Set up Phoenix to back up NAS Shares
- Provides information about configuring Phoenix to backup and restore NAS shares and the configuration sequence.
- Restore NAS Shares
- Provides information on the restore procedure of NAS share.
- Manage NAS Device and its Components
- Provides information on managing NAS devices, shares and proxies. This section also describes procedures to manage Phoenix components such as backup sets, backup policies, and administrative groups.
- Manage administrative groups for NAS shares
- Manage backup policies for NAS shares
- Manage backup sets for NAS shares
- Manage NAS proxies for NAS devices
- Manage your NAS device
- Manage your NAS share
- NAS proxy logs and configuration details
- Uninstall NAS proxy from Windows server
- Upgrade NAS proxy on Linux from the command line
- Upgrade NAS proxy on Windows and Linux from Phoenix Management Console
- Upgrade NAS proxy on Windows using installer
- Phoenix Management Console User Interface for Backup of NAS Shares
- Describes the user interface changes related to the configuration of NAS devices and shares.
- FAQs
- Provides answers to FAQs on Phoenix support for NAS devices.
Deployment of the LFMT Client
This section describes how to deploy and configure the LFMT Client software.
Important: LFMT Client places plug-in files in the <GAX Installation Directory>\webapps\gax\WEB-INF\lib folder. This folder is created the first time GAX is started. Please ensure GAX has been started at least once prior to the installation of the LFMT Client.
Installing the LFMT Client
The following directories in the LFMT Client distributable contain the LFMT installation packages:
- For Linux, /linux/bX/ip.
- For Windows, \windows\bX\ip.
Installing the LFMT Client on Linux
- In the directory to which the LFMT Client installation package was copied, execute the install.sh script.
- Enter the location to the GAX installation directory.
- Enter the Destination Folder for the LFMT Client installation.
- Ensure the .jar files in the <LFMT Client Install Directory> have been copied to <GAX Installation Directory>/webapps/WEB-INF/lib.
Installing the LFMT Client on Windows
- In the directory to which the LFMT Client installation package was copied, double-click setup.exe to start the installation.
- On the Welcome screen, click Next.
- Enter the Destination Folder for the LFMT Client installation and click Next.
- On the Ready to Install screen, click Next.
- On the Installation Complete screen, click Finish.
- Ensure the .jar files in the <LFMT Client Install Directory> have been copied to <GAX Installation Directory>\webapps\WEB-INF\lib.
Configuring GAX for use with the LFMT Client
- Log into GAX, and navigate to Configuration Manager.
- From the Environment section, select Applications.
- In the Applications section, select the GAX Application to be configured for use with the LFMT Client.
- In the Application Properties pane, select the Ports tab.
- In the Ports tab, add the following listening ports.
- messaging = <an open port on the GAX host>
- ftmessaging = <an open port on the GAX host>
- Navigate to the Application Options tab.
- Add and configure the following section and options in the GAX configuration object.
Tip: For more information on the GAX LFMT configuration options, please refer to the LFMT GAX Configuration Options section.
- Add the section lfmt.
- (Optional) Configure the following LFMT FTP options:
- Select the Save button to save changes to the application.
- Restart GAX.
Collaborating and sharing
Whether you're working in a public or private project, Kumu makes it easy to collaborate with others and share your work. This guide is a hub of information related to collaborating and sharing, including these topics:
- Public vs. private projects
- Adding contributors
- Handling conflicting changes
- Creating presentations
- Creating share/embed link
Public vs. private projects
When you create a new project from your Kumu dashboard, you'll be prompted to choose whether it should be public or private, and the option you choose will affect how you can share your finished project.
Public projects can be viewed by anyone who has the link, and they can be edited by you and anyone you add as a contributor. They are also indexed by search engines—that is, people can potentially find your project online if they search for the right keywords. Finally, public projects can be forked by other Kumu users, allowing them to build upon your work and offer new insights.
Public projects are free, and you can create as many as you want, no matter what account you own or plan you're subscribed to.
We love it when you share your work, but we know that it's sometimes necessary to keep it under wraps! For that purpose, we offer private projects. Private projects don't get indexed by search engines, and they can only be viewed by you and anyone you have added as a contributor. Private projects are a paid service—check out our guide on accounts and plans to see a full list of pricing options.
Both public and private projects can be shared using presentations and share/embed links, and private projects allow you to password-protect your presentations for an added layer of security.
Add a contributor
Add a contributor to a personal project
To add a contributor to a project owned by your personal account, you first need to make sure the contributor has their own Kumu account. If they don't have one yet, they can sign up for free. Once they have signed up, you can open your project settings, click on MEMBERS, type in the contributor's Kumu username, and click "Add contributor".
Anytime you add a contributor to a project (public or private) owned by your personal account, you are granting them view and edit access. However, they won't be able to add other contributors, change project privacy, or rename, transfer, or delete the project.
Add a contributor to an organization project
When you're adding a contributor to a project owned by an organization, there are a few more steps involved, but you'll be able to choose whether the contributor has view-only, edit, admin, or no access. For more information on how to add contributors to organization projects and manage their access, check out the full guide on organizations.
Handle conflicting changes
If you're editing your project, and somebody else is editing at the same time, their changes won't appear on your screen in real-time (and vice-versa). Instead of showing real-time changes, Kumu waits until you refresh the page to sync you up with your team and show you the most up-to-date version of your project.
This was a design decision we made to encourage you to carefully plan who is working on which part of the project, and when. In the long run, this kind of planning helps reduce complexity—and when you're using Kumu to tackle a tough problem, reducing complexity is vital!
If you happen to edit the same part of the map at the same time (whether that's an element name, map description, view, or something else), Kumu will detect this and prompt you to review any conflicts.
Let's walk through a quick example. Say you and a teammate both happened to be mapping influential people in Silicon Valley one afternoon. You both click on Reid and decide that his bio needs a bit of sprucing up. You save your changes and SURPRISE! The below screen pops up to let you know that you're not the only one editing Reid's bio today.
You click "Get started" and see a window that highlights the changes between your version and the version on our servers. Lines you've added show up as blue text with a "+'' sign in front. Lines that you've deleted or someone else has added show up as red text with a "-" sign in front.
Now you're in charge. Choose which text you want to keep and which you want to remove. Make any remaining corrections and then remove any "+" or "-" added in the merge process. You're finished when you see all grey text:
If there are multiple conflicts, you'll be taken through each conflict one by one. Once you see this screen:
...you're all set. If only handling conflicts in the real world was this easy!
Create a presentation
Presentations combine the best of PowerPoint, Prezi, and Kumu into one easy-to-use tool. You can create a new presentation and edit existing presentations by clicking the menu icon in the upper left of your map, then clicking PRESENTATIONS. For more info on the power of presentations, watch the video below, or check out our full guide on presentations
Create a share/embed link
Share/embed links make it easy to send somebody an interactive, read-only version of your map, or embed that version on a blog or website. To create a share/embed link, click the ellipsis ... in the bottom toolbar, then choose "Share / embed this map". For more information about share/embed links, including customizeable options, check out the full guide. | https://docs.kumu.io/overview/collaboration.html | 2018-06-17T22:05:38 | CC-MAIN-2018-26 | 1529267859817.15 | [array(['../images/add-collaborator.png', 'add contributor'], dtype=object)
array(['../images/merge-1.png', 'Conflicts Step 1'], dtype=object)
array(['../images/merge-2.png', 'Conflicts Step 2'], dtype=object)
array(['../images/merge-3.png', 'Conflicts Step 3'], dtype=object)
array(['../images/merge-4.png', 'Conflicts Step 4'], dtype=object)] | docs.kumu.io |
- Deploying
Some organizations use three firewalls to protect their internal networks. The three firewalls divide the DMZ into two stages to provide an extra layer of security for the internal network. This network configuration is called a double-hop DMZ. You can deploy NetScaler Gateway in a double-hop DMZ with XenApp and StoreFront.
You can configure a double-hop DMZ to work with Citrix StoreFront or the Web Interface. Users connect by using Citrix Receiver.
For more information about this deployment option, see Deploying NetScaler Gateway in a Double-Hop DMZ. | https://docs.citrix.com/de-de/netscaler-gateway/11/double-hop-dmz/ng-deploy-double-hop-con.html | 2018-06-17T21:46:47 | CC-MAIN-2018-26 | 1529267859817.15 | [] | docs.citrix.com |
Difference between revisions of "JRequest::getInt"
From Joomla! Documentation
Revision as of 01:26, 1 July 2012
This Namespace has been archived - Please Do Not Edit or Create Pages in this namespace. Pages contain information for a Joomla! version which is no longer supported. It exists only as a historical reference, will not be improved and its content may be incomplete.
JRequest:
Example
$ourresult = JRequest::getInt( 'inputbox', 0, 'post' );
<CodeExamplesForm /> | https://docs.joomla.org/index.php?title=JRequest::getInt/11.1&diff=68119&oldid=57544 | 2016-02-06T03:43:16 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.joomla.org |
SQLAlchemy 1.1 Documentation
SQLAlchemy ORM
- Object Relational Tutorial
- Mapper Configuration
- Types of Mappings
- Mapping Columns and Expressions
- Mapping Class Inheritance Hierarchies¶
- Joined Table Inheritance
- Single Table Inheritance
- Concrete Table Inheritance
- Using Relationships with Inheritance
- Using Inheritance with Declarative
- Non-Traditional Mappings
- Configuring a Version Counter
- Class Mapping API
- Relationship Configuration
- Using the Session
- Events and Internals
- ORM Extensions
- ORM Examples
Project Versions
Mapping Class Inheritance Hierarchies¶
SQLAlchemy supports three forms of inheritance: single table inheritance, where several types of classes are represented by a single table, concrete table inheritance, where each type of class is represented by independent tables, and joined table inheritance, where the class hierarchy is broken up among dependent tables, each class represented by its own table that only includes those attributes local to that class.
The most common forms of inheritance are single and joined table, while concrete inheritance presents more configurational challenges.
When mappers are configured in an inheritance relationship, SQLAlchemy has the ability to load elements polymorphically, meaning that a single query can return objects of multiple types.
Joined Table Inheritance¶
In joined table inheritance, each class along a particular classes’ list of
parents is represented by a unique table. The total set of attributes for a
particular instance is represented as a join along all tables in its
inheritance path. Here, we first define the
Employee class.
This table will contain a primary key column (or columns), and a column
for each attribute that’s represented by
Employee. In this case it’s just
name:
class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String(50)) type = Column(String(50)) __mapper_args__ = { 'polymorphic_identity':'employee', 'polymorphic_on':type }
The mapped table also has a column called
type. The purpose of
this column is to act as the discriminator, and stores a value
which indicates the type of object represented within the row. The column may
be of any datatype, though string and integer are the most common.
Warning
Currently, only one discriminator column may be set, typically on the base-most class in the hierarchy. “Cascading” polymorphic columns are not yet supported.
The discriminator column is only needed if polymorphic loading is desired, as is usually the case. It is not strictly necessary that it be present directly on the base mapped table, and can instead be defined on a derived select statement that’s used when the class is queried; however, this is a much more sophisticated configuration scenario.
The mapping receives additional arguments via the
__mapper_args__
dictionary. Here the
type column is explicitly stated as the
discriminator column, and the polymorphic identity of
employee
is also given; this is the value that will be
stored in the polymorphic discriminator column for instances of this
class.
We next define
Engineer and
Manager subclasses of
Employee.
Each contains columns that represent the attributes unique to the subclass
they represent. Each table also must contain a primary key column (or
columns), and in most cases a foreign key reference to the parent table:
class Engineer(Employee): __tablename__ = 'engineer' id = Column(Integer, ForeignKey('employee.id'), primary_key=True) engineer_name = Column(String(30)) __mapper_args__ = { 'polymorphic_identity':'engineer', } class Manager(Employee): __tablename__ = 'manager' id = Column(Integer, ForeignKey('employee.id'), primary_key=True) manager_name = Column(String(30)) __mapper_args__ = { 'polymorphic_identity':'manager', }
It is standard practice that the same column is used for both the role of primary key as well as foreign key to the parent table, and that the column is also named the same as that of the parent table. However, both of these practices are optional. Separate columns may be used for primary key and parent-relationship, the column may be named differently than that of the parent, and even a custom join condition can be specified between parent and child tables instead of using a foreign key.
Joined inheritance primary keys.
In other words, the
id
columns of both the
engineer and
manager tables are not used to locate
Engineer or
Manager objects - only the value in
employee.id is considered.
engineer.id and
manager.id are
still of course critical to the proper operation of the pattern overall as
they are used to locate the joined row, once the parent row has been
determined within a statement.
With the joined inheritance mapping complete, querying against
Employee will return a combination of
Employee,
Engineer and
Manager objects. Newly saved
Engineer,
Manager, and
Employee objects will automatically populate the
employee.type column with
engineer,
manager, or
employee, as
appropriate.
Basic Control of Which Tables are Queried¶
The
orm.with_polymorphic() function and the
with_polymorphic() method of
Query affects the specific tables
which the
Query selects from. Normally, a query such as this:
session.query(Employee).all()
...selects only from the
employee table. When loading fresh from the
database, our joined-table setup will query from the parent table only, using
SQL such as this:
SELECT employee.id AS employee_id, employee.name AS employee_name, employee.type AS employee_type FROM employee []
provides this.
Telling our query to polymorphically load
Engineer and
Manager
objects, we can use the
orm.with_polymorphic() function
to create a new aliased class which represents a select of the base
table combined with outer joins to each of the inheriting tables: entity returned by
orm.with_polymorphic() is an
AliasedClass
object, which can be used in a
Query like any other alias, including
named attributes for those attributes on the
Employee class. In our' ) )
orm.with_polymorphic() accepts a single class or
mapper, a list of classes/mappers, or the string
'*' to indicate all
subclasses:
# join to the engineer table entity = with_polymorphic(Employee, Engineer) # join to the engineer and manager tables entity = with_polymorphic(Employee, [Engineer, Manager]) # join to all subclass tables entity = with_polymorphic(Employee, '*') # use the 'entity' with a Query object session.query(entity).all()
It also accepts a third employee = Employee.__table__ manager = Manager.__table__ engineer = Engineer.__table__ entity = with_polymorphic( Employee, [Engineer, Manager], employee.outerjoin(manager).outerjoin(engineer) ) # use the 'entity' with a Query object session.query(entity).all()
Note that if you only need to load a single subtype, such as just the
Engineer objects,
orm.with_polymorphic() is
not needed since you would query against the
Engineer class directly.
Query.with_polymorphic() has the same purpose
as
orm.with_polymorphic(), except is not as
flexible in its usage patterns in that it only applies to the first full
mapping, which then impacts all occurrences of that class or the target
subclasses within the
Query. For simple cases it might be
considered to be more succinct:
session.query(Employee).with_polymorphic([Engineer, Manager]).\ filter(or_(Engineer.engineer_info=='w', Manager.manager_data=='q'))
New in version 0.8:
orm.with_polymorphic(), an improved version of
Query.with_polymorphic() method.
The mapper also accepts
with_polymorphic as a configurational argument so
that the joined-style load will be issued automatically. This argument may be
the string
'*', a list of classes, or a tuple consisting of either,
followed by a selectable:
class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) type = Column(String(20)) __mapper_args__ = { 'polymorphic_on':type, 'polymorphic_identity':'employee', 'with_polymorphic':'*' } class Engineer(Employee): __tablename__ = 'engineer' id = Column(Integer, ForeignKey('employee.id'), primary_key=True) __mapper_args__ = {'polymorphic_identity':'engineer'} class Manager(Employee): __tablename__ = 'manager' id = Column(Integer, ForeignKey('employee.id'), primary_key=True) __mapper_args__ = {'polymorphic_identity':'manager'}
The above mapping will produce a query similar to that of
with_polymorphic('*') for every query of
Employee objects.
Using
orm.with_polymorphic() or
Query.with_polymorphic()
will override the mapper-level
with_polymorphic setting..
New in version 0.8:
orm.with_polymorphic()is in addition to the existing
Querymethod
Query.with_polymorphic(), which has the same purpose but is not as flexible in its usage. the examples at Basic Control of Which Tables are Queried.
Advanced Control of Which Tables are Queried¶
The
with_polymorphic functions work fine for
simplistic scenarios. However, direct control of table rendering
is called for, such as the case when one wants to
render to only the subclass table and not the parent table.
This use case can be achieved by using the mapped
Table
objects directly. For example, to
query the name of employees with particular criterion:
engineer = Engineer.__table__ manager = Manager.__table__ manager or engineer,', cascade='all, delete-orphan') class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) type = Column(String(20)) company_id = Column(Integer, ForeignKey('company.id')) __mapper_args__ = { 'polymorphic_on':type, 'polymorphic_identity':'employee', 'with_polymorphic':'*' }
join() method as well as the
any() and
has() operators will create
a join from
company to
employee, without including
engineer or
manager:
employee = Employee.__table__ engineer = Engineer.__table__ session.query(Company).\ join((employee.join(engineer), Company.employees)).\ filter(Engineer.engineer_info=='someinfo')
of_type() accepts a
single class argument. More flexibility can be achieved either by
joining to an explicit join as above, or by using the
orm.with_polymorphic()
function to create a polymorphic selectable:
manager_and_engineer = with_polymorphic( Employee, [Manager, Engineer], aliased=True) session.query(Company).\ join(manager_and_engineer, Company.employees).\ filter( or_(manager_and_engineer.Engineer.engineer_info=='someinfo', manager_and_engineer.Manager.manager_data=='somedata') )
Above, we use the
aliased=True argument with
orm.with_polymorhpic()
so that the right hand side of the join between
Company and
manager_and_engineer
is converted into an aliased subquery. Some backends, such as SQLite and older
versions of MySQL can’t handle a FROM clause of the following form:
FROM x JOIN (y JOIN z ON <onclause>) ON <onclause>
Using
aliased=True instead renders it more like:
FROM x JOIN (SELECT * FROM y JOIN z ON <onclause>) AS anon_1 ON <onclause>
The above join can also be expressed more succinctly by combining
of_type()
with the polymorphic construct:
manager_and_engineer = with_polymorphic( Employee, [Manager, Engineer], aliased=True) session.query(Company).\ join(Company.employees.of_type(manager_and_engineer)).\ filter( or_(manager_and_engineer.Engineer.engineer_info=='someinfo', manager_and_engineer.
New in version 0.8:
of_type() accepts
orm.aliased() and
orm.with_polymorphic() constructs in conjunction
with
Query.join(),
any() and
has().
Eager Loading of Specific or Polymorphic Subtypes¶
The
joinedload(),
subqueryload(),
contains_eager() and
other loading-related options also support
paths which make use of
of_type().
Below we load
Company rows while eagerly loading related
Engineer
objects, querying the
employee and
engineer tables simultaneously:
session.query(Company).\ options( subqueryload(Company.employees.of_type(Engineer)). subqueryload("machines") ) )
As is the case with
Query.join(),
of_type()
also can be used with eager loading and
orm.with_polymorphic()
at the same time, so that all sub-attributes of all referenced subtypes
can be loaded:
manager_and_engineer = with_polymorphic( Employee, [Manager, Engineer], aliased=True) session.query(Company).\ options( joinedload(Company.employees.of_type(manager_and_engineer)) ) )
New in version 0.8:
joinedload(),
subqueryload(),
contains_eager()
and related loader options support
paths that are qualified with
of_type(), supporting
single target types as well as
orm.with_polymorphic() targets.
Another option for the above query is to state the two subtypes separately;
the
joinedload() directive should detect this and create the
above
with_polymorphic construct automatically:
session.query(Company).\ options( joinedload(Company.employees.of_type(Manager)), joinedload(Company.employees.of_type(Engineer)), ) )
New in version 1.0: Eager loaders such as
joinedload() will create a polymorphic
entity when multiple overlapping
of_type()
directives are encountered.:
class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String(50)) manager_data = Column(String(50)) engineer_info = Column(String(50)) type = Column(String(20)) __mapper_args__ = { 'polymorphic_on':type, 'polymorphic_identity':'employee' } class Manager(Employee): __mapper_args__ = { 'polymorphic_identity':'manager' } class Engineer(Employee): __mapper_args__ = { 'polymorphic_identity':'engineer' }
Note that the mappers for the derived classes Manager and Engineer omit the
__tablename__, indicating they do not have a mapped table of
their own.
Concrete Table Inheritance¶
This form of inheritance maps each class to a distinct table. As concrete inheritance has a bit more conceptual overhead, first we’ll illustrate what these tables look like as Core table metadata:
employees_table = Table( 'employee', metadata, Column('id', Integer, primary_key=True), Column('name', String(50)), ) managers_table = Table( 'manager', metadata, Column('id', Integer, primary_key=True), Column('name', String(50)), Column('manager_data', String(50)), ) engineers_table = Table( 'engineer', metadata, Column('id', Integer, primary_key=True), Column('name', String(50)), Column('engineer_info', String(50)), )
Notice in this case there is no
type column; for polymorphic loading,
additional steps will be needed in order to “manufacture” this information
during a query.
Using classical mapping, we can map our three classes independently without
any relationship between them; the fact that
Engineer and
Manager
inherit from
Employee does not have any impact on a classical mapping:
class Employee(object): pass class Manager(Employee): pass class Engineer(Employee): pass mapper(Employee, employees_table) mapper(Manager, managers_table) mapper(Engineer, engineers_table)
However when using Declarative, Declarative assumes an inheritance mapping
between the classes because they are already in an inheritance relationship.
So to map our three classes declaratively, we must include the
orm.mapper.concrete parameter within the
__mapper_args__:
class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String(50)) class Manager(Employee): __tablename__ = 'manager' id = Column(Integer, primary_key=True) name = Column(String(50)) manager_data = Column(String(50)) __mapper_args__ = { 'concrete': True } class Engineer(Employee): __tablename__ = 'engineer' id = Column(Integer, primary_key=True) name = Column(String(50)) engineer_info = Column(String(50)) __mapper_args__ = { 'concrete': True }
Two critical points should be noted:
- We must define all columns explicitly on each subclass, even those of the same name. A column such as
Employee.namehere is not copied out to the tables mapped by
Manageror
Engineerfor us.
- while the
Engineerand
Managerclasses are mapped in an inheritance relationship with
Employee, they still do not include polymorphic loading.
Concrete Polymorphic Loading¶
To load polymorphically, the
orm.mapper.with_polymorphic argument is required, along
with a selectable indicating how rows should be loaded. Polymorphic loading
is most inefficient with concrete inheritance, so if we do seek this style of
loading, while it is possible it’s less recommended. In the case of concrete
inheritance, it means we must construct a UNION of all three tables.
First illustrating this with classical mapping, SQLAlchemy includes a helper
function to create this UNION called
polymorphic_union(), which
will map all the different columns into a structure of selects with the same
numbers and names of columns, and also generate a virtual
type column for
each subselect. The function is called after all three tables are declared,
and is then combined with the mappers:
from sqlalchemy.orm import polymorphic_union.id AS pjoin_id, pjoin.name AS pjoin_name, pjoin.type AS pjoin_type, pjoin.manager_data AS pjoin_manager_data, pjoin.engineer_info AS pjoin_engineer_info FROM ( SELECT employee.id AS id, employee.name AS name, CAST(NULL AS VARCHAR(50)) AS manager_data, CAST(NULL AS VARCHAR(50)) AS engineer_info, 'employee' AS type FROM employee UNION ALL SELECT manager.id AS id, manager.name AS name, manager.manager_data AS manager_data, CAST(NULL AS VARCHAR(50)) AS engineer_info, 'manager' AS type FROM manager UNION ALL SELECT engineer.id AS id, engineer.name AS name, CAST(NULL AS VARCHAR(50)) AS manager_data, engineer.engineer_info AS engineer_info, 'engineer' AS type FROM engineer ) AS pjoin
The above UNION query needs to manufacture “NULL” columns for each subtable in order to accommodate for those columns that aren’t part of the mapping.
In order to map with concrete inheritance and polymorphic loading using
Declarative, the challenge is to have the polymorphic union ready to go
when the mappings are created. One way to achieve this is to continue to
define the table metadata before the actual mapped classes, and specify
them to each class using
__table__:
class Employee(Base): __table__ = employee_table __mapper_args__ = { 'polymorphic_on':pjoin.c.type, 'with_polymorphic': ('*', pjoin), 'polymorphic_identity':'employee' } class Engineer(Employee): __table__ = engineer_table __mapper_args__ = {'polymorphic_identity':'engineer', 'concrete':True} class Manager(Employee): __table__ = manager_table __mapper_args__ = {'polymorphic_identity':'manager', 'concrete':True}
Using the Declarative Helper Classes¶
Another way is to use a special helper class that takes on the fairly
complicated task of deferring the production of
Mapper objects
until all table metadata has been collected, and the polymorphic union to which
the mappers will be associated will be available. This is available via
the
AbstractConcreteBase and
ConcreteBase classes. For
our example here, we’re using a “concrete” base, e.g. an
Employee row
can exist by itself that is not an
Engineer or a
Manager. The
mapping would look like:
from sqlalchemy.ext.declarative import ConcreteBase class Employee(ConcreteBase, Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String(50)) __mapper_args__ = { 'polymorphic_identity':'employee', 'concrete':True } }
There is also the option to use a so-called “abstract” base; where we wont
actually have an
employee table at all, and instead will only have
manager and
engineer tables. The
Employee class will never be
instantiated directly. The change here is that the base mapper is mapped
directly to the “polymorphic union” selectable, which no longer includes
the
employee table. In classical mapping, this is:
from sqlalchemy.orm import polymorphic_union pjoin = polymorphic_union({ 'manager': managers_table, 'engineer': engineers_table }, 'type', 'pjoin') employee_mapper = mapper(Employee, pjoin, with_polymorphic=('*', pjoin), polymorphic_on=pjoin.c.type) manager_mapper = mapper(Manager, managers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='manager') engineer_mapper = mapper(Engineer, engineers_table, inherits=employee_mapper, concrete=True, polymorphic_identity='engineer')
Using the Declarative helpers, the
AbstractConcreteBase helper
can produce this; the mapping would be:
from sqlalchemy.ext.declarative import AbstractConcreteBase class Employee(AbstractConcreteBase, Base): pass }
See also
Concrete Table Inheritance - in the Declarative reference documentation. | http://docs.sqlalchemy.org/en/latest/orm/inheritance.html | 2016-02-06T02:28:57 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.sqlalchemy.org |
Previewing Files
The file browser
preview pane
, located in the lower portion of the file browser window, provides you with in-depth information about a file.
Showing the Preview Pane
Click the the Show Preview button in the toolbar to show the Preview area. The preview area will be displayed and the toolbar icon label will change to Hide Toolbar.
The visual contents of the preview pane will vary according to the type of file you have selected.
Media files such as audio files and QuickTime movies permit you to watch and/or listen to the file.
Image files will display a small copy of the image.
The contents of documentation files will be displayed in the preview area. You can open the document in a separate window by double-clicking on its name.
Max object files provide links to open the object's help file and reference page, as well as a brief description of what the object does.
All other types of files or folders will list information such as type, location, and modification date. | https://docs.cycling74.com/max5/vignettes/core/file_browser_preview.html | 2016-02-06T03:30:33 | CC-MAIN-2016-07 | 1454701145751.1 | [] | docs.cycling74.com |
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public final class InstanceConfigurationSummary extends Object
Summary information for an instance configuration.
Note: Objects should always be created or deserialized using the
InstanceConfigurationSummary.Builder. This model distinguishes fields that are
null because they are unset from fields that are explicitly set to
null. This is done in the setter methods of the
InstanceConfigurationS","id","timeCreated","definedTags","freeformTags"}) @Deprecated public InstanceConfigurationSummary(String compartmentId, String displayName, String id, Date timeCreated, Map<String,Map<String,Object>> definedTags, Map<String,String> freeformTags)
public static InstanceConfigurationSummary.Builder builder()
Create a new builder.
public String getCompartmentId()
The OCID of the compartment containing the instance configuration.
public String getDisplayName()
A user-friendly name for the instance configuration.
public String getId()
The OCID of the instance configuration.
public Date getTimeCreated()
The date and time the instance configuration was created, in the format defined by RFC3339. Example:
2016-08-25T21:10:29.600Z | https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/core/model/InstanceConfigurationSummary.html | 2020-10-20T01:05:41 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.cloud.oracle.com |
?
You're in the right place for the latest documentation for AEM 6.4.
We also have documentation for older versions of Adobe Experience Manager. You can use the version component on any page to move between versions, or pick from this list.
** Indicates versions no longer officially supported by Adobe.
Where are AEM 6.4 release notes?
You can find all the release notes for AEM here:
If you have questions you can reach out to our AEM Community team or ask us on Twitter @AdobeExpCare . | https://docs.adobe.com/content/help/en/experience-manager-64/user-guide/troubleshooting/new.html | 2020-10-20T01:08:51 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.adobe.com |
TOPICS×
Creating adaptive forms using JSON Schema
Prerequisites
Authoring an adaptive form using an JSON Schema as its form model requires basic understanding of JSON Schema. It is recommended to read through the following content before this article.
Using a JSON Schema as form:
"birthDate": { "type": "string", "format": "date", "pattern": "date{DD MMMM, YYYY}", "aem:affKeyword": [ "DOB", "Date of Birth" ], "description": "Date of birth in DD MMMM, YYYY", "aem:afProperties": { "displayPictureClause": "date{DD MMMM, YYYY}", "displayPatternType": "date{DD MMMM, YYYY}", "validationPatternType": "date{DD MMMM, YYYY}", "validatePictureClause": "date{DD MMMM, YYYY}", "validatePictureClauseMessage": "Date must be in DD MMMM, YYYY format." }
Common schema properties.
- The extension of JSON Schema file must be kept .schema.json. For example, <filename>.schema.json.
Sample JSON Schema
Here's an example of an JSON Schema.
{ "" } } }
Reusable schema definitions.
Pre-Configuring fields in JSON Schema Definition }
Configure scripts or expressions for form objects
JavaScript is the expression language of adaptive forms. All the expressions are valid JavaScript expressions and use adaptive forms scripting model APIs. You can pre-configure form objects to evaluate an expression on a form event.
Use the aem:afproperties property to preconfigure adaptive form expressions or scripts for adaptive form components. For example, when the initialize event is triggered, the below code sets value of telephone field and prints a value to the log :
"telephone": { "type": "string", "pattern": "/\\d{10}/", "aem:affKeyword": ["phone", "telephone","mobile phone", "work phone", "home phone", "telephone number", "telephone no", "phone number"], "description": "Telephone Number", "aem:afProperties" : { "sling:resourceType" : "fd/af/components/guidetelephone", "guideNodeClass" : "guideTelephone", "events": { "Initialize" : "this.value = \"1234567890\"; console.log(\"ef:gh\") " } } }
You should be a member of the forms-power-user group to configure scripts or expressions for form object. The below table lists all the script events supported for an adaptive form component.
Some examples of using events in a JSON are hiding a field on initialize event and configure value of another field on value commit event. For detailed information about creating expressions for the script events, see Adaptive Form Expressions .
Here is the sample JSON code for aforementioned examples.
Hiding a field on initialize event
"name": { "type": "string", "aem:afProperties": { "events" : { "Initialize" : "this.visible = false;" } } }
Configure value of another field on value commit event
"Income": { "type": "object", "properties": { "monthly": { "type": "number", "aem:afProperties": { "events" : { "Value Commit" : "IncomeYearly.value = this.value * 12;" } } }, "yearly": { "type": "number", "aem:afProperties": { "name": "IncomeYearly" } } } } should be the extension of the JSON schema file?
The extension of JSON Schema file must be .schema.json. For example, <filename>.schema.json. | https://docs.adobe.com/content/help/en/experience-manager-65/forms/adaptive-forms-advanced-authoring/adaptive-form-json-schema-form-model.html | 2020-10-20T00:28:11 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.adobe.com |
Webhooks and Changes Feed
This guide describes:
- Changes Feed
The changes feed returns a sorted list of changes made to documents in the database.
- Webhooks
Sync Gateway can detect document updates and post the updated documents to one or more external URLs.
Here’s a table that compares each API in different scenarios:
Changes Feed
This article describes how to use the changes feed API to integrate Sync Gateway with other backend processes. For instance if you have a channel called "needs-email" you could have a bot that sends an email and then saves the document back with a flag to keep it out of the "needs-email" channel.
The changes feed API is a REST API endpoint (
/+{db}+/_changes) that returns a sorted list of changes made to documents in the database.
It permits applications to implement business logic that reacts to changes in documents.
There are several methods of connecting to the changes feed (also know as the feed type).
The first 3 methods (
polling,
longpoll and
continuous) are based on the CouchDB API.
The last method (
websocket) is specific to Sync Gateway.
- polling (default)
Returns the list of changes immediately. A new request must be sent to get the next set of changes.
- longpolling
In addition to regular polling, if the request is sent with a special
last_seqparameter, it will stay open until a new change occurs and is posted.
- continuous
The continuous changes API allows you to receive change notifications as they come, in a single HTTP connection. You make a request to the continuous changes API and both you and Sync Gateway will hold the connection open “forever”.
- websockets
The WebSocket mode is conceptually the same as continuous mode but it should avoid issues with proxy servers and gateways that cause continuous mode to fail in many real-world mobile use cases.
WebSockets
The primary problem with the continuous mode is buggy HTTP chunked-mode body parsing that buffers up the entire response before sending any of it on; since the continuous feed response never ends, nothing gets through to the client. This can often be a problem with proxy servers but can be avoided by using the WebSocket method.
The client requests WebSockets by setting the
_changes URL’s feed query parameter to
websocket, and opening a WebSocket connection to that URL:
GET /db/_changes?feed=websocket HTTP/1.1 Connection: Upgrade Upgrade: websocket ...
Specifying Options
After the connection opens, the client MUST send a single textual message to the server, specifying the feed options.
This message is identical to the body of a regular HTTP POST to
_changes, i.e. it’s a JSON object whose keys are the parameters (for example,
{"since": 112233, "include_docs": true}).
Depending on which client you use, make sure that options are sent as binary.
Messages
Once the server receives the options, it will begin to send text-format messages. The messages are JSON; each contains one or more change notifications (in the same format as the regular feed) wrapped in an array:
[ {"seq":1022,"id":"beer_Indiana_Amber","changes":[{"rev":"1-e8f6b2e1f220fa4c8a64d65e68469842"}]}, {"seq":1023,"id":"beer_Iowa_Pale_Ale","changes":[{"rev":"1-db962c6d93c3f1720cc7d3b6e50ac9df"}]} ]
(The current server implementation sends at most one notification per message, but this could change. Clients should accept any number.)
An empty array is a special case: it denotes that at this point the feed has finished sending the backlog of existing revisions, and will now wait until new revisions are created. It thus indicates that the client has "caught up" with the current state of the database.
The
websocket mode behaves like the
continuous mode: after the backlog of notifications (if any) is sent, the connection remains open and new notifications are sent as they occur.
Compressed Feed
For efficiency, the feed can be sent in compressed form; this greatly reduces the bandwidth and is highly recommended.
To signal that it accepts a compressed feed, the client adds
"accept_encoding":"gzip" to the feed options in the initial message it sends.
Compressed messages are sent from the server as binary. This is of course necessary as they contain gzip data, and it also lets the client distinguish them from uncompressed messages. (The server will only ever send one kind.)
The compressed messages sent from the server constitute a single stream of gzip-compressed data. They cannot be decompressed individually! Instead, the client should open a gzip decompression session when the feed opens, and write each binary message to it as input as it arrives. The output from the decompressor consists of a sequence of JSON arrays, each of which has the same interpretation as a text message (above).; }` } ] } | https://docs.couchbase.com/sync-gateway/2.0/server-integration.html | 2020-10-20T00:59:29 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.couchbase.com |
Rename-ADObject
Changes the name of an Active Directory object.
Syntax
Rename-ADObject [-WhatIf] [-Confirm] [-AuthType <ADAuthType>] [-Credential <PSCredential>] [-Identity] <ADObject> [-NewName] <String> [-Partition <String>] [-PassThru] [-Server <String>] [<CommonParameters>]
Description.
The Identity parameter specifies the object to rename. Rename NewName parameter defines the new name for the object and must be specified.
Examples
-------------------------- EXAMPLE 1 --------------------------
C:\PS>Rename-ADObject -Identity "CN=HQ,CN=Sites,CN=Configuration,DC=FABRIKAM,DC=COM" -NewName UnitedKingdomHQ
Description
Rename the name of an existing site 'HQ' to the new name 'UnitedKingdomHQ'. If the distinguished name is provided in the -Identity parameter, then the -Partition parameter is not required.
-------------------------- EXAMPLE 2 --------------------------
C:\PS>Rename-ADObject -Identity "4777c8e8-cd29-4699-91e8-c507705a0966" -NewName "AmsterdamHQ" -Partition "CN=Configuration,DC=FABRIKAM,DC=COM"
Description --------------------------
C:\PS>Rename-ADObject "OU=ManagedGroups,OU=Managed,DC=Fabrikam,DC=Com" -NewName Groups
Description
Rename the object with the DistinguishedName 'OU=ManagedGroups,OU=Managed,DC=Fabrikam,DC=Com' to 'Groups'.
-------------------------- EXAMPLE 4 --------------------------
C:\PS>Rename-ADObject -Identity "4777c8e8-cd29-4699-91e8-c507705a0966" -NewName "DavidAhs"
Description --------------------------
C:\PS>Rename-ADObject "CN=Apps,DC=AppNC" -NewName "InternalApps" -server "FABRIKAM-SRV1:60000"
Description
Rename the container 'CN=Apps,DC=AppNC' to 'InternalApps' in an LDS instance.
Prompts you for confirmation before running the cmdlet.
Specifies the new name of the object. This parameter sets the Name property of the Active Directory object. The LDAP Display Name (ldapDisplayName) of this property is "name".
The following example shows how to set this parameter to a name string.
-NewName "SaraDavis" the new or modified object. By default (i.e. if -PassThru is not specified),"
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Inputs
None or Microsoft.ActiveDirectory.Management.AD
None
Notes
This cmdlet does not work with an Active Directory Snapshot.
This cmdlet does not work with a read-only domain controller. | https://docs.microsoft.com/en-us/powershell/module/activedirectory/rename-adobject?view=winserver2012-ps | 2020-10-20T01:21:28 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.microsoft.com |
knife.rb¶
A knife.rb file is used to specify the chef-repo-specific configuration details for knife.
A knife.rb file:
- Is loaded every time this executable is run
- Is not created by default
- Is located by default at ~/chef-repo/.
Settings¶
This configuration file has the following settings:
-.*, *.example.com, *.dev.example.com'
- syntax_check_cache_path
- All files in a cookbook must contain valid Ruby syntax. Use this setting to specify the location in which knife caches information about files that have been checked for valid Ruby syntax.
-[:distro] = 'ubuntu10.04-gems' knife[:template_file] = '' knife[:bootstrap_version] = '' knife[:bootstrap_proxy] = ''
Some of the optional knife.rb settings are used often, such as the template file used in a bootstrap operation. The frequency of use of any option varies from organization to organization, so even though the following settings are often added to a knife.rb file, they may not be the right settings to add for every organization:
- knife[:bootstrap_proxy]
- The proxy server for the node that is the target of a bootstrap operation.
- knife[:bootstrap_version]
- The version of the chef-client to install.
- knife[.
- knife[:template_file]
- The path to a template file to be used during a bootstrap operation..
Warning
Review the full list of optional settings that can be added to the knife.rb file. Many of these optional settings should not be added to the knife.rb file. The reasons for not adding them can vary. For example, using --yes as a default in the knife.rb file will cause knife to always assume that “Y” is the response to any prompt, which may lead to undesirable outcomes. Other settings, such as --hide-healthy (used only with the knife status subcommand) or --bare-directories (used only with the knife list subcommand) probably aren’t used often enough (and in the same exact way) to justify adding them to the knife.rb file. In general, if the optional settings are not listed on the main knife.rb topic, then add settings only after careful consideration. Do not use optional settings in a production environment until after the setting’s performance has been validated in a safe testing environment.'] | https://docs-archive.chef.io/release/11-18/config_rb_knife.html | 2020-10-20T01:05:56 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs-archive.chef.io |
BlockScout provides a comprehensive, easy-to-use interface for users to view, confirm, and inspect transactions on EVM (Ethereum Virtual Machine) blockchains. BlockScout currently hosts the POA Network, xDai Chain, Ethereum Classic, Sokol & Kovan testnets and other testnets, private chains and sidechains. A complete list of projects is available here.
BlockScout is an Elixir application that allows users to search transactions, view accounts and balances, and verify smart contracts on Ethereum including forks and sidechains.
Currently available block explorers (i.e. Etherscan and Etherchain) are closed systems which are not independently verifiable. As Ethereum sidechains continue to proliferate in both private and public settings, transparent tools are needed to analyze and validate transactions.
Information on the latest release and version history is available on our forum | https://docs.blockscout.com/ | 2020-10-20T00:08:49 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.blockscout.com |
Configure Kafka brokers
Learn how to configure PAM authentication for Kafka brokers.
You can enable Kafka to use PAM for client to broker authentication. Broker configuration is done by configuring the required properties in Cloudera Manager.
- In Cloudera Manager select the Kafka service.
- Select Configuration.
- Enable PAM authentication:
- Find the SASL/PLAIN Authentication property.
- Click the radio button next to PAM. Do this for all required Kafka services.
- Configure the PAM service name:
- Find the PAM Service property.
- Enter a valid PAM service name. The property defaults to
- Click Save Changes.
- Restart the Kafka service | https://docs.cloudera.com/runtime/7.2.2/kafka-securing/topics/kafka-secure-pam-broker.html | 2020-10-20T01:02:53 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.cloudera.com |
The general format of a URL request to the Smart Construction web API is:[APIName]
Host is the name of the SmartPlant Foundation web server which has the Smart Construction installed on it.
SiteVirtualDirectory is the name of the website as defined in IIS.
v2/SPC is the service prefix
For example, to navigate from a component with ID “6G4N2DNA” to the related drawings, the following URL is used: /Components?$filter=Id eq '6G4N2DNA' &$expand=Document | https://docs.hexagonppm.com/reader/V3wWo1Ej9BTnrR7LejlguQ/Dyhy~47V0NKO9SKaES2J9w | 2020-10-20T00:16:37 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.hexagonppm.com |
8.5.2. Trajectory rotation —
MDAnalysis.transformations.rotate¶
Rotates the coordinates by a given angle arround an axis formed by a direction and a point
MDAnalysis.transformations.rotate.
rotateby(angle, direction, point=None, ag=None, weights=None, wrap=False)[source]¶
Rotates the trajectory by a given angle on a given axis. The axis is defined by the user, combining the direction vector and a point. This point can be the center of geometry or the center of mass of a user defined AtomGroup, or an array defining custom coordinates.. | https://docs.mdanalysis.org/stable/documentation_pages/transformations/rotate.html | 2020-10-20T01:06:02 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.mdanalysis.org |
The Fedora kernel offers
paravirt Section 9.3, “Kernel Flavors”.
Refer to for information on reporting bugs in the Linux kernel. You may also use for reporting bugs that are specific to Fedora. | http://docs.fedoraproject.org/release-notes/f9preview/en_US/sn-Kernel.html | 2008-05-16T17:53:00 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
Users of the
mod_dbd module should note that
the
apr-util DBD driver for PostgreSQL is now
distributed as a separate dynamically-loaded module. The driver
module is now included in the apr-util-pgsql
package. A MySQL driver is now also available, in the
apr-util-mysql package.
SQLAlchemy has been updated to 0.4.x. TurboGears Applications developed using SQLAlchemy for their database layer will need to update their startup scripts. Instead of:
import pkg_resources pkg_resources.require('TurboGears')
the start script needs to have:
__requires__ = 'TurboGears[future]' import pkg_resources
Drupal has been updated from the 5.x series to 6.1. For details, refer to:
Remember to log in to your site as the admin user, and disable any third-party modules before upgrading this package. After upgrading the package:
Copy
/etc/drupal/default/settings.php.rpmsave
to
/etc/drupal/default/settings.php, and
repeat for any additional sites' settings.php files.
Browse to to run the upgrade script. | http://docs.fedoraproject.org/release-notes/f9preview/zh_TW/sn-WebServers.html | 2008-05-16T18:21:38 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
You can find a tour filled with pictures and videos of this exciting new release at.
Denne utgivelsen inkluderer viktige nye versjoner av mange nøkkelkomponenter og -teknologier. Følgende seksjoner gir en kort oversikt over store endringer fra sist versjon av Fedora..
Bluetooth-enheter og -verktøy har nå bedre grafisk- og system-integrasjon.. | http://docs.fedoraproject.org/release-notes/f9preview/nb/sn-OverView.html | 2008-05-16T18:34:49 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
.
For a complete list of current spins available, and instructions for using them, refer to:.
You can add
liveinst or
textinst
as a boot loader option to perform a direct installation without
booting up the live CD/DVD.
Another way to use these Live images is to put them on a USB
stick. To do this, install the livecd-tools
package from the development.. | http://docs.fedoraproject.org/release-notes/f9preview/da/sn-Live.html | 2008-05-16T18:36:51 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
You can find a tour filled with pictures and videos of this exciting new release at.
This release includes significant new versions of many key components and technologies. The following sections provide a brief overview of major changes from the last release of Fedora.
For the first time,:
GNOME and KDE desktop environment based bootable Live images that can be installed to a hard disk. These spins are meant for desktop users who prefer a single disk installation and for sharing Fedora with friends, family, and event attendees.
A regular image for desktops, workstations, and server users. This spin provides a good upgrade path and similar environment for users of previous releases of Fedora.
A set of DVD images that includes all software available in the Fedora repository. This spin is intended for distribution to users who do not have broadband Internet access and prefer to have software available on disc..au driver, which is disabled by default in this release, aims to provide free and open source 3D drivers for nVidia cards. End users are asked to provide feedback on this feature to the project developers, to further the goal of having fully functional 3D drivers by default.
In this release, the performance of
yum,.
This release of Fedora includes Liberation fonts, which are metric equivalents for several well-known proprietary fonts found throughout the Internet. These fonts give users better results when viewing and printing shared or downloaded documents.
The proposed plans for the next release of Fedora are available at. | http://docs.fedoraproject.org/release-notes/f7/en_US/sn-OverView.html | 2008-05-16T18:36:32 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
You can find a tour filled with pictures and videos of this exciting new release at.
Η έκδοση αυτή περιλαμβάνει σημαντικές νέες εκδόσεις αρκετών βασικών προϊόντων και τεχνολογιών. Οι παρακάτω ενότητες παρέχουν μια σύντομη επισκόπηση των κυριότερων αλλαγών από την τελευταία κυκλοφορία του Fedora.
Το Fedora περιλαμβάνει πολλά διαφορετικά spins, τα οποία είναι μεταβολές του χτισίματος του Fedora από ένα καθορισμένο σύνολο πακέτων λογισμικού. Κάθε spin διαθέτει ένα συνδυασμό λογισμικού για να ταιριάζει στις απαιτήσεις ενός συγκεκριμένου είδους τελικού χρήστη. Μαζί με ένα πολύ μικρό αρχείο εικόνας
boot.iso για εγκατάσταση δικτύου, χρήστες διαθέτουν τις παρακάτω επιλογές spin:
Μια κανονική εικόνα για επιφάνειες εργασίες, σταθμούς εργασίας, και χρήστες εξυπηρετητών. Αυτή η εκδοχή παρέχει ένα καλό μονοπάτι αναβάθμισης και ένα περιβάλλον παρόμοιο με προηγούμενες κυκλοφορίες του.
Συσκευές bluetooth και εργαλεία τώρα διαθέτουν καλύτερη ολοκλήρωση γραφικών και συστήματος.. | http://docs.fedoraproject.org/release-notes/f9preview/el/sn-OverView.html | 2008-05-16T18:38:46 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
Difference between revisions of "User:Mccarrms"
From CSLabsWiki
Latest revision as of 11:48, 15 November 2011
Contents
Contact Info.
About Me
Real Name: Matt McCarrell
I'm an alumnus of Clarkson University. I was an active member of COSI and the ITL from Fall 2006 - Spring 2011. I'm a former director of the labs.
My contributions to this site can be found here and things I've done relating to the labs can be found on StatusNet.
MP* Pages
Fall 2010
Spring 2010
Fall 2009
Spring 2009
Fall 2008
Spring 2008
Fall 2007
Spring 2007
Fall 2006
Last updated by --Matt 11:48, 15 November 2011 (EST) | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=User:Mccarrms&diff=cur&oldid=3458 | 2021-02-24T21:13:48 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.cslabs.clarkson.edu |
TagResource
Add one or more tags (keys and values) to a specified more specific tag values. A tag value acts as a descriptor within a tag key.
Request Syntax
POST /v1/email/tags that you want to add one or more tags to.
Type: String
Required: Yes
A list of the tags that you want to add to the resource. A tag consists of a required tag key (
Key) and an associated tag value (
Value). The maximum length of a tag key is 128 characters. The maximum length of a tag value is 256 characters.
Type: Array of Tag objects: | https://docs.aws.amazon.com/pinpoint-email/latest/APIReference/API_TagResource.html | 2021-02-24T21:17:11 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.aws.amazon.com |
Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate in order to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application.
Distributed tracing is a technique that is used to tie the information about different units of work together—usually executed in different processes or hosts—in order to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable. Spans may be nested and ordered to model causal relationships.
Jaeger lets service owners instrument their services to get insights into what their architecture is doing. Jaeger is an open source distributed tracing platform that you can use for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. Jaeger is based on the vendor-neutral OpenTracing APIs and instrumentation.
Using Jaeger lets you perform the following functions:
Monitor distributed transactions
Optimize performance and latency
Perform root cause analysis
Jaeger is installed by default as part of Red Hat OpenShift Service Mesh..
Jaeger Console – Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.
Jaeger tracing is installed with Red Hat Service Mesh by default, and provides backwards compatibility with Zipkin by accepting spans in Zipkin formats (Thrift or JSON v1/v2) over HTTP. | https://docs.openshift.com/container-platform/4.2/service_mesh/service_mesh_arch/ossm-jaeger.html | 2021-02-24T21:17:41 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.openshift.com |
Push delivery platform
Starting
The push delivery platform provides the fastest services and SDKs to delivery notifications to any device. Our documentation provides you with all API/SDK references and guides for setup and integration of push delivery in your apps (iOS,Android) also in your CRM,CMS or data-driven services.
Questions
For any questions send us an email at [email protected]. | https://docs.push.delivery/docs/1.0/index.html | 2021-02-24T19:53:43 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.push.delivery |
# Assets metadata
All the data that will be ingested in the Data Lake must be extracted from the data source and stored in files in the platform. Those files are the "raw storage" of the information ingested in the Data Lake and must be in one of the supported file types, currently CSV and Parquet. The reason for using specific file types is to automate the ingestion process from the raw storage into the optimized storage of the Data Lake. That means that if the information is stored in any other type of file in the data source, such as XML or JSON, it must first be converted to one of the supported file types.
To transfer the data from the raw storage to the Data Lake, the platform uses Spark SQL queries. The asset metadata is used, among other purposes, to automatically generate those queries.
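The conversion to a supported file type can be done with any tool before the files are landed in the raw storage. As a minimal, hypothetical sketch (the paths and the PySpark approach are assumptions, not part of the platform), a small job could rewrite a JSON extract as Parquet:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Placeholder paths: replace with the real extraction output and raw storage location.
source_path = "/landing/sales/orders.json"
raw_storage_path = "/raw/sales/orders/"

# Read the original extract (JSON in this example) and persist it as Parquet,
# one of the file types supported by the raw storage.
df = spark.read.option("multiLine", True).json(source_path)
df.write.mode("overwrite").parquet(raw_storage_path)
```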
## Common columns in tables
There are some common columns that are present in several tables:
- `SecurityPath`: It is a path of identifiers separated by `/` used for authorization. The order of the identifiers in the path follows the Metadata model hierarchy: `{Data Storage Unit Id}/{Provider Id}/{Entity Id}`. For example, the SecurityPath `1/10/100` identifies an entity with Id `100` which belongs to a provider with Id `10` that is contained in a DSU with Id `1`.
- `ParentSecurityPath`: The `SecurityPath` of the parent element following the metadata model hierarchy. For example, the `ParentSecurityPath` of the entity of the previous example is `1/10`.
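Because these paths are hierarchical prefixes, granting access to a path implicitly grants access to everything underneath it. The following snippet is only an illustration of that prefix semantics, not the platform's actual authorization code:

```python
def is_authorized(granted_paths, security_path):
    """Return True if any granted path is a prefix (by segments) of security_path."""
    target = security_path.split("/")
    for granted in granted_paths:
        segments = granted.split("/")
        if target[: len(segments)] == segments:
            return True
    return False

# A user granted the provider path "1/10" can read entity "1/10/100"...
print(is_authorized(["1/10"], "1/10/100"))  # True
# ...but not an entity that belongs to another provider of the same DSU.
print(is_authorized(["1/10"], "1/20/200"))  # False
```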
## Provider table
Currently, any data from any source is extracted and stored in files. The data source could be a database, an API, a set of files stored in an SFTP server, and so on. Providers are a logical collection of entities, so they are used to organize them. Usually the entities from the same data source belong to the same provider.
## Tag table
The tag table allows assigning tags to entities and providers. Some tags are used by the platform, but custom tags can also be created for a specific Sidra installation.
## EntityTag table
The entity tag table implements the association between an entity and a tag.
ProviderTag table¶
The provider tag table implements the association between a provider and a tag.
TagType table¶
This table describes the supported types of tags.
The table contains the following static data:
Entity table¶
An entity defines the common properties of a group of assets in terms of structure of the content. The entity table contains data about the format of the entity generated, as well as information about how it should be treated by the system.
TableFormat table¶
This table describes the supported formats of the table created in Databricks.
The table contains the following static data:
EntityEntity table¶
The
EntityEntity table allows to represent many to many relationships between entities. There are several types of relations that can be established between entities, for example, when the data source is a SQL database and two entities represent two tables that are related between them in the data source, that semantic can be included in the Sidra platform using a relationship between those entities. In order to differentiate the entities involved in the relationship, one of them will be called the "Parent" entity and the other the "Child" entity.
These are the supported values for the
Kind column in the relationship between entities:
EntityDeltaLoad table¶
The
EntityDeltaLoad table is used for storing information about incremental load of data from the data source.
Attribute table¶
Attribute contains data about the schema of the files. This information is used to identify columns in a file, create Hive tables, and, when necessary, create SQL tables in which the data is going to be extracted from Data Lake (this is for client applications). Most of the metadata about the columns is used to validate fields, since Hive does not enforce any constraint over the data (like nullables, maximum length, and so on).
Now, attributes from an existing Entity can be encrypted. This means that, when loading data in the Databricks table, the transfer query will encrypt all attributes whose
isEncrypted column is set to true. When querying the data on that table, encrypted columns will show a random string:
In addition, assets from an Entity can be encrypted as well, so anyone who has access to them, will not able to read the contents unless they are previously decrypted. To enable this, it is necessary to update the
AdditionalProperties field from the Entity table, and adding to the JSON the following property:
Apart from that, it is necessary to insert, in the
EntityPipeline table, a relationship between the Entity whose assets are going to be encrypted and the pipeline
FileIngestionWithEncryption.
Lastly, to enable any kind of encryption is necessary to generate two parameters that must be present in the deployment library, which are:
- EncryptionKey: It is a 16, 24 or 32 bytes long string randomly generated.
- EncryptionInitializationVector: It is a 16 bytes long string randomly generated.
AttributeFormat table¶
Some values of the file can contain text that are expected to be interpreted as a particular type, but when handling this data with Hive, it could not be interpreted correctly. For example, take that a column of the file is expected to be a Boolean one. Hive expects for a Boolean value to be TRUE or FALSE. However, these values can be different depending of the system that generate them. AttributeFormat handles that.
Asset table¶
The
Asset table stores information about each of the data elements ingested in the platform.
AssetStatus table¶
This table describes the status of an asset in the system. This table is created in both Core and Client app databases, so the status set includes status for the ingestion flow into the Data Lake and for the extraction flow from the Data Lake. The last one is used in Client apps.
The table contains the following static data:
These are the flows for ingestion in Data Lake -on the left in the image- and for extraction from Data Lake into a client app -on the right in the image-.
AssetPart table¶
The
AssetPart table stores parts of an asset. It is used when a set of files are grouped to get one file. | https://docs.sidra.dev/Sidra-Data-Platform/Sidra-Core/Metadata/Data-Ingestion/Assets-metadata/ | 2021-02-24T20:09:39 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['../../../../attachments/databricks-attribute-encrypted.png',
'Sample of attribute encrypted in Databricks databricks-attribute-encrypted'],
dtype=object)
array(['../../../../attachments/file-status-flows.png',
'Ingestion and extraction flows file-status-flows'], dtype=object)] | docs.sidra.dev |
Test Center Browse Server
Abstract
Add your Signals to the Test Center tool and let's get busy testing these projects! Check out this article for more details!
The Browse Server dialog allows the user to browse the server for the desired signal and import it. The user has the possibility to use filtering for quicker signal selection.
Test center Browse Server dialog | https://docs.webfactory-i4.com/i4scada/en/test-center-browse-server.html | 2021-02-24T20:08:01 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['image/1602a290f79f74.jpg', 'Capture1331.jpg'], dtype=object)] | docs.webfactory-i4.com |
Kinematics is about computation of the tool-centre-point (TCP) out of joint angles and vice versa. First is simple, latter is more tricky, but lets see later on. But before starting any kinematics, it is necessary to define all coordinate systems.
The most important design decision is to let the three upper axis’ intersect in one point, the so-call wrist-center-point (WCP). This decision makes the computation of the inverse kinematic solvable without numeric approaches.
The picture shows the used coordinate systems in the default position of the bot, having all angles at 0°, starting from the base (angle0) and ending with the coordinate system of the hand (angle6). For convenience the forearm (angle1) adds +90° to the real angle in order to have the base position at 0° of the bot, although the illustrated actually is -90°. The coordinate systems have been are arranged according to the Denavit Hardenberg convention, which is:
The transformation from anglei to anglei+1 is given via
rotation around the x-axis by α
translation along the x-axis by α
translation along the z-axis by d, and
rotation around the z-axis by θ
So, the Denavit Hardenberg parameters are:
The general definition of a Denavit Hardenberg (DH) transformation is
which is a homogeneous matrix with two rotations (x,z) and two translations (x,z).
Combined with the DH parameters, the following DH matrixes define the transformation from one joint to its successor:
Forward Kinematics
With the DH transformation matrixes at hand, computation of the bot’s pose (i.e the position and orientation of the gripper) out of the joint angles is straight forward. The matrix representing the gripper’s pose
is
By multiplying the transformation matrix with the origin (as homogeneous vector), we get the absolute coordinates of the tool centre point in world coordinate system (i.e. relative to the bot’s base).
The orientation in terms of roll/nick/yaw of the tool centre point can be derived out of
by taking the part representing the rotation matrix (
). (Wikipedia Roll/Nick/Yaw )
For
we have a singularity (atan2 never becomes this), but wikipedia has a solution for that as well
if
we get
Note: Unfortunately, the gripper’s coordinate system is not appropriate for human interaction, since the default position as illustrated in the Coordinate Systems is not nick/roll/yaw=(0,0,0). So, in the Trajectory Visualizer it is handy to rotate the gripper matrix such that the default position becomes (0,0,0). The according rotation matrix represents a rotation of -90° along x,y, and z, done by the rotation matrix
In the following equations, this is not considered, since it is for convenience in the UI only, so you will find that additional rotation in the source code only.
Inverse Kinematics
Inverse kinematics denotes the computation of all joint angles out of the tool-centre-point’s position and orientation. In general this is hard, and giving a non iterative solution for a 6DOF robot is only feasable, when computation of the grippers position and the grippers orientation can be considered separately, i.e. the angles of the lower three actuators is not depending on the orientation of the gripper. Still, I do not like numerical solutions, even though with todays processors (or FPGAs) this is no more a question of computational power. I just think that a numerical solution is not a real solution but a surrender to complexity. That's why I let the upper three joint angles intersect in the WCP, which is a basic assumption of the following.
Input of inverse kinematics is the TCP’s position and orientation in terms of roll, nick, yaw, abbreviated by γ, β,and α.
First, we need to compute the wrist-centre-point out the tool-centre-point. This is possible by taking the TCP and moving it back along the TCP’s orientation by the hand length. For doing so, we need the transformation matrix from the base to the last joint
which we can derive out of the TCP’s position and orientation.
To build the transformation matrix
we need the rotation matrix defining the orientation of the TCP. This is given by multiplying the rotation matrixes for all axis (γ, β, α) which gives
(see also computation of rotation matrix out of Euler Angles).
Now we can denote the transformation matrix of the TCP by building a homogenous matrix out of TCPorientation and TCPposition:
From the TCP’s perspective, WCP is just translated by d5:
Furthermore,
, so we get the WCP by
in world coordinates.
Having a top view on the robot shows how to compute the first angle θ0:
Actually, this angle exists in two variants: if the bot looks backwards, we get the formula above. But another valid solution is looking backward when we get
Thanks to the design having a wrist-centre-point where the axes of the three upper actuators intersect, the next two angles can be computed by the triangle denoted in orange:
Again, there are two solutions (aka configurations), one configuration corresponds with a natural pose of the elbow, solution II is a rather unhealthy position:
a and b is given by the length of the actuators a1 und d3. So, with cosine law we get the angles α and γ.
Finally, we get
and the second solution
The upper angles θ4, θ5, θ5 can be obtained by considering the chain of transformation matrixes. With
we get
To ease the annoying multiplication
we only need to consider the rotation part of the homogenous matrixes, translation is no more relevant, since the orientation alone defines the three upper angles.
- and therefore the rotation part
- is already known resp. can be obtained out of the given angles θ0, θ1, θ2 by
By equalizing the previous two equations we get a bunch of equations defining the upper angles, still these equations are hard to solve, due to the complex combination of trignometric functions. But, there are some equations which are simpler and can be solved for an angle. First angle that seems to be easily computable is θ4:
gives two solutions
For θ3 there is no easy matrix element, but we can combine
to
which ends up in
again having two solutions depending on θ4. Same is done on θ5:
If θ4=0, we have an infinite number of solutions θ3 and θ5 (gimbal lock). In that case, we consider
:
.
Since we know the trigonometric addition theorem from school
we get
We are free to choose θ3 and arbitrarily select the bot’s current angle θ3, such that this one will not move in that specific case.
In the end, we get eight solutions by combining the possible pose configurations of θ0(forward/backward), θ1 and θ2(triangle flip), and θ4(hand orientation turn).
The correct solution is chosen by taking the one that differs the least from the current bot’s joint angles.
What is with all the other equations from
? We could use them to invalidate some of the four solutions, but still there will remain a list of solutions we need to choose from. So, it does not really make sense to take this route.
The selection algorithm is quite simple:
consider only mechanically valid solutions, i.e. omit those that violate mechanical boundaries.
Compute an angle-wise "distance" to the current position and take the solution with the least distance, i.e. take the solution with the least movement.
The latter has the consequence that the pose will try to remain in a configuration and no sudden movements like a turn of 180° happens.
All this is implemented in Kinematics.cpp. But - as usual - trying out before implementing this is a good idea, so I did that in this spreadsheet.
Speaking of configurations: if you want to change the configuration of the bot, e.g. from elbow down to elbow up which includes turning the base by 180° the approach of having a linear movement from one pose to the other and interplating in between does not work anymore, since the start and end poses are identical, but with a different configuration.
First possibility to change configuration is to have a non-linear movement that interpolates angle-wise resulting in a weired movement that would be really dangerous in a real world:
So, you rather introduce an intermediate pose where you can safely turn the critical actuator (mostly the shoulder). This is not less expansing, but a more controlled way:
Thing is, that one has to go via a singularity (the upright position) which is normally avoided like hell. One reason is the way how we compute θ4=0, and a numerical reason that singularities are kind of a black hole, the closer you get, the more you are sucked into imprecisions of floating numbers since you approach poles of the underlying functions. So it is defininitely best to simply avoid changing configurations or getting too close to singularities. | https://walter.readthedocs.io/en/latest/Kinematics/ | 2021-02-24T20:35:49 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['../images/image027.png', None], dtype=object)
array(['../images/image029.png', None], dtype=object)
array(['../images/image030.png', None], dtype=object)
array(['../images/image031.png', None], dtype=object)
array(['../images/image032.png', None], dtype=object)
array(['../images/image033.png', None], dtype=object)
array(['../images/image034.png', None], dtype=object)
array(['../images/image035.png', None], dtype=object)
array(['../images/image037.png', None], dtype=object)
array(['../images/image038.png', None], dtype=object)
array(['../images/image041.png', None], dtype=object)
array(['../images/image042.png', None], dtype=object)
array(['../images/image043.png', None], dtype=object)
array(['../images/image044.png', None], dtype=object)
array(['../images/image046.png', None], dtype=object)
array(['../images/image047.png', None], dtype=object)
array(['../images/image046.png', None], dtype=object)
array(['../images/image049.png', None], dtype=object)
array(['../images/image051.png', None], dtype=object)
array(['../images/image053.png', None], dtype=object)
array(['../images/image054.png', None], dtype=object)
array(['../images/image057.png', None], dtype=object)
array(['../images/image058.png', None], dtype=object)
array(['../images/image059.png', None], dtype=object)
array(['../images/image061.png', None], dtype=object)
array(['../images/image062.png', None], dtype=object)
array(['../images/image064.png', None], dtype=object)
array(['../images/image065.png', None], dtype=object)
array(['../images/image027.png', None], dtype=object)
array(['../images/image069.png', None], dtype=object)
array(['../images/image072.png', None], dtype=object)
array(['../images/image073.png', None], dtype=object)
array(['../images/image074.png', None], dtype=object)
array(['../images/image075.png', None], dtype=object)
array(['../images/image076.png', None], dtype=object)
array(['../images/image077.png', None], dtype=object)
array(['../images/image078.png', None], dtype=object)
array(['../images/image079.png', None], dtype=object)
array(['../images/image081.png', None], dtype=object)
array(['../images/image086.png', None], dtype=object)
array(['../images/image088.png', None], dtype=object)
array(['../images/image089.png', None], dtype=object)
array(['../images/image090.png', None], dtype=object)
array(['../images/image091.png', None], dtype=object)
array(['../images/image092.png', None], dtype=object)
array(['../images/image093.png', None], dtype=object)
array(['../images/image094.png', None], dtype=object)
array(['../images/image097.png', None], dtype=object)
array(['../images/image098.png', None], dtype=object)
array(['../images/image101.png', None], dtype=object)
array(['../images/different configurations.png', None], dtype=object)
array(['../videos/angle-wise configuration change.gif', None],
dtype=object)
array(['../videos/singularity configuration change.gif', None],
dtype=object) ] | walter.readthedocs.io |
Impersonation - findByAssociatedRole
Impersonation - findByAssociatedRole
Description :
This command returns a list of names of all automation principals that are associated with a given role. This command prints a formatted list with each automation principal name on its own line.
Return type : String
Command Input :
Example
The following example finds all automation principals that are associated with the BLAdmins role.
Script
Impersonation findByAssociatedRole BLAdmins
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/blcli/86/impersonation-findbyassociatedrole-481026447.html | 2021-02-24T21:11:09 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.bmc.com |
Form Load Time drastically reduced with the new form rendering engine.
With these changes you'll see drastic performance improvements to the form load time - and it only gets better the more fields your form have
See more here: Microsoft Dynamics CRM Online 2015 Update 1 - New Form Rendering Engine | https://docs.microsoft.com/en-us/archive/blogs/lystavlen/form-load-time-drastically-reduced-with-the-new-form-rendering-engine | 2021-02-24T21:06:59 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['https://msdnshared.blob.core.windows.net/media/TNBlogsFS/prod.evol.blogs.technet.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/75/32/0702_formrendering_3.png',
None], dtype=object) ] | docs.microsoft.com |
RHEL 5.6, 6.x Packaging for Open vSwitch¶
This document describes how to build and install Open vSwitch on a Red Hat Enterprise Linux (RHEL) host. If you want to install Open vSwitch on a generic Linux host, refer to OVN on Linux, FreeBSD and NetBSD instead.
We have tested these instructions with RHEL 5.6 and RHEL 6.0.
For RHEL 7.x (or derivatives, such as CentOS 7.x), you should follow the instructions in the Fedora, RHEL 7.x Packaging for OVN. The Fedora spec files are used for RHEL 7.x.
Prerequisites¶
You may build from an Open vSwitch distribution tarball or from an Open vSwitch Git tree.
The default RPM build directory,
_topdir, has five directories in the
top-level.
- BUILD/
- where the software is unpacked and built
- RPMS/
- where the newly created binary package files are written
- SOURCES/
- contains the original sources, patches, and icon files
- SPECS/
- contains the spec files for each package to be built
-.
Build Requirements¶
You will need to install all required packages to build the RPMs. The command below will install RPM tools and generic build dependencies:
$ yum install @'Development Tools' rpm-build yum-utils
Then it is necessary to install Open vSwitch specific build dependencies. The dependencies are listed in the SPEC file, but first it is necessary to replace the VERSION tag to be a valid SPEC.
The command below will create a temporary SPEC file:
$ sed -e 's/@VERSION@/0.0.1/' rhel/openvswitch.spec.in > /tmp/ovs.spec
And to install specific dependencies, use yum-builddep tool:
$ yum-builddep /tmp/ovs.spec
Once that is completed, remove the file
/tmp/ovs.spec.
If python-sphinx package is not available in your version of RHEL, you can install it via pip with ‘pip install sphinx’.
Open vSwitch requires python 2.7 or newer which is not available in older distributions. In the case of RHEL 6.x and its derivatives, one option is to install python34 from EPEL.
Bootstrapping and Configuring¶
If you are building from a distribution tarball, skip to Building.
If not, you must be building from an Open vSwitch Git tree. Determine what
version of Autoconf is installed (e.g. run
autoconf --version). If it is
not at least version 2.63, then you must upgrade or use another machine to
build the packages.
Assuming all requirements have been met, build the tarball by running:
$ ./boot.sh $ ./configure $ make dist
You must run this on a machine that has the tools listed in Build Requirements as prerequisites for building from a Git tree. Afterward, proceed with the rest of the instructions using the distribution tarball.
Now you have a distribution tarball, named something like
openvswitch-x.y.z.tar.gz. Copy this file into the RPM sources directory,
e.g.:
$ cp openvswitch-x.y.z.tar.gz $HOME/rpmbuild/SOURCES
Broken
build symlink¶.
Note.
Building¶
You should have a distribution tarball named something like openvswitch-x.y.z.tar.gz. Copy this file into the RPM sources directory:
$ cp openvswitch-x.y.z.tar.gz $HOME/rpmbuild/SOURCES
Make another copy of the distribution tarball in a temporary directory. Then
unpack the tarball and
cd into its root:
$ tar xzf openvswitch-x.y.z.tar.gz $ cd openvswitch-x.y.z
Userspace¶
Note
If the build fails with
configure: error: source dir
/lib/modules/2.6.32-279.el6.x86_64/build doesn't exist or similar, then
the kernel-devel package is missing or buggy.
Kernel Module¶
On RHEL 6, to build the Open vSwitch kernel module run:
$ rpmbuild -bb rhel/kmod-openvswitch-rhel6.spec
You might have to specify a kernel version and/or variants, e.g.:
- $ rpmbuild -bb
- -D “kversion 2.6.32-131.6.1.el6.x86_64” -D “kflavors default debug kdump” rhel/kmod-openvswitch-rhel6.spec
This produces an “kmod-openvswitch” RPM for each kernel variant, in this example: “kmod-openvswitch”, “kmod-openvswitch-debug”, and “kmod-openvswitch-kdump”.
Red Hat Network Scripts Integration¶ Refer to the “enable-protocol” command in the ovs-ctl(8) manpage for more information.
In addition, simple integration with Red Hat network scripts has been implemented. Refer to README.RHEL.rst in the source tree or /usr/share/doc/openvswitch/README.RHEL.rst in the installed openvswitch package for details.
Reporting Bugs¶
Report problems to [email protected]. | https://docs.ovn.org/en/latest/intro/install/rhel.html | 2021-02-24T20:25:38 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.ovn.org |
Data Binding with Automatic Series Mappings
One of the features of the RadChart is the automatic series mapping. With automatic series mapping, you can easily create a chart by simply setting the RadChart.ItemsSource to the data source you have. RadChart will create a chart series for every numeric field in the data source by mapping the numeric value to the DataPointMember.YValue field for each respective series. The type of the chart depends on the RadChart.DefaultSeriesDefinition property and by default it is set to BarSeriesDefinition.
Note that SeriesDefinition set through the RadChart.DefaultSeriesDefinition property does not support change notifications i.e. if you try to change a RadChart.DefaultSeriesDefinition property after the control is databound, it will not have any effect till the next rebind operation. The recommended approach in this scenario would be to use unique SeriesMapping.SeriesDefinition or alternatively you can access the generated DataSeries directly (i.e. RadChart.DefaultView.ChartArea.DataSeries[i]) and update its DataSeries.Definition properties.
The purpose of this tutorial is to show you how to use RadChart with Automatic Series Mappings. The following cases will be examined:
The automatic mapping mode will not work for chart series that require multiple data fields for its correct operation (e.g. the CandleStick type).
Binding to an Array of Integers
Take a look at this simple array declaration:
int[] dataArray = new int[] { 12, 56, 23, 89, 12, 56, 34, 78, 32, 56 };
Dim dataArray As Integer() = New Integer() {12, 56, 23, 89, 12, 56, 34, 78, 32, 56}
If you set it to the ItemsSource property of the RadChart control, you will have the following result:
radChart.ItemsSource = dataArray;
radChart.ItemsSource = dataArray
Binding to a List of Business Objects
If you have a list of business objects and you set it to the ItemsSource property of the RadChart control, the result will be one chart series per numeric property:
List<Manufacturer> data = new List)); this.telerkChart.ItemsSource = data;
Dim data As New List(Of)) Me.telerkChart.ItemsSource = data
Where the structure of the Manufacturer class is:
public class Manufacturer { public Manufacturer( string name, int sales, int turnover ) { this.Name = name; this.Sales = sales; this.Turnover = turnover; } public string Name { get; set; } public int Sales { get; set; } public int Turnover { get; set; } }
Public Class Manufacturer Public Sub New(ByVal name As String, ByVal sales As Integer, ByVal turnover As Integer) Me.Name = name Me.Sales = sales Me.Turnover = turnover End Sub Private _Name As String Public Property Name() As String Get Return _Name End Get Set(ByVal value As String) _Name = value End Set End Property Private _Sales As Integer Public Property Sales() As Integer Get Return _Sales End Get Set(ByVal value As Integer) _Sales = value End Set End Property Private _Turnover As Integer Public Property Turnover() As Integer Get Return _Turnover End Get Set(ByVal value As Integer) _Turnover = value End Set End Property End Class
The result is shown on the next figure.
As you can see, automatic series mapping can be useful for simple data. However, if you need more data binding options, take a look at the Data Binding with Manual Series Mapping topic and the Data Binding to Nested Collections topic. | https://docs.telerik.com/devtools/wpf/controls/radchart/populating-with-data/data-binding-with-automatic-series-binding | 2021-02-24T21:22:10 | CC-MAIN-2021-10 | 1614178347321.0 | [array(['images/RadChart_PopulatingWithData_AutomaticSeriesMapping_01.png',
None], dtype=object)
array(['images/RadChart_PopulatingWithData_AutomaticSeriesMapping_02.png',
None], dtype=object) ] | docs.telerik.com |
PingOne for Enterprise release notes Updated 18 11,567 people found this helpful Add to MyDocs | Hide Show Table of Contents Table of Contents Expand | Collapse PingOne Release Notes AD Connect Release Notes May, 2018 Enhancements Feature Description ServiceNow provisioner (Kingston, Jakarta, Istanbul) We've added new capabilities for the ServiceNow applications: Configuration options for the create/read/update/delete (CRUD) capabilities. Configuration options for provisioning disabled users. Support for Istanbul, Jakarta, and Kingston. See Known Issues and Limitations for important information. Note: This is a new ServiceNow provisioner. We've rebranded the existing provisioner from ServiceNow to "ServiceNow (Fuji)". Box provisioner We've added new capabilities for the Box applications: An option to create personal folders on user creates. An option to force delete users with managed content. See Known Issues and Limitations for important information. Note: If you have an existing Box application, to take advantage of the new features you'll need to click through to the last page and save the application. Resolved issues Ticket ID Issue SSD-7486 Fixed an issue when adding a new SAML application where changes to the signing algorithm were not being retained after saving the changes. Deprecated features Feature Description Basic SSO and the browser extension Basic SSO and the PingOne browser extension are no longer offered for new PingOne accounts. Accounts that are currently utilizing Basic SSO or the browser extension can continue using these facilities without interruption. For accounts not currently using Basic SSO or the browser extension, availability of these facilities is no longer displayed. Known issues and limitations Subject Issue/Limitation ServiceNow provisioner (Kingston, Jakarta, Istanbul) The following limitations apply: Outbound Group Provisioning and Memberships are not supported. User attributes cannot be cleared once set. They can only be updated. When provisioning to ServiceNow, all user accounts in ServiceNow must have an assigned username (User ID) value. This is not a required field in ServiceNow. However, because the provisioner must use this field to sync with pre-existing users in ServiceNow, it's required for provisioning to function. If a user in ServiceNow resolves to sAMAccountName (the "standard" mapping in the provisioning channel), the accounts will be linked. Currently, if users exist in ServiceNow without an assigned UserName value, this will cause errors in provisioning. In this case, you can resolve the issue by ensuring every user has an assigned UserName, even if they are not intended to be managed by the provisioner. When provisioning users, the username attribute must contain only URL-safe characters. When synchronizing roles with users, the role attribute must contain only URL-safe characters. If a new user is created with the same username as an existing user, a duplicate user will not be created. Instead, the existing user will be updated with any information assigned. Due to limitations with the ServiceNow API, a role can be added to a user but not removed, which may cause a user’s role in the source data store to become out of sync with the user’s role in ServiceNow. For more information, see Enable User Role Removal. When mapping the roles attribute, multiple calls to ServiceNow must be made to sync the user role information. This may impact provisioning performance. 
Box provisioner The following limitations apply: Clearing fields on updates is not supported. The login attribute can't be updated through provisioning. The Inactive Status Default user attribute has no effect if the Box connector is configured to delete (hard-delete) users instead of disable (soft-delete) users when de-provisioning. Additionally, deleting a user in an LDAP repository will always set the status for the user as "inactive" in the Box application. Outbound Group Provisioning and Memberships are not supported. A Box API limitation prevents login credentials from being updated by the provisioner when the character case differs. For example, "[email protected]", cannot be updated to "[email protected]". When the case differs, the Box API omits the login from the API operation. So, in an update operation, when the case differs, the login is omitted, but any other attributes that may have changed are provisioned and updated. Due to Box API requirements, only primary, validated email addresses can be used to sync users. Enabling Personal Folder functionality will diminish initial synchronization provisioning performance. Tags Capability > Single Sign On; Product > PingOne | https://docs.pingidentity.com/bundle/p1_enterpriseRelNotes_cas/page/releaseNotes.html | 2018-05-20T14:08:13 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.pingidentity.com |
Deploy SharePoint Workspace 2010
Applies to: Groove Server 2010, SharePoint Workspace 2010
Topic Last Modified: 2016-10-03
This article describes how to deploy Microsoft SharePoint Workspace 2010 in a managed environment that uses Microsoft Groove Server 2010. SharePoint Workspace is a rich client to Microsoft SharePoint Server 2010 and Microsoft SharePoint Foundation 2010.As such, it enables information workers to synchronize online and offline content with document libraries and lists at a designated SharePoint site, and provides options for creating Groove peer workspaces and shared folder workspaces.
SharePoint Workspace 2010 is included with Microsoft Office 2010 ProPlus installation.
Customizing the installation enables you to decide how SharePoint Workspace 2010 will be deployed and used. For example, you can require intended SharePoint Workspace users to enter a managed account configuration code. This prevents them from creating unmanaged accounts. This article describes how to customize SharePoint Workspace 2010 installation for deployment in your organization.
In this article:
Before you begin
Customize SharePoint Workspace deployment
Configure SharePoint Workspace user accounts
Before you begin
Before you start these procedures, address the following prerequisites:
Install Groove Server 2010 Manager as described in Install and configure Groove Server 2010 Manager.
Install Groove Server 2010 Relay as described in Install and configure Groove Server 2010 Relay.
Assemble the Microsoft Office Professional 2010 or SharePoint Workspace 2010 deployment kit according to your organization’s software deployment strategy.
Consider automating SharePoint Workspace account configuration as described in Automate SharePoint Workspace account configuration/restoration.
Customize SharePoint Workspace deployment via Office Customization Tool settings
The Office Customization Tool (OCT) for Office 2010 enables you to customize the deployment of SharePoint Workspace 2010 by applying various install options. This is especially helpful if you are not running Active Directory and cannot access Group Policy objects.
This section explains how the OCT can help you customize SharePoint Workspace deployment. You can access the OCT by downloading the installation directory from the Microsoft Office Professional 2010 media, and then typing setup /admin from a Windows command line.
As an alternative, if you are using Active Directory, you can customize SharePoint Workspace 2010 deployment via Group Policy. For information about how to use SharePoint Workspace Group Policy objects to customize deployment, see Configure and customize SharePoint Workspace 2010.
For more information about how to use the OCT, see Office Customization Tool in the Office system (\&clcid=0x40).
The following procedure describes Office Customization Tool (OCT) options that you can use to optimize SharePoint Workspace deployment.
To customize SharePoint Workspace deployment via Office Customization Tool settings
Specify OCT feature options as follows:
Under Feature Options, click Modify User Settings.
Select Microsoft SharePoint Workspace 2010 from the list of programs.
To require an administrator-supplied SharePoint Workspace account configuration code, double-click SharePoint Workspace Account Configuration Code Required to open its Properties dialog box, click Enabled, and then click OK. For information about how to automate SharePoint Workspace account configuration, see Automate SharePoint Workspace account configuration/restoration.
To specify the name of the Groove Server Manager to be used for management, double-click Name to open its Properties dialog box, type the name for server name. Then they click OK.
To enable a requirement for a secured communications link between SharePoint Workspace-and Groove Server Manager, double-click Valid Link Security, Enabled. Then they click OK. When this requirement is enabled, the presented Groove Server Manager Secure Socket Layer (SSL) certificate must be valid to enable SharePoint Workspace-to-Manager communication.
To prohibit use of Groove workspaces and Shared folder workspaces, double-click Prohibit Use of Groove workspaces, click Enabled. Then they click OK.
To enable Secure Socket Layer (SSL) encryption for connections to SharePoint sites, double-click SharePoint Workspace Valid Link Security, click Enabled. Then they click OK.
To specify IPv6 installation, if supported on client systems, double-click IPv6, click Enabled. Then they click OK.
To specify IPv4 installation, if supported on client systems, double-click IPv4, click Enabled. Then they click OK
To set the limit for the number of failed proxy connection attempts to Groove Server Relay by the SharePoint Workspace client, click Maximum Number of Proxy Connection Failures to Groove Relay Server. When the limit is reached, additional proxy connection attempts to Groove Server Relay are abandoned. The default value is 1. When you enable this policy setting, the SharePoint Workspace client will be limited to the set number of proxy-to-Relay connection failures. When you disable or do not configure this policy setting, repeated proxy connection tries can disrupt server operation.
To prevent the SharePoint Workspace client from initiating communications to listed Groove Relay servers that are known to be permanently decommissioned, click DeCommissionedRelayList. The format is a comma separated list of fully qualified domain names of Groove Relay servers. Wildcards in the names are supported. The ‘?’ is used for single character substitution and ‘*’ is used for domain part substitution (Examples: relay1.contoso.com,*.contoso.com, relay?.contoso.com). When you enable this policy setting, the SharePoint Workspace client will not initiate communication with the Groove Server Relay DeCommissionedRelayList.
Specify Additional Content as follows:
Use the Add Files option to automatically add files such as SharePoint Workspace templates to user installations.
Use the Remove Files option to automatically remove specific files during installation. Files to be removed may include workspace templates from previous installations.
Use the Remove Files setting if you have special requirements that can only be enabled or disabled via the Windows registry. For example, you can use this setting to remove legacy SharePoint Workspace or Office Groove 2007 device management registry settings or you can set the registry value that disables running the New User video upon configuration of a new account.
Configure SharePoint Workspace user accounts
Activating SharePoint Workspace user accounts involves applying an account configuration code to each SharePoint Workspace client installation. The recommended approach manual procedure, described in Manually delivering SharePoint Workspace account configuration codes.
See Also
Concepts
Prepare Active Directory for Groove Server Manager
Create a SharePoint Workspace user directory for Groove Server Manager
Automate SharePoint Workspace account configuration/restoration
Manually delivering SharePoint Workspace account configuration codes
Other Resources
Configure and customize SharePoint Workspace 2010
Office Customization Tool settings for SharePoint Workspace 2010 | https://docs.microsoft.com/en-us/previous-versions/office/groove-server-2010/ee681753(v=office.14) | 2018-05-20T15:08:05 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
NTLM (NT LAN Manager) authentication is used to make the communication between App Volumes Manager and agent more secure.
About this task
When an App Volumes agent make an HTTP request to the App Volumes Manager, NTLM is used to authenticate the user and user account with the entry in the Active Directory.
You can disable NTLM by defining a system environment variable on the machine where App Volumes Manager is installed.
See to understand the implications of disabling NTLM.
Procedure
- Log in as administrator to the machine where App Volumes Manager is installed.
- Open Control Panel and click .
The New System Variable window appears.
- In the Variable name text box, enter AVM_NTLM_DISABLED.
- In the Variable value text box, enter 1.
- Restart the computer.
The App Volumes Manager service also restarts. | https://docs.vmware.com/en/VMware-App-Volumes/2.12.1/com.vmware.appvolumes.user.doc/GUID-5C8B88C4-7B87-4D7D-9F51-99DC5C078070.html | 2018-05-20T14:01:03 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.vmware.com |
Departments that you do not need at the moment can be moved to a different location in the weekly schedule or they can be completely hidden.
In the category "Schedules" you have the option to change these settings. In the list view as well as the calendar view you have a list of all departments on the left hand side.
Move departments:
While moving the mouse cursor over the departments in the list a small hand symbol appears. Here you can drag&drop a department to a different location in the list. The order in the list view will change immediately.
Hide departments:
Simply click on the department in the list that you want to hide in the weekly schedule. | http://docs.staffomatic.com/staffomatic-help-center/departments/can-i-hide-departments-in-the-weekly-schedule-that-i-do-not-need-at-the-moment | 2018-05-20T14:15:16 | CC-MAIN-2018-22 | 1526794863570.21 | [array(['https://downloads.intercomcdn.com/i/o/31963542/8ad0a09984ff73f36818c560/Bildschirmfoto+2017-08-24+um+10.34.05.png',
None], dtype=object) ] | docs.staffomatic.com |
Deploy custom configurations of the 2007 Office system (step-by-step)
Updated: October 22, 2012
Applies To: Office Resource Kit
This Office product will reach end of support on October 10, 2017. To stay supported, you will need to upgrade. For more information, see , Resources to help you upgrade your Office 2007 servers and clients.
Topic Last Modified: 2016-11-14
This article describes how to deploy an initial customized installation of the 2007 Microsoft Office system to users in your organization. It also includes an example of a Config.xml file.
The following table highlights the process for deploying a custom configuration.
Config.xml example2007" Template="Microsoft Office 2007 Professional Plus Setup(*).txt" />
<INSTALLLOCATION Value="%programfiles%\Microsoft Office" />
<LIS SOURCELIST
<Setting Id="SETUP_REBOOT" Value="NEVER" />
<OptionState Id="ACCESSFiles" State="absent" Children="force" />
</Configuration>
See Also
Concepts
Setup | https://docs.microsoft.com/en-us/previous-versions/office/office-2007-resource-kit/cc178960(v=office.12) | 2018-05-20T14:52:29 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
DNS Server Root Hints Configuration
Applies To: Windows Server 2008 R2
Root hints are the names and addresses of servers that are authoritative for the root zone of the Domain Name System (DNS) namespace. Root hints can be used for resolving external names, such as the names of Internet host computers. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd349628(v=ws.10) | 2018-05-20T14:17:47 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
Release notes for Gluster 4.0.2
This is a bugfix release. The release notes for 4.0.0, and 4.0.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 4.0
No Major issues
Bugs addressed
Bugs addressed since release-4.0.1 are listed below.
- #1558959: [brick-mux] incorrect event-thread scaling in server_reconfigure()
- #1559079: test ./tests/bugs/ec/bug-1236065.t is generating crash on build
- #1559244: enable ownthread feature for glusterfs4_0_fop_prog
- #1561721: Rebalance failures on a dispersed volume with lookup-optimize enabled
- #1562728: SHD is not healing entries in halo replication
- #1564461: gfapi: fix a couple of minor issues
- #1565654: /var/log/glusterfs/bricks/export_vdb.log flooded with this error message "Not able to add to index [Too many links]"
- #1566822: [Remove-brick] Many files were not migrated from the decommissioned bricks; commit results in data loss
- #1569403: EIO errors on some operations when volume has mixed brick versions on a disperse volume
- #1570432: CVE-2018-1088 glusterfs: Privilege escalation via gluster_shared_storage when snapshot scheduling is enabled [fedora-all] | http://gluster.readthedocs.io/en/latest/release-notes/4.0.2/ | 2018-05-20T13:58:28 | CC-MAIN-2018-22 | 1526794863570.21 | [] | gluster.readthedocs.io |
Best Practices for Using Native XML Web Services
This for use in your business solutions. These recommendations are intended to help you in the following ways:
Help secure your installation of SQL Server when you use Native XML Web Services.
Help improve the performance of your installation of SQL Server by offering usage guidelines. These guidelines can help you decide on whether your application is effectively served by using Native XML Web Services.
Security Best Practices
Consider the following security best practice recommendations when you deploy Native XML Web Services:.
Limit Endpoint Connect Permissions to Specific Users or Groups.
Important
The public role is a special database role to which every SQL Server user belongs. This role contains default access permissions for any user that can access the database. Because this database role is a built-in default role of SQL Server and serves as a way to grant access to all users (similar to Everyone or Authenticated Users in Windows permissions), it should be used with caution when you configure permissions on SQL Server.
For more information, see GRANT Endpoint Permissions (Transact-SQL).
Use Secure Sockets Layer to Exchange Sensitive Data.
Use SQL Server Behind a Firewall.
Verify the Windows Guest Account Is Disabled on the Server.
Control and Update Endpoint State As Needed:
Note
After an endpoint is disabled, it cannot be restarted until the SQL Server service (MSSQLServer) is restarted..
Performance Best Practices
Consider the following performance best practice recommendations when you deploy Native XML Web Services:
Deploy in appropriate scenarios.
Factor in additional server resources when planning SOAP-based solutions.
Configure the appropriate WSDL option for your requirements.
Deploy Appropriate Scenarios
Native XML Web Services or later might exceed your requirements.
Similarly, in scenarios with the following requirements, we do not recommend using Native XML Web Services:
Your application is used to insert or retrieve binary large object (BLOB) data, such as large binaryimage, or text values.
Your application requires real-time transaction processing and mission-critical response times.
You are using SQL Server in combination with other processing-intensive applications such as TPC Benchmark C (TPC-C) applications.
Factor in Additional Server Resources When Planning SOAP-based Solutions.
Configure the Appropriate WSDL Option for Your Requirements. | https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190399(v=sql.105) | 2018-05-20T14:39:37 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
Visual Basic: MSChart Control
AllowDithering Property
See Also Example Applies To
Returns or sets a value that determines whether to disable color dithering for charts on 8-bit color monitors in order to enable use of MSChart control's own color palette and enhance the chart display.
Syntax
object.AllowDithering [ =boolean]
The AllowDithering property syntax has these parts:
Settings
The settings for boolean are: | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-basic-6/aa240546(v=vs.60) | 2018-05-20T14:16:51 | CC-MAIN-2018-22 | 1526794863570.21 | [] | docs.microsoft.com |
1. Installation¶
Contents:
1.1. Dependencies¶
Python 2.7 and Python 3.x are both supported.
requests- Kenneth Reitz’s indispensable python-requests library handles the HTTP business. Usually, the latest version available at time of release is the minimum version required; at this writing, that version is 1.2.0, but any version >= 1.0.0 should work.
requests-oauthlib- Used to implement OAuth. The latest version as of this writing is 0.3.3.
requests-kerberos- Used to implement Kerberos.
ipython- The IPython enhanced Python interpreter provides the fancy chrome used by Issues.
filemagic-parameter on methods that take an image object, such as project and user avater creation.
pycrypto- This is required for the RSA-SHA1 used by OAuth. Please note that it’s not installed automatically, since it’s a fairly cumbersome process in Windows. On Linux and OS X, a
pip install pycryptoshould do it.
Installing through
pip takes care of these dependencies for you. | https://jira.readthedocs.io/en/latest/installation.html | 2018-05-20T13:29:41 | CC-MAIN-2018-22 | 1526794863570.21 | [] | jira.readthedocs.io |
How to install Django¶
This document will get you up and running with Django.
Install Python¶
Being a Python Web framework, Django requires Python.
It works with any Python version from 2.5 to 2.
Python on Windows
On Windows, you might need to adjust your PATH environment variable to include paths to Python executable and additional scripts. For example, if your Python is installed in C:\Python27\, the following paths need to be added to PATH:
C:\Python27\;C:\Python27\Scripts;. Another is FastCGI, perfect for using Django with servers other than Apache. Additionally, Django follows the WSGI spec (PEP 3333), which allows it to run on a variety of server platforms. See the server-arrangements wiki page for specific installation instructions for each platform..
In addition to a database backend, you’ll need to make sure your Python database bindings are installed.
If you’re using PostgreSQL, you’ll need the postgresql_psycopg2 package. You might want to refer to our PostgreSQL notes for further technical details specific to this database.
If you’re on Windows, check out the unofficial compiled Windows version.
If you’re using MySQL, you’ll need MySQLdb, version 1.2.1p2 or higher. You will also want to read the database-specific notes for the MySQL backend. with pip¶
This is the recommended way to install Django.
Install pip. The easiest is to use the standalone pip installer. If your distribution already has pip installed, at the shell prompt. If you’re using Windows, start a command shell with administrator privileges and run the command pip install Django. This will install Django in your Python installation’s site-packages directory.
If you’re using a virtualenv, you don’t need sudo or administrator privileges, and this will install Django in the virtualenv’s site-packages directory.
Installing an official release manually¶
- Download the latest release from our download page.
- Untar the downloaded file (e.g. tar xzvf Django-X.Y.tar.gz, where X.Y at the shell prompt. If you’re using Windows, start a command shell with administrator privileges and run the command python setup.py install. This, Git, or Mercurial installed, and that you can run its commands from a shell. (Enter svn help, git help, or hg help at a shell prompt to test this.) Note that the Subversion repository is the canonical source for the official Git and Mercurial repositories and as such will always be the most up-to-date.
# Subversion svn co django-trunk
Mirrors of the Subversion repository can be obtained like so:
# Git (requires version 1.6.6 or later) git clone # or (works with all versions) git clone git://github.com/django/django.git # Mercurial hg clone
Warning
These mirrors should be updated every 5 minutes but aren’t guaranteed to be up-to-date since they are hosted on external services.
Next, make sure that the Python interpreter can load Django’s code. The most convenient way to do this is to modify Python’s search path. Add a .pth file containing the full path to the django-trunk directory to your system’s site-packages directory. For example, on a Unix-like system:
echo WORKING-DIR/django-trunk > SITE-PACKAGES-DIR/django.pth
(In the above line, change SITE-PACKAGES-DIR to match the location of your system’s site-packages directory, as explained in the Where are my site-packages stored? section above. Change WORKING-DIR/django-trunk to match the full path to your new django-trunk directory.).
Warning
Don’t run sudo python setup.py install, because you’ve already carried out the equivalent actions in steps 3 and 4. Furthermore, this is known to cause problems when updating to a more recent version of Django.
When you want to update your copy of the Django source code, just run the command svn update from within the django-trunk directory. When you do this, Subversion will automatically download any changes. The equivalent command for Git is git pull, and for Mercurial hg pull --update. | https://docs.djangoproject.com/en/1.4/topics/install/ | 2015-10-04T09:12:21 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.djangoproject.com |
Difference between revisions of "Beginners"
From Joomla! Documentation
Revision as of 21:08, 2
Absolute Beginners Guide to Joomla!
Welcome to Joomla!, a leading open-source Content Management System (or "CMS") platform. You have made a great choice to use Joomla! for your website. Joomla! is a well-tested, extensible and effective tool supported by a very active and friendly community of developers and users.! . | https://docs.joomla.org/index.php?title=Portal:Beginners&diff=prev&oldid=2502 | 2015-10-04T09:33:33 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.joomla.org |
Welcome to the Bug Squad
From Joomla! Documentation
Revision as of 23:24, 22 September 2008 by Dextercow.
This article is for new Joomla Bug Squad (JBS) members and also for people just fix bugs in the current Joomla release. Tracker. | https://docs.joomla.org/index.php?title=Welcome_to_the_Bug_Squad&oldid=10826 | 2015-10-04T10:37:50 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.joomla.org |
Changes related to "J1.5:Developing a MVC Component/Using the Database"
← J1.5:Developing a MVC Component/Using the Database
This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold.
No changes during the given period matching these criteria. | https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&from=20131111062800&target=J1.5%3ADeveloping_a_MVC_Component%2FUsing_the_Database | 2015-10-04T10:01:42 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.joomla.org |
User Guide
Local Navigation
Reorder layers
Layers are displayed in the Layers panel as they are ordered on the canvas; the first layer in the Layers list is the top-most layer on the canvas, the second layer listed is next top-most, and so on. You can reorder the layers on the canvas by changing their order in the Layers list.
Previous topic: Rename a layer
Was this information helpful? Send us your comments. | http://docs.blackberry.com/ko-kr/developers/deliverables/21108/Reorder_layers_630203_11.jsp | 2015-10-04T09:33:52 | CC-MAIN-2015-40 | 1443736673081.9 | [] | docs.blackberry.com |
Working with Machine Learning Transforms on the AWS Glue Console
You can use AWS Glue to create custom machine learning transforms that can be used to cleanse your data. You can use these transforms when you create a job on the AWS Glue console.
For information about how to create a machine learning transform, see Matching Records with AWS Lake Formation FindMatches.
Transform Properties
To view an existing machine learning transform, sign in to the AWS Management Console, and open the AWS Glue console at https://console.aws.amazon.com/glue/. Then choose the Machine Learning Transforms tab.
The Machine Learning Transforms list displays the following properties for each transform:
- Transform name
The unique name you gave the transform when you created it.
- Transform ID
A unique identifier of the transform.
- Type
The type of machine learning transform; for example, Find matching records.
- Glue version
This value determines which version of AWS Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see AWS Glue Versions.
- Status
Indicates whether the transform is Ready or Needs teaching. To run a machine learning transform successfully in a job, it must be Ready.
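The properties in this list are also available programmatically. The following sketch assumes the AWS SDK for Python (Boto3) with credentials and region configured in your environment; it calls the Glue get_ml_transforms operation and prints the same fields that the console list shows. Verify the exact response shape against the current Boto3 reference.

```python
import boto3

glue = boto3.client("glue")  # credentials and region come from your environment

# get_ml_transforms returns the fields that the console list displays.
response = glue.get_ml_transforms()
for transform in response.get("Transforms", []):
    print(
        transform.get("Name"),
        transform.get("TransformId"),
        transform.get("Status"),
        transform.get("GlueVersion"),
    )
```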
When you create a FindMatches transform, you specify the following configuration information:
- Primary key
The name of a column that uniquely identifies rows in the source table.
- Type
The type of machine learning transform; for example, Find matches.
- Merge matching records
Indicates whether the transform is to remove duplicates in the target. The record with the lowest primary key value is written to the output of the transform.
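If you prefer to script this configuration instead of using the console wizard, the following sketch creates a FindMatches transform with Boto3. The role ARN, database name, table name, and primary key column are placeholder values for illustration only; check the create_ml_transform parameters in the current Boto3 reference before using this in your account.

```python
import boto3

glue = boto3.client("glue")

# All names below (role, database, table, primary key column) are placeholders.
response = glue.create_ml_transform(
    Name="demo-find-matches",
    Role="arn:aws:iam::123456789012:role/MyGlueServiceRole",
    GlueVersion="1.0",  # Glue 1.0 is recommended for most customers
    InputRecordTables=[
        {"DatabaseName": "demo_db", "TableName": "demo_source_table"}
    ],
    Parameters={
        "TransformType": "FIND_MATCHES",
        "FindMatchesParameters": {
            # Column that uniquely identifies rows in the source table.
            "PrimaryKeyColumnName": "record_id",
            "EnforceProvidedLabels": False,
        },
    },
)
print("TransformId:", response["TransformId"])
```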
Adding and Editing Machine Learning Transforms
You can view, delete, set up and teach, or tune a transform on the AWS Glue console. Select the check box next to the transform in the list, choose Action, and then choose the action that you want to take.
To add a new machine learning transform, choose the Jobs tab, and then choose Add job. Follow the instructions in the Add job wizard to add a job with a machine learning transform such as FindMatches. For more information, see Matching Records with AWS Lake Formation FindMatches.
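For reference, a minimal AWS Glue ETL script that applies an existing FindMatches transform might look like the following sketch. The catalog names, S3 path, and transform ID are hypothetical, and the import path can vary by Glue version; the job script that the console generates for your transform is the authoritative version.

```python
# Import path per the AWS Glue ETL library; confirm it against your
# console-generated job script.
from awsglueml.transforms import FindMatches
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Placeholder catalog names; the transform ID is shown on the transform's Details tab.
source = glue_context.create_dynamic_frame.from_catalog(
    database="demo_db", table_name="demo_source_table"
)

# Apply the FindMatches transform that was created and taught in the console.
matched = FindMatches.apply(
    frame=source,
    transformId="tfm-0123456789abcdef",  # hypothetical ID for illustration
    transformation_ctx="find_matches",
)

glue_context.write_dynamic_frame.from_options(
    frame=matched,
    connection_type="s3",
    connection_options={"path": "s3://demo-bucket/matched-output/"},
    format="parquet",
)
```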
Viewing Transform Details
Transform details include the information that you defined when you created the transform. To view the details of a transform, select the transform in the Machine learning transforms list, and review the information on the following tabs:
History
Details
Estimate quality
History
The History tab shows your transform task run history. Several types of tasks are run to teach a transform. For each task, the run metrics include the following:
Run ID is an identifier created by AWS Glue for each run of this task.
Task type shows the type of task run.
Status shows the success of each task listed with the most recent run at the top.
Error shows the details of an error message if the run was not successful.
Start time shows the date and time (local time) that the task started.
Execution time shows the length of time during which the job run consumed resources. The amount is calculated from when the job run starts consuming resources until it finishes.
Last modified shows the date and time (local time) that the task was last modified.
Logs links to the logs written to
stdoutfor this job run.
The Logs link takes you to Amazon CloudWatch Logs. There you can view the details about the tables that were created in the AWS Glue Data Catalog and task.
Download label file shows a link to Amazon S3 for a generated labeling file.
Details
The Details tab includes attributes of your transform. It shows you the details about the transform definition, including the following:
Transform name shows the name of the transform.
Type lists the type of the transform.
Status displays whether the transform is ready to be used in a script or job.
Force output to match labels displays whether the transform forces the output to match the labels provided by the user.
Estimate quality
The Estimate quality tab shows the metrics that you use to measure the quality of the transform. Estimates are calculated by comparing the transform match predictions using a subset of your labeled data against the labels you have provided. These estimates are approximate.You can invoke an Estimate quality task run from this tab.
The Estimate quality tab shows the metrics from the last Estimate quality run including the following properties:
Area under the Precision-Recall curve is a single number estimating the upper bound of the overall quality of the transform. It is independent of the choice made for the precision-recall parameter. Higher values indicate that you have a more attractive precision-recall tradeoff.
Precision estimates how often the transform is correct when it predicts a match.
Recall upper limit estimates that for an actual match, how often the transform predicts the match.
Max F1 estimates the transform's accuracy between 0 and 1, where 1 is the best accuracy. For more information, see F1 score
in Wikipedia.
For information about understanding quality estimates versus true quality, see Quality Estimates Versus End-to-End (True) Quality.
For more information about tuning your transform, see Tuning Machine Learning Transforms in AWS Glue.
Quality Estimates Versus End-to-End (True) Quality
In the
FindMatches machine learning transform, AWS Glue estimates the quality of
your transform by presenting the internal machine-learned model with a number of
pairs of records that you provided matching labels for but that the model has not
seen before. These quality estimates are a function of the quality of the
machine-learned model (which is influenced by the number of records that you label
to “teach” the transform). The end-to-end, or true recall (which is not automatically calculated by the
FindMatches transform) is also influenced by the
FindMatches filtering mechanism that proposes a
wide variety of possible matches to the machine-learned model.
You can tune this filtering method primarily by using the Lower
Cost-Accuracy slider. As you move this slider closer to the
Accuracy end, the system does a more thorough and expensive
search for pairs of records that might be matches. More pairs of records are fed to
your machine-learned model, and your
FindMatches transform's
end-to-end or true recall gets closer to the estimated recall metric. As a result,
changes in the end-to-end quality of your matches as a result of changes in your matches's
cost/accuracy tradeoff will typically not be reflected in the quality estimate. | https://docs.aws.amazon.com/glue/latest/dg/console-machine-learning-transforms.html | 2020-07-02T17:19:33 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.aws.amazon.com |
General Information
Quality Assurance and Productivity
Desktop
Frameworks and Libraries
Web
Controls and Extensions
Maintenance Mode
Enterprise and Analytic Tools
End-User Documentation
ColumnView.ClearColumnErrors() Method
Removes error descriptions for the focused row.
Namespace: DevExpress.XtraGrid.Views.Base
Assembly: DevExpress.XtraGrid.v20.1.dll
Declaration
Remarks
Use the ClearColumnErrors method to clear the errors set using the ColumnView.SetColumnError method. For instance, if the end-user entered invalid data, then you could set column errors via ColumnView.SetColumnError and indicate cells with invalid values. After the errors are corrected, you can call the ClearColumnErrors method to remove error icons.
As a rule you need to handle the BaseView.ValidatingEditor or the ColumnView.ValidateRow event in order to check cells contents validity.
Note: the ClearColumnErrors method does not clear errors notified by the data source. To clear these errors, you need to use members provided by your data source.
NOTE
Detail pattern Views do not contain data and they are never displayed within XtraGrid. So, the ClearColumnErrors member must not be invoked for these Views. The ClearColumnErrors. | https://docs.devexpress.com/WindowsForms/DevExpress.XtraGrid.Views.Base.ColumnView.ClearColumnErrors | 2020-07-02T16:24:13 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.devexpress.com |
RSS Feeds For Useful* Things
Until the website folk get their collective minds into gear and produce RSS feeds, I've taken to scripting my own.
All care taken, no responsibility accepted.
The Idea:
- The site changed and broke the feed
- The site changed and didn't break the feed:
Feed Me
Because I Love Drivers:
- The Nvidia X64 Drivers RSS Feed
(one item for Graphics, one for Nforce4 AMD Platform Drivers) - just in time for an imagined-and-really-really-hoped-for Battlefield 2 driver drop.
Because it was easy to fiddle the existing script:
- Nvidia x86 32-bit version
(all the interesting stuff I could spot - the filename convention changes at one point, so the NF4AMD one might not work reliably).
And the only faintly ironic:
Newly added:
- Punkbuster
Current Client Versions
This page is going to be my master list for the forseeable future - the other place to check the current feed list is here, but I'm not promising to keep it up to date, see, cos I get stats more simply from here. | https://docs.microsoft.com/en-us/archive/blogs/tristank/rss-feeds-for-useful-things | 2020-07-02T17:03:12 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Viewing Alerts
You can view alerts in the Web Management Console Alerts panel, displayed by XAP Alert groups (alerts are grouped by correlation key), and also generate an alert dump for specific grid components.
The Web-UI server utilizes the
<XAP Root>/config/alerts/alerts.xml configuration file. These configurations apply to any client connecting to the Web-UI at the specified host and port.
Alerts are grouped together by type, such as CPU, Memory, etc. When an alert is raised, it is aggregated with other consecutive alerts of the same type. Previous alerts from the aggregation get “pushed” down (circled in red). A resolved alert “closes” the aggregation (circled in green). A new alert of the same type will trigger a new aggregation.
Sort the ‘status’ column in ascending order to show the latest unresolved alerts.
Generating an Alert. | https://docs.gigaspaces.com/xap/12.3/admin/web-management-view-alerts.html | 2020-07-02T16:47:09 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['/attachment_files/web-console/alerts.jpg', 'hosts1.jpg'],
dtype=object)
array(['/attachment_files/web-console/generate_dump.png',
'generate_dump.png'], dtype=object) ] | docs.gigaspaces.com |
sgadmin Troubleshooting
Cluster not reachable
If the cluster is not reachable at all by
sgadmin, you will see the following error message:
Search Guard Admin v6 Will connect to localhost:9300 ERR: Seems there is no elasticsearch running on localhost:9300 - Will exit
Check the hostname of your cluster
- By default, sgadmin uses
localhost
- If your cluster runs on any other host, specify the hostname with the
-hoption
Check the port
- Check that you are running
sgadminagainst the transport port, not the HTTP port
- By default,
sgadminuses
9300
- If you’re running on a different port, use the
-poption to specify the port number
None of the configured nodes are available
If
sgadmin can reach the cluster, but there are issues uploading the configuration, you will see the following error message:) * Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
Check the cluster name
- By default, sgadmin uses
elasticsearchas cluster name
- If your cluster is named differently either:
- let sgadmin ignore the cluster name completely by using the
-iclswith or
- specify the name of your cluster with the
-cnswitch
Check the hostname and hostname verification
- By default, sgadmin will verify that the hostname in your node’s certificate matches the node’s actual hostname
- If this is not the case, e.g. you’re using demo certificates, disable hostname verification by adding the
-nhnvswitch
Check the cluster state
- By default, sgadmin ony executes when the cluster state is at least yellow
- If your cluster state is red, you can stll execute sgadmin, but you need to add the
-arc/--accept-red-clusterswitch
Check the Search Guard index name
- By default, Search Guard uses
searchguardas the name of the confguration index
- If you configured a different index name in
elasticsearch.yml, you need to specify it with the
-ioption
ERR: CN=… is not an admin user
If the TLS certificate used in the sgadmin call cannot be used as admin certificate, you will see a message like:
Connected as CN=node-0.example.com,OU=SSL,O=Test,L=Test,C=DE ERR: CN=node-0.example.com,OU=SSL,O=Test,L=Test,C=DE is not an admin user
Check if a node certificate was used
- Check if the output of
sgadmincontains the following message:
Seems you use a node certificate. This is not permitted, you have to use a client certificate and register it as admin_dn in elasticsearch.yml
- If this is the case it means you used a node certificate, and not an admin certificate in the
sgadmincall.
- Use a certificate that has admin privileges, i.e. that is configured in the
searchguard.authcz.admin_dnsection of
elasticsearch.yml.
- See Types of certificates for more information.
Check if a non-admin certificate was used
- Check if the output of
sgadmincontains the following message:
Seems you use a client certificate but this one is not registered as admin_dn
- If this is the case the used certificate is not listed in the
searchguard.authcz.admin_dnsection of
elasticsearch.yml.
- Follow the steps printed out by sgadmin and add the DN of your certificate to
searchguard.authcz.admin_dn.
- Sample output:
ERR: CN=kirk,OU=client,O=client,L=Test,C=DE is not an admin user Seems you use a client certificate but this one is not registered as admin_dn Make sure elasticsearch.yml on all nodes contains: searchguard.authcz.admin_dn: - "CN=kirk,OU=client,O=client,L=Test,C=DE"
Using the diagnose switch
If you cannot find out why sgadmin is not executing, add the
--diagnose switch to gather debug information, for example
./sgadmin.sh -diagnose -cd ../sgconfig/ -cacert ... -cert ... -key ... -keypass ...
sgadmin will print the location of the generated diagnostic file:
Diagnostic trace written to: /../../sgadmin_diag_trace_2020-<DATE>.txt
Search Guard Community Forum
You can also ask for help on the Search Guard Community Forum.
Always add the diagnose file to any sgadmin related questions on the Community Forum! | https://docs.search-guard.com/7.x-40/troubleshooting-sgadmin | 2020-07-02T16:39:15 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.search-guard.com |
CreateResourceGroup.
Request Syntax
{ "resourceGroupTags": [ { "key": "
string", "value": "
string" } ] }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- resourceGroupTags
A collection of keys and an array of possible values, '[{"key":"key1","values":["Value1","Value2"]},{"key":"Key2","values":["Value3"]}]'.
For example,'[{"key":"Name","values":["TestEC2Instance"]}]'.
Type: Array of ResourceGroupTag objects
Array Members: Minimum number of 1 item. Maximum number of 10 items.
Required: Yes
Response Syntax
{ "resourceGroupArn": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
- resourceGroupArn
The ARN that specifies the resource group
- ServiceTemporarilyUnavailableException
The serice is temporary unavailable.
HTTP Status Code: 400
Example
Sample Request
POST / HTTP/1.1 Host: inspector.us-west-2.amazonaws.com Accept-Encoding: identity Content-Length: 67 X-Amz-Target: InspectorService.CreateResourceGroup X-Amz-Date: 20160331T171757Z User-Agent: aws-cli/1.10.12 Python/2.7.9 Windows/7 botocore/1.4.3 Content-Type: application/x-amz-json-1.1 Authorization: AUTHPARAMS { "resourceGroupTags": [ { "key": "Name", "value": "example" } ] }
Sample Response
HTTP/1.1 200 OK x-amzn-RequestId: 8416dfb4-f764-11e5-872a-fde3682789d5 Content-Type: application/x-amz-json-1.1 Content-Length: 88 Date: Thu, 31 Mar 2016 17:17:58 GMT { "resourceGroupArn": "arn:aws:inspector:us-west-2:123456789012:resourcegroup/0-AB6DMKnv" }
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/inspector/latest/APIReference/API_CreateResourceGroup.html | 2020-07-02T16:59:37 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.aws.amazon.com |
Adding an Amazon S3 backup location
Add an Amazon S3 backup location. Set a retention policy for the backup location.
Add an Amazon S3 backup location. For more details, see backing up to Amazon S3 and the Amazon S3 transfer acceleration documentation.
cluster_name.confThe location of the cluster_name.conf file depends on the type of installation:
- Package installations: /etc/opscenter/clusters/cluster_name.conf
- Tarball installations: install_location/conf/clusters/cluster_name.conf
Prerequisites
- Ensure Java 8 is installed on the same machine where DataStax Agents are running. Agents require Java 8 to store at an S3 location.
- Make sure you have the proper AWS IAM privileges for the AWS account that the S3 bucket is linked to.
- Ensure that the selected Amazon S3 bucket meets the Amazon S3 bucket requirements.
Procedure
- Access the Create (or Edit) Backup dialog:
- In the Create or Edit Backup dialog, under Location, click +Add Location.The Add Location dialog appears.
- Select Amazon S3 as the backup Location.
- Enter the S3 Bucket name.Note: The bucket name must be at least 4 characters long. Bucket names must.
- Enter the Region where the S3 bucket is located so that OpsCenter can locate it.
If blank, OpsCenter will try to query S3 for the bucket region or use the
remote_backup_regionas a default.Note: Some regions, such as China (Beijing), require a region to be specified and cannot be queried.
- feature is enabled, the S3 throttle is ignored., select Enable S3 server-side encryption. Enabling server-side encryption increases the security of your backup files, but increases the time it takes to complete a backup. For more information on S3 server-side encryption, see Using Server Side Encryption on the AWS website.
Choose the type of encryption you want to use:
- To back up. | https://docs.datastax.com/en/opscenter/6.5/opsc/online_help/services/opscBackupServiceAddS3Location.html | 2020-07-02T15:09:29 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.datastax.com |
BearSSL Secure Server Class¶
Implements a TLS encrypted server with optional client certificate validation. See Server Class for general information and BearSSL Secure Client Class for basic server and BearSSL concepts.
setBufferSizes(int recv, int xmit)¶
Similar to the BearSSL::WiFiClientSecure method, sets the receive and transmit buffer sizes. Note that servers cannot request a buffer size from the client, so if these are shrunk and the client tries to send a chunk larger than the receive buffer, it will always fail. This must be called before the server is
Setting Server Certificates¶
TLS servers require a certificate identifying itself and containing its public key, and a private key they will use to encrypt information with. The application author is responsible for generating this certificate and key, either using a self-signed generator or using a commercial certification authority. Do not re-use the certificates included in the examples provided.
This example command will generate a RSA 2048-bit key and certificate:
openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 4096
Again, it is up to the application author to generate this certificate and key and keep the private key safe and private.
setRSACert(const BearSSL::X509List *chain, const BearSSL::PrivateKey *sk)¶
Sets a RSA certificate and key to be used by the server when connections are received. Needs to be called before begin()
Requiring Client Certificates¶
TLS servers can request the client to identify itself by transmitting a certificate during handshake. If the client cannot transmit the certificate, the connection will be dropped by the server. | https://arduino-esp8266.readthedocs.io/en/latest/esp8266wifi/bearssl-server-secure-class.html | 2020-07-02T16:12:22 | CC-MAIN-2020-29 | 1593655879532.0 | [] | arduino-esp8266.readthedocs.io |
Clearing the Air on ISATAP
For companies thinking about deploying DirectAccess, the question of whether or not you need to deploy ISATAP will invariably come up. The answer to this question is “no” and the reasons for why you don’t need ISATAP in a DirectAccess deployment are covered in my article over at
However, ISATAP does have a place in a DirectAccess deployment, as I discussed in that article. If you don’t have an existing native IPv6 network infrastructure in place, then you might want to consider enabling ISATAP to fully realize the “manage out” capabilities that are part of a comprehensive DirectAccess solution.
However, in all the coverage of ISATAP, I’ve left out some critical information that can help you decide on how you deploy ISATAP in your organization. It’s at this point I get to tell you that the ole “Edge Man’s” favorite food is crow. When I get to eat crow, it means I said something in public and then find out later I was wrong (or at least not entirely right). I like crow because when I eat it, it means that I’ve learned something new.
Now with that are an introduction, let’s get into what led to this “crow eating” session. If you read this blog on a regular basis, you might remember my article on using a HOSTS file entry to control which systems configure themselves as ISATAP hosts. If you didn’t see it you can check it out at
In that article, I said:
“In general, ISATAP is a good thing…”
Now that seems like a pretty benign statement, doesn’t it? I thought so. But if you look at the Intra-site Automatic Tunnel Addressing Protocol Deployment Guide at you will find the following statement:
“Appropriate Uses for ISATAP on Intranets
ISATAP in Windows is not designed for production networks [italics mine]. On a production network, ISATAP should be used in a limited capacity for testing while you deploy native IPv6 capabilities. Microsoft does not recommend the use of ISATAP across entire production networks. Instead, you should deploy native IPv6 routing capability…”
Ah, well, ahem, er, kaff-kaff, aaaa.., that doesn’t really align with my comment regarding “in general, ISATAP is a good thing”.
Let me explain.
What Does ISATAP Actually Allow You To Do?
To get a better appreciation for the situation, it’s a good start to think about what ISATAP actually does. For most of us, our introduction to ISATAP is with DirectAccess. In fact, for many of us, our introduction to IPv6 is with DirectAccess. When reading about DirectAccess you see that the DirectAccess server is configured as an ISATAP router, as well as a router or relay for Teredo and IP-HTTPS. This gives you the initial impression that ISATAP must be a required component of the UAG DirectAccess solution, and that perhaps it’s a standard for all IPv6 deployments. In truth, ISATAP has limited utility and is designed to support a very specific scenario.
ISATAP is an IPv6 transition technology and is designed as a temporary solution to help you transition your IPv4 network to an IPv6 network. ISATAP enables ISATAP capable hosts to automatically assign an address to an ISATAP adapter, and to communicate with an ISATAP router to get IPv6 routing table information. The purpose of ISATAP then is to provide ISATAP capable hosts with information that enables them to connect to hosts on an IPv6 only network. That connection is made through an ISATAP router.
The ISATAP router has an interface on the IPv4 only network and a second interface on an IPv6 only network. The ISATAP router will take the IPv6 packets that are encapsulated in an IPv4 header, and forward them to the IPv6 only network by removing the IPv4 header before forwarding them. When hosts on the IPv6 only network want to connect to hosts on the IPv4 network, the ISATAP router will receive the packets, put an IPv4 header on them, and forward them to the ISATAP enabled hosts on the IPv4 only network.
When ISATAP enabled hosts on an IPv4 only network communicate with each other, they will preferentially use their ISATAP adapters to communicate. They will query DNS and if they receive both A and AAAA records, they will use the AAAA record’s address and use IPv6 to communicate with the other ISATAP enabled host on the IPv4 only network. This is done because Windows hosts that are capable of using ISATAP (Vista and above, Windows 2008 and above) use IPv6 preferentially. It doesn’t matter that the IPv6 packets are encapsulated with an IPv4 header. Keep in mind that in order to reach the ISATAP adapter on the destination host, the source host also needs the address in the A record to get the IPv4 address of the destination.
In Figure 1, you can see three networks:
- The DirectAccess clients network (which is an IPv6 only network),
- an IPv4 only network, and
- an IPv6 only network.
When a host on the IPv4 only network wants to connect to a host on the IPv6 only network, it uses routing table entries that indicate that it should use its ISATAP adapter to send the IPv6 packets to the IPv4 address ISATAP router. The ISATAP router then removes the IPv4 header to expose the IPv6 header and forwards the IPv6 packet to the host on the IPv6 network.
The UAG DirectAccess server is also configured as an ISATAP router, and it advertises IPv6 routes that ISATAP capable hosts on the IPv4 only network can use to connect to DirectAccess clients. The DirectAccess clients network (which includes network prefixes for the Teredo, 6to4 and IP-HTTPS address spaces) is an IPv6 only network. Therefore, when a host on the IPv4 only network wants to connect to a DirectAccess client, it checks its routing table and finds that it should use its ISATAP adapter to send the IPv6 packets to the IPv4 address on the internal interface of the UAG DirectAccess server. The UAG DirectAccess server (acting as an ISATAP router) removes the IPv4 header and forwards the un-encapsulated IPv6 packet to the DirectAccess client.
Figure 1
As you can see, ISATAP is used to enable connections from an IPv4 only network (meaning the routing infrastructure only supports IPv4 routing) to an IPv6 only network (meaning that the routing infrastructure supports IPv6 and the hosts on the network use only IPv6 addressing) by using an ISATAP router. The idea here is that as you transition to an IPv6 network, you will create dedicated segments devoted to IPv6 and that during this transition, you still need to enable connectivity between the “old” IPv4 only network and the “new” IPv6 only network. The transition phase isn’t meant to be indefinite and when the transition phase of the deployment is over, you’ll disable ISATAP in DNS and take down the ISATAP routers.
ISATAP and the Special Case of DirectAccess
However, DirectAccess is a special case when it comes to ISATAP and IPv6 enablement. We want you to be able to use DirectAccess now. However, we also realize that very few networks are IPv6-only at this time and that it will take several years before the predominate network protocol is IPv6. To solve this problem, we enabled the UAG DirectAccess server as an ISATAP router. For the UAG DirectAccess server, the purpose is to enable full “manage out” capability, so that management servers on the IPv4 intranet can initiate connections with DirectAccess clients. We don’t need ISATAP to support connections initiated by DirectAccess clients to IPv4 resources on the intranet because we have the NAT64/DNS64 solution.
The “manage out” scenario where management servers on the intranet initiate connections to the DirectAccess clients is a relatively limited one in terms of scope. You won’t have that many management servers that need to be able to do this (although you might want to allow Help Desk to connect to DirectAccess clients over RDP, in which case the number of hosts that need to initiate a “manage out” connection will be larger). Since you know in advance who these machines are, you might want to consider using a HOSTS file entry on these “manage out” machines instead of enabling ISATAP for your entire network using DNS. This gets around problems you might run into if you’ve decided to disable IPv6 on the computers on your network (you can find out the details of this situation in my blog post at).
Conclusion
In conclusion, we can reconcile my statement that ISATAP is generally a good thing when we think about the special case of the DirectAccess. However, the purpose of ISATAP in a DirectAccess scenario is a bit different than it is when you’re using ISATAP to transition your intranet to IPv6. Because a limited number of hosts on the intranet need to be IPv6 capable to initiate connections (manage out) DirectAccess clients, there is a good argument for not enabling ISATAP in DNS (it is disabled by default) and only using HOSTS file entries on the machines that require “manage out” capability to DirectAccess clients.
HTH,
Tom
Tom Shinder
[email protected]
Principal Knowledge Engineer, Microsoft DAIP iX/Identity Management
Anywhere Access Group (AAG)
The “Edge Man” blog :
Follow me on Twitter: | https://docs.microsoft.com/en-us/archive/blogs/tomshinder/clearing-the-air-on-isatap | 2020-07-02T17:21:25 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Workflows (
OrchardCore.Workflows)¶
The Workflows module provides a way for users to visually implement business rules using flowchart diagrams.
General Concepts¶
A workflow is a collection of activities that are connected to each other. These connections are called transitions.
Activities and their transitions are stored in a Workflow Definition.
A workflow is essentially a visual script, where each activity is a statement of that script.
There are two types of activities: Task and Event.
A Task activity typically performs an action, such as publishing a content item, while an Event activity typically listens for an event to happen before execution continues.
In order for a workflow to execute, at least one activity must be marked as the start of the workflow.
Only Event activities can be marked as the start of a workflow.
An example of such an event activity is Content Created, which executes whenever a content item is created.
A workflow can have more than one start event. This allows you to trigger (run) a workflow in response to various types of events.
Each activity has one or more outcomes, which represent a source endpoint from which a connection can be made to the next activity, which are called transitions.
By connecting activities, you are effectively creating a program that can be executed by Orchard in response to a multitude of events.
- Activity Picker (Task / Event)
- Activity actions (click an activity to display activity actions)
- An activity configured as the starting activity of the workflow.
- An activity.
- An Outcome ("Done") of an activity.
- A transition between two activities (from "Content Created" via the "Done" outcome to the "Send Email" activity).
- The workflow editor design surface.
- Edit the workflow definition properties (Name, Enabled, etc.)
- List the workflow instances for this workflow definition.
Vocabulary¶
When working with Orchard Workflows, you will encounter the following terms:
Workflow Definition¶
A document (as in a "document-DB" document) that contains all the necessary information about a workflow, such as its name, whether it's enabled or not, its set of activities and their transitions.
Workflow Instance¶
A document that represents an "instance" of a workflow definition. A workflow instance contains runtime-state of a workflow.
Whenever a workflow is started, a new workflow instance is created of a given workflow definition.
Activity¶
A step in a workflow definition.
An activity performs an action and provides zero or more outcomes, which are used to connect to the next activity to execute.
There are two types of activities: Task and Event.
Task¶
A specialized type of activity. Tasks perform actions such as sending emails, publishing content and making HTTP requests.
Event¶
A specialized type of activity.
Like tasks, events can perform actions, but typically all they do is halt the workflow, awaiting an event to happen before continuing on to the next activity.
When an event is configured as the starting activity of a workflow, that workflow is started when that event is triggered.
Workflow Editor¶
An editor that allows you to create and manage a workflow definition using a drag & drop visual interface.
Activity Editor¶
Most activities expose settings that can be configured via the activity editor.
To configure an activity, you can either double-click an activity on the design surface of the workflow editor, or click an activity once to activate a small popup that provides various actions you can perform on an activity.
One of these actions is the Edit action.
Activity Picker¶
When you are in the Workflow Editor, you use the Activity Picker to add activities to the design surface.
Open the activity picker by clicking Add Task or Add Event to add a task or event, respectively.
Outcome¶
Each activity has zero or more outcomes. When an activity has executed, it yields control back to the workflow manager along with a list of outcomes.
The workflow manager uses this list of outcomes to determine which activities to execute next.
Although many activities support multiple outcomes, they typically return only one of them when done executing.
For example, the Send Email activity has two possible outcomes: "Done" and "Failed".
When the email was sent successfully, it yields "Done" as the outcome, and "Failed" otherwise.
Transition¶
A transition is the connection between the outcome of one activity to another activity. Transitions are created using drag & drop operations in the workflow editor.
Workflow Manager¶
A service class that can execute workflows. When a workflow is executed, it takes care of creating a workflow instance which is then executed.
Workflow Execution Context¶
When the Workflow Manager executes a workflow, it creates an object called the Workflow Execution Context. The Workflow Execution Context is a collection of all information relevant to workflow execution.
For example, it contains a reference to the workflow instance, workflow definition, correlation values, input, output and properties.
Each activity has access to this execution context.
Correlation¶
Correlation is the act of associating a workflow instance with one or more identifiers. These identifiers can be anything.
For example, when a workflow has the Content Created event as its starting point, the workflow instance will be associated, or rather correlated to the content item ID that was just created.
This allows long-running workflow scenarios where only workflow instances associated with a given content item ID are resumed.
Input¶
When a workflow is executed, the caller can provide input to the workflow instance. This input is stored in the
Input dictionary of the workflow execution context.
This is analogous to providing arguments to a function.
Output¶
When a workflow executes, each activity can provide output values to the workflow instance. This output is stored in the
Output dictionary of the workflow execution context.
This is analogous to returning values from a function.
Properties¶
When a workflow executes, each activity can set property values to the workflow instance. These properties are stored in the
Properties dictionary of the workflow execution context.
Each activity can set and access these properties, allowing a workflow to compute and retrieve information that can then be processed by other activities further down the chain.
This is analogous to a function setting local variables.
Workflow Execution¶
When a workflow executes, the Workflow Manager creates a Workflow Instance and a Workflow Execution Context.
A workflow instance maintains state about the execution, such as which activity to execute next and state that can be provided by individual activities.
A Workflow Instance is ultimately persisted in the underlying data storage provider, while a Workflow Execution Context exists only in memory for the duration of a workflow execution.
Workflows can be short-running as well as long-running.
Short-running workflows¶
When a workflow executes without encountering any blocking activities (i.e. activities that wait for an event to occur, such as Signal), the workflow will run to completion in one go.
Long-running workflows¶
When a workflow executes and encounters a blocking activity (such as an event), the workflow manager will halt execution and persist the workflow instance to the underlying persistence layer.
When the appropriate event is triggered (which could happen seconds, days, weeks or even years from now), the workflow manager will load the workflow instance from storage and resume execution.
Scripts and Expressions¶
Many activities have settings that can contain either JavaScript or Liquid syntax.
For example, when adding the Notify activity, its editor shows the following fields:
These type of fields allow you to enter Liquid markup, enabling access to system-wide variables and filters as well as variables from the workflow execution context.
JavaScript Functions¶
The following JavaScript functions are available by default to any activity that supports script expressions:
JavaScript Functions in HTTP activities¶
The following JavaScript functions are available by default to any HTTP activity that supports script expressions:
Liquid Expressions¶
The following Liquid tags, properties and filters are available by default to any activity that supports Liquid expressions:
Instead of using the indexer syntax on the three workflow dictionaries
Input,
Output and
Properties, you can also use dot notation, e.g.:
{{ Workflow.Input.ContentItem }}
Liquid Expressions and ContentItem Events¶
When handling content related events using a workflow, the content item in question is made available to the workflow via the
Input dictionary.
For example, if you have a workflow that starts with the Content Created Event activity, you can send an email or make an HTTP request and reference the content item from liquid-enabled fields as follows:
{{ Workflow.Input.ContentItem | display_url }} {{ Workflow.Input.ContentItem | display_text }} {{ Workflow.Input.ContentItem.DisplayText }}
For more examples of supported content item filters, see the documentation on Liquid.
Activities out of the box¶
The following activities are available with any default Orchard installation:
Developing Custom Activities¶
Orchard is built to be extended, and the
Workflows module is no different. When creating your own module, you can develop custom workflow activities.
Developing custom activities involve the following steps:
- Create a new class that directly or indirectly implements
IActivity. In most cases, you either derive from
TaskActivityor
EventActivity, depending on whether your activity represents an event or not. Although not required, it is recommended to keep this class in a folder called
Activities.
- Create a new display driver class that directly or indirectly implements
IDisplayDriver. An activity display driver controls the activity's display on the workflow editor canvas, the activity picker and the activity editor. Although not required, it is recommended to keep this class in a folder called
Drivers.
- Optionally implement a view model if your activity has properties that the user should be able to configure.
- Implement the various Razor views for the various shapes provided by the driver. Although not required, it is recommended to store these files in the
Views/Itemsfolder. Note that it is required for your views to be discoverable by the display engine.
Activity Display Types¶
An activity has the following display types:
- Thumbnail
- Design
Thumbnail Used when the activity is rendered as part of the activity picker.
Design Used when the activity is rendered as part of the workflow editor design surface.
IActivity¶
IActivity has the following members:
Name
Category
Properties
HasEditor
GetPossibleOutcomes
CanExecuteAsync
ExecuteAsync
ResumeAsync
OnInputReceivedAsync
OnWorkflowStartingAsync
OnWorkflowStartedAsync
OnWorkflowResumingAsync
OnWorkflowResumedAsync
OnActivityExecutingAsync
OnActivityExecutedAsync
The
IEvent interface adds the following member:
CanStartWorkflow
The following is an example of a simple activity implementation that displays a notification:
public class NotifyTask : TaskActivity { private readonly INotifier _notifier; private readonly IStringLocalizer S; private readonly IHtmlLocalizer H; public NotifyTask(INotifier notifier, IStringLocalizer<NotifyTask> s, IHtmlLocalizer<NotifyTask> h) { _notifier = notifier; S = s; H = h; } // The technical name of the activity. Activities on a workflow definition reference this name. public override string Name => nameof(NotifyTask); // The displayed name of the activity, so it can use localization. public override LocalizedString DisplayText => S["Notify Task"]; // The category to which this activity belongs. The activity picker groups activities by this category. public override LocalizedString Category => S["UI"]; // A description of this activity's purpose. public override LocalizedString Description => S["Display a message."]; // The notification type to display. public NotifyType NotificationType { get => GetProperty<NotifyType>(); set => SetProperty(value); } // The message to display. public WorkflowExpression<string> Message { get => GetProperty(() => new WorkflowExpression<string>()); set => SetProperty(value); } // Returns the possible outcomes of this activity. public override IEnumerable<Outcome> GetPossibleOutcomes(WorkflowContext workflowContext, ActivityContext activityContext) { return Outcomes(S["Done"]); } // This is the heart of the activity and actually performs the work to be done. public override async Task<ActivityExecutionResult> ExecuteAsync(WorkflowContext workflowContext, ActivityContext activityContext) { var message = await workflowContext.EvaluateExpressionAsync(Message); _notifier.Add(NotificationType, H[message]); return Outcomes("Done"); } }
The following is an example of a simple activity display driver:
public class NotifyTaskDisplay : ActivityDisplayDriver<NotifyTask, NotifyTaskViewModel> { protected override void EditActivity(NotifyTask activity, NotifyTaskViewModel model) { model.NotificationType = activity.NotificationType; model.Message = activity.Message.Expression; } protected override void UpdateActivity(NotifyTaskViewModel model, NotifyTask activity) { activity.NotificationType = model.NotificationType; activity.Message = new WorkflowExpression<string>(model.Message); } }
The above code performs a simple mapping of a
NotifyTask to a
NotifyTaskViewModel and vice versa.
This simple implementation is possible because the actual creation of the necessary editor and display shapes are taken care of by
ActivityDisplayDriver<TActivity, TEditViewModel>, which looks like this (modified to focus on the important parts):
public abstract class ActivityDisplayDriver<TActivity, TEditViewModel> : ActivityDisplayDriver<TActivity> where TActivity : class, IActivity where TEditViewModel : class, new() { private static string ThumbnailshapeType = $"{typeof(TActivity).Name}_Fields_Thumbnail"; private static string DesignShapeType = $"{typeof(TActivity).Name}_Fields_Design"; private static string EditShapeType = $"{typeof(TActivity).Name}_Fields_Edit"; public override IDisplayResult Display(TActivity activity) { return Combine( Shape(ThumbnailshapeType, new ActivityViewModel<TActivity>(activity)).Location("Thumbnail", "Content"), Shape(DesignShapeType, new ActivityViewModel<TActivity>(activity)).Location("Design", "Content") ); } public override IDisplayResult Edit(TActivity activity) { return Initialize<TEditViewModel>(EditShapeType, model => { return EditActivityAsync(activity, model); }).Location("Content"); } public async override Task<IDisplayResult> UpdateAsync(TActivity activity, IUpdateModel updater) { var viewModel = new TEditViewModel(); if (await updater.TryUpdateModelAsync(viewModel, Prefix)) { await UpdateActivityAsync(viewModel, activity); } return Edit(activity); } }
Notice that the shape names are derived from the activity type, effectively implementing a naming convention for the shape template names to use.
Continuing with the
NotifyTask example, we now need to create the following Razor files:
NotifyTask.Fields.Design.cshtml
NotifyTask.Fields.Thumbnail.cshtml
NotifyTask.Fields.Edit.cshtml
CREDITS¶
jsPlumb¶
License: dual-licensed under both MIT and GPLv
NCrontab¶
License: Apache License 2.0 | https://docs.orchardcore.net/en/dev/docs/reference/modules/Workflows/ | 2020-07-02T15:38:33 | CC-MAIN-2020-29 | 1593655879532.0 | [array(['docs/workflow-editor.png', 'The workflow editor'], dtype=object)] | docs.orchardcore.net |
open fun equals(other: Any?): Boolean
Compares this list with another list instance with the ordered structural equality.
Return true, if other instance is a List of the same size, which contains the same elements in the same order.
Indicates whether some other object is "equal to" this one. Implementations must fulfil the following requirements:
x,
x.equals(x)should return true.
xand
y,
x.equals(y)should return true if and only if
y.equals(x)returns true.
x,
y, and
z, if
x.equals(y)returns true and
y.equals(z)returns true, then
x.equals(z)should return true.
xand
y, multiple invocations of
x.equals(y)consistently return true or consistently return false, provided no information used in
equalscomparisons on the objects is modified.
x,
x.equals(null)should return false.
Read more about equality in Kotlin.
© 2010–2019 JetBrains s.r.o.
Licensed under the Apache License, Version 2.0. | https://docs.w3cub.com/kotlin/api/latest/jvm/stdlib/kotlin.collections/-abstract-mutable-list/equals/ | 2020-07-02T15:51:57 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.w3cub.com |
Message handler that handles ping messages. More...
#include <ping_handler.h>
Message handler that handles ping messages.
Responds to ping message types. A ping is a simple message that is meant to test communications channels. A ping simply responds with a copy of the data it was sent.
THIS CLASS IS NOT THREAD-SAFE
Definition at line 60 of file ping_handler.h.
Class initializer.
Definition at line 50 of file ping_handler.cpp.
Class initializer (Direct call to base class with the same name) I couldn't get the "using" form to work/.
Reimplemented from industrial::message_handler::MessageHandler.
Definition at line 81 of file ping_handler.h.
Callback executed upon receiving a ping message.
Implements industrial::message_handler::MessageHandler.
Definition at line 55 of file ping_handler.cpp. | http://docs.ros.org/jade/api/simple_message/html/classindustrial_1_1ping__handler_1_1PingHandler.html | 2020-07-02T16:57:46 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.ros.org |
Manual Presence
Manual Presence#
About Manual Presence#
Schema#
Validator for the Manual Presence callflow action
Status#
There are three statuses that may be used in the update:
idle- Typically solid green, for when the
presence_idhas no active calls
ringing- Typically blinking red, for when an incoming call is occurring
busy- Typically solid red, for when an incoming call has been answered | https://docs.2600hz.com/dev/applications/callflow/doc/manual_presence/ | 2020-07-02T14:47:33 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.2600hz.com |
Peaking at more than a billion USD of locked assets, Decentralized Finance (DeFi) has boomed in recent years. Aave Protocol launched on Ethereum mainnet in January 2020. The money market size now surpasses 40 million USD allowing users to participate as depositors or borrowers.
The industry has been developing risk frameworks to form industry standards to manage the risks emerging from our hyperconnected ecosystem. Aave holds security at its core, undergoing audits by Open Zeppelin Security and Trail of Bits, and publishing monthly Security Reports on Medium.
The following documentation analyses the fundamental risks of the protocol and describes the processes in place to mitigate them.
If you have any questions, join the Aave community Discord server; our team and members of the community look forward to helping you understand Aave's risks and risk management procedures. | https://docs.aave.com/risk/ | 2020-07-02T14:53:09 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.aave.com |
General Information
Quality Assurance and Productivity
Desktop
Frameworks and Libraries
Web
Controls and Extensions
Maintenance Mode
Enterprise and Analytic Tools
End-User Documentation
TreeList.RootValue Property
Gets or sets the value that identifies root records in the data source. The root records must have the RootValue in the field the ParentFieldName property specifies.
Namespace: DevExpress.XtraTreeList
Assembly: DevExpress.XtraTreeList.v20.1.dll
Declaration
[DefaultValue(0)] public object RootValue { get; set; }
<DefaultValue(0)> Public Property RootValue As Object
Property Value
Remarks
The KeyFieldName and TreeList.ParentFieldName properties specify Key and Parent fields in the bound data source. The Tree List control uses these fields to organize the underlying records into a data hierarchy.
Root nodes must have their Parent Field values set to RootValue (by default, 0).
NOTE TreeList. | https://docs.devexpress.com/WindowsForms/DevExpress.XtraTreeList.TreeList.RootValue | 2020-07-02T15:29:03 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.devexpress.com |
CA certificate won't appear in the certificates list.
Resolution:
It seems to be a known issue.
There is a workaround:
1) Verify that the certificate is truly functional by running certutil -verifystore my
2) Check in the output that the key is protected by the nCipher Enhanced CSP and passes all tests
3) When you run the wizard, state that you want to use an existing private key (yes this creates a new certificate)
4) After restore, either restore the previous registry from the other CA, or modify the following registry key to use the original certificate's thumbprint:
hklmsystemcurrentcontrolsetservicescertsvcConfigurationCANameCACertHash | https://docs.microsoft.com/en-us/archive/blogs/asiasupp/ca-restore-fails-from-time-to-time | 2020-07-02T17:17:53 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.microsoft.com |
Memory Management¶
This page describes how memory management works in Ray and how you can set memory quotas to ensure memory-intensive applications run predictably and reliably.
ObjectID Reference Counting¶
Ray implements distributed reference counting so that any ObjectID in scope in the cluster is pinned in the object store. This includes local Python references, arguments to pending tasks, and IDs serialized inside of other objects.
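For example, here is a minimal sketch of how a reference keeps an object pinned; the array size is arbitrary and only for illustration:

import numpy as np
import ray

ray.init()

x_id = ray.put(np.zeros(10 * 1024 * 1024))  # pinned while x_id is in scope
# ... use the object via ray.get(x_id) ...
del x_id  # with the last reference gone, the object becomes eligible for eviction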
Frequently Asked Questions (FAQ)¶
My application failed with ObjectStoreFullError. What happened?
Ensure that you’re removing ObjectID references when they’re no longer needed. See Debugging using ‘ray memory’ for information on how to identify what objects are in scope in your application.
This exception is raised when the object store on a node was full of pinned objects when the application tried to create a new object (either by calling ray.put() or returning an object from a task). If you’re sure that the configured object store size was large enough for your application to run, ensure that you’re removing ObjectID references when they’re no longer in use so their objects can be evicted from the object store.
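For instance, a sketch of the pattern of letting each ObjectID go out of scope once it has been consumed, so earlier results can be evicted instead of accumulating (the task body is illustrative):

import ray

ray.init()

@ray.remote
def make_batch(i):
    return [i] * 1_000_000

total = 0
for i in range(100):
    batch_id = make_batch.remote(i)   # re-binding the variable drops the previous reference
    total += sum(ray.get(batch_id))   # consume the batch, then let it go out of scope
print(total)
# Avoid keeping a list of every ObjectID ever created unless you truly need all of them.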
I’m running Ray inside IPython or a Jupyter Notebook and there are ObjectID references causing problems even though I’m not storing them anywhere.
Try Enabling LRU Fallback, which will cause unused objects referenced by IPython to be LRU evicted when the object store is full instead of erroring.
IPython stores the output of every cell in a local Python variable indefinitely. This causes Ray to pin the objects even though your application may not actually be using them.
My application used to run on previous versions of Ray but now I’m getting ObjectStoreFullError.
Either modify your application to remove ObjectID references when they’re no longer needed or try Enabling LRU Fallback to revert to the old behavior.
In previous versions of Ray, there was no reference counting and instead objects in the object store were LRU evicted once the object store ran out of space. Some applications (e.g., applications that keep references to all objects ever created) may have worked with LRU eviction but do not with reference counting.
Debugging using ‘ray memory’¶
The ray memory command can be used to help track down what ObjectID references are in scope and may be causing an ObjectStoreFullError.
Running ray memory from the command line while a Ray application is running will give you a dump of all of the ObjectID references that are currently held by the driver, actors, and tasks in the cluster.
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; worker pid=18301
45b95b1c8bd3a9c4ffffffff010000c801000000  LOCAL_REFERENCE                 ?   (deserialize task arg) __main__..f
; driver pid=18281
f66d17bae2b0e765ffffffff010000c801000000  LOCAL_REFERENCE                 ?   (task call) test.py:<module>:12
45b95b1c8bd3a9c4ffffffff010000c801000000  USED_BY_PENDING_TASK            ?   (task call) test.py:<module>:10
ef0a6c221819881cffffffff010000c801000000  LOCAL_REFERENCE                 ?   (task call) test.py:<module>:11
ffffffffffffffffffffffff0100008801000000  LOCAL_REFERENCE                77   (put object) test.py:<module>:9
-----------------------------------------------------------------------------------------------------
Each entry in this output corresponds to an ObjectID that’s currently pinning an object in the object store along with where the reference is (in the driver, in a worker, etc.), what type of reference it is (see below for details on the types of references), the size of the object in bytes, and where in the application the reference was created.
There are five types of references that can keep an object pinned:
1. Local ObjectID references
@ray.remote
def f(arg):
    return arg

a = ray.put(None)
b = f.remote(None)
In this example, we create references to two objects: one that is ray.put() in the object store and another that’s the return value from f.remote().
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; driver pid=18867
ffffffffffffffffffffffff0100008801000000  LOCAL_REFERENCE                77   (put object) ../test.py:<module>:9
45b95b1c8bd3a9c4ffffffff010000c801000000  LOCAL_REFERENCE                 ?   (task call) ../test.py:<module>:10
-----------------------------------------------------------------------------------------------------
In the output from ray memory, we can see that each of these is marked as a LOCAL_REFERENCE in the driver process, but the annotation in the “Reference Creation Site” indicates that the first was created as a “put object” and the second from a “task call.”
2. Objects pinned in memory
import numpy as np

a = ray.put(np.zeros(1))
b = ray.get(a)
del a
In this example, we create a numpy array and then store it in the object store. Then, we fetch the same numpy array from the object store and delete its ObjectID. In this case, the object is still pinned in the object store because the deserialized copy (stored in b) points directly to the memory in the object store.
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; driver pid=25090
ffffffffffffffffffffffff0100008801000000  PINNED_IN_MEMORY              229   test.py:<module>:7
-----------------------------------------------------------------------------------------------------
The output from ray memory displays this as the object being PINNED_IN_MEMORY. If we del b, the reference can be freed.
3. Pending task references
@ray.remote
def f(arg):
    while True:
        pass

a = ray.put(None)
b = f.remote(a)
In this example, we first create an object via ray.put() and then submit a task that depends on the object.
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; worker pid=18971
ffffffffffffffffffffffff0100008801000000  PINNED_IN_MEMORY               77   (deserialize task arg) __main__..f
; driver pid=18958
-----------------------------------------------------------------------------------------------------
While the task is running, ray memory shows both a LOCAL_REFERENCE and a USED_BY_PENDING_TASK reference for the object in the driver process. The worker process also holds a reference, listed as PINNED_IN_MEMORY, because the Python arg directly references the memory in the plasma store and therefore the object can’t be evicted.
4. Serialized ObjectID references
@ray.remote
def f(arg):
    while True:
        pass

a = ray.put(None)
b = f.remote([a])
In this example, we again create an object via ray.put(), but then pass it to a task wrapped in another object (in this case, a list).
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; worker pid=19002
ffffffffffffffffffffffff0100008801000000  LOCAL_REFERENCE                77   (deserialize task arg) __main__..f
; driver pid=18989
-----------------------------------------------------------------------------------------------------
Now, both the driver and the worker process running the task hold a LOCAL_REFERENCE to the object in addition to it being USED_BY_PENDING_TASK on the driver. If this was an actor task, the actor could even hold a LOCAL_REFERENCE after the task completes by storing the ObjectID in a member variable.
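For example, here is a minimal sketch of an actor that keeps such a reference alive after the call completes; the Cache class and remember method are illustrative names, not part of the Ray API:

import ray

ray.init()

@ray.remote
class Cache:
    def __init__(self):
        self._refs = []

    def remember(self, wrapped):
        # 'wrapped' is a list containing an ObjectID; storing it in a member
        # variable keeps a LOCAL_REFERENCE alive inside the actor process.
        self._refs.append(wrapped)

a = ray.put(None)
cache = Cache.remote()
ray.get(cache.remember.remote([a]))  # the actor now pins the object even after the task finishes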
5. Captured ObjectID references
a = ray.put(None)
b = ray.put([a])
del a
In this example, we first create an object via ray.put(), then capture its ObjectID inside of another ray.put() object, and delete the first ObjectID. In this case, both objects are still pinned.
-----------------------------------------------------------------------------------------------------
 Object ID                                Reference Type        Object Size   Reference Creation Site
=====================================================================================================
; driver pid=19047
ffffffffffffffffffffffff0100008802000000  LOCAL_REFERENCE              1551   (put object) ../test.py:<module>:10
ffffffffffffffffffffffff0100008801000000  CAPTURED_IN_OBJECT             77   (put object) ../test.py:<module>:9
-----------------------------------------------------------------------------------------------------
In the output of
ray memory, we see that the second object displays as a normal
LOCAL_REFERENCE, but the first object is listed as
CAPTURED_IN_OBJECT.
Enabling LRU Fallback¶
By default, Ray will raise an exception if the object store is full of pinned objects when an application tries to create a new object. However, in some cases applications might keep references to objects much longer than they actually use them, so simply LRU evicting objects from the object store when it’s full can prevent the application from failing.
Please note that relying on this is not recommended - instead, if possible you should try to remove references as they’re no longer needed in your application to free space in the object store.
To enable LRU eviction when the object store is full, initialize ray with the
lru_evict option set:
ray.init(lru_evict=True)
ray start --lru-evict
Memory Quotas¶
You can set memory quotas to ensure your application runs predictably on any Ray cluster configuration. If you’re not sure, you can start with a conservative default configuration like the following and see if any limits are hit.
For Ray initialization on a single node, consider setting the following fields:
ray.init( memory=2000 * 1024 * 1024, object_store_memory=200 * 1024 * 1024, driver_object_store_memory=100 * 1024 * 1024)
For Ray usage on a cluster, consider setting the following fields on both the command line and in your Python script:
Tip
200 * 1024 * 1024 bytes is 200 MiB. Use double parentheses to evaluate math in Bash:
$((200 * 1024 * 1024)).
# On the head node ray start --head --redis-port=6379 \ --object-store-memory=$((200 * 1024 * 1024)) \ --memory=$((200 * 1024 * 1024)) \ --num-cpus=1 # On the worker node ray start --object-store-memory=$((200 * 1024 * 1024)) \ --memory=$((200 * 1024 * 1024)) \ --num-cpus=1 \ --address=$RAY_HEAD_ADDRESS:6379
# In your Python script connecting to Ray: ray.init( address="auto", # or "<hostname>:<port>" if not using the default port driver_object_store_memory=100 * 1024 * 1024 )
For any custom remote method or actor, you can set requirements as follows:
@ray.remote( memory=2000 * 1024 * 1024, )
Concept Overview¶
There are several ways that Ray applications use memory:
- Ray system memory: this is memory used internally by Ray
Redis: memory used for storing task lineage and object metadata. When Redis becomes full, lineage will start to be be LRU evicted, which makes the corresponding objects ineligible for reconstruction on failure.
Raylet: memory used by the C++ raylet process running on each node. This cannot be controlled, but is usually quite small.
- Application memory: this is memory used by your application
Worker heap: memory used by your application (e.g., in Python code or TensorFlow), best measured as the resident set size (RSS) of your application minus its shared memory usage (SHR) in commands such as
top. The reason you need to subtract SHR is that object store shared memory is reported by the OS as shared with each worker. Not subtracting SHR will result in double counting memory usage.
Object store memory: memory used when your application creates objects in the objects store via
ray.putand when returning values from remote functions. Objects are LRU evicted when the store is full, prioritizing objects that are no longer in scope on the driver or any worker. There is an object store server running on each node.
Object store shared memory: memory used when your application reads objects via
ray.get. Note that if an object is already present on the node, this does not cause additional allocations. This allows large objects to be efficiently shared among many actors and tasks.
By default, Ray will cap the memory used by Redis at
min(30% of node memory, 10GiB), and object store at
min(10% of node memory, 20GiB), leaving half of the remaining memory on the node available for use by worker heap. You can also manually configure this by setting
redis_max_memory=<bytes> and
object_store_memory=<bytes> on Ray init.
It is important to note that these default Redis and object store limits do not address the following issues:
Actor or task heap usage exceeding the remaining available memory on a node.
Heavy use of the object store by certain actors or tasks causing objects required by other tasks to be prematurely evicted.
To avoid these potential sources of instability, you can set memory quotas to reserve memory for individual actors and tasks.
Heap memory quota¶
When Ray starts, it queries the available memory on a node / container not reserved for Redis and the object store or being used by other applications. This is considered “available memory” that actors and tasks can request memory out of. You can also set
memory=<bytes> on Ray init to tell Ray explicitly how much memory is available.
Important
Setting available memory for the node does NOT impose any limits on memory usage unless you specify memory resource requirements in decorators. By default, tasks and actors request no memory (and hence have no limit).
To tell the Ray scheduler a task or actor requires a certain amount of available memory to run, set the
memory argument. The Ray scheduler will then reserve the specified amount of available memory during scheduling, similar to how it handles CPU and GPU resources:
# reserve 500MiB of available memory to place this task @ray.remote(memory=500 * 1024 * 1024) def some_function(x): pass # reserve 2.5GiB of available memory to place this actor @ray.remote(memory=2500 * 1024 * 1024) class SomeActor(object): def __init__(self, a, b): pass
In the above example, the memory quota is specified statically by the decorator, but you can also set them dynamically at runtime using
.options() as follows:
# override the memory quota to 100MiB when submitting the task some_function.options(memory=100 * 1024 * 1024).remote(x=1) # override the memory quota to 1GiB when creating the actor SomeActor.options(memory=1000 * 1024 * 1024).remote(a=1, b=2)
Enforcement: If an actor exceeds its memory quota, calls to it will throw
RayOutOfMemoryError and it may be killed. Memory quota is currently enforced on a best-effort basis for actors only (but quota is taken into account during scheduling in all cases).
Object store memory quota¶
Use
@ray.remote(object_store_memory=<bytes>) to cap the amount of memory an actor can use for
ray.put and method call returns. This gives the actor its own LRU queue within the object store of the given size, both protecting its objects from eviction by other actors and preventing it from using more than the specified quota. This quota protects objects from unfair eviction when certain actors are producing objects at a much higher rate than others.
Ray takes this resource into account during scheduling, with the caveat that a node will always reserve ~30% of its object store for global shared use.
For the driver, you can set its object store memory quota with
driver_object_store_memory. Setting object store quota is not supported for tasks.
Questions or Issues?¶
If you have a question or issue that wasn’t covered by this page, please get in touch via on of the following channels:
[email protected]: For discussions about development or any general questions and feedback.
StackOverflow: For questions about how to use Ray.
GitHub Issues: For bug reports and feature requests. | https://docs.ray.io/en/latest/memory-management.html | 2020-07-02T16:10:02 | CC-MAIN-2020-29 | 1593655879532.0 | [] | docs.ray.io |
Custom Files Configuration
This is an example configuration for this hack with comments explaining how it works.
This configuration file goes into
CustomFiles.ini in the root of your mod.
We do not recommend copying this entire example into your mod. We recommend only using what is necessary.
; CustomFiles.ini ; Occlude files, redirect files and handle files with Lua scripts. ; [Miscellaneous] Section ; OccludedPath: Occlude a file from the game's view, this will make the game think it doesn't exist. Be careful with this. Repeat for each file. ; [PathRedirections] Section: Set Path Redirections for Files ; PATH_TO_FILE: Set a path to another file, this will tell the game to use that one instead of the original. Repeat for each file. ; [PathHandlers] Section: Set Handlers for Files ; PATH_TO_FILE: Set a path to a Lua script, this will allow that script to handle the file. Repeat for each file. ; Notes ; * Wildcard can be used in a path to indicate anything of any length. ; ? Wildcard can be used in a path to indicate anything of a set length. ; OccludedPath is best used to hide introduction movies as the game will simply carry on if they are not found. [Miscellaneous] ; Hide the Vivendi Universal Games intro movie. OccludedPath=movies\\vuglogo.rmv [PathRedirections] ; Redirect all License screens to this new one. art\\frontend\\dynaload\\images\\license\\*.p3d=art\\license\\license.p3d [PathHandlers] ; Handle famil_v with our script over there. art\\cars\\family_v.p3d=Resources/scripts/handlers/famil_v.lua | https://docs-old.donutteam.com/books/lucas-simpsons-hit-run-mod-launcher/page/custom-files-configuration | 2020-07-02T17:53:41 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs-old.donutteam.com |
If your stack is service-oriented or relies on microservices, there are some best practices and special considerations for implementing Full Stack that you should understand.
Service-oriented sites have two implementation options: use Optimizely as a service or include the Optimizely SDK in every service. This topic describes the advantages and disadvantages of each option.
If your stack is service oriented and you would benefit from a centralized decision service, we highly recommend using Optimizely Agent, an Optimizely developed open-source service running the Go SDK in a docker container with custom configuration options. Optimizely Agent exposes HTTP endpoints that map to all the Full Stack SDK functions.
This approach has the advantage of centralizing challenging tasks like datafile management and event dispatching. You implement those solutions one time, and your team doesn’t need to worry about them when they start testing. Handling the datafile and event dispatches to Optimizely in only one place reduces the overall setup and maintenance effort.
The disadvantage of this approach is that a network call is required to determine whether to activate an experiment or to grant a user access to a feature. Communicating with Optimizely Agent has higher latency than using a Full Stack SDK directly in your service(s).
In this approach, you would include instances of the Optimizely SDK in every service and rely on the deterministic and stateless nature of the SDKs to show users the same experience throughout each service. If you choose this option, you need to synchronize datafiles across all instances of the SDKs.
The primary advantage of this option that it preserves near-zero latency decisions by the SDK. It is more performant than option 1 because all the decisions are made in memory without the need for a network call to a service. This option is also more configurable: each team sets up the SDK as best fits their implementation.
The disadvantage of this option is that teams must implement their SDK themselves, and maintenance costs increase. For example, if Optimizely releases a new SDK, you’ll need to update the SDK in every service where you have it installed.
Note
Although this approach requires synchronizing the datafile across your services, most customers find that they don’t need to enforce exact synchronization everywhere. As long as each implementation has a relatively short cache expiration, occasional brief discrepancies are okay.
Updated 2 months ago | https://docs.developers.optimizely.com/full-stack/docs/microservices | 2020-07-02T19:04:03 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.developers.optimizely.com |
Making Workflow Steps Conditional
You can specify that workflow steps are executed only when specified conditions are met. Setting conditions can help you:
- Make a workflow more flexible — for example, you might want to install an anti-virus program only on Windows VMs.
- Add
In this topic:
Available operators
Note: Operators must be lower case.
Example:
"#{target.guestOS}" -contains Microsoft
Example: Conditional approval workflow
To create a workflow that triggers the approval process only if the monthly cost is greater than 100 USD:
- Go to Configuration > Self-Service.
- Click the Approval tab.
- Create an approval workflow for new service requests or change requests.
- On the Assigned Groups page, decide whether to apply this workflow globally or to specific users or groups.
- In the step details section:
- Add a Send Approval Email step to the workflow.
- For Step Execution, select Execute when conditions are met.
- Click Edit and then enter the following condition in the editor:
"#{request.cost.monthly}" -gt 100
- In the Address List field, enter one or more email addresses (semicolon-separated).
- Customize the Email Body as required.
Example: Making a workflow step conditional on a previous step).
- Some other action.
A Perform Power Action step, with the action Start, with a condition to run only if Step 1 did not run. The condition would read:
"#{steps[1].skipped}" -eq true
Example: A single workflow covering multiple service types
Commander supports multiple service types, including:
- VM
- Virtual Service (vApp)
- Database
- Load Balancer
- Auto Scaling Group
- Application Stack
You can use the "
#{target.type}" variable to run a workflow step only on a particular service type. This variable allows you to support multiple service types in a single workflow.
For example, to specify that a step that applies only to load balancers, enter the following condition:<![CDATA[ ]]>
"#{target.type}" -eq "Load Balancer"
The
#{target.type} variable is supported in the following workflow types:
- Approval workflows for change requests
- Component-level completion workflows
- Command workflows
More example step conditions
More complex examples using Boolean operators
Conditional Send Approval Email)
Conditional Perform Power Action step
We want to power off certain running VMs. In each of the following examples, we're adding a Perform Power Action step with the action Stop, and making the step conditional.
Power off running AWS VMs:
("#{target.state}" -eq running) -and ("#{target.cloudAccount.type}" -eq amazon_aws)
Power off running VMs, unless they're Azure VMs:
("#{target.state}" -eq running) -and (-not ("#{target.cloudAccount.type}" -eq ms_azure))
Power off running AWS and VMware VMs:
("#{target.state}" -eq running) -and ("#{target.cloudAccount.type}" -eq amazon_aws) -or ("#{target.cloudAccount.type}" -eq vc)
Troubleshooting
When you encounter a syntax error, check for the following first, and then review the next section for more information:
- mismatched and misplaced parentheses
- an operator without a hyphen preceding it
- unclosed quotes
Syntax rules
- Invalid conditions are evaluated to false.
- You must enclose each variable in quotation marks.
- When using Boolean operators, you must enclose each condition in parentheses. For example:
("#{target.state}" -eq running) -and ("#{target.cloudAccount.type}" -eq amazon_aws)
- Parameters with spaces must be enclosed in double quotation marks. For example:
"2019/03/12 08:30"
- Because conditions are parsed after variable substitution has occurred, if a variable value will contain spaces, the entire variable must be enclosed in double quotation marks. For example:
"#{request.requestedServices[1].components[1].description}"
- No escaping of characters is supported; escape characters are passed as is.
- An empty string "" is used to indicate null or not set.
- If
- A)
- Accuracy of date matching is determined by the specified date.
Example 1:
"#{target.settings.expiryDate}" -gt 2012/01/11would
- For strings and enumerations:
All string comparisons are case-insensitive.
If the data type of a Commander variable is an enumeration, then the list of string choices is limited.
- If.
- If the left side of an expression can't immediately be interpreted by Commander and the right side of the expression is a date, the left side will also be evaluated as a date. For example:
"#{any.numeric.property}" -eq 2010/01/01
This expression would be evaluated as two dates. | https://docs.embotics.com/commander/set_condition.htm | 2020-07-02T17:54:17 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.embotics.com |
WiLogUtl.exe does not work with Windows Installer 3.1 log files:
- Open the verbose log file in a text editor such as notepad
- Change the version number in the first line of the log file from 3.01.xxxx.xxxx to 3.00.xxxx.xxxx
- Save and close the verbose log file
- Run wilogutl.exe again and load the log file. | https://docs.microsoft.com/en-us/archive/blogs/astebner/wilogutl-exe-does-not-work-with-windows-installer-3-1-log-files | 2020-07-02T20:23:53 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Property Builder
The Property builder is divided into three sections: Chart, Series and Appearance.
The first section allows you to choose the type of chart you will be creating, the type of axes for the chart and the palette.
Figure 1: Chart Section
The second section is labeled Series. Here is where you can add, remove and setup series. You can add new series through the split button below the list with currently added series. You can also remove or copy a series. The latter would make a copy of the series along with all the data points. Here you can also specify the text that will appear in the legend should you decide to show one.
There are two ways you can provide data to RadChartView. The first is “Unbound mode” and allows you to enter the data manually through the grid view. You can enter data quickly by using the Enter or Tab keys. When you enter data in a cell you can press either key and the focus will be transferred to the next cell or if you are on the last cell, data will be committed and the focus will go to the first cell in the add new row.
Figure 2: Series Unbound Mode
The second mode is “Bound mode” where you provide data to RadChartView from some data source. After changing the Data bound mode the UI will change accordingly allowing you to choose a data source and set up DataMember. The number and names of the drop downs change in accordance to the type of the series currently selected.
Figure 3: Series Bound Mode
Figure 4: Appearance Section
The third section allows you to setup different options like grid, axes labels, series labels and the interactive features of RadChartView like the pan and zoom, trackball, title and legend.
| https://docs.telerik.com/devtools/winforms/controls/chartview/design-time/property-builder | 2020-07-02T18:07:12 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['images/chartview-propety-builder001.png',
'chartview-propety-builder 001'], dtype=object)
array(['images/chartview-propety-builder002.png',
'chartview-propety-builder 002'], dtype=object)
array(['images/chartview-propety-builder003.png',
'chartview-propety-builder 003'], dtype=object)
array(['images/chartview-propety-builder004.png',
'chartview-propety-builder 004'], dtype=object)] | docs.telerik.com |
You may want to manually seal the Secret Store to protect its contents from an intruder. Sealed Secret Stores cannot be accessed from the web interface. Secret values cannot be retrieved using the Secrets API. Services that depend on values in the Secret Store may fail to deploy.
To seal the Secret Store, complete the following steps to seal a single instance of
dcos-secrets. If the cluster URL obtained through
dcos config show core.dcos_url points to a load balancer and there is more than one master node in the cluster, then these steps should be issued against each master node instead, and the cluster URL should be changed to the address of individual master nodes.
The intended status of the seal is persisted, so after sealing the store the restart of
dcos-secrets will not unseal it - only the steps depicted in unsealing the store will..
From a terminal prompt, check the status of the Secret Store via the following command.
curl --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/secrets/v1/seal-status/default
The Secret Store service should return a response similar to this:
{"sealed":false,"threshold":1,"shares":1,"progress":0}
If the value of
"sealed"is
true, do not complete the rest of this procedure. If the Secret Store is already sealed, you cannot seal it again.
Use the following command to seal the Secret Store.
curl -X PUT --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/secrets/v1/seal/default
Confirm that the Secret Store was sealed with this command.
curl --cacert dcos-ca.crt -H "Authorization: token=$(dcos config show core.dcos_acs_token)" $(dcos config show core.dcos_url)/secrets/v1/seal-status/default
It should return the following JSON:
{"sealed":true,"threshold":1,"shares":1,"progress":0} | http://docs-staging.mesosphere.com/mesosphere/dcos/1.11/security/ent/secrets/seal-store/ | 2020-07-02T19:51:24 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs-staging.mesosphere.com |
IT operations management integrations for cloud monitoring
BMC Helix Multi-Cloud Service Management provides integration with IT operations management (ITOM) vendors such as Azure Monitor and creates incidents in Remedy ITSM to track the availability, consistency, reliability and quality of your services.
BMC Helix Multi-Cloud Service Management cloud monitoring feature enables you to monitor your Azure resources by receiving the Azure alerts metadata through the webhook option. BMC Helix Multi-Cloud Service Management currently supports Metric alerts and Activity log alerts monitoring data. BMC Helix Multi-Cloud Service Management receives the Azure alerts metadata (JSON) and creates incidents in Remedy ITSM based on the set of flows, connectors, connector targets, and vendor data that you configure in BMC Helix Multi-Cloud Service Management.
Currently the following features are supported:
- Incidents are created in Remedy ITSM based on the Azure alerts.
- Incidents are updated in Azure Monitor based on updates to Remedy ITSM incident.
- Azure alerts or events information is displayed in the Remedy with Smart IT interface.
Example: A ticket is created in Remedy ITSM, whenever a virtual machine in Microsoft Azure has CPU consumption greater than 70% and server response time more than 4 hours.
Using BMC Helix Multi-Cloud Service Management without Smart IT
You can integrate BMC Helix Multi-Cloud Service Management with ITOM vendors without using Smart IT. Instead of the Smart IT console, you can use Remedy Mid-Tier to view incidents. When working without Smart IT, you cannot view the vendor ticket details. However, you can view the work logs to verify that tickets are being brokered.
Ticket creation with BMC Helix Multi-Cloud Service Management
BMC Helix Multi-Cloud Service Management uses connectors, flows, and processes to create Remedy ITSM tickets from monitoring service notifications. Whenever the monitoring service fires notifications, the flow created between Azure Alerts connector and Multi-Cloud connector is triggered and BMC Helix Multi-Cloud Service Management receives the ticket details. From BMC Helix Multi-Cloud Service Management, the data is passed on to Remedy ITSM using the process defined in BMC Helix Platform. The following diagram illustrates the data flow between the applications.
Cloud monitoring service notification to Remedy ITSM incident
Connector: A connector is an integration (connection point) with a BMC application or a third-party application. Connectors are configured in BMC Helix Integration Service.
Flow: A flow is a connection between two connectors that enables you to accomplish a certain task. A triggering event in the source application causes an action to take place in the target application. Flows are configured in BMC Helix Integration Service.
Process: A process uses rules and actions to implement the business logic for a given business use case. Processes are configured in BMC Helix Platform.
The out-of-the-box configuration requires minimal changes for most scenarios. As an administrator, you can customize the configuration based on your organization's requirements.
How incidents are consolidated into Remedy ITSM
A Remedy ITSM incident is created when an Azure alert is fired and meets the trigger condition defined in the flow.
The following diagram illustrates how BMC Helix Multi-Cloud Service Management uses connectors, flows, and processes when a vendor ticket is created:
Where to go from here
To configure ITOM integration with Azure Monitor, see Enabling prebuilt integration with Azure Monitor. | https://docs.bmc.com/docs/multicloudprevious/it-operations-management-integrations-for-cloud-monitoring-919898178.html | 2020-07-02T19:45:49 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
ARVRController¶
Inherits: Spatial < Node < Object
A spatial node representing a spatially-tracked controller.
Description¶
This is a helper spatial node that is linked to the tracking of controllers. It also offers several handy passthroughs to the state of buttons and such on the controllers.
Controllers are linked by their ID. You can create controller nodes before the controllers are available. If your game always uses two controllers (one for each hand), you can predefine the controllers with ID 1 and 2; they will become active as soon as the controllers are identified. If you expect additional controllers to be used, you should react to the signals and add ARVRController nodes to your scene.
The position of the controller node is automatically updated by the ARVRServer. This makes this node ideal to add child nodes to visualize the controller.
Signals¶
Emitted when a button on this controller is pressed.
Emitted when a button on this controller is released.
Emitted when the mesh associated with the controller changes or when one becomes available. Generally speaking this will be a static mesh after becoming available.
Property Descriptions¶.
The degree to which the controller vibrates. Ranges from
0.0 to
1.0 with precision
.01. If changed, updates ARVRPositionalTracker.rumble accordingly.
This is a useful property to animate if you want the controller to vibrate for a limited duration.
Method Descriptions¶
If active, returns the name of the associated controller if provided by the AR/VR SDK used.
- TrackerHand get_hand ( ) const
Returns the hand holding this controller, if known. See TrackerHand.Server.
If provided by the ARVRInterface, this returns a mesh associated with the controller. This can be used to visualize the controller.
Returns
true if the button at index
button is pressed. See JoystickList, in particular the
JOY_VR_* constants. | https://docs.godotengine.org/zh_CN/latest/classes/class_arvrcontroller.html | 2020-07-02T19:10:43 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.godotengine.org |
Path to Purchase
The path customers take that leads to a sale is sometimes called the path to purchase. In this quick tour, we take a look at pages of strategic value that customers usually visit while shopping in your store. We also consider different store features that can be leveraged at each stage of the customer journey.
Sample Luma storefront
- Your home page is like the front window display of your store. As the primary landing page, its design entices visitors to come inside for a closer look.
- Catalog Page
- The Catalog page shows products from your catalog in either a list or grid format. Customers can make selections based on a category they choose from the main menu, by using the layered navigation on the left or from the results of a search. They can examine the item in more detail or place it directly into the shopping cart.
- Search Results
- Did you know that people who use search are nearly twice as likely to make a purchase as those who rely on navigation alone? You might consider these shoppers to be pre-qualified.
- Product Page
- The product page provides detailed information about a specific item in your catalog. Customers can read reviews, add the product to their wish lists, compare it to other products, share the link with friends, and most importantly, place the item into their shopping carts.
- Shopping Cart
- The shopping cart lists each item by price and quantity selected, and calculates the subtotal. Shoppers can apply discount coupons, and generate an estimate of shipping and tax charges. | https://docs.magento.com/user-guide/quick-tour/path-to-purchase.html | 2020-07-02T19:36:28 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.magento.com |
Deploying the 2007 Office system by using System Center Configuration Manager:
· Microsoft Systems Management Server (SMS) 2003 or System Center Configuration Manager 2007
· Office Customization Tool (OCT)
· 2007 Microsoft Office system, Microsoft Office 2003, or Microsoft Office XP
Introduction to the deployment process System Center Configuration Manager infrastructure. It also includes procedural information of the steps for deploying 2007 Microsoft Office Enterprise by using System Center Configuration Manager.
Network environment
The test network environment for this article is shown in the following illustration...
Procedural steps for deploying the 2007 Office system by using System Center Configuration Manager
In this example, deployment scenario detailed information is provided for deploying 2007 Office Enterprise in the previously defined System Center Configuration Manager test environment. By following these steps, you can use System Center Configuration Manager, you use the command line setup.exe /admin to start the Office Customization Tool. In this example, using System Center Configuration Manager to deploy the 2007 Office system, at a command prompt you run setup.exe /admin from the package source directory, \SCCMOffice2007Enterprise.
To ensure that the 2007 Office system is silently installed, you need toOffice. create a source directory for the package.
For more information about collections, see “Collections in Configuration Manager” located at the following link:.
Create a package source directory
The package source folder contains all the files and subdirectories needed to run the programs in a package. In this example, the source directory is \SCCMOffice2007Enterprise,Office2007Enterprise.:. | https://docs.microsoft.com/en-us/archive/blogs/office_resource_kit/deploying-the-2007-office-system-by-using-system-center-configuration-manager | 2020-07-02T18:52:22 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.microsoft.com |
Contents
In the StreamBase expression language, function is a data type. This sample EventFlow module demonstrates three ways to apply an algorithm to data on a stream.
StreamBase expression language functions can be defined as constant expressions or passed on a stream. As a StreamBase data type, functions return a declared data type, and can accept any other data type as arguments. In this sample, there is one double argument and a double result.
This sample uses Map operators, in which the same degrees Celsius to degrees Fahrenheit conversion expression is applied in three ways:
For comparison, as an expression defined without a function, in the
UseLocalFnMap operator.
The same expression is defined as a module function in the Definitions tab, and called in the
UseModuleFnMap operator.
The same expression is defined as a function field in the
DefineFnoperator, then called by its field name in the
UsePassedFnoperator.
In StreamBase Studio, import this sample with the following steps:
From the top-level menu, select> .
Enter
function datato narrow the list of options.
Select Using the function data type sample from the Data Constructs
function.sbappmodule. Make sure the module is the currently active tab in the EventFlow Editor.
Click the
Run button. This opens the SB Test/Debug perspective and starts the module.
As the server starts, StreamBase Studio switches to the SB Test/Debug perspective.
Wait for the Waiting for fragment to initialize message to clear.
Select the Manual Input tab.
Enter a value for a temperature in
temp_celin the
celsiusinput stream and click .
Observe that the
temp_f_expr,
temp_f_fcn, and
temp_f_streamoutput values are identical._function
See Default Installation
Directories for the default location of
studio-workspace on your system. | https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/samplesinfo/Function.html | 2020-07-02T18:19:51 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.streambase.com |
Data source failed and an error (error 10999) box pops up saying Keyword not supported: 'dsn' as shown in the figure below.
Cause: Appeon does not support using ODBC Driver to connect with the SQL Server database.
Solution: To resolve this, go to AEM, and change the ODBC driver to the Native Driver for SQL Server. | https://docs.appeon.com/2015/appeon_troubleshooting_guide/ch04s03s01.html | 2020-07-02T18:27:15 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.appeon.com |
Creating a Media Library
Service Portal and Commander users can upload ISO and FLP files to a organization-specific media library and to global media folders, depending on the their permissions. Users can then connect these files to vCenter VMs with CD drives. For example, if a user wants to install antivirus software on a vCenter VM, they can connect an ISO file on the datastore that contains installation files to a CD drive.
You can isolate the datastores and folders that organization members may browse when connecting media. For service providers, this allows you to ensure data isolation for each of your customers. You can create a media library that includes both:
- global folders, which can be seen by all Service Portal users
- organization-specific folders, which can be seen only by members of the assigned organization
In this topic:
See also Connecting Media to vCenter VMs.
Prerequisites
Datastore access
For users to be able to upload files to a media folder, the datastore must be mounted to the host with read/write access.
For users to be able to connect media files to a VM's virtual hardware device, the datastore where the media file is stored must be mounted to the VM's host.
Relevant Service Portal permissions
The following Service Portal Permissions control the ability to connect, upload and delete files in the media library:
Security
The Media Manager is a pop-up window that allows Service Portal and Commander users to upload and delete media files. The Media Manager must stay open during the Upload phase, but the user doesn't need to stay signed in while the upload is in progress. The Media Manager can be closed during the Transfer phase.
The following actions require an active (logged-in) Service Portal or Commander session:
- Browse media folder contents
- Upload a media file
- Delete a media file
- Cancel a file upload
The following actions don't require an active Service Portal or Commander session:
- Pause, resume or retry upload for files in the upload queue
- Clear completed or failed files from the upload queue
Quota and media management
When quota is configured for an organization, media files in an organization's media folder can be included in quota calculations. When media files are included in quota calculations, if an upload would exceed an organization's quota, the Service Portal user is prevented from uploading a file. Commander users, by contrast, are not prevented from uploading files in this situation, but files uploaded to an organization's media folder by Commander users do count towards the organization's quota.
Note that media folders assigned to multiple organizations consume quota from all assigned organizations.
Both resource quota and cost quota are supported for media files. In the case of a cost quota, costs are determined by the cost model assigned to the datacenter where the datastore is located.
Note: If you've implemented the Cost Adjustments feature to apply markups and discounts to your costs, cost quota calculations will use adjusted costs.
Global media folders are never included in quota calculations. Member quotas are not affected by media files.
To include media files in quota calculations, make sure that the storage tier assigned to the datastore where the media files are located is included in the organization's quota. If you want Service Portal users to see their media quota separately in the Service Portal dashboard, put the media files on a separate storage tier and name the tier Media, for example.
To exclude media files from quota calculations, assign a specific storage tier to the datastore where the media files are located. Then, exclude this storage tier from the organization's quota.
See Per-tier storage quota and Setting Storage Tiers for more information.
Creating a media library
To allow Service Portal and Commander users to upload files and connect virtual hardware, you create a media library. Commander allows you to create both global and organization-specific media locations in the media library. Organization-specific media locations ensure that you can segregate media folders and datastore files for each of your organizations.
Note: VMs are only able to access media libraries created under the cloud account on which they reside. This means that if you have a datastore available to multiple vCenters, you must create the media library under each in order for the files to be available to all VMs.
To create a media library:
- On the Media Library page, click Add.
- In the Media Location wizard, on the Name page, provide a name, and an optional description.
- On the Assignment page, choose one of the following:
- Global to make this location available to all users
- Assign this media location to these specific organizations, select an organization in the drop-down menu, and click Add.
Note: If this datastore is later deleted, the media location will have an Alert status.
Uploading files to the media library
Commander users can upload both .iso and .flp files to global and organization-specific media folders.
To upload a file to the Media Library:
- On the Media Library page, select a media folder and click Manage Files.
The Media Manager opens in a separate window. Files previously uploaded to the folder displayed in the Current field are displayed in the top portion of the window, and files that have recently been uploaded, or are queued, are displayed in the Upload Queue portion of the window.
- In the Media Manager, click Upload File.
- In the File Upload dialog, navigate to the folder containing the .iso or .flp file you want to upload.
- Select a file and click Open.
File upload begins, and progress is displayed in the Upload Queue section of the Media Manager.
Important: You must leave the Media Manager window open while the Upload phase is in progress. Once the Transfer phase has begun, it's safe to close the Media Manager window. See Monitoring media file uploads below for more information.
- While a file is being uploaded, you can:
- Click Cancel if you made a mistake and don't want to upload this file
- Click Pause to pause all uploads (before the status in the Upload column is Complete)
- Upload another file
When the Transfer column displays Complete, Service Portal users with access to the target media folder will be able to connect the media file to a VM's virtual hardware.
Monitoring media file uploads
Upload occurs in two phases: upload from the browser to a temporary location on the Commander server, and transfer from the Commander server to the target datastore. The progress of both phases is displayed in the Upload Queue, which appears at the bottom of the Media Manager window.
A task also appears in the Tasks tab at the bottom of the Commander window to indicate that the Transfer phase has begun.
When the Upload phase is complete, the Upload column in the Media Manager displays Complete. Next, the Transfer column displays Pending, and when the transfer begins, the percentage complete is displayed.
When the Transfer column displays Complete, users with access to the target media folder will be able to connect the media file to a VM's virtual hardware.
Note: Clicking Clear in the Media Manager's Upload Queue simply removes a completed entry from the list.
Pausing, resuming and retrying media file uploads
You can pause all in-progress and queued uploads by clicking Pause at the bottom of the Media Manager window. Note that any uploads that are in the second phase of upload (to the datastore) can't be paused, but they can be canceled (see Canceling media file uploads above).
When you're ready to resume upload, click Resume.
If a blue Retry link is displayed in the Upload Queue, clicking Retry will resume from where it left off.
If an error is displayed because an upload was interrupted, such as an interrupted network connection, closing the Media Manager window or a power interruption, you must upload the file again.
Canceling media file uploads
You can cancel an upload during both the Upload and Transfer phases from the Media Manager.
Note: You can also cancel a media file upload in the Transfer phase by right-clicking the task in the Tasks tab at the bottom of the Commander window and selecting Cancel.
When a media file upload is canceled, the incomplete upload and quota reservation are cleared immediately.
Refreshing the information for media folders
When you select a media folder in the Media Library list, information is displayed, such as its size, cost, and location. These details are refreshed nightly, as well as after a file upload or deletion. To refresh the information immediately, select a media folder in the list and click Refresh Details.
You can cancel a Refresh Media Folder Details task when it's in progress by right-clicking it in the Tasks tab at the bottom of the Commander console and selecting Cancel.
If the target datastore can't be checked, the Last Refresh property has a value of Unknown (for example, the root Fileshare datastore in a vCenter cloud account can't be checked).
Deleting media folders
To delete a media folder, select it in the Media Library list and click Delete.
Note: When one or more files are in the process of being transferred to the media folder, the Delete Media Folder task is queued until transfer is complete. If you need to delete a media folder immediately, cancel any Upload Media File tasks for the folder (see Canceling media file uploads below).
Deleting files from the media library
Note: Note that you can't delete a file that's currently connected to a running VM.
To remove a file from the Media Library:
- On the Media Library page, select a media folder and click Manage Files.
The top portion of the Media Manager window displays a list of files in the current folder.
- Select a file in the list and click Delete File.
Troubleshooting
General troubleshooting information
- The global threshold for preventing deployment to a datastore (80% by default) also controls the threshold for media files. This means that if a media file upload will exceed the global threshold, the upload won't start. See Configuring datastore placement preferences for more information.
- The threshold for space consumed in the temporary upload location is controlled by an advanced system property. If the threshold is exceeded during file upload, the upload will be canceled. To learn more, see Advanced Configuration With System Properties.
- Quota is reserved for the full size of a media file when a user starts an upload.
- If an upload is interrupted (for example, the user closes the Media Manager when uploads are in progress) and quota is configured, the quota isn't released until the regular maintenance task clears the incomplete upload.
Alert status on media folder
If the cloud account, datastore, or path has an alert status, this is indicated with a warning icon in the Issues column.
The following conditions can cause an alert:
- The folder doesn't exist on the datastore.
- The datastore is inaccessible, has been removed, or is in maintenance mode.
- The cloud account has been removed or disconnected.
Users attempting to upload files in the Media Manager can't see folders with an alert status. | https://docs.embotics.com/commander/media_library.htm | 2020-07-02T18:10:10 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['Images/media-mgr-cancel.png', 'Upload Queue Upload Queue'],
dtype=object) ] | docs.embotics.com |
Overview
RadRangeSelector provides an elegant solution for end-users to select range (in percentages) and these percentages could be mapped to any kind of visually represented data. Developers can easily set the associated object that will be used as background of RadRangeSelector. The associated object should confront with specific interfaces thanks to which it will be able to communicate with RadRangeSelector.
Currently, RadRangeSelector works out of the box together with RadChartView.
Figure 1: RadRangeSelector
| https://docs.telerik.com/devtools/winforms/controls/rangeselector/overview | 2020-07-02T20:05:59 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['images/rangeselector-overview001.png',
'rangeselector-overview 001'], dtype=object)] | docs.telerik.com |
Properties and servers
When you perform many actions in BMC Server Automation, you can use server properties to specify information that is likely to change from server to server.
Each server inherits a set of properties defined for the Servers property class in the Property Dictionary. Using the Property Dictionary, you can specify the properties that should be inherited by all servers (see Property Dictionary overview). At the server level, you can change the value of these properties to match the configuration and function of each server (see Setting values for system object properties).
For example, if you are deploying an Apache server to various platforms, you can specify a different installation directory for each platform by defining a property in the Property Dictionary that represents the installation directory. For Windows servers, you could set the value of that property to /c/Program Files/Apache. For UNIX servers you could set the property to /usr/local/Apache.
You can also use properties to organize servers into smart groups (see Defining a smart group). For example, you can create a property in the Property Dictionary called Owner, and then assign different values for that property to different servers. If some servers have Owner set to QA and others have Owner set to Development, then smart groups can automatically group QA servers into one group and Development servers into another.
Some server properties are considered intrinsic, meaning the property is derived from the nature and configuration of a server, such as the server's name or root directory. Some intrinsic properties are editable. For a list of these, see Editable intrinsic properties for servers.
After you add a server, you may change some of the server's intrinsic values by doing various things to the server itself. For example, you may apply a new patch to the server, which would change the value of its patch level property. If you want BMC Server Automation to retrieve and display new property values from the server, you need to update the server's properties, as described in Server properties - update concepts.
For more information about properties and servers, see the following topics: | https://docs.bmc.com/docs/ServerAutomation/86/using/managing-servers/properties-and-servers | 2020-07-02T19:23:47 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.bmc.com |
setAgentLocation
A Workflow Engine function that sets the Agent Location field of the alert.
This function is available as a feature of the Workflow Engine v1.2 download and later.
This function is available for event, alert, and enrichment workflows.
Back to Workflow Engine Functions Reference.
Arguments
Workflow Engine function
setAgentLocation takes the following arguments: | https://docs.moogsoft.com/Enterprise.8.0.0/setagentlocation.html | 2020-07-02T19:11:00 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.moogsoft.com |
cinder.interface.volume_snapshot_revert module¶
Revert to snapshot capable volume driver interface.
- class
VolumeSnapshotRevertDriver¶
Bases:
cinder.interface.base.CinderInterface
Interface for drivers that support revert to snapshot.
revert_to_snapshot(context, volume, snapshot)¶
Revert volume to snapshot.
Note: the revert process should not change the volume’s current size, that means if the driver shrank the volume during the process, it should extend the volume internally.
- Parameters
context – the context of the caller.
volume – The volume to be reverted.
snapshot – The snapshot used for reverting. | https://docs.openstack.org/cinder/latest/contributor/api/cinder.interface.volume_snapshot_revert.html | 2020-07-02T19:44:09 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.openstack.org |
Contents
Starting with release 10.3.0, StreamBase and LiveView use a unified security model in which the same set of security configuration settings manage:
Client communication with a running StreamBase server.
Client communication with a running LiveView server.
All communication to and between nodes.
Administration of nodes.
All StreamBase security is applied to nodes. You configure security with HOCON configuration files that establish initial settings. Then while a node is running, you can add to, subtract from, activate, deactivate, and display security settings with administration commands.
All security configuration takes place within the context of one or more security realms.
Even without configuration, there is always a security realm active. To ease development, default settings allow you to install, start, and administer nodes on your local machine without considering security issues. However, as soon as you run a node on another machine or on another network, you must plan the security configuration of such a node to allow you to communicate with it.
A simple list of trusted hosts allows you to extend the default security settings for continued ease of development in a larger network. Later, if you configure an advanced security realm, a list of trusted hosts can be used as an added restriction on top of the realm's restrictions.
The Local Authentication Realm is foundational and the simplest to configure. Use this realm to establish a list of initial users and their roles. Establish the privileges assigned to each user or role with a Role to Privileges Mapping configuration.
You can encrypt sensitive information in configuration files, such as passwords and JDBC URIs. You establish a node-level master secret, then use an epadmin command to encrypt individual strings.
There is support for three enterprise level security realms: LDAP (including Active Directory), OIDC, and Kerberos. A node can have multiple active realm configurations (except for Kerberos).
Independent of realm, you can enable TLS secure transport, using either server only or mutual client-server TLS.
You can reference the name of a configured realm in other HOCON configuration files. The epadmin command has several security related targets that let you upload and activate replacement configurations to a running node.
Pages that discuss the StreamBase security model might use the following acronyms or terms:
- AuthN
Authentication. Are you who you say you are?
- AuthZ
Authorization. Now that you're here, what are you allowed to do?
- LDAP
Lightweight Directory Access Protocol, an open, vendor-neutral industry standard protocol for accessing and maintaining distributed directory services over an IP network. Microsoft's Active Directory product is an extension of the LDAP protocol.
- Distinguished Name (DN)
A naming specification used by LDAP to uniquely name objects such as user and role objects in an LDAP database.
- SSO
Single Sign-On. An authorization mechanism that allows users to identify themselves one time, and then use that identification in many places without needing to provide credentials again.
- OIDC
OpenID Connect. An HTTP-based standard for SSO using third-party identity providers.
- JWT
A JSON Web Token, used by OIDC to carry information about an authenticated user.
- Kerberos
Another SSO specification. StreamBase supports HTTP-based Kerberos only.
- KDC
A Kerberos Key Distribution Center, with which Kerberos clients communicate when requesting and validating Kerberos tickets.
- GSS-API
Generic Security Services API. An API implemented by the JRE that provides Kerberos client and server authentication services, among many other things.
- SPNEGO
Simple and Protected Negotiation. An HTTP-based protocol for negotiating authentication protocols. StreamBase uses the Simple and Protected GSS-API Negotiation mechanism to provide Kerberos authentication services.
- JAAS
Java Authentication and Authorization Service, a collection of APIs implemented by the JRE for authN and authZ. | https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/admin/sec-intro.html | 2020-07-02T19:39:25 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.streambase.com |
Calendar
What is the calendar component?
The calendar component enables you to view records from a data table that contains a date type field on a calendar.
Why use calendar?
When working with date and times you can display these records in a calendar to see all your appointments at a glance.
When should you use calendars?
Anytime you're working with dates you can likely use calendars. However, if you're working with dates in combination with resources you might be better off using the Calendar Resource component. The calendar componentn is a single dimension component which means you can see all your events with some details about the event and the date and time the event is scheduled for. To track more than one dimension for example, the date time, details as well as the resource the Calendar resource would be more beneficial.
Adding calendars
To add a calendar choose the calendar component and then choose the data table to be used for this calendar.
Only tables which contain date or date/time field field will be displayed in the list of tables. This includes: Date, Date/Time and Date Range.
Once you've chosen the table, you can choose Quick Add which will add the calendar with some default settings. Alternatively, you can choose Customize to configure all the settings manually.
Choosing QuickAdd is simply a shortcut, you can always go right back into the component to make any changes necessary.
Inside the calendar component you'll see have some standard options available in other components, as well as unique settings only available in the calendar component.
- Data Source
- Calendar Options
Data Source
The Data Source tab is where you can filter which records will be displayed in this component with some 'server-side' filtering. Since this is universal to all components you can view it as its own article here. Learn more about Data Source.
Calendar Options
Under General options you'll have 4 basic options:
- Title
- Description
- Date Field
- Label Field
Title
The title will be the title displayed on top of the calendar.
Description
A description for your reference..
Label Field
This will be the field used to be displayed inside the calendar to reference the record in the calendar. and int he Form View.
Under the Default Options you can choose some default options to be used when the calendar initially loads.
- Default Display Mode
- Default Display Range
- Default Event Color
- Starting Date of the Week
Default Display Mode
This will determine if the default display should be the calendar or list..
Label Color Rules
Add rules to change the colors for each record based on pre-defined criteria.
| https://docs.tadabase.io/categories/manual/article/calendar | 2020-07-02T18:21:51 | CC-MAIN-2020-29 | 1593655879738.16 | [array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-01/scaled-1680-/image-1579542581337.png',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-01/scaled-1680-/image-1579565002904.png',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-01/scaled-1680-/image-1579565299549.png',
None], dtype=object)
array(['https://tadabase-docs-assets.s3.amazonaws.com/uploads/images/gallery/2020-01/scaled-1680-/image-1579565839282.png',
None], dtype=object) ] | docs.tadabase.io |
What next?¶
You have gone through the Quickstart installation: at this point you should have a running Open edX platform. If you don’t, please follow the instructions from the Troubleshooting section.
Logging-in as administrator
Out of the box, Tutor does not create any user for you. You will want to create a user yourself with staff and administrator privileges in order to access the studio. There is a simple command for that.
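For a local (docker-compose) deployment, that command usually looks like the sketch below; the username and email are placeholders and the exact flags can vary between Tutor versions, so check tutor local createuser --help first:

tutor local createuser --staff --superuser yourusername [email protected]

You will then be prompted to set a password interactively.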
Importing a demo course
To get a glimpse of the possibilities of Open edX, we recommend you import the official demo test course. Tutor provides a simple command for that.
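With a local deployment this typically boils down to a single command (a sketch; verify the exact subcommand name with tutor local --help on your version):

tutor local importdemocourse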
Making Open edX look better
Tutor makes it easy to develop and install your own themes. We also provide Indigo: a free, customizable theme that you can install today.
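As a rough sketch, installing and enabling the Indigo theme plugin usually amounts to the following; the package and plugin names are assumptions here, so refer to the Indigo documentation for the authoritative steps:

pip install tutor-indigo
tutor plugins enable indigo
tutor config save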
Adding features
Hacking into Open edX
Tutor works great as a development environment for Open edX developers, both for debugging and developing new features. Please check out the development documentation.
Deploying to Kubernetes
Yes, Tutor comes with Kubernetes deployment support out of the box.
Meeting the community
Ask your questions and chat with the Tutor community on the official community forums: | https://docs.tutor.overhang.io/whatnext.html | 2020-07-02T19:29:36 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.tutor.overhang.io |
Midsummer will mark the high point of the Young Energy Professional Year when the first ever YEP Forum awards evening heads, on 30 June 2015, to Vinopolis on the Capital's South Bank. So, the winners don't have long to wait now as entries, showcasing the best in upcoming energy talent from all over the UK, are coming in to Energy UK's central London offices right now. Anyone working in the energy industry for less than ten years can enter in categories that include leadership, innovation, customer focus, rising star and YEP of the year.
But the evening isn't just about the winners. Even if you aren't entering the competition for an award, why not come along and join the fun? If the current turnout at meetings and events is anything to go by, it's sure to be one of the hottest tickets in town. There will be plenty of time to meet and greet friends and professionals from up and down the country over an excellent three-course dinner and drinks.
But we're not there yet. The Forum has other events planned, and on 21 May 2015 the next outing is an exciting visit north of the Border, with SSE as the hosts in Glasgow. The evening's topical post-election debate will discuss 'How is energy policy affecting the energy industry?'. That hardly gives the dust time to settle after the general election on 7 May but, with politics sure still to be in the public eye, there couldn't be a better time to consider how the various permutations of parties and policies will affect us all.
The YEP Forum is fast becoming the place for the next generation of energy industry professionals to meet and share their knowledge and experiences. It is, above all else, a place to meet and network. Since the first YEP Forum almost two years ago, it has gone from strength to strength. The quarterly events attract large audiences and range from site visits to hotly contested debates, as well as opportunities to hear from current industry leaders themselves.
The YEP Forum helps bring people together and overcomes the view that the energy industry is a club of aging, grey guys, just like your Dad. While it is true that companies tend to hang on to talented staff in a way many other sectors envy, the Forum helps people new to the industry create a vibrant network of their own, building strong relationships, creating opportunities to learn from their peers and, like the awards, underpinning best practice across the industry.
The YEP forum is the place to be, to learn, to network. So, why not join and get involved today?
For more information: | https://www.docs.energy-uk.org.uk/media-and-campaigns/energy-uk-blogs/5227-what-s-to-come-from-the-yep-forum.html | 2020-07-02T18:17:49 | CC-MAIN-2020-29 | 1593655879738.16 | [] | www.docs.energy-uk.org.uk |
Function: ComparePaths
Synopsis
Compares two paths with options to ignore different casing and different slashes.
Syntax
ComparePaths( <path1>, <path2>, [<case_insensitive>, <slash_insensitive>] )
- path1: The first path.
- path2: The second path.
- case_insensitive: Whether or not the comparison is case insensitive. Defaults to true.
- slash_insensitive: Whether or not the comparison is slash insensitive. Defaults to true.
Examples
-- Result is true
local Result = ComparePaths("art\\cars\\famil_v.p3d", "art/cars/famil_v.p3d")

-- Result is false, the capitalization is different.
local Result = ComparePaths("ART\\CARS\\FAMIL_V.p3d", "art/cars/famil_v.p3d", false)

-- Result is false, the slashes are different.
local Result = ComparePaths("ART\\CARS\\FAMIL_V.p3d", "ART/CARS/FAMIL_V.p3d", true, false)
Notes
No additional notes.
History
1.19
- Fixed an issue where this function was always case sensitive and always slash sensitive.
Before 1.18
Changes prior to this version were not adequately tracked. | https://docs-old.donutteam.com/books/lucas-simpsons-hit-run-mod-launcher/page/function-comparepaths | 2020-07-02T18:35:51 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs-old.donutteam.com |
A file parsing error occurs in the 64-bit JEUS server console when deploying Web applications deployed with Appeon 6.5.
Cause: The application configuration file web.xml cannot be parsed by JEUS.
Solution: Step 1: Go to the Web root path that you specified in the Web Server Profile Configuration window in the Appeon Developer configuration tool.
Step 2: Open the WAR file of the Web application, locate the "WEB-INF" folder, and extract web.xml.
Step 3: Modify the following line in web.xml:
<web-app
To
<web-app
Step 4: Use the modified web.xml file to replace the old one under the "WEB-INF" folder.
Step 5: Manually or automatically deploy the WAR file in JEUS console again. | https://docs.appeon.com/2015/appeon_troubleshooting_guide/ch03s04.html | 2020-07-02T17:57:26 | CC-MAIN-2020-29 | 1593655879738.16 | [] | docs.appeon.com |
Data Input Methods
Data input methods allow Cacti to retrieve data to insert into data sources and ultimately put on a graph. There are different ways for Cacti to retrieve data, the most popular being through an external script or from SNMP.
Creating a Data Input Method
To create a new data input method, select the Data Input Methods option under the Management heading. Once on that screen, click Add on the right. You will be presented with a few fields to populate on the following screen.
Table 9-1. Field Description: Data Input Methods
When you are finished filling in all necessary fields, click the Create button to continue. You will be redirected back to the same page, but this time with two new boxes, Input Fields and Output Fields. The Input Fields box is used to define any fields that require information from the user; any input field referenced in the input string must be defined here. The Output Fields box is used to define each field that you expect back from the script. All data input methods must have at least one output field defined, but a script may return more.
Data Input Fields
To define a new field, click Add next to the input or output field boxes. You will be presented with some or all of the fields below depending on whether you are adding an input or output field.
Table 9-2. Field Description: Data Input Fields
When you are finished filling in all necessary fields, click the Create button to continue. You will be redirected back to the data input method edit page. From here you can continue to add additional fields, or click Save on this screen when finished.
Making Your Scripts Work With Cacti
The simplest way to extend Cacti's data gathering functionality is through external scripts. Cacti comes with a number of scripts out of the box which are located in the scripts/ directory. These scripts are used by the data input methods that are present in a new installation of Cacti.
To have Cacti call an external script to gather data you must create a new data input method, making sure to specify Script/Command for the Input Type field. See the previous section, Creating a Data Input Method for more information about how to create a data input method. To gather data using your data input method, Cacti simply executes the shell command specified in the Input String field. Because of this, you can have Cacti run any shell command or call any script which can be written in almost any language.
What Cacti is concerned with is the output of the script. When you define your data input method, you are required to define one or more output fields. The number of output fields that you define here is important to your script's output. For a data input method with only one output field, your script should output its value in the following format:
<value_1>
So if I wrote a script that outputs the number of running processes, its output might look like the following:
Example 9-1. Example script output using 1 field
67
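As an illustration, such a script could be a small shell one-liner; this sketch assumes a Linux-style ps and simply counts all processes, with the ps header line excluded:

#!/bin/sh
# print the number of running processes, skipping the ps header line
ps -e | tail -n +2 | wc -l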
Data input methods with more than one output field are handled a bit differently when writing scripts. Scripts that output more than one value should be formatted like the following:
<fieldname_1>:<value_1> <fieldname_2>:<value_2> ... <fieldname_n>:<value_n>
Let's say that I write a script that outputs the 1, 5, and 10 minute load average of a Unix machine. In Cacti, I name the output fields '1min', '5min', and '10min', respectively. Based on these two things, the output of the script should look like the following:
Example 9-2. Example script output using 3 fields
1min:0.40 5min:0.32 10min:0.01
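A minimal sketch of such a script on Linux could read /proc/loadavg directly (strictly speaking its third value is the 15-minute average, but what matters is that the printed field names exactly match the output fields you defined in Cacti):

#!/bin/sh
# /proc/loadavg begins with the 1, 5 and 15 minute load averages
read one five fifteen rest < /proc/loadavg
echo "1min:$one 5min:$five 10min:$fifteen"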
One last thing to keep in mind when writing scripts for Cacti is that they will be executed as the user the data gatherer runs as. Sometimes a script may work correctly when executed as root, but fails due to permissions problems when executed as a less privileged user.
Note: Spine requires that multiple parameters are output by a single “print” statement. Do not print them one at a time in a loop!
Walkthrough: My First Data Input Method
Data Input Method returning a single value
Let's start with a simple script that takes a hostname or IP address as an input parameter and returns a single value. You may find this one as <path_cacti>/scripts/ping.pl:
#!/usr/bin/perl
# take care for tcp:hostname or TCP:[email protected]
$host = $ARGV[0];
$host =~ s/tcp:/$1/gis;

# old linux version use "icmp_seq"
# newer use "icmp_req" instead
open(PROCESS, "ping -c 1 $host | grep 'icmp_[s|r]eq' | grep time |");
$ping = <PROCESS>;
close(PROCESS);
$ping =~ m/(.*time=)(.*) (ms|usec)/;

if ($2 == "") {
    print "U";          # avoid cacti errors, but do not fake rrdtool stats
}elsif ($3 eq "usec") {
    print $2/1000;      # re-calculate in units of "ms"
}else{
    print $2;
}
To define this script as a Data Input Method to cacti, please go to Data Input Methods and click Add. You should see:
Please fill in Name, select Script/Command as Input Type and provide the command that should be used to retrieve the data; an example Input String is shown below. You may use <path_cacti> as a symbolic name for the path to your Cacti installation. Those commands will be executed from crontab, so pay attention to providing the full path to binaries if required (e.g. /usr/bin/perl instead of perl). Enter all Input Parameters in <> brackets. Click create to see:
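For this walkthrough, the Input String would look something like the following (assuming perl lives at /usr/bin/perl):

/usr/bin/perl <path_cacti>/scripts/ping.pl <host>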
Now let's define the Input Fields. Click Add as given above to see:
The DropDown Field [Input] contains only a single value. This is taken from the Input String <host> above. Fill in a Friendly Name that serves your needs. The Special Type Code allows you to provide parameters from the current Device to be queried; in this case, the hostname will be taken from the current device. Click create to see:
Finally, define the Output Fields. Again, click Add as described above:
Provide a short Field [Output] name and a more meaningful Friendly Name. As you will want to save this data, select Update RRD File. Create to see:
Click Save and you're done.
Create the Data Template
Now you want to tell Cacti how to store the data retrieved from this script. Please go to Data Templates and click Add. You should see:
Fill in the Data Template Name with reasonable text. This name will be used to find this Template among others. Then, please fill in the Data Source Name. This is the name given to the host-specific Data Source. The variable |host_description| is taken from the actual Device. This is to distinguish data sources for different devices. The Data Input Method is a DropDown containing all known scripts and the like. Select the Data Input Method you just created. The Associated RRAs field is filled by default. At the moment there's no need to change this. The lower part of the screen looks like:
The Internal Data Source Name may be defined as you wish. There's no need to use the same name as the Output Field of the Data Input Method, but it may look nicer. Click create to see:
Notice the new DropDown Output Field. As there is only one Output Field defined by our Data Input Method, you'll see only this. Here's how to connect the Data Source Name (used in the rrd file) to the Output Field of the Script. Click Save and you're done.
Create the Graph Template
Now you want to tell Cacti how to present the data retrieved from this script. Please go to Graph Templates and click Add. You should see:
Fill in Name and Title. The variable |host_description| will again be filled from the Device's definition when generating the Graph. Keep the rest as is and Create. See:
Now click Add to select the first item to be shown on the Graphs:
Select the correct Data Source from the DropDown, fill in a color of your liking and select AREA as the Graph Item Type. You will want to fill in a Text Format that will be shown underneath the Graph as a legend. Again, Create:
Notice that not only was an entry made under Graph Template Items, but under Graph Item Inputs as well. Don't bother with that now. Let's fill in some more legends; see:
Notice that the Data Source is filled in automagically. Select LEGEND as the Graph Item Type (it is not really a Graph Item Type in rrdtool-speak, but a nice time-saver), and click Create to see:
Wow! Three items filled with one action! You may want to define a Vertical Label at the very bottom of the screen and Save.
Apply the Graph Template to a Host
Now go to Devices and select the one of your choice. See the Associated Graph Templates section in the middle of this page:
Select your newly created Graph template from the Add Graph Template DropDown. Click Add to see:
The Template is added and shown as Not Being Graphed. At the top of the page you'll find the Create Graphs for this Host link. Click this to see:
Check the box that belongs to the new template and Create. See the results:
This will automatically
- create the needed Graph Description from the Graph Template. As you may notice from the success message, this Graph takes the host's name in it: router - Test ping (router is the host's name in this example).
- create the needed Data Sources Description from the Data Template. Again, you will find the host's name substituted for |host_description|
- create the needed rrd file with definitions from the Data Template. The name of this file is derived from the Host and the Data Template in conjunction with an auto-incrementing number.
- create an entry in the poller table to instruct Cacti to gather data on each polling cycle
You'll have to wait at least two polling cycles to find data in the Graph. Find your Graph by going to Graph Management, filtering for your host and selecting the appropriate Graph (there are other methods as well). This may look like:
Walkthrough: Script with more Output Parameters
The script will be implemented in Perl (as I have no profound knowledge of PHP). As such, it should execute on most platforms.
#!/usr/bin/perl -w
# --------------------------------------------------
# ARGV[0] = <hostname>     required
# ARGV[1] = <snmp port>    required
# ARGV[2] = <community>    required
# ARGV[3] = <version>      required
# --------------------------------------------------
use Net::SNMP;

# verify input parameters
my $in_hostname  = $ARGV[0] if defined $ARGV[0];
my $in_port      = $ARGV[1] if defined $ARGV[1];
my $in_community = $ARGV[2] if defined $ARGV[2];
my $in_version   = $ARGV[3] if defined $ARGV[3];

# usage notes
if ( ( ! defined $in_hostname ) ||
     ( ! defined $in_port ) ||
     ( ! defined $in_community ) ||
     ( ! defined $in_version ) ) {
    print "usage:\n\n $0 <host> <port> <community> <version>\n\n";
    exit;
}

# list all OIDs to be queried
my $udpInDatagrams  = ".1.3.6.1.2.1.7.1.0";
my $udpOutDatagrams = ".1.3.6.1.2.1.7.4.0";

# get information via SNMP
# create session object
my ($session, $error) = Net::SNMP->session(
    -hostname  => $in_hostname,
    -port      => $in_port,
    -version   => $in_version,
    -community => $in_community,
    # please add more parameters if there's a need for them:
    # [-localaddr    => $localaddr,]
    # [-localport    => $localport,]
    # [-nonblocking  => $boolean,]
    # [-domain       => $domain,]
    # [-timeout      => $seconds,]
    # [-retries      => $count,]
    # [-maxmsgsize   => $octets,]
    # [-translate    => $translate,]
    # [-debug        => $bitmask,]
    # [-username     => $username,]    # v3
    # [-authkey      => $authkey,]     # v3
    # [-authpassword => $authpasswd,]  # v3
    # [-authprotocol => $authproto,]   # v3
    # [-privkey      => $privkey,]     # v3
    # [-privpassword => $privpasswd,]  # v3
    # [-privprotocol => $privproto,]   # v3
);

# on error: exit
if (!defined($session)) {
    printf("ERROR: %s.\n", $error);
    exit 1;
}

# perform get requests for all wanted OIDs
my $result = $session->get_request(
    -varbindlist => [$udpInDatagrams, $udpOutDatagrams]
);

# on error: exit
if (!defined($result)) {
    printf("ERROR: %s.\n", $session->error);
    $session->close;
    exit 1;
}

# print results
printf("udpInDatagrams:%s udpOutDatagrams:%s",   # <<< cacti requires this format!
    $result->{$udpInDatagrams},
    $result->{$udpOutDatagrams},
);

$session->close;
It should produce the following output when executed from the command line:
[prompt]> perl udp_packets.pl localhost 161 public 1
udpInDatagrams:10121 udpOutDatagrams:11102
Where “public” may be replaced by your community string. Of course, the numbers will vary.
The Data Input Method
To define this script as a Data Input Method to cacti, please go to Data Input Methods and click Add.
You should see:
Enter the name of the new Data Input Method, select Script/Command and type in the command to call the script. Please use the full path to the command interpreter. Instead of entering the specific parameters, type <symbolic variable name> for each parameter the script needs. Save:
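Assuming the script was saved as udp_packets.pl in the Cacti scripts directory (the location is an assumption; adjust the path to wherever you put it), the Input String would look something like this, using the four parameters defined below:

/usr/bin/perl <path_cacti>/scripts/udp_packets.pl <host> <port> <community> <version>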
Now Add each of the input parameters in the Input Fields section, one after the other. All of them are listed in sequence, starting with <host>:
<port>
<community>
<version>
We've used some of Cacti's built-in parameters. When applied to a host, those variables will be replaced by the host's actual settings. This command will then be stored in the poller_command table. Now Save your work to see
After having entered all Input Fields, let's now turn to the Output Fields. Add the first one, udpInDatagrams:
Now udpOutDatagrams:
Be careful to avoid typos. The strings entered here must exactly match those output by the script. Double-check the Output Fields! Now, the results should look like
Finally Save and be proud!
The Data Template
The previous step explained how to call the script that retrieves the data. Now it's time to tell Cacti how to store that data in rrd files. You will need only a single Data Template, even though two different output fields will be stored. rrd files are able to store more than one output field; rrdtool's name for these is data sources. So we will create
- one single Data Template representing one rrd file
- two output fields/data sources
The first step is much the same as Create the Data Template for a simple Data Input Method. Of course, we provide a different name, Example - UDP Packets. Now, let's enter the first data source. Again, it's like above, but we now provide the name udpInPackets, enter a Maximum Value of 100,000 and select the Data Source Type COUNTER.
Then save and find
Add the second data source by hitting New and provide the data for udpOutPackets. Take care to select the correct Output Field defined by the Data Input Method.
Please pay attention: the Maximum Value for the second and following data sources defaults to 100! In most cases this value won't fit. To deactivate maximum checking enter 0, otherwise enter the desired number. Do not forget to select the correct Data Source Type and Output Field.
The Graph Template
Again, most of this task was already described in Create the Graph Template of the previous chapter. You will define the Graph Template's global data just as in that example. But now you will want to add both data sources to the graph. Just repeat the data source steps twice, once for udpInPackets and once for udpOutPackets. Add a Legend for both and you're happy.